Build Tutorial — A conversation-driven development narrative
The solar system in your pocket
sceneRadius = (km/1000)^(1/3) × 0.06 compresses the roughly 30x real radius range (Mercury to Jupiter) into a visually useful 3x range while preserving ordering. It was later refined to sqrt scaling for better ratios.

The additive glow material uses blendMode = .add with writesToDepthBuffer = false to avoid z-fighting. Its radial gradient texture is generated via CGGradient.drawRadialGradient() and applied as both the diffuse and transparent contents.

Equirectangular textures wrap onto SCNSphere automatically via the default UV mapping. Later, the Jupiter and Pluto textures were upgraded to direct NASA public-domain sources for cleaner licensing.

Moon distances scale as pow(realRatio, 0.4) × 1.5, which preserves relative ordering while keeping moons visible near their parent.

Focus framing places the camera at extent × 3.7 to fill roughly half the screen (derived from a 60° FOV: half the screen subtends 15°, and 1/tan(15°) ≈ 3.7).

Labels moved from SCNText nodes to SwiftUI overlay labels. Each body's 3D position is projected to screen coordinates via SCNView.projectPoint(), then rendered as a SwiftUI Text view at a constant pixel size. Planets get 12pt medium weight; moons get 10pt, slightly dimmer. Labels behind the camera or off-screen are culled.

The projectPoint() method returns an SCNVector3 whose z > 1.0 indicates a point behind the camera. The overlay uses GeometryReader with the .position() modifier for absolute positioning, and .allowsHitTesting(true) with .onTapGesture for interaction.

Camera control was rebuilt after fighting allowsCameraControl, which maintains internal state that overrides programmatic camera changes. It was replaced entirely with a custom Coordinator that manages explicit camera state (target, distance, azimuth, elevation) with spherical-to-cartesian conversion. Gestures were swapped: one-finger pan translates, two-finger pan orbits, pinch zooms.

The coordinator tracks cameraTarget, cameraDistance, orbitAngleX (azimuth), and orbitAngleY (elevation). The updateCamera() method converts to cartesian, e.g. x = target.x + dist × cos(elev) × sin(azim). Planet shortcuts call setCamera(target:distance:) directly, bypassing any gesture state.
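The compression claim is easy to check numerically. A quick Python sketch, not the app's code: the radii in km are standard published values, and the scaling is the cube-root formula quoted above.

```python
# Check that the cube-root scaling compresses the ~30x real radius
# range (Mercury -> Jupiter) into roughly a 3x scene range.
MERCURY_KM = 2_439.7   # standard equatorial radii (assumed, not from the app)
JUPITER_KM = 69_911.0

def scene_radius(km: float) -> float:
    """sceneRadius = (km/1000)^(1/3) * 0.06"""
    return (km / 1000.0) ** (1.0 / 3.0) * 0.06

real_ratio = JUPITER_KM / MERCURY_KM                               # ~28.7x
scene_ratio = scene_radius(JUPITER_KM) / scene_radius(MERCURY_KM)  # ~3.06x
print(f"real {real_ratio:.1f}x -> scene {scene_ratio:.2f}x")
```

Cube-root scaling turns a ratio r into r^(1/3), so the constant 0.06 only sets the absolute size, never the ratio.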
The pendingFocus mechanism handles launch-arg focus by deferring until the coordinator's didSet fires.

Saturn's rings exposed a texturing problem: SCNTube cap UVs map linearly across the disc, not radially. A custom ring geometry was built from scratch: a flat disc with 72 radial segments × 4 ring segments, with explicit UV coordinates where u maps radially from inner radius (0) to outer radius (1) and v maps around the circumference. The Cassini colour map and transparency pattern were then applied.

The geometry is assembled from SCNGeometrySource for vertices, normals, and texture coordinates, with SCNGeometryElement for triangle indices. The key insight: the UV u coordinate runs radially (0 at inner edge, 1 at outer edge) so the 915×64 strip texture wraps correctly from inner to outer ring.

To keep the rings from spinning with the planet, ringNode.eulerAngles.y = -spin is set each frame to cancel the parent's rotation while preserving the axial tilt.

A SwiftUI Slider caused frame drops by publishing changes every pixel. It was replaced with a custom DragGesture control, and label projection is throttled to every 3rd frame. Labels are hidden entirely during zoom drags via an isZooming flag. Licence research led to replacing the Jupiter and Pluto textures with direct NASA public-domain sources, keeping only Saturn's Planet Pixel Emporium textures (the best available for ring detail).

A wobble bug followed: SceneKit composes eulerAngles in Y-X-Z order, so (tilt, spin, 0) means yaw by spin, THEN pitch by tilt. This causes the tilt axis itself to rotate with each spin cycle, so the rings appeared to oscillate at the planet's rotation frequency. At 10,000x, Saturn spins once every 3.8 seconds, making the wobble obvious.

The fix is quaternion composition: tiltQuat * spinQuat. The tilt quaternion is applied in world space (fixed), and spin happens around the tilted pole axis (local). Saturn's rings now hold perfectly steady while the cloud bands rotate underneath. All planets benefit: Earth's 23.4° tilt and Uranus's 97.8° sideways roll are rock-solid at any time scale. In this convention, tiltQuat * spinQuat applies spin first (in the body's pre-tilt frame), then tilt (in world space).
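The order-of-composition point can be demonstrated without SceneKit. A small Python sketch with hand-rolled quaternions (standard Hamilton product; the 23.4° angle is Earth's tilt from above, the rest is illustrative): composing tilt * spin keeps the pole axis fixed in world space for any spin angle, while the reversed spin * tilt makes the pole precess.

```python
import math

def quat(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def qmul(a, b):
    """Hamilton product; as a rotation, a*b applies b first, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate vector v by quaternion q (q * v * q^-1)."""
    w, x, y, z = qmul(qmul(q, (0.0, *v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

tilt = quat((1, 0, 0), math.radians(23.4))   # Earth-like axial tilt
poles_good, poles_bad = [], []
for deg in range(0, 360, 30):
    spin = quat((0, 1, 0), math.radians(deg))
    poles_good.append(rotate(qmul(tilt, spin), (0, 1, 0)))  # tilt * spin
    poles_bad.append(rotate(qmul(spin, tilt), (0, 1, 0)))   # spin * tilt

def spread(poles):
    """Largest per-component deviation of the pole across spin angles."""
    return max(max(abs(a - b) for a, b in zip(p, poles[0])) for p in poles)

print(f"tilt*spin pole spread: {spread(poles_good):.6f}")  # ~0: steady pole
print(f"spin*tilt pole spread: {spread(poles_bad):.6f}")   # large: wobble
```

The correct order works because spin fixes the local (0,1,0) axis, and tilt then maps it to one constant world direction regardless of the spin angle.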
The ring's counter-rotation also uses quaternions: simd_quatf(angle: -spin, axis: (0,1,0)) in the local frame cancels the parent's spin while inheriting the stable tilt. This is a classic 3D rotation pitfall: Euler angles are intuitive to write but compose incorrectly when axes depend on each other.

Then came a new request: the companion web app at ../solarsystem-web had grown space missions, an ISS, and a bunch of other features. Would you be up for porting those enhancements back into iOS? I'd love to do it in stages and apply everything we've learned about unit testing and simulator verification along the way.

Groundwork landed first. SceneBuilder.moonDistExponent moved from 0.4 to 0.6, giving the Moon a scene distance of 17.6 Earth radii (real: 60.3) instead of 8.8. Pure-math statics were marked nonisolated so unit tests don't need to jump to the main actor. The camera coordinator's setCamera gained optional azimuth and elevation parameters plus read-only accessors, groundwork for the Sun-side mission-framing camera in Phase 6.

SceneBuilder is @MainActor-isolated for SceneKit access, which bled into every static call site. The fix was to mark the pure-math statics (sceneDistance, sceneRadius, moonSceneDistance, and their constants) nonisolated. This lets tests call them freely from Swift Testing's default non-main context while preserving main-actor safety for everything that actually touches SceneKit state.

The mission port added four files: Mission.swift for the data shape (Mission / Vehicle / Waypoint / MissionEvent / MoonOrbit / MoonLanding), MissionData.swift with Apollo 11's three-vehicle timeline (Saturn V, Columbia, Eagle), CatmullRom.swift as a pure-Swift port of Three.js's centripetal spline (alpha = 0.5), and MissionManager.swift for scene-graph construction, per-frame updates, telemetry, and event detection. The trajectory line uses SCNGeometryElement with primitive type .line and a per-vertex colour gradient via the .color semantic, matching how the starfield renders its B-V colours.
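For reference, centripetal Catmull-Rom evaluation (the scheme CatmullRom.swift ports from Three.js) can be sketched in a few lines of Python using the Barry-Goldman formulation. This is the textbook algorithm with alpha = 0.5, not the app's actual code, and it assumes distinct control points (the real code would also need the knot clamp mentioned later in the gotchas list).

```python
import math

def catmull_rom(p0, p1, p2, p3, t, alpha=0.5):
    """Evaluate the centripetal (alpha=0.5) Catmull-Rom segment p1 -> p2.

    t runs 0..1 across the segment. Barry-Goldman recursive form.
    """
    def knot(ti, a, b):
        # Knot spacing grows with sqrt of chord length when alpha = 0.5.
        return ti + math.dist(a, b) ** alpha

    t0 = 0.0
    t1 = knot(t0, p0, p1)
    t2 = knot(t1, p1, p2)
    t3 = knot(t2, p2, p3)
    t = t1 + t * (t2 - t1)  # map 0..1 onto the knot interval [t1, t2]

    def lerp(a, b, ta, tb):
        return tuple((tb - t) / (tb - ta) * ca + (t - ta) / (tb - ta) * cb
                     for ca, cb in zip(a, b))

    a1 = lerp(p0, p1, t0, t1)
    a2 = lerp(p1, p2, t1, t2)
    a3 = lerp(p2, p3, t2, t3)
    b1 = lerp(a1, a2, t0, t2)
    b2 = lerp(a2, a3, t1, t3)
    return lerp(b1, b2, t1, t2)

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
start = catmull_rom(*pts, 0.0)   # interpolates p1 exactly
end = catmull_rom(*pts, 1.0)     # interpolates p2 exactly
mid = catmull_rom(*pts, 0.5)     # a smooth point between p1 and p2
print(start, end, mid)
```

The centripetal parameterisation is what keeps mission trajectories free of the loops and cusps that uniform Catmull-Rom produces on unevenly spaced waypoints.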
Moon-aligned waypoints rotate into the ecliptic frame at init using atan2(moonPos.y, moonPos.x) at flyby time. anchorMoon waypoints snap to the Moon's actual direction but at its semi-major-axis distance, so they land on the rendered Moon mesh instead of its eccentrically-offset true position. During runtime, Columbia orbits the Moon with cos/sin motion in a plane perpendicular to the Earth-Moon line, while Eagle walks a three-phase lifecycle: orbit, descent-to-surface snap, post-ascent orbit.

Launching with -mission apollo11 -focus earth jumps to 16 July 1969, auto-selects 10,000x replay speed, and renders the trajectory arcing from Earth to the Moon and back. Columbia's marker tracks the trajectory, snaps into a lunar orbit around T+76h, and exits cleanly on the return leg.

One boot-order bug surfaced: cameraCoordinator.didSet was calling focusOnBody before updatePositions had ever run, so the camera locked onto Earth's default-initialised position at the origin, which is where the Sun is. The fix was to call updatePositions(projectLabels: false) synchronously inside the didSet, so every node is at its simulated-date position before the focus maths reads from it. Worth remembering for any feature that sets an initial camera target from node positions during app boot.

Documentation caught up next: the -mission launch arg was documented, and six new gotchas joined the Known Gotchas list (main-actor isolation for pure math, Moon SMA alignment, anchorMoon scale, event rewind reset, CatmullRom knot clamp, line primitive indexing). A Missions Pipeline Mermaid diagram was added to architecture.html and the mission group was woven into the scene-graph diagram. README.md gained the missions feature callout and the -mission launch arg.
Recorded the feedback as a persistent rule so future phases don't skip the doc-refresh step.

The rest of the mission catalogue came over via tools/export-missions.mjs: a small Node 22 extractor that slices ../solarsystem-web/js/missions.js up to class MissionManager, stubs out its imports (THREE, sceneBuilder, orbitalMechanics, solarSystemData, textureGenerator), evaluates the remaining data declarations in a node:vm sandbox, and serialises ALL_MISSIONS to SolarSystem/Resources/Missions.json. The result: 2,481 lines of JSON covering 11 missions, 11 vehicles, 58 events, and 213 waypoints. MissionData.swift was rewritten as a loader over a small DTO layer (MissionJSON / VehicleJSON / WaypointJSON) so the domain types stay free of Codable boilerplate. Vehicle.autoTrajectory and MissionManager.generateTransferArc, a port of the web's Hohmann-style arc generator, expand Perseverance's two anchor points (Earth launch, Mars landing) into a smooth elliptical transfer. Missions.json was registered as a bundled resource via four new pbxproj entries.

Perseverance's vehicle declares autoTrajectory: "transfer", and the transfer arc generates a monotonically increasing timeline. Simulator verification showed each trajectory clearly: Cassini's full VVEJGA gravity-assist loop, Voyager 1 heading to Jupiter, Parker's multi-loop solar approach, BepiColombo's Mercury orbit, Apollo 13's free-return swing, and Artemis II's crewed lunar flyby arc.

Why the DTO layer? Mission / Vehicle have custom initialisers with ergonomic defaults, which precludes synthesised Decodable. Adding a thin DTO layer with toDomain() conversion keeps both sides clean: the JSON side can grow fields over time without touching the domain layer, and the domain layer keeps whatever shape is most useful to the rest of the app.
The node:vm sandbox approach also beat every alternative for the one-shot extraction: no ESM loader hackery, no npm install, just run the file in a context where its imports resolve to empty stubs.

Mission UI landed in a new MissionUIViews.swift file so the mission UI lives together: a MissionsMenu dropdown in the toolbar (airplane-departure icon, lists all 11 missions with checkmarks, plus a Stop replay item); a MissionTimelineSlider pinned above the zoom slider with an orange thumb and MET readout (dragging pauses playback via a one-way timelineScrubbing flag that restores the prior pause state on release); a MissionTelemetryPanel glass-morphism HUD showing Mission Elapsed Time, distance (auto km or AU based on reference frame), and speed; and a MissionEventBannerView that slides up for four seconds whenever the simulation first crosses an event timestamp. The view model grew an updateMissionUIState() step that runs every third frame (the same cadence as label projection) to compute telemetry, publish the event banner, and arm a one-shot end-of-mission speed reset so the simulation stops racing past splashdown.

Three details are worth calling out. First, the event banner uses UUID() as its SwiftUI identity rather than the event name, so SwiftUI re-runs the slide-in animation when the same event fires twice (rewind + replay). Second, the timeline slider's timelineScrubbing Published property toggles isPaused through a saved wasPausedBeforeScrub flag, so the view model can't fight the drag gesture with per-frame elapsed-hours sync. Third, one test expression tripped Swift's type-checker timeout (2 * 24 + 14 + 32.0 / 60.0 + 8.0 / 3600.0); the fix was splitting it into explicitly-typed Double intermediates. Worth noting for any test file that mixes Int and Double literals in long arithmetic.

The ISS came next, added to SolarSystemData with real orbital elements (408 km altitude, 51.6° inclination, 92.7-minute period) as a regular moon of Earth, so the existing moon-positioning, label-projection, and rotation pipelines apply for free.
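The quoted 92.7-minute period is consistent with the 408 km altitude. A quick Kepler's-third-law check in Python (Earth's radius and gravitational parameter are standard values assumed here, not taken from the app):

```python
import math

MU_EARTH = 398_600.4418      # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371.0            # km, Earth mean radius
altitude = 408.0             # km, as quoted for the ISS

# Circular orbit: T = 2*pi * sqrt(a^3 / mu), with a = radius + altitude.
a = R_EARTH + altitude
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
period_min = period_s / 60.0
print(f"ISS period ~ {period_min:.1f} min")   # close to the quoted 92.7
```

Getting the period right matters here because the ISS reuses the regular moon-positioning pipeline, so its on-screen motion comes straight from these elements.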
SceneBuilder.createBodyNode was special-cased for body.id == "iss" to return a procedural cross-shaped model instead of a sphere: a central truss (long horizontal box), pressurised modules (shorter cross-bar), four pairs of blue solar panels laid flat in the orbital plane, and two white radiators. All constant-lit, zero external assets. showISS was added to the view model (UserDefaults-backed, off by default) along with an antenna.radiowaves Satellites menu button in the toolbar. The label-projection path skips the ISS when the toggle is off so "ISS" doesn't float next to Earth with no mesh beneath it.

The whole feature needed just two special cases: one in createBodyNode (procedural geometry) and one label-skip when hidden. This pattern will extend naturally to other satellites like Hubble or JWST: add the orbital elements, add to the relevant planet's moons array, and (optionally) define a procedural model.

Mission camera framing computes the trajectory's bounding sphere (MissionManager.missionBounds) and snaps the camera to Earth + that local centre with a Sun-side azimuth (Earth's ecliptic direction plus 0.55 radians for a terminator-on-the-far-side view) and elevation 0.3 radians. Distance is sized to fit the trajectory's local radius with 1.4× padding for portrait viewports. A per-frame lerp at 0.02 keeps the camera target glued to earthScenePos + localCentre so the trajectory stays centred as Earth drifts heliocentrically. Gesture handlers in the scene coordinator now fire a userInteractionHandler on .began, which the view model uses to clear lazyFollowActive: the user gets full manual control the moment they drag. Heliocentric missions bypass framing and call resetToOverview instead. Event labels (TLI, Lunar Flyby, Saturn Arrival, ...) project into the SwiftUI overlay within a ±3% window of the mission duration around their timestamp (clamped 1–500 h), so at the default auto-speed each label appears for about two seconds of screen time then fades. Earth-local missions end up framed much as if launched with -focus earth; Cassini stays in the full overview so the VVEJGA trajectory is clear.
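The Sun-side azimuth geometry can be sanity-checked in isolation. A Python sketch using the spherical-to-cartesian convention quoted earlier (x = dist·cos(elev)·sin(azim), z = dist·cos(elev)·cos(azim)); the +0.55 rad offset and 0.3 rad elevation are the values above, while the Earth position is an arbitrary test point:

```python
import math

def camera_offset(azim, elev, dist=1.0):
    """Camera offset from its target, spherical -> cartesian."""
    return (dist * math.cos(elev) * math.sin(azim),
            dist * math.sin(elev),
            dist * math.cos(elev) * math.cos(azim))

# Earth somewhere on its scene orbit, Sun at the origin.
earth = (7.0, 0.0, 4.0)
sun_dir = tuple(-c / math.hypot(*earth) for c in earth)  # Earth -> Sun

# Sun-side azimuth: aim along -earthPos, then swing 0.55 rad for the
# two-thirds-lit view; elevation 0.3 rad.
azim = math.atan2(-earth[0], -earth[2]) + 0.55
offset = camera_offset(azim, 0.3)

# Positive dot product = camera sits on the Sun side of Earth.
sun_side = sum(o * s for o, s in zip(offset, sun_dir))
print(f"sun-side component: {sun_side:.3f}")

# The sign-flipped atan2(x, z) variant lands on the anti-Sun side.
azim_wrong = math.atan2(earth[0], earth[2]) + 0.55
anti = sum(o * s for o, s in zip(camera_offset(azim_wrong, 0.3), sun_dir))
print(f"with flipped signs:  {anti:.3f}")
```

The dot-product test is a cheap invariant to assert in unit tests so the sign-flip bug described below can't silently return.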
Dragging breaks the follow instantly: no fight between the gesture and the lerp. Thirty-seven unit tests green; build and device install clean.

Firing userInteractionHandler on .began rather than on every .changed delta avoids spamming the view model each frame of a long drag. One call is enough to flip lazyFollowActive off for the rest of the session.

Planet focus framing got retuned: the systemExtent × 3.7 multiplier was now framing planets from roughly twice the required distance. The web's tuning was ported (0.8× for moon-hosting bodies, 6.0× for moonless ones, each scaled by a portrait-aware factor 0.5 + 0.5 × min(aspect, 1)). Jupiter now frames tight with Io and Ganymede in shot; Mercury and Venus stay visible instead of becoming pinpricks.

The missions toolbar icon became a custom RocketIcon SwiftUI view that draws the silhouette (nose cone, fuselage, side fins, exhaust flame) with a Canvas and three Paths. It accepts .foregroundColor like an SF Symbol, so it tints cleanly alongside the other toolbar icons.

A Sun-side framing bug: the direction from Earth toward the Sun is −earthPos / |earthPos|. The original code used atan2(x, z), which pointed the offset away from the Sun; the camera ended up on the anti-Sun side, terminator facing the camera. Flipping the signs to atan2(−x, −z) + 0.55 rad puts the camera between the Sun and Earth for a two-thirds-lit view. The same formula was extended to the regular planet presets so Jupiter and Saturn pick up the same Sun-side framing.

Trajectory lines could also disappear behind bodies: readsFromDepthBuffer = false on the line material (on top of the existing writesToDepthBuffer = false) makes the full loop visible regardless of what it passes behind.

The lesson: the Sun-side direction is −target / |target|, not +target / |target|. Easy to get wrong, easy to miss when the simulator happens to show the target at a time when it was Sun-side of Earth anyway. The fix is an unconditional sign flip, applied once in each of the two focus paths.

The event banner moved: MissionEventBannerView came out of the bottom control stack and gained its own top-centre overlay VStack in ContentView, with an 88pt top spacer so it sits just below the date/info bar.
Card width bumped from 300 to 320pt and horizontal alignment switched from leading to centred (text inside remains leading-aligned so it stays readable). The transition flipped from move(edge: .bottom) to move(edge: .top) so the banner now slides down into view from above, matching its new position.

Two SwiftUI subtleties: (1) the top spacer is Color.clear, not a Spacer(), because Spacer() would expand to fill available vertical space and the banner would drift to the bottom. A fixed clear rectangle reserves exactly the room the info panel needs, no more. (2) The inner VStack(alignment: .leading) text stays leading-aligned while the outer card uses the default (centre) horizontal alignment in its parent row: two independent alignment decisions at two layers of the hierarchy.

The macOS port started with Extensions/Platform.swift: four typealiases (PlatformColor, PlatformImage, PlatformView, PlatformViewRepresentable) plus two CGImage bridging helpers that hide the UIImage / NSImage construction-signature divergence.

UIKit references were swapped for the aliases in a sed pass across SceneBuilder, MissionManager, ViewModel, MissionUIViews. import UIKit was deleted from ViewModel (no longer needed). TextureGenerator was rewritten from UIGraphicsImageRenderer (iOS-only) to a direct CGContext path that renders identically on both platforms with a shared bitmap helper.

Gestures split into #if canImport(UIKit) / #else branches in SolarSystemSceneView: iOS keeps UIPanGestureRecognizer (1-finger translate, 2-finger orbit), UIPinchGestureRecognizer, and UITap (single + double). macOS gets NSPanGestureRecognizer with buttonMask = 0x1 (left drag = translate) and buttonMask = 0x2 (right drag = orbit), NSMagnificationGestureRecognizer (trackpad pinch), and NSClickGestureRecognizer (single + double). Scroll-wheel zoom needed a ScrollZoomSCNView subclass overriding scrollWheel(with:) because NSGestureRecognizer doesn't cover scroll events.
Gesture handlers all funnel into shared applyPan / applyOrbit / applyPinchZoom methods so the camera maths stays single-sourced.

Build settings changed too: SDKROOT moved from iphoneos to auto, SUPPORTED_PLATFORMS = "iphoneos iphonesimulator macosx" was added, plus MACOSX_DEPLOYMENT_TARGET = 14.0 (Sonoma). The target-level SUPPORTED_PLATFORMS override was dropped so the target inherits both iOS and macOS from the project. SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = NO selects native AppKit rather than Catalyst. One target, two destinations.

Tests run on macOS as well (xcodebuild -destination 'platform=macOS' test). The first macOS build flushed out three cross-platform gotchas: SCNVector3's component type is Float on iOS but CGFloat on macOS (fixed with SCNVector3(Double, Double, Double) and .adding(Double, Double, Double) helpers); CADisplayLink.init(target:selector:) is iOS-only (macOS uses a Timer at 60 Hz on the common run-loop mode); and a closure in MissionManager's moon-orbit maths was too complex for the macOS Swift type-checker (extracted into a named orbitAroundMoon method).

The app now builds for platform=iOS and platform=macOS from the same target, same sources. Launch arguments (-mission apollo11, -focus jupiter, -timeScale 5000) work identically on both: via xcrun simctl launch … -- -mission apollo11 on iOS and open -n SolarSystem.app --args -mission apollo11 on Mac. Standard macOS navigation: left-drag to pan, right-drag to orbit, scroll / pinch to zoom, click to select, double-click to reset.

Porting lessons: first, once PlatformColor / PlatformImage / etc. are in place, most of the search-and-replace is zero-risk. Second, SCNVector3's component type (Float vs CGFloat) is the single most common per-platform error; build a Double-based helper early and you'll dodge dozens of small breakages. Third, the macOS Swift type-checker has shorter timeouts for complex expressions than iOS; heavyweight closures that compile on iOS may time out on macOS and need extracting into named methods.
All of these are one-time costs; once done, the same source tree builds cleanly forever.

Performance work started with a -frameLog launch argument that prints any frame whose tick gap exceeds 20 ms (a dropped frame at 60 Hz) or whose work exceeds 5 ms, plus a once-per-second summary of fps and worst times. The first run showed a healthy 60 fps / 3 ms worst-work baseline, punctuated by spikes where pos=265ms ui=yes: updatePositions was the culprit, specifically on UI-update frames. Splitting that phase into five sub-timers (bodies, stars, deconflict, mission-update, mission-UI) showed every stutter sitting entirely in the bodies phase, with everything else sub-millisecond. Drilling one level deeper by wrapping individual projectLabel calls made the output damning: slow projectLabel Earth (16.6ms), slow projectLabel ISS (16.5ms), repeating. 16.5 ms is exactly one 60 Hz display frame. SCNView.projectPoint was blocking the main thread waiting for a render-thread sync on every single call. Twenty-six bodies × 16.5 ms per call = 430 ms of stalls crammed into every UI-update frame.

The fix replaced the projectPoint calls with manual matrix projection: cache projection × inverse(camera.simdWorldTransform) once per frame as a simd_float4x4, and project each body with a pure-SIMD multiply plus perspective divide plus NDC-to-screen mapping. About 100 ns per label, sub-millisecond for all twenty-six combined, and the output is already in SwiftUI top-left-origin coords so no per-platform Y flip is needed.

Before: fps~46, worst-work=266ms. After: fps~60, worst-work=2ms. Glassy smooth at 10,000× playback. Both platforms still pass all 34 unit tests. The -frameLog flag stays in the codebase so the same diagnostic path can pick apart any future regression in one command. Nice side effect: the iOS label projection got the same speedup even though it wasn't stuttering visibly.

Second, a per-second summary line made it obvious that normal frames were 3 ms, so a 265 ms frame was 80× off, not 2× off.
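The replacement maths is ordinary pinhole projection. A Python sketch of the per-label work (plain lists standing in for simd; a symmetric vertical-FOV matrix with the camera at the origin looking down −z, so the view matrix is identity; the app's version multiplies by inverse(camera.simdWorldTransform) first):

```python
import math

def projection_matrix(fov_y_deg, width, height, near=0.1, far=1000.0):
    """Row-major perspective matrix, vertical field of view."""
    y = 1.0 / math.tan(math.radians(fov_y_deg) / 2)   # yScale = [1][1]
    x = y * height / width                            # xScale = yScale / aspect
    a = (far + near) / (near - far)
    b = 2 * far * near / (near - far)
    return [[x, 0, 0, 0],
            [0, y, 0, 0],
            [0, 0, a, b],
            [0, 0, -1, 0]]

def project(m, p, width, height):
    """World point -> top-left-origin screen coords, or None if behind."""
    x, y, z = p
    clip = [sum(row[i] * v for i, v in enumerate((x, y, z, 1.0)))
            for row in m]
    w = clip[3]
    if w <= 0:                 # behind the camera: cull the label
        return None
    ndc_x, ndc_y = clip[0] / w, clip[1] / w   # perspective divide
    return ((ndc_x + 1) / 2 * width,          # NDC -1..1 -> 0..width
            (1 - ndc_y) / 2 * height)         # flipped: top-left origin

W, H = 800.0, 600.0
m = projection_matrix(60.0, W, H)
centre = project(m, (0.0, 0.0, -10.0), W, H)   # dead ahead -> screen centre
above = project(m, (0.0, 1.0, -10.0), W, H)    # 1 unit up -> higher on screen
print(centre, above)
```

Each label costs one 4×4 multiply, a divide, and two affine maps, which is why the SIMD version lands around 100 ns instead of a render-thread round trip.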
Third, exactly-one-display-frame timings are a signature: when you see durations matching 16.7 ms / 33.3 ms / 50.0 ms at 60 Hz, you're usually looking at render-thread sync, not CPU work. Fourth, bypass framework conveniences you can reproduce cheaply: projectPoint is opaque, while a ten-line SIMD matrix multiply does the same job, is provably synchronous, and happens to also be about a hundred times faster. Whenever a framework method is called in a hot loop and does more work than arithmetic, that's a prompt to consider replacing it.

Two follow-up label bugs appeared after the projectPoint-bypass fix, both traced back to the projection matrix itself. First, labels drifted horizontally on macOS while vertical tracking was fine. Single-axis bugs point straight at a single matrix term, here the [0][0] aspect component, so the fix was to stop reading camera.projectionTransform (which on macOS returns a matrix whose aspect doesn't track the live viewport) and build the matrix ourselves from camera.fieldOfView + view.bounds each frame. One gotcha on the way: SCNCameraProjectionDirection only has .horizontal and .vertical cases, no .automatic despite the convention suggesting one. macOS defaults to horizontal, iOS to vertical; the two use mirror-image xScale / yScale formulae.

Second, labels sat at a fixed -16 pt vertical offset. Jupiter and the Sun at close zoom span hundreds of points; sixteen is nowhere near enough. Solution: offset each label by the body's actual on-screen radius plus a small margin, derived from the same cached projection matrix. A first attempt used an empirical r / clip.w * 300 formula (borrowed from the pre-existing star-occlusion code) but that undershot on widescreen Mac windows by 3–4×. It was replaced with the correct worldRadius * pixelsPerUnit / clip.w, where pixelsPerUnit = yScale * (viewportHeight / 2) comes directly from the projection matrix.
Now labels float just above every body at every zoom, from Mercury at overview to the Sun filling the screen. An 8 pt floor handles stars and the procedural ISS model (no SCNSphere geometry, so nominal zero radius): they still get a readable gap.

Two closing lessons. First, diagnose projection bugs axis by axis and go straight to the matrix's [0][0] / [1][1] values rather than fiddling with empirical constants. Second, derive visual constants from the matrix you're rendering with. The 300 in r / clip.w * 300 felt plausible on iOS but was just an eyeball fit for one particular viewport. The correct yScale * height / 2 is the same factor SceneKit uses internally to size the sphere on screen, so it's guaranteed to match the rendering: no fudging per platform or per window size.
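The final formula can be checked against a full projection: offsetting a world point vertically by the body's radius and projecting both endpoints should move it on screen by exactly worldRadius * pixelsPerUnit / clip.w pixels. A Python sketch (vertical-FOV matrix as on iOS; all numeric values are illustrative):

```python
import math

fov_y = math.radians(60.0)
W, H = 800.0, 600.0
y_scale = 1.0 / math.tan(fov_y / 2)     # the [1][1] projection term

def screen_y(world_y, depth):
    """Vertical screen coord of (0, world_y, -depth), top-left origin."""
    ndc_y = (y_scale * world_y) / depth  # clip.y / clip.w; w = depth here
    return (1 - ndc_y) / 2 * H

world_radius = 2.0
depth = 25.0        # clip.w for a point straight ahead of the camera

# Full projection: body centre vs top of the body.
full = screen_y(0.0, depth) - screen_y(world_radius, depth)

# Closed form used for the label offset.
pixels_per_unit = y_scale * (H / 2)
closed = world_radius * pixels_per_unit / depth
print(f"projected {full:.3f} px vs formula {closed:.3f} px")
```

The two agree exactly because yScale * (H / 2) is precisely the NDC-to-pixel factor the projection pipeline itself applies, which is the whole point of the lesson above.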