SolarSystem

Architecture — real orbital mechanics, GPU-accelerated rendering, and the techniques behind each part

Saturn with Cassini ring textures, moons, and real star names

System Architecture

graph TD A["Current Date/Time"] --> B["Julian Date Converter"] B --> C["Orbital Mechanics Engine
(CPU, Keplerian)"] C --> D["Planet Positions
(Heliocentric Ecliptic)"] D --> E["Log Distance +
Sqrt Radius Scaler"] B --> R["IAU Rotation
(Axial Tilt + Spin)"] E --> F["SceneKit Scene Graph
(GPU, PBR)"] R --> F F --> G["SCNView
(UIViewRepresentable)"] G --> H["SwiftUI Overlay
(Labels, HUD, Zoom)"] H --> I["Custom Camera
Coordinator"] I --> G A --> MM["MissionManager
(trajectories, telemetry, events)"] C --> MM MM --> F J["JPL J2000.0
Elements"] --> C K["NASA/Cassini
Textures"] --> F L["Time Controls
(Speed/Pause/Reset)"] --> A M["HYG Star
Catalogue (8,920)"] --> F N["UserDefaults"] --> H MD["Mission Data
(waypoints, events, vehicles)"] --> MM style A fill:#1a1a3e,stroke:#ffaa33 style C fill:#1a2a1a,stroke:#44aa44 style R fill:#1a2a1a,stroke:#44aa44 style F fill:#1a1a3e,stroke:#4488ff style H fill:#2a1a2a,stroke:#aa44ff style I fill:#1a2a2a,stroke:#44aaaa style M fill:#1a1a3e,stroke:#88bbff style MM fill:#2a1a2a,stroke:#aa44ff style MD fill:#1a1a3e,stroke:#88bbff

Custom Camera Controller

graph TD A["UIGestureRecognizers"] --> B["1-Finger Pan"] A --> C["2-Finger Pan"] A --> D["Pinch"] A --> E["Single Tap"] A --> F["Double Tap"] B --> G["Translate Target
(camera-local right + up)"] C --> H["Orbit
(azimuth + elevation)"] D --> I["Zoom
(scale distance)"] E --> J["Hit Test
→ Select Body"] F --> K["Reset to Overview"] G --> L["updateCamera()
Spherical → Cartesian"] H --> L I --> L K --> L L --> M["cameraNode.position
cameraNode.look(at:)"] style A fill:#1a1a3e,stroke:#ffaa33 style L fill:#1a2a1a,stroke:#44aa44 style J fill:#2a1a2a,stroke:#aa44ff

Camera Spherical Coordinates

The camera orbits a target point at a given distance, controlled by azimuth and elevation angles:

Position from spherical coordinates (distance d, azimuth θ, elevation φ, about the target):

x = d · cos(φ) · sin(θ)
y = d · sin(φ)
z = d · cos(φ) · cos(θ)

1-finger drag moves the target; 2-finger drag changes θ and φ; pinch changes d.
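The conversion can be sketched in plain Swift (Vec3 and the parameter names are illustrative stand-ins, not the app's real types):

```swift
import Foundation

// Sketch of the spherical → Cartesian camera placement.
struct Vec3 { var x, y, z: Double }

func cameraPosition(target: Vec3, distance d: Double,
                    azimuth theta: Double, elevation phi: Double) -> Vec3 {
    Vec3(x: target.x + d * cos(phi) * sin(theta),
         y: target.y + d * sin(phi),
         z: target.z + d * cos(phi) * cos(theta))
}
```

With azimuth 0 and elevation 0 the camera sits on the target's +z axis at distance d, which is the overview pose the double-tap reset returns to.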

Coordinate Systems & Axis Flip

Orbital mechanics and SceneKit use different conventions for which axis points “up”. Every position computed by the physics engine has to be remapped once before it reaches the scene graph. The swap is a constant, so it's cheap — but getting it wrong puts the planets underneath each other or flips the direction of rotation.

Ecliptic (orbital mechanics) frame x, y — in the ecliptic plane (Earth's orbital plane around the Sun)
z — perpendicular to the ecliptic, pointing north (right-handed)

SceneKit (GPU rendering) frame x — right on screen
y — up on screen // SceneKit is Y-up, right-handed
z — toward the camera

The mapping (applied in all three places in the codebase) scene.x = ecliptic.x
scene.y = ecliptic.z // ecliptic up becomes SceneKit up
scene.z = -ecliptic.y // negate: a bare axis swap would mirror the orbits

Why negate y? The bare swap (x, y, z) → (x, z, y) is a reflection: it mirrors the scene, so orbits would run the wrong way round, with planets appearing to orbit clockwise when viewed from the north ecliptic pole, the opposite of reality. Negating the third component makes the mapping a proper rotation (90° about x), preserving SceneKit's right-handed frame and the true orbit direction.

This same mapping is applied to every position: planet centres, moon offsets, mission waypoints, trajectory samples, event label positions. Centralising it in SceneBuilder.updateNodePosition and MissionManager.toLocalGeocentricScene means we only have to get it right once.
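A minimal sketch of that centralised remap (illustrative tuple type; in the app the same swap lives inside SceneBuilder.updateNodePosition):

```swift
import Foundation

// Ecliptic (z-up) → SceneKit (y-up) remap.
typealias V3 = (x: Double, y: Double, z: Double)

func toScene(_ e: V3) -> V3 {
    (x: e.x,      // shared axis
     y: e.z,      // ecliptic north becomes SceneKit up
     z: -e.y)     // sign flip preserves the orbit direction
}
```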

Cross-Platform Abstraction (iOS + macOS)

The same source tree builds a native iOS app and a native macOS app. One Xcode target, two destinations, no Catalyst. The strategy is: make every file platform-neutral by default, and funnel the handful of genuine divergences through a single abstraction file plus a few narrowly-scoped #if blocks.

The Typealias Layer (Extensions/Platform.swift)

Four typealiases resolve differently per platform. Every other file in the project references these names instead of writing UIColor / NSColor directly:

Platform.swift #if canImport(UIKit)
  public typealias PlatformColor = UIColor
  public typealias PlatformImage = UIImage
  public typealias PlatformView = UIView
  public typealias PlatformViewRepresentable = UIViewRepresentable
#elseif canImport(AppKit)
  public typealias PlatformColor = NSColor
  public typealias PlatformImage = NSImage
  public typealias PlatformView = NSView
  public typealias PlatformViewRepresentable = NSViewRepresentable
#endif

The rule is: outside this file, no direct reference to UIKit or AppKit types. Anything that would have been UIColor(red:…) is now PlatformColor(red:…), the result compiling as UIColor(red:…) on iOS and NSColor(red:…) on macOS.

Narrowly-Scoped #if Divergences

Four places need more than a typealias swap, because the API shape itself differs between platforms:

  1. SceneView representable protocol (SolarSystemSceneView.swift). UIViewRepresentable needs makeUIView / updateUIView; NSViewRepresentable wants makeNSView / updateNSView. Both methods exist behind #if canImport(UIKit) in the struct, delegating to a shared configureSharedSceneView(_:context:) helper that does the 90% of setup that's identical.
  2. Gesture recognisers. UIKit and AppKit each define their own class hierarchy (UIPanGestureRecognizer vs NSPanGestureRecognizer). Both apps install five recognisers on the SCNView, but the sets differ by platform input idiom — see the Gesture Map below. The gesture handlers call shared applyPan, applyOrbit, applyPinchZoom methods so the camera maths is only written once.
  3. Frame-tick loop (SolarSystemViewModel.swift). Both platforms use CADisplayLink, just constructed differently: iOS uses the direct CADisplayLink(target:selector:) initialiser, while macOS acquires the link from the SCNView via scnView.displayLink(target:selector:) (macOS 14+) so ticks stay synced to whichever display the window is on. An earlier Timer-based macOS path produced a visible ~1 Hz stutter because Timer cadence drifts in and out of phase with the 60 Hz VBlank; we switched to the NSView-bound display link as soon as that was diagnosed.
  4. Platform-only SwiftUI modifiers (ContentView.swift, CreditsView.swift). .statusBarHidden is iOS-only (macOS has no equivalent concept); navigationBarTitleDisplayMode and .topBarTrailing are iOS-only (macOS uses .confirmationAction); systemGroupedBackground is iOS-only (macOS has windowBackgroundColor). Guarded one-liner each.

SCNVector3 Component Type (the sneaky one)

The most subtle platform difference: SCNVector3.x is Float on iOS but CGFloat on macOS. Arithmetic like target.x -= right.x * dx * speed works on iOS (all Float) but fails on macOS (CGFloat can't subtract a Float without an explicit cast). Two helpers in SCNVector3+Math.swift hide the gap:

Cross-platform SCNVector3 helpers extension SCNVector3 {
  // Construct from Double; widens or narrows per platform.
  init(_ x: Double, _ y: Double, _ z: Double)
  // Offset by Double deltas without Float/CGFloat mismatches.
  func adding(_ dx: Double, _ dy: Double, _ dz: Double) -> SCNVector3
}

All the orbital maths already works in Double (SIMD3<Double>) for precision, so the Double-typed helpers drop in naturally. Call sites like target.adding(dx, dy, dz) or SCNVector3(x, y, z) compile unchanged on both platforms.
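The pattern behind those helpers can be shown with a stand-in type whose component width differs per platform, the same way SCNVector3's does (Vector3 and Component are illustrative names, not the app's real declarations):

```swift
import Foundation

// Stand-in for a vector whose component type differs per platform
// (SCNVector3.x is Float on iOS, CGFloat on macOS).
#if os(macOS)
typealias Component = CGFloat
#else
typealias Component = Float
#endif

struct Vector3 { var x, y, z: Component }

extension Vector3 {
    // Construct from Double; widens or narrows per platform.
    init(_ x: Double, _ y: Double, _ z: Double) {
        self.init(x: Component(x), y: Component(y), z: Component(z))
    }
    // Offset by Double deltas without Float/CGFloat mismatches.
    func adding(_ dx: Double, _ dy: Double, _ dz: Double) -> Vector3 {
        Vector3(Double(x) + dx, Double(y) + dy, Double(z) + dz)
    }
}
```

Call sites route all arithmetic through Double, so no Float/CGFloat cast ever appears outside the extension.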

Gesture Map

Action           iOS input       macOS input
Pan (translate)  1-finger drag   Left-mouse drag
Orbit            2-finger drag   Right-mouse drag
Zoom             Pinch           Scroll wheel / trackpad pinch
Select body      Single tap      Single click
Reset            Double tap      Double click

AppKit's Y axis is inverted relative to UIKit's (origin bottom-left vs top-left), so the macOS pan / orbit handlers flip dy before feeding it into the shared maths — this preserves the "drag up = look up" convention both platforms expect. Scroll-wheel zoom needs a custom NSView subclass (ScrollZoomSCNView) because NSGestureRecognizer doesn't cover scroll events, so we override scrollWheel(with:) directly.

Integration Tests via Launch Arguments

The launch-argument path (-focus, -mission, -showISS, -timeScale, -date, etc.) is platform-neutral — it reads ProcessInfo.processInfo.arguments, which exists identically on both. On iOS you invoke with xcrun simctl launch booted com.pwilliams.SolarSystem -- -mission apollo11; on macOS you use open -n SolarSystem.app --args -mission apollo11. Same arg syntax, same behaviour, same mission playback.

Scene Graph

graph TD A["SCNScene"] --> B["Camera
(Custom Spherical Controller)"] A --> C["Starfield
(8,920 HYG stars, 4 tiers,
B-V colour, r=500)"] A --> D["Sun Node"] A --> E["Planet Nodes ×9
(IAU Rotation)"] A --> F["Moon Nodes ×16
(Tidally Locked)"] A --> G["Orbit Paths ×9"] A --> H2["Lights
(Omni + Ambient)"] A --> I["ISS Node
(procedural truss + panels,
Earth orbit, off by default)"] A --> M["Mission Groups
(per mission, positioned at
Earth or origin each frame)"] M --> M1["Trajectory line
(polyline, vertex gradient)"] M --> M2["Vehicle markers
(SCNSphere + halo)"] D --> D1["Emissive Sphere
(Procedural Granulation)"] D --> D2["4 Glow Layers
(Radial Gradient, Additive)"] E --> E1["PBR Sphere
(NASA/Cassini Textures)"] E --> E2["Saturn: Custom Ring Disc
(Radial UVs, Colour + Alpha)"] F --> F1["PBR Sphere
(NASA/Galileo Textures)"] C --> C1["~120 Named Stars
(Label Projection)"] style A fill:#1a1a3e,stroke:#ffaa33 style D fill:#3a2a0a,stroke:#ffaa33 style E fill:#1a1a3e,stroke:#4488ff style F fill:#1a1a3e,stroke:#888888 style C fill:#1a1a3e,stroke:#88bbff style I fill:#1a2a3a,stroke:#88bbff style M fill:#2a1a2a,stroke:#aa44ff

Performance Debugging (lessons from a macOS stutter)

The app ran buttery-smooth on iOS but stuttered once a second on macOS at high time-scales. The hunt for the cause illustrates a reusable pattern: add cheap instrumentation behind a launch flag, look at the numbers, let the numbers point at the culprit. It's much faster than guessing.

Step 1 — Measure the frame loop

Add a per-frame logger behind a -frameLog launch arg. Two things worth printing: the delta-t between ticks (is the display link stuttering?) and the work-time inside the handler (is our code slow?). Anomaly threshold: 20 ms (dropped frame at 60 Hz). Plus a once-per-second summary so the healthy baseline is visible alongside the anomalies.

What the first run showed // Healthy baseline between stutters:
[frame] summary fps~60 worst-dt=16.8ms worst-work=3.3ms

// Stutters:
[frame] STUTTER dt=58.4ms pos=196.9ms ui=yes
[frame] STUTTER dt=256.7ms pos=0.1ms

Two patterns: spikes where work balloons (pos=196ms) and spikes where the tick gap is huge with no work (dt=256ms). The second shape is usually external (OS scheduler pause); the first is something we own.
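The anomaly check itself is a pure function of the two timings; a sketch matching the log lines above (the field names and format are assumptions):

```swift
import Foundation

// -frameLog anomaly check: dt is the gap between display-link ticks,
// work the time spent inside the handler, both in seconds.
// Threshold 20 ms = one dropped frame at 60 Hz.
func stutterLine(dt: Double, work: Double, threshold: Double = 0.020) -> String? {
    guard dt > threshold || work > threshold else { return nil }
    return String(format: "[frame] STUTTER dt=%.1fms pos=%.1fms", dt * 1000, work * 1000)
}
```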

Step 2 — Split the work phase

updatePositions has five big blocks. Instrument each with a CACurrentMediaTime() stopwatch and print a sub-phase breakdown when a stutter fires. Output:

STUTTER pos=265.7ms (bodies=264.7 stars=0.6 decon=0.3 mm=0.0 mui=0.0)

The bodies phase — the loop that updates planet positions and projects their labels — is consuming the entire budget. The other four phases are sub-millisecond. Drill deeper.

Step 3 — Per-call timing

Wrap each projectLabel call with a stopwatch and log anything over 5 ms:

[frame] slow projectLabel Earth (16.6ms)
[frame] slow projectLabel ISS (16.5ms)
[frame] slow projectLabel Earth (17.0ms)
[frame] slow projectLabel ISS (33.3ms)

16.5 ms is exactly one 60 Hz display frame. Each SCNView.projectPoint call is blocking the main thread waiting for a render-thread sync — every single call. Twenty-six bodies × 16.5 ms ≈ 430 ms of stalls per frame. That's the stutter.

Step 4 — The fix: bypass the render thread

projectPoint internally flushes pending SceneKit transform updates to make sure its answer is up-to-date. On iOS that flush is cheap; on macOS under heavy scene activity it waits for a full VBlank. Since we're projecting our own scene whose transforms we just set ourselves, we know they're up to date — we can compute the projection directly and skip the flush:

Cache view × projection once per frame viewMatrix = simd_inverse(cameraNode.simdWorldTransform)
projection = toSimd4x4(camera.projectionTransform)
cachedViewProj = projection * viewMatrix

Per-label projection (pure SIMD arithmetic) clip = cachedViewProj * simd_float4(worldPos, 1)
// behind camera when clip.w ≤ 0
ndcX = clip.x / clip.w // normalised device coord [-1, 1]
ndcY = clip.y / clip.w
screenX = (ndcX + 1) * 0.5 * viewWidth
screenY = (1 - ndcY) * 0.5 * viewHeight // SwiftUI top-left Y, both platforms

Per-label cost: ~100 ns. All 26 bodies projected in under a millisecond — effectively free. And the output is already in SwiftUI's top-left-origin coordinate system on both platforms, which also fixes the label-tracking inversion bug that was originally separate. Two bugs, one fix.
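The clip → screen step can be sketched in isolation (pure arithmetic, no SceneKit; the names are illustrative):

```swift
import Foundation

// Clip space → screen space. Returns nil for points behind the camera
// (clip.w ≤ 0), matching the guard in the per-label projection.
func screenPoint(clip: (x: Double, y: Double, z: Double, w: Double),
                 viewWidth: Double, viewHeight: Double) -> (x: Double, y: Double)? {
    guard clip.w > 0 else { return nil }
    let ndcX = clip.x / clip.w          // normalised device coords, [-1, 1]
    let ndcY = clip.y / clip.w
    return (x: (ndcX + 1) * 0.5 * viewWidth,
            y: (1 - ndcY) * 0.5 * viewHeight)   // top-left-origin Y
}
```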

The measurement result

Before summary fps~46 worst-dt=272.6ms worst-work=265.7ms

After summary fps~60 worst-dt=17.6ms worst-work=2.0ms

The -frameLog flag stays in the codebase — if anything similar ever creeps back, one command-line flag gives an instant diagnosis.

Follow-up 1: construct the projection matrix ourselves

Once labels were fast, they turned out to be horizontally drifting on macOS — the Moon label would land tens of pixels to the left or right of the Moon, with the error growing further from the centre of the view. Vertical tracking was fine. A single-axis error points straight at the aspect-ratio term in the projection matrix.

Root cause: camera.projectionTransform on macOS returns a matrix whose [0][0] doesn't track the current viewport aspect. Fix: build the matrix ourselves from camera.fieldOfView and view.bounds each frame. The conventional perspective matrix, with the convention difference between horizontal-FOV cameras (macOS default) and vertical-FOV cameras (iOS default):

Perspective matrix construction f = 1 / tan(fov / 2)
aspect = width / height

// horizontal FOV (macOS default):
xScale = f, yScale = f * aspect

// vertical FOV (iOS default):
xScale = f / aspect, yScale = f

One gotcha along the way: SCNCameraProjectionDirection only has .horizontal and .vertical cases — there is no .automatic despite the docs/convention suggesting one. Just switch on the two cases directly.
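The two conventions reduce to a pair of scale factors; a sketch of that construction (function name is illustrative):

```swift
import Foundation

// Hand-built perspective scales from FOV and viewport aspect,
// matching the formulas above. aspect = width / height.
func projectionScales(fovDegrees: Double, aspect: Double,
                      horizontalFOV: Bool) -> (x: Double, y: Double) {
    let f = 1 / tan(fovDegrees * .pi / 180 / 2)
    return horizontalFOV ? (x: f, y: f * aspect)   // macOS default
                         : (x: f / aspect, y: f)   // iOS default
}
```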

Follow-up 2: offset labels by actual on-screen radius

Second issue: labels were landing centred on each planet rather than above it, because the fixed 16 pt vertical offset in the SwiftUI overlay was dwarfed by Jupiter / the Sun at close zoom. Fix: offset each label upward by the body's real on-screen radius + a small margin, computed from the same cached projection matrix.

On-screen radius of a sphere at depth clip.w pixelsPerUnit = yScale * (height / 2) // cached once per frame
screenR = worldRadius * pixelsPerUnit / clip.w

Label offset above the body offsetY = max(8, screenR + 4)

The pixelsPerUnit factor comes directly from the projection matrix — it's the same quantity SceneKit uses internally to render the sphere as a circle of that radius. The 8 pt floor handles nodes with no SCNSphere geometry (stars, the ISS procedural model) so they still get a readable gap.
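Putting the two formulas together (yScale is the projection-matrix term from the previous section, clipW the body's clip-space depth; names illustrative):

```swift
import Foundation

// Derived label offset: on-screen radius of the body plus a 4 pt margin,
// with an 8 pt floor for nodes that have no sphere geometry.
func labelOffsetY(worldRadius: Double, clipW: Double,
                  yScale: Double, viewHeight: Double) -> Double {
    let pixelsPerUnit = yScale * (viewHeight / 2)
    let screenR = worldRadius * pixelsPerUnit / clipW
    return max(8, screenR + 4)
}
```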

An earlier attempt used an empirical r / clip.w * 300 formula borrowed from the star-occlusion code. That constant was tuned for iOS portrait viewports and under-shoots widescreen Mac windows by 3–4×, so label offsets were too small and still landed on the disc. Lesson: don't fudge projection maths with empirical constants — derive them from the matrix.

Reusable lessons

Instrument before guessing: cheap per-frame logging behind a launch flag turns a vague stutter into numbers. Split the measurement until it points at a single call. Derive constants from the projection matrix instead of tuning them empirically. And leave the instrumentation in the codebase: the next regression gets diagnosed with one command-line flag.

Graphics Rendering Techniques

Everything on screen comes from SceneKit primitives combined in specific ways. This section catalogues the techniques so you can reuse them.

Physically-Based Rendering (planet surfaces)

Every planet and moon uses SCNMaterial with lightingModel = .physicallyBased. The key properties are:

PBR material for planets material.diffuse.contents = nasaTextureImage // equirectangular map
material.lightingModel = .physicallyBased
material.roughness.contents = 0.85 // terrestrial bodies are rough (low specular)
material.metalness.contents = 0.0 // no planets are metallic

Roughness near 1.0 gives a matt, powder-like appearance (the Moon, Mars, Mercury). Gas giants use slightly lower roughness (~0.7) for a subtle sheen from the cloud tops. A single point light at the Sun's scene position (plus low ambient to stop the night side going fully black) drives the PBR response.

Constant Lighting (Sun, stars, orbit lines, trajectory lines)

Anything that shouldn't respond to scene lighting uses lightingModel = .constant. This tells SceneKit to display the diffuse colour directly, unaffected by angle or light position. Used for the Sun's emissive sphere and glow layers, the starfield, orbit paths, and mission trajectory lines.

Additive Blending (Sun corona, vehicle halos)

Bright glows are built with nested translucent billboards and blendMode = .add. Additive blending sums each layer's RGB into the framebuffer rather than alpha-blending, so overlapping glow layers build up brightness without ever darkening. Combined with writesToDepthBuffer = false (so the glow doesn't occlude what's behind it), this gives the soft corona look around the Sun and the mission vehicle markers.

Sun corona (4 nested spheres) // Layer radii: 1.3×, 1.8×, 2.8×, 4.0× the Sun's own radius
glowMaterial.diffuse.contents = radialGradientTexture // bright centre → transparent edge
glowMaterial.lightingModel = .constant
glowMaterial.blendMode = .add
glowMaterial.writesToDepthBuffer = false // stacked glows don't fight each other

Vertex Colours on Line Geometry (trajectory gradient)

The trajectory line is one connected polyline with a colour that varies along its length — bright white at launch, saturates into orange during coast, brightens at the flyby, dims on the return leg. We can't do this with a single uniform colour, so we use a per-vertex colour attribute via SCNGeometrySource(semantic: .color, ...).

SceneKit's .line primitive takes pairs of indices rather than a line-strip array, so a polyline with N samples needs 2(N−1) indices in the form [0,1, 1,2, 2,3, ...]. The vertex colour is linearly interpolated along each segment, giving a smooth gradient end-to-end for free.

Line geometry with a per-vertex gradient vertexSource = SCNGeometrySource(vertices: scenePoints)
colorSource = SCNGeometrySource(data: colorData, semantic: .color, ...)
indices = [0,1, 1,2, 2,3, ... ] // pairs, not a strip
element = SCNGeometryElement(data: indexData, primitiveType: .line, ...)
material.lightingModel = .constant // so the vertex colours pass through
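The index expansion is the only fiddly part; a sketch (function name is illustrative):

```swift
import Foundation

// .line index expansion: N polyline samples become 2(N−1) indices
// in [0,1, 1,2, 2,3, ...] pair form, as SceneKit's .line primitive expects.
func linePairIndices(sampleCount n: Int) -> [Int32] {
    guard n > 1 else { return [] }
    return (0 ..< n - 1).flatMap { [Int32($0), Int32($0 + 1)] }
}
```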

Depth Ordering & Render Order

SceneKit draws opaque objects front-to-back (depth-tested) and then transparent objects back-to-front. Markers and halos override the default ordering so they stay visible on top of the planets.

Billboards via Radially Symmetric Spheres

Three.js has Sprite for always-face-the-camera billboards; SceneKit has SCNBillboardConstraint. We avoid both for vehicle markers because a radially symmetric SCNSphere with constant lighting is a billboard — it looks identical from every viewing angle. Fewer moving parts, no constraint overhead, and the scale-by-camera-distance trick (max(0.04, camDist × 0.012)) keeps them visible at any zoom.

Procedural 2D Icons (SwiftUI Canvas + Path)

SF Symbols covers most of the toolbar (gauge, tag, circle, globe, house, antenna) but has no rocket glyph. Bundling a custom asset would break the "pure Apple frameworks" rule we follow elsewhere, so the missions dropdown uses RocketIcon — a tiny SwiftUI view that draws the shape with a Canvas and three Paths (nose cone + fuselage, fins, exhaust flame). The whole thing is under 40 lines, accepts .foregroundColor like an SF Symbol, and scales with dynamic type.

Pattern: procedural icon with Canvas + Path Canvas { context, _ in
  var body = Path()
  body.move(to: CGPoint(x: cx, y: topY))
  body.addQuadCurve(to: ..., control: ...) // nose cone
  body.addLine(to: ...) // fuselage sides
  body.closeSubpath()
  context.fill(body, with: .color(.missionOrange))
}
.frame(width: size, height: size)

Reusable pattern for any small icon that doesn't have an SF Symbol equivalent. Cheaper than a bundled image (no asset-catalogue bookkeeping, no memory overhead for a raster), cleaner than emoji (tints properly, respects dynamic type), and more flexible than a filled Shape (multiple sub-paths at different opacities — e.g. the rocket's flame is drawn at 0.55 opacity for subtle transparency).

Texture Mapping

A planet is an SCNSphere; a texture is a flat rectangle. The bridge is a projection — a way to unwrap the sphere's surface into a 2D image so a texel on the image corresponds to a specific point on the surface.

Equirectangular Projection (all planets and moons)

Every NASA texture in Textures/ uses equirectangular (aka "geographic") projection: longitude maps linearly to U and latitude maps linearly to V.

Equirectangular UV → sphere point longitude = u × 2π // u ∈ [0, 1] spans 0°–360° east
latitude = (v − 0.5) × π // v ∈ [0, 1] spans −90°–+90°
sphere.x = cos(latitude) × cos(longitude)
sphere.y = sin(latitude)
sphere.z = cos(latitude) × sin(longitude)

SCNSphere applies this UV mapping by default, so all we do is load the image and hand it to material.diffuse.contents. The distortion is large near the poles (each latitude row of pixels covers the same map width, but a tiny actual circumference on the globe), which is why NASA texture maps tend to look stretched at the top and bottom — the sphere stretches them back out into the correct shape.
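The UV → unit-sphere mapping as a runnable sketch:

```swift
import Foundation

// Equirectangular UV → point on the unit sphere.
func spherePoint(u: Double, v: Double) -> (x: Double, y: Double, z: Double) {
    let lon = u * 2 * .pi         // u spans 0°–360° east
    let lat = (v - 0.5) * .pi     // v spans −90°–+90°
    return (x: cos(lat) * cos(lon),
            y: sin(lat),
            z: cos(lat) * sin(lon))
}
```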

Radial UVs for Saturn's Rings

The rings are a flat disc, not a cylinder or tube, so we can't use SCNTube (its caps map linearly, producing straight bands instead of circular ones). We build custom geometry: 72 radial segments around the ring axis × 4 steps from inner edge to outer edge. The U coordinate is the radial fraction (0 at Cassini division inside, 1 at outer edge), V is the azimuthal position. The texture is a thin strip (915×64 pixels) representing a single radius of the ring system, which wraps around the disc as U varies.

Ring disc UV generation // For each vertex at (radialIndex, azimuthalIndex):
u = radialIndex / radialSegments // 0 at inner edge, 1 at outer
v = azimuthalIndex / azimuthalSegments // goes round once per loop
angle = v × 2π
r = ringInnerKm + u × (ringOuterKm − ringInnerKm)
vertex = (r × cos(angle), 0, r × sin(angle))

The ring material combines a colour map in diffuse.contents (the bright cream bands, Cassini division, dust) with a transparency map in transparent.contents (the alpha GIF, dark where the ring is sparse). Rings are drawn isDoubleSided = true so they look right from above and below, and lightingModel = .constant because Cassini's photograph already baked in the correct lighting.
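One vertex of that disc, as a runnable sketch of the generation pseudocode (names illustrative; the real builder also emits normals and the index list):

```swift
import Foundation

// One ring-disc vertex from its (radial, azimuthal) UV pair.
func ringVertex(u: Double, v: Double,
                innerKm: Double, outerKm: Double) -> (x: Double, y: Double, z: Double) {
    let angle = v * 2 * .pi                     // once around per loop
    let r = innerKm + u * (outerKm - innerKm)   // inner edge → outer edge
    return (x: r * cos(angle), y: 0, z: r * sin(angle))
}
```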

Procedural Textures (Sun)

The Sun's surface texture is generated procedurally at app launch via UIGraphicsImageRenderer. A 1024×512 canvas gets 800 small "granulation" cells (bright Gaussian blobs), 40 larger "supergranulation" patches, and a radial limb-darkening gradient. This gives realistic convection-cell detail with zero bundled image assets. The corona textures (4 layers) are also procedural — just radial gradients from warm white to transparent.

Missions Pipeline

All 11 missions ported from the companion web app: Artemis II, Apollo 8/11/13, Cassini-Huygens, Voyager 1/2, Perseverance, New Horizons, Parker Solar Probe, BepiColombo. Waypoint data is extracted one-shot from the web JS into Missions.json (bundled resource). Phases 4–6 add the UI layer, ISS, and the lazy-follow mission camera. See ../solarsystem-web/MISSIONS.md for the shared specification.

graph TD J0["js/missions.js
(web app source of truth)"] --> J1["tools/export-missions.mjs
(Node vm sandbox, stubbed imports)"] J1 --> J2["SolarSystem/Resources/Missions.json
(11 missions, 58 events, 213 waypoints)"] J2 --> A["MissionData.all
(DTO decode → domain structs)"] A --> B["MissionManager.initialize"] B --> B1["Resolve anchors:
anchorMoon → Moon SMA position,
anchorBody → planet heliocentric"] B1 --> BT{"autoTrajectory
== transfer?"} BT -- "yes (Perseverance)" --> B1T["generateTransferArc
prograde ellipse,
12 samples × N segments"] BT -- "no" --> B2 B1T --> B2 B2["Moon-aligned → ecliptic rotation
(atan2(moonPos.y, moonPos.x))"] --> B3 B3["Centripetal CatmullRom
time-parameterised sampling"] --> B4["Trajectory line
(SCNGeometryElement .line +
per-vertex colour gradient)"] B3 --> B5["Vehicle marker
(emissive SCNSphere + halo)"] C["Simulated Date
(per-frame)"] --> D["MissionManager.update"] D --> D1{"Reference
frame?"} D1 -- "geocentric" --> D2["group.position = Earth scene pos"] D1 -- "heliocentric" --> D3["group.position = origin"] D --> E{"Vehicle
phase?"} E -- "moonOrbit" --> E1["cos/sin around
Moon scene position"] E -- "moonLanding" --> E2["snap to Moon
scene position"] E -- "moonOrbitReturn" --> E3["cos/sin around Moon
(post-landing)"] E -- "default" --> E4["CatmullRom.sampleAtTime
+ toLocalScene"] E1 --> F["marker.position"] E2 --> F E3 --> F E4 --> F F --> G["Scale by camera distance
max(0.04, camDist × 0.012)"] H["checkEventTrigger"] --> H1["lastTriggeredEvent cursor
+ rewind reset"] I["telemetry"] --> I1["MET, distance (km / AU),
speed (finite diff)"] C["activeMissionId selected"] --> FR{"Reference frame?"} FR -- "geocentric" --> FR1["missionBounds →
Sun-side azimuth,
elevation 17°, fit radius"] FR1 --> FR2["per-frame lerp(0.02)
target = Earth + localCentre"] FR -- "heliocentric" --> FR3["resetToOverview()"] FR2 --> FR4{"User drags?"} FR4 -- "yes" --> FR5["lazyFollowActive = false
(full manual control)"] style J0 fill:#1a2a3a,stroke:#4488ff style J1 fill:#1a2a3a,stroke:#4488ff style J2 fill:#1a1a3e,stroke:#88bbff style A fill:#1a1a3e,stroke:#ffaa33 style B fill:#1a2a2a,stroke:#44aaaa style BT fill:#2a2a1a,stroke:#aaaa44 style B1T fill:#2a1a2a,stroke:#aa44ff style D fill:#1a2a2a,stroke:#44aaaa style E fill:#2a1a2a,stroke:#aa44ff style F fill:#1a1a3e,stroke:#4488ff style H fill:#3a2a0a,stroke:#ffaa33 style I fill:#3a2a0a,stroke:#ffaa33 style C fill:#1a1a3e,stroke:#ffaa33 style FR fill:#2a2a1a,stroke:#aaaa44 style FR5 fill:#3a1a1a,stroke:#aa4444

Mission Trajectory Maths

Mission data is a handful of waypoints (typically 15–25 per vehicle), not a continuous curve. To render a smooth trajectory we interpolate between waypoints, and to anchor that curve to the right places in space we apply several transforms before sampling.

Centripetal Catmull-Rom Splines

Standard “uniform” Catmull-Rom splines overshoot badly whenever control points are irregularly spaced — common on mission trajectories with dense waypoints near launch (seconds apart) and sparse waypoints during coast (hours apart). The centripetal variant (alpha = 0.5) uses sqrt of chord length as the knot spacing, which eliminates overshoot and self-intersection in practice.

Centripetal knot spacing (alpha = 0.5) t[i+1] = t[i] + sqrt(distance(P[i], P[i+1]))

Barry-Goldman recursion on segment P1→P2 (with guides P0, P3) A1 = lerp(P0, P1; along [t0, t1])
A2 = lerp(P1, P2; along [t1, t2])
A3 = lerp(P2, P3; along [t2, t3])
B1 = lerp(A1, A2; along [t0, t2])
B2 = lerp(A2, A3; along [t1, t3])
C = lerp(B1, B2; along [t1, t2]) // = point on the curve

Three interpolations at the first level, two at the second, one at the third, all the way down to the final point. This is the same structure Three.js's CatmullRomCurve3 uses, ported one-to-one to Swift in CatmullRom.centripetal.
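A minimal 2D sketch of the Barry-Goldman evaluation with centripetal knots (alpha = 0.5); the app's CatmullRom.centripetal does the same on 3D points:

```swift
import Foundation

typealias P2 = (x: Double, y: Double)

func chord(_ a: P2, _ b: P2) -> Double {
    ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
}

// Evaluate the segment p1→p2 (guides p0, p3) at s ∈ [0, 1].
func catmullRom(p0: P2, p1: P2, p2: P2, p3: P2, s: Double) -> P2 {
    let t0 = 0.0
    let t1 = t0 + chord(p0, p1).squareRoot()   // sqrt of chord length
    let t2 = t1 + chord(p1, p2).squareRoot()
    let t3 = t2 + chord(p2, p3).squareRoot()
    let t = t1 + s * (t2 - t1)
    func lerp(_ a: P2, _ b: P2, _ ta: Double, _ tb: Double) -> P2 {
        let w = (t - ta) / (tb - ta)
        return (x: a.x + (b.x - a.x) * w, y: a.y + (b.y - a.y) * w)
    }
    let a1 = lerp(p0, p1, t0, t1), a2 = lerp(p1, p2, t1, t2), a3 = lerp(p2, p3, t2, t3)
    let b1 = lerp(a1, a2, t0, t2), b2 = lerp(a2, a3, t1, t3)
    return lerp(b1, b2, t1, t2)                // the point on the curve
}
```

At s = 0 the recursion collapses to p1 and at s = 1 to p2, so the curve interpolates every waypoint exactly.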

Time-Parameterised Sampling

Waypoints have timestamps (e.g. “TLI at T+2:83h”). If we sampled the curve uniformly along arc length, the dense launch waypoints would eat most of the line while the lunar coast compressed to a few pixels, and the marker wouldn't line up with the event banners in time. Instead, we sample uniformly along time and map each time back to the curve parameter:

Uniform-time sampling // For each sample time t:
  Find the bracketing waypoint index wi where times[wi] ≤ t < times[wi+1]
  frac = (t − times[wi]) / (times[wi+1] − times[wi])
  u = (wi + frac) / (N − 1) // curve parameter in [0, 1]
  sample(points, u) // centripetal Catmull-Rom at u
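The time → parameter mapping as a runnable sketch (function name is illustrative):

```swift
import Foundation

// Map a sample time to the curve parameter u ∈ [0, 1]
// via the bracketing waypoint pair.
func curveParameter(forTime t: Double, waypointTimes times: [Double]) -> Double {
    let n = times.count
    var wi = 0
    while wi < n - 2 && times[wi + 1] <= t { wi += 1 }   // times[wi] ≤ t < times[wi+1]
    let frac = (t - times[wi]) / (times[wi + 1] - times[wi])
    return (Double(wi) + min(max(frac, 0), 1)) / Double(n - 1)
}
```

With times [0, 1, 10], the first segment (one hour) and the second (nine hours) each map to half the parameter range, so the dense early waypoints no longer dominate the rendered line.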

Moon-Aligned Waypoint Frame (Geocentric Missions)

Lunar-mission waypoints are hand-authored in a frame where +X points toward the Moon at flyby time. This lets a trajectory designer reason about "the spacecraft passes 6,000 km behind the far side of the Moon" without caring about the absolute ecliptic longitude of the Moon on the mission's launch date.

At initialisation, we compute the Moon's actual ecliptic direction at flyby time and rotate all waypoints by the difference:

Moon-alignment rotation (once at init, shared by all vehicles) flybyDate = launchDate + flybyTimeHours
moonPos = moonPosition(moonElements, flybyDate) // AU ecliptic
angle = atan2(moonPos.y, moonPos.x)

Apply to every waypoint rotated.x = wp.x × cos(angle) − wp.y × sin(angle)
rotated.y = wp.x × sin(angle) + wp.y × cos(angle)
rotated.z = wp.z // out-of-plane component unchanged
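The rotation as a runnable sketch (a plain rotation about the ecliptic z axis; names illustrative):

```swift
import Foundation

// Rotate a Moon-aligned waypoint into the ecliptic frame by the
// flyby-time Moon angle.
func rotateToEcliptic(_ wp: (x: Double, y: Double, z: Double),
                      angle: Double) -> (x: Double, y: Double, z: Double) {
    (x: wp.x * cos(angle) - wp.y * sin(angle),
     y: wp.x * sin(angle) + wp.y * cos(angle),
     z: wp.z)   // out-of-plane component unchanged
}
```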

Anchor Resolution

Hand-authored waypoint coordinates drift: a waypoint labelled “Jupiter flyby” might sit at approximate (x=4.5, y=-2.3) AU but Jupiter's actual Keplerian position at that date could differ by 0.05 AU. Over tens of waypoints that's enough to make the trajectory visibly miss the planets it's meant to be flying past.

The anchor system solves this by snapping key waypoints to the real planet / Moon positions at their timestamps:

anchorBody (heliocentric missions) // At init, for each waypoint with an anchorBody:
planet = lookup(wp.anchorBody)
pos = heliocentricPosition(planet.elements, launchDate + wp.t)
wp.x, wp.y, wp.z ← pos // trajectory now passes through the real planet

anchorMoon (geocentric missions) mp = moonPosition(moonElements, launchDate + wp.t) // AU
scale = moonSemiMajorKm / (length(mp) × kmPerAU)
wp ← mp × kmPerAU × scale // direction of actual Moon, distance of rendered Moon

Why use the semi-major axis for anchorMoon distance instead of the Moon's actual distance? The Moon's real distance oscillates by ±21,000 km due to eccentricity. The rendered Moon mesh sits at a fixed distance (its semi-major axis) because its scene radius comes from the moon-distance compression formula. If we snapped waypoints to the actual distance, the trajectory line would miss the rendered Moon mesh by up to 21,000 km of compressed scene distance.

Auto-Generated Transfer Arcs (Perseverance)

Perseverance's data is just two anchor points — Earth at launch, Mars at arrival. A straight line between them would be visibly wrong (spacecraft don't fly in straight lines across the solar system). The transfer-arc generator expands those anchors into a Hohmann-style elliptical arc:

Prograde transfer between two anchor points r0, r1 = length(wp0), length(wp1) // heliocentric distances in AU
a0, a1 = atan2(wp0.y, wp0.x), atan2(wp1.y, wp1.x)
sweep = a1 − a0 // ensure positive (prograde / CCW)

// For each intermediate fraction frac ∈ [0, 1]:
angle = a0 + sweep × frac
bulge = (r1 > r0) ? +0.05 : −0.03 // outward bulge for outbound transfer
r = r0 + (r1 − r0) × frac + bulge × (r0 + r1) × sin(π × frac)
sample = (r × cos(angle), r × sin(angle), lerp(wp0.z, wp1.z, frac))

The sin(π × frac) term peaks at frac=0.5 and is zero at the endpoints, which is exactly the shape of a Hohmann transfer ellipse: widest at the midpoint, pinched at launch and arrival. 12 intermediate samples per segment produces a visibly smooth arc when handed to the CatmullRom resampler.
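The generator's inner step as a runnable sketch (heliocentric AU coordinates; the function and parameter names are illustrative):

```swift
import Foundation

// One transfer-arc sample between two heliocentric anchor waypoints.
func transferSample(wp0: (x: Double, y: Double, z: Double),
                    wp1: (x: Double, y: Double, z: Double),
                    frac: Double) -> (x: Double, y: Double, z: Double) {
    let r0 = (wp0.x * wp0.x + wp0.y * wp0.y).squareRoot()
    let r1 = (wp1.x * wp1.x + wp1.y * wp1.y).squareRoot()
    let a0 = atan2(wp0.y, wp0.x)
    var sweep = atan2(wp1.y, wp1.x) - a0
    if sweep < 0 { sweep += 2 * .pi }            // keep prograde (CCW)
    let angle = a0 + sweep * frac
    let bulge = r1 > r0 ? 0.05 : -0.03           // outward for outbound legs
    let r = r0 + (r1 - r0) * frac + bulge * (r0 + r1) * sin(.pi * frac)
    return (x: r * cos(angle), y: r * sin(angle),
            z: wp0.z + (wp1.z - wp0.z) * frac)   // linear out-of-plane blend
}
```

Because sin(π × frac) vanishes at frac = 0 and frac = 1, the arc still passes exactly through both anchors.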

Runtime Moon Phases

Some mission phases can't be expressed as waypoints because they need to stay glued to the Moon as the Moon itself moves. Columbia orbiting the Moon for 65 hours, Eagle's lunar descent and ascent — these get their marker position computed every frame relative to the Moon's current scene position.

Runtime circular orbit around the Moon (moonOrbit phase) phase = (t − orbitStart) / periodHours × 2π
moonDir = normalize(moonPosition(...))
moonScenePos = toLocalScene(moonDir × semiMajorAxisKm)
tangent = normalize(moonDir × worldUp) // perpendicular to Earth-Moon line
normal = normalize(tangent × moonDir)
marker = moonScenePos + tangent × cos(phase) × r + normal × sin(phase) × r

For moonLanding, the marker just snaps to moonScenePos for the whole window. At trajectory scale, the real 45 km descent is invisible after distance compression, so the visually correct choice is to stick the vehicle to the Moon's surface rather than try to model it.
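A self-contained sketch of the per-frame computation above (the argument list stands in for the app's own state, and cross/normalize are inlined because the simd module's helpers are Apple-only):

```swift
import Foundation

func cross(_ a: SIMD3<Double>, _ b: SIMD3<Double>) -> SIMD3<Double> {
    SIMD3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x)
}
func normalize(_ v: SIMD3<Double>) -> SIMD3<Double> { v / (v * v).sum().squareRoot() }

// Marker on a circular orbit of radius r around the Moon's current scene position.
func moonOrbitMarker(moonScenePos: SIMD3<Double>, moonDir: SIMD3<Double>,
                     worldUp: SIMD3<Double> = SIMD3(0, 1, 0),
                     hoursSinceOrbitStart t: Double, periodHours: Double,
                     r: Double) -> SIMD3<Double> {
    let phase = t / periodHours * 2 * .pi
    let tangent = normalize(cross(moonDir, worldUp))  // ⊥ Earth–Moon line
    let normal = normalize(cross(tangent, moonDir))
    return moonScenePos + tangent * (cos(phase) * r) + normal * (sin(phase) * r)
}
```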

Lazy-Follow Mission Camera (Lunar Missions)

When a lunar mission is selected, the camera needs to frame the trajectory tightly so the user sees the arc clearly, and it needs to keep framing it tightly as Earth drifts along its heliocentric orbit during the replay. Heliocentric missions don't need this — the overview camera already works — so the lazy-follow logic is geocentric-only.

Initial Frame: Sun-Side Two-Thirds Illumination

Camera azimuth placement (on selection)
earthPos = Earth's current scene position
sunsideAz = atan2(−earthPos.x, −earthPos.z) // negated → camera between Sun and Earth
cameraAzimuth = sunsideAz + 0.55 // ~31° offset for terminator on far side
cameraElevation = 0.3 // ~17° above the ecliptic plane

Why the sign flip in atan2? The camera's spherical coordinates describe its offset from the target (Earth). The Sun is at the scene origin, so the direction "from Earth toward the Sun" is −earthPos / |earthPos|. Using atan2(−x, −z) places the camera in that direction — on the Sun side of Earth. Using atan2(x, z) (without the negation) would place the camera on the anti-Sun side and the target would appear unlit.

The 0.55 rad (~31°) offset rotates the camera slightly so the terminator is on the far side of Earth (viewed from the camera), giving us a two-thirds-lit crescent rather than a head-on disc. The 17° elevation gives enough perspective that the trajectory's out-of-plane component reads properly. The same azimuth formula is also used for planet preset focus, so selecting Jupiter or Saturn from the planet picker gives a cinematic Sun-lit view of the planet and its moon system.
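Under the spherical convention in the header diagram (x = d·cosφ·sinθ, z = d·cosφ·cosθ), the placement might be sketched as (function and parameter names are assumptions):

```swift
import Foundation

// Azimuth/elevation for the initial lunar-mission frame. The negated atan2
// arguments put the camera between the Sun (scene origin) and Earth.
func missionCameraAngles(earthPos: (x: Double, z: Double))
    -> (azimuth: Double, elevation: Double) {
    let sunsideAz = atan2(-earthPos.x, -earthPos.z)
    return (sunsideAz + 0.55,  // ~31° offset: terminator on the far side
            0.3)               // ~17° above the ecliptic plane
}
```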

Fit-to-Viewport Distance

Camera distance sizing
bounds = trajectory local AABB around Earth
radius = length(bounds.halfSize)
fovFactor = 1.4 / tan(π/6) // SceneKit default FOV = 60° (half-angle π/6); 1.4× padding for portrait viewport
distance = max(radius × fovFactor, 1.5)

The 1.4× padding is a phone-portrait sweet spot — a narrow viewport needs more leeway than a wide one so the trajectory isn't clipped at the left/right edges. The minimum clamp of 1.5 scene units prevents the camera being so close to a small trajectory that the Earth mesh intersects the near clip plane.
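A minimal sketch of the sizing, assuming the AABB half-extent is already in scene units (the function name is an assumption):

```swift
import Foundation

// Distance that frames a trajectory of the given bounding half-extent,
// with portrait padding and a near-plane safety clamp.
func missionCameraDistance(boundsHalfSize: (x: Double, y: Double, z: Double)) -> Double {
    let radius = sqrt(boundsHalfSize.x * boundsHalfSize.x
                    + boundsHalfSize.y * boundsHalfSize.y
                    + boundsHalfSize.z * boundsHalfSize.z)
    let fovFactor = 1.4 / tan(.pi / 6)  // 60° default FOV → 30° half-angle; 1.4× padding
    return max(radius * fovFactor, 1.5) // clamp keeps Earth off the near clip plane
}
```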

Per-Frame Lerp Follow

Earth moves through its heliocentric orbit at 1/365th of a revolution per day of simulated time. At a 10,000× replay speed, that's 30° of ecliptic motion per mission — enough to shift the trajectory off-centre if we didn't track. Each frame:

Lazy-follow camera lerp
wanted = earthScenePos + trajectoryLocalCentre
current = cameraCoordinator.currentTarget
target = current + (wanted − current) × 0.02 // 2% of the gap closed per frame

The 0.02 lerp factor is a trade-off: larger values (0.1+) cause visible stutter as the camera jumps; smaller values (0.005) feel sluggish because the trajectory drifts off-centre before the camera catches up. 0.02 closes most of the gap within half a second of wall time regardless of replay speed.
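Per axis the lerp is a one-liner; a sketch with the constant above (the name is an assumption). Because each frame closes 2% of whatever gap remains, the gap decays exponentially by a factor of 0.98 per frame rather than shrinking linearly:

```swift
// One lazy-follow step along a single axis; applied per component in practice.
func stepLazyFollow(current: Double, wanted: Double, factor: Double = 0.02) -> Double {
    current + (wanted - current) * factor
}
```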

User-Break Gesture Hook

The moment the user drags or pinches, we stop following — nothing worse than fighting an auto-camera. The scene coordinator's pan / orbit / pinch handlers fire a userInteractionHandler callback on gesture .began (not on every .changed delta, which would spam). The view model flips lazyFollowActive = false, and stepLazyFollowCamera becomes a no-op for the rest of the session.

Event Label Visibility Window

Mission events (TLI, Lunar Flyby, LOI, Splashdown, …) get 3D labels projected onto the SwiftUI overlay. We don't want them cluttering the screen for the full replay — each label should appear briefly around when its event fires and then fade.

Event label time window
window = clamp(durationHours × 0.03, 1, 500) // 3% of mission, clamped 1–500 h
visible = |elapsed − event.t| < window

The 3% rule means a 195-hour Apollo 11 mission has a ~6-hour visibility window per label. At 10,000× replay, that's about 2 seconds of real time: long enough to read, short enough that labels don't pile up. The 500-hour upper clamp prevents Voyager-length missions (28,000 h, 105,000 h) from giving each label weeks of visibility and effectively always showing everything.
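The rule as a pure function (names are assumptions; all times in hours):

```swift
// True while `elapsed` is within the event's visibility window:
// 3% of mission duration, clamped to 1–500 hours.
func eventLabelVisible(elapsed: Double, eventTime: Double,
                       missionDurationHours: Double) -> Bool {
    let window = min(max(missionDurationHours * 0.03, 1), 500)
    return abs(elapsed - eventTime) < window
}
```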

Orbital Mechanics Pipeline

From Clock to Coordinates

Every frame, the app converts wall-clock time into planet positions through classical orbital mechanics:

1. Julian Date — Convert calendar date to Julian Date (Meeus algorithm), then to centuries since the J2000.0 epoch (2000-01-01T12:00:00 TT).
2. Current Elements — Each planet's six orbital elements computed from base values + linear rates × T centuries. Data from JPL (Standish, 1992).
3. Mean Anomaly — M = L − ω̄ (mean longitude minus longitude of perihelion). Normalised to [0°, 360°).
4. Kepler's Equation — Solve E − e·sin(E) = M via Newton-Raphson. Converges in 3–6 iterations for planetary eccentricities.
5. True Anomaly — Convert eccentric anomaly E to true anomaly ν using the two-argument atan2 formula.
6. Heliocentric Coordinates — Compute (x, y, z) in the ecliptic plane from r, ν, and the orientation angles I, ω̄, Ω.

Orbital Elements

Each planet's orbit is an ellipse defined by six Keplerian elements:

(Diagram: the orbital ellipse with the Sun at one focus, showing perihelion, aphelion, radius r at true anomaly ν, ascending node Ω, and inclination I relative to the ecliptic.)

e = eccentricity (shape) · a = semi-major axis (size) · I = inclination (tilt) · Ω = longitude of ascending node · ω̄ = longitude of perihelion

Kepler's equation E − e·sin(E) = M is solved for the eccentric anomaly E.

Kepler's Equation Solver

Newton-Raphson iteration
E₀ = M + e · sin(M) // initial guess
Eₙ₊₁ = Eₙ − (Eₙ − e · sin(Eₙ) − M) / (1 − e · cos(Eₙ)) // iterate until |ΔE| < 10⁻⁸
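A runnable sketch of steps 3–5: solve Kepler's equation, then convert to true anomaly (the tolerance, iteration cap, and function names are assumptions):

```swift
import Foundation

// Solve E − e·sin(E) = M for the eccentric anomaly E via Newton-Raphson.
func solveKepler(meanAnomaly M: Double, eccentricity e: Double) -> Double {
    var E = M + e * sin(M)                      // initial guess
    for _ in 0..<20 {                           // typically 3–6 iterations suffice
        let dE = (E - e * sin(E) - M) / (1 - e * cos(E))
        E -= dE
        if abs(dE) < 1e-8 { break }
    }
    return E
}

// True anomaly ν from eccentric anomaly E (two-argument atan2 form).
func trueAnomaly(E: Double, e: Double) -> Double {
    atan2(sqrt(1 - e * e) * sin(E), cos(E) - e)
}
```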

Scale Transformations

Distance Scaling (Logarithmic)

sceneDistance = log(1 + AU / 0.5) × 15

Mercury ≈ 7.5, Earth ≈ 16.5, Jupiter ≈ 36.5, Neptune ≈ 63 scene units

(Chart: real distance in AU against scene units. The logarithmic curve keeps Mercury through Neptune within roughly 7.5–63 scene units; a linear mapping is unusable at this range.)
Radius Scaling (Square Root)

sceneRadius = √(km) × 0.00125

Jupiter ≈ 0.33 (3.3× Earth), Earth ≈ 0.10, Mercury ≈ 0.06. Min 0.03 planets, 0.012 moons.

Moon Distance Compression

moonSceneDist = parentSceneRadius × realRatio^0.6 × 1.5

Earth's Moon (real 60.3× parent radius) → pow(0.6) → 11.7 → ×1.5 → 17.6× parent radius. The exponent trades physical accuracy against visibility: 1.0 would place the Moon off-screen at every sensible zoom; 0.4 (the original) placed it at 8.8× but made Galilean moons bunch too tightly around Jupiter. 0.6 is the sweet spot where Earth-Moon reads as "far" and Jupiter's moons stay readable.

The same formula is used for mission trajectory compression so Apollo trajectories sit at the same scale as the Moon they pass — see the Mission Trajectory Maths section.
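The three transforms collected into one sketch, using the constants quoted above (the 0.03 minimum here is the planet clamp; moons use 0.012):

```swift
import Foundation

// Log-compressed heliocentric distance: Earth ≈ 16.5, Neptune ≈ 63 scene units.
func sceneDistance(au: Double) -> Double { log(1 + au / 0.5) * 15 }

// Sqrt-compressed body radius, clamped at the planet minimum of 0.03.
func sceneRadius(km: Double) -> Double { max(sqrt(km) * 0.00125, 0.03) }

// Moon orbital distance compressed relative to the parent's scene radius.
func moonSceneDistance(parentSceneRadius: Double, realRatio: Double) -> Double {
    parentSceneRadius * pow(realRatio, 0.6) * 1.5
}
```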

Planet Size Comparison

Real radii vs scene radii (√ scaling). The bars show how sqrt compresses the 175:1 real range into a visually useful 5.5:1 range:

Scene radii (sqrt scaling): Jupiter 0.331 · Saturn 0.302 · Uranus 0.199 · Neptune 0.196 · Earth 0.100 · Venus 0.097 · Mars 0.073 · Mercury 0.062 · Moon 0.027

SwiftUI View Hierarchy

graph TD A["SolarSystemApp"] --> B["ContentView"] B --> C["SolarSystemSceneView
(UIViewRepresentable)"] B --> D["Labels Overlay
(planets, moons, stars)"] B --> E["InfoPanelView
(date, body info)"] B --> F["Controls Bar
(speed, toggles, picker,
missions menu)"] B --> Z["Zoom Slider
(log-mapped)"] B --> MUI["Mission UI overlay
(telemetry panel, timeline
slider, event banner)"] C --> G["SCNView + Coordinator"] G --> H["Custom Gestures
(pan, orbit, pinch, tap)"] I["SolarSystemViewModel"] --> C I --> D I --> E I --> Z J["OrbitalMechanics"] --> I K["SceneBuilder"] --> I L2["UserDefaults"] --> I style A fill:#1a1a3e,stroke:#ffaa33 style I fill:#1a2a1a,stroke:#44aa44 style D fill:#2a1a2a,stroke:#aa44ff style G fill:#1a1a3e,stroke:#4488ff

Data Model

graph TD A["CelestialBody"] --> B["name, id, type"] A --> C["orbitalElements?
(planets)"] A --> D["moonElements?
(moons)"] A --> E["physical: PhysicalProperties"] A --> F2["rotation: RotationProperties"] A --> F["moons: [CelestialBody]"] A --> G["position: SIMD3<Double>"] C --> H["OrbitalElements
a, e, I, L, ω̄, Ω
+ rates per century"] D --> I["MoonOrbitalElements
semiMajorAxis, period,
eccentricity, inclination"] E --> J["radiusKm, color,
emissive, hasRings"] F2 --> J2["periodHours, obliquity,
w0, tidallyLocked"] K["ScreenLabel"] --> L["id, name, screenPoint,
isMoon, isStar, priority"] M["NamedStar"] --> N["name, position,
magnitude"] style A fill:#1a1a3e,stroke:#ffaa33 style H fill:#1a1a3e,stroke:#4488ff style K fill:#2a1a2a,stroke:#aa44ff style M fill:#1a1a3e,stroke:#88bbff style F2 fill:#1a2a1a,stroke:#44aa44

Real Star Catalogue (HYG v38)

8,920 stars from the Hipparcos/Yale/Gliese amalgamation, filtered to naked-eye visibility (magnitude ≤ 6.5). Each star has right ascension, declination, visual magnitude, and B-V colour index.

Celestial Sphere Mapping

RA/Dec to SceneKit coordinates (sphere radius = 500)
ra = raHours × (π / 12) // hours to radians
dec = decDeg × (π / 180) // degrees to radians
x = 500 × cos(dec) × cos(ra)
y = 500 × sin(dec)
z = 500 × cos(dec) × sin(ra)
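The mapping above expressed directly as a function (the name is an assumption):

```swift
import Foundation

// Place a star on the celestial sphere from catalogue RA (hours) and Dec (degrees).
func starPosition(raHours: Double, decDegrees: Double,
                  radius: Double = 500) -> (x: Double, y: Double, z: Double) {
    let ra = raHours * .pi / 12       // hours → radians
    let dec = decDegrees * .pi / 180  // degrees → radians
    return (radius * cos(dec) * cos(ra),
            radius * sin(dec),
            radius * cos(dec) * sin(ra))
}
```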

B-V Colour Mapping

B-V colour index mapped to star colour: −0.4 → O/B hot blue · 0.0 → A white · 0.4 → F/G yellow · 1.0 → K orange · 2.0 → M red

Brightness Tiers

4 point-size tiers by magnitude

mag < 1.5: 3–8 px (~20 stars: Sirius, Vega, Arcturus)
mag 1.5–3.5: 2–5 px (~200 stars)
mag 3.5–5.0: 1.5–3 px (~1,500 stars)
mag 5.0–6.5: 0.8–2 px (~7,000 stars, Milky Way structure)
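One way to realise the tiers (the within-tier linear interpolation and the −2.0 lower bound for the brightest tier are assumptions; the text only specifies each tier's size range):

```swift
// Point size in pixels for a star of the given visual magnitude.
// Brighter (lower magnitude) → larger; linear within each tier.
func starPointSize(magnitude m: Double) -> Double {
    let tiers: [(ClosedRange<Double>, (Double, Double))] = [
        (-2.0...1.5, (8.0, 3.0)),   // Sirius, Vega, Arcturus…
        (1.5...3.5, (5.0, 2.0)),
        (3.5...5.0, (3.0, 1.5)),
        (5.0...6.5, (2.0, 0.8))     // catalogue cutoff is mag 6.5
    ]
    for (mags, sizes) in tiers where mags.contains(m) {
        let t = (m - mags.lowerBound) / (mags.upperBound - mags.lowerBound)
        return sizes.0 + (sizes.1 - sizes.0) * t
    }
    return 0.8
}
```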

IAU Rotation Model

Every body rotates at its real sidereal rate, with correct axial tilt (obliquity) and prime meridian orientation at J2000.0:

Rotation angle at time T
spin = W0 + (daysSinceJ2000 / periodDays) × 2π // sign flips for retrograde bodies (Venus, Uranus, Pluto)

Applied as quaternion composition (not Euler!)
tiltQuat = simd_quatf(angle: obliquity, axis: (1,0,0)) // tilt around world X
spinQuat = simd_quatf(angle: spin, axis: (0,1,0)) // spin around local Y (tilted pole)
node.simdOrientation = tiltQuat * spinQuat // spin first, then tilt in world space

Why quaternions instead of Euler angles? SceneKit composes Euler angles in Y-X-Z order, which means setting eulerAngles = (tilt, spin, 0) applies yaw-then-pitch — so the tilt axis itself rotates with each spin cycle. The rings appear to oscillate once per planetary day. Quaternion composition tiltQuat * spinQuat applies spin in the body's pre-tilt frame (local Y), then tilts in world space — the tilt axis stays fixed while the body spins around the tilted pole, just like the real thing.

Saturn's rings are a child node; we cancel the parent's spin in the ring's local frame with ringNode.simdOrientation = simd_quatf(angle: -spin, axis: (0,1,0)) so the rings inherit only the tilt and stay fixed in the equatorial plane while the cloud bands rotate underneath.
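The fixed-pole property is easy to verify without SceneKit. This sketch substitutes a minimal quaternion for simd_quatf (Hamilton product; rotation as v' = q·v·q⁻¹) and checks that the pole direction after tiltQuat * spinQuat is independent of the spin angle:

```swift
import Foundation

struct Quat {
    var w, x, y, z: Double
    init(w: Double, x: Double, y: Double, z: Double) {
        self.w = w; self.x = x; self.y = y; self.z = z
    }
    init(angle: Double, axis: (Double, Double, Double)) {
        let h = angle / 2, s = sin(h)
        w = cos(h); x = axis.0 * s; y = axis.1 * s; z = axis.2 * s
    }
    static func * (a: Quat, b: Quat) -> Quat {  // Hamilton product
        Quat(w: a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             x: a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             y: a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             z: a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w)
    }
    // Rotate a vector: v' = v + 2w(u × v) + 2u × (u × v), u = (x, y, z).
    func rotate(_ v: (Double, Double, Double)) -> (Double, Double, Double) {
        let u = (x, y, z)
        let uv = (u.1*v.2 - u.2*v.1, u.2*v.0 - u.0*v.2, u.0*v.1 - u.1*v.0)
        let uuv = (u.1*uv.2 - u.2*uv.1, u.2*uv.0 - u.0*uv.2, u.0*uv.1 - u.1*uv.0)
        return (v.0 + 2 * (w * uv.0 + uuv.0),
                v.1 + 2 * (w * uv.1 + uuv.1),
                v.2 + 2 * (w * uv.2 + uuv.2))
    }
}

let obliquity = 23.44 * Double.pi / 180
let tilt = Quat(angle: obliquity, axis: (1, 0, 0))

// Two different spin angles, same composition order as the app:
let pole1 = (tilt * Quat(angle: 0.7, axis: (0, 1, 0))).rotate((0, 1, 0))
let pole2 = (tilt * Quat(angle: 4.2, axis: (0, 1, 0))).rotate((0, 1, 0))
// pole1 == pole2: the tilted pole does not precess with the spin.
```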

(Diagram: axial tilts to scale. Mercury and Jupiter near zero, Earth 23°, Mars 25°, Saturn 27°, Neptune 28°, Uranus 98°.)

Tidally locked moons (rotation period = orbital period; same face always toward parent): Earth's Moon, all 4 Galilean moons, all 7 Saturn moons, Phobos, Deimos.
Saturn's Rings

Rings share Saturn's axial tilt but do not spin with the planet. Each frame, the parent's spin is cancelled in the ring's local frame: ringNode.simdOrientation = simd_quatf(angle: −spin, axis: (0,1,0)).