Build Tutorial — Porting an iOS app to the browser in a single conversation
Three.js offers direct equivalents for the SceneKit pieces: MeshStandardMaterial for PBR, SphereGeometry for planets, Quaternion for rotations, Raycaster for hit testing. The orbital mechanics is pure math — portable as-is. The only dependency needed: Three.js itself, loaded from CDN. No npm, no bundler, no build step.

The result is index.html plus 6 ES module files, zero build tooling. Three.js is loaded via an import map from CDN. All 17 texture files were copied from the iOS project. The scene complexity (~30 meshes, 9K star points) is trivial for WebGL.

Input is the biggest gap: iOS provides high-level UIGestureRecognizer classes, while the web has raw mouse/touch events that need manual state tracking.

Nothing works when the page is opened from file:// URLs. ES module import statements fail, fetch() for the star catalogue CSV is blocked, and TextureLoader can't read local JPEG files. The solution is trivial: python3 -m http.server 8080 starts a local server in one line.

Run python3 -m http.server 8080, then open http://localhost:8080. No installation, no configuration; it works on any machine with Python 3.

The underlying reason: the file:// protocol has an opaque origin, so the browser treats every file as a different origin. A local HTTP server makes everything same-origin. Any static server works — Python, Node's npx serve, VS Code's Live Server extension, etc.

Lighting needed retuning to match SceneKit. Bringing the PointLight intensity down to 50 and setting toneMappingExposure to 1.2 with ACES filmic tone mapping produces the natural, film-like look that matches SceneKit's rendering.

Three.js's physically based lighting (renderer.useLegacyLights defaults off since r155) means light intensity is measured in candela, not arbitrary units. Combined with ACES tone mapping, the rendered image maps HDR lighting values to the display's SDR range in a way that mimics film exposure curves — highlights compress gradually instead of clipping.

For the Sun's glow, THREE.Sprite is a better fit than a mesh — sprites always face the camera (billboard behaviour), which is exactly what glow effects need.
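The layering can be sketched as plain data before it is wired into THREE.Sprite and SpriteMaterial. This is a minimal sketch: the helper name buildCoronaLayers and the opacity falloff are my assumptions; only the scale multiples, additive blending, and depthWrite flag come from the build notes.

```javascript
// Sketch: corona glow layers as data, to be fed into THREE.Sprite +
// SpriteMaterial with a radial-gradient CanvasTexture per layer.
// buildCoronaLayers and the opacity falloff are illustrative assumptions.
function buildCoronaLayers(sunRadius) {
  const scales = [1.3, 1.8, 2.8, 4.0]; // multiples of the Sun's radius
  return scales.map((s, i) => ({
    scale: s * sunRadius,
    opacity: 0.6 / (i + 1),       // assumed: outer layers are fainter
    blending: "AdditiveBlending", // overlapping layers sum into a warm glow
    depthWrite: false,            // transparent quads must not occlude geometry
  }));
}

const layers = buildCoronaLayers(2); // outermost layer scale: 8
```

Because the sprites are children of the Sun mesh, only the relative scales matter; the parent transform handles position.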
Four sprites at 1.3x, 1.8x, 2.8x, and 4.0x the Sun's radius, each with a radial-gradient CanvasTexture and AdditiveBlending, produce the same warm multi-layer corona. Setting depthWrite: false on the sprite materials prevents the transparent parts from writing to the depth buffer, which would create invisible rectangular occluders. The sprites are child objects of the Sun mesh, so they move and scale with it automatically.

Three.js ships OrbitControls, but it doesn't match the iOS app's gesture model: one-finger pan for translation, two-finger for orbit. Built a custom CameraController from scratch that manages spherical coordinates (target, distance, azimuth, elevation) and handles five input modes: left-drag pan, right-drag orbit, scroll zoom, click selection (via Raycaster), and double-click reset. Touch support maps the same gestures: one-finger pan, two-finger orbit+pinch. On iOS these came for free from the UIGestureRecognizer system, which handles them automatically.

One bug: focusCamera() set the camera distance programmatically but didn't update the zoom slider. The slider only synced during the animation loop every 3rd frame — and if the camera moved while paused, it never synced at all. Added syncZoomSlider() calls immediately after both focusCamera() and resetCamera().

Zoom has a single source of truth (cameraController.distance). Every entry point that changes the distance must also update the UI. The iOS app solved this identically — ensuring all zoom controls clamp to the same 0.5–250 range and calling syncZoomFromCamera() after programmatic changes.

The info panel is an overlay with backdrop-filter: blur(10px) at z-index 50 (above all other UI). The panel scrolls independently (max-height: 80vh; overflow-y: auto) for smaller screens. Two event listeners handle dismissal: the X button and clicking the backdrop (checking e.target === overlay); the Escape key would be a natural future addition.

Planet thumbnails in the toolbar use a background-image clipped to a circle. Saturn gets a decorative ring overlay — an elliptical CSS border with a rotateX(65deg) 3D transform. The Sun uses a radial gradient.
An overview button with a star glyph sits at the far left. The planet buttons use flex: 1 within the toolbar to distribute evenly across the available width. The planet strip takes whatever space remains after the playback controls. Saturn's ring is a pure CSS effect — a border-radius: 50% div with transform: translate(-50%, -50%) rotateX(65deg), positioned absolutely over the planet circle.

Touch input on iOS Safari exposed three problems. First, touch-action: none on the full-screen canvas container was causing iOS's gesture system to capture all touches before they reached the overlaid UI — it needed to be scoped to the <canvas> element only. Second, -webkit-user-select: none on <body> was suppressing click event synthesis from touch events — again, scoped down to the canvas container. Third, click events weren't always being synthesized from touches in the layered stacking context, so a dual-binding approach was needed.

The fix for the third problem is an onTap() helper that binds both touchend and click: it fires on touchend with preventDefault() (no 300ms delay) and falls back to click for mouse input, while a touchFired flag prevents the click from re-firing when both events are synthesized. This is now used for every interactive element in the app — 20+ bindings in total — so everything fires immediately on touch. Desktop mouse behaviour is completely unaffected.

A @media (max-width: 700px) breakpoint switches the toolbar from a single row (controls + planet strip inline) to a two-row layout (planet strip on top, controls below) using flex-direction: column. Focusing a planet also needed aspect-ratio-aware zoom: portrait phones have a narrow width, so the same camera distance that works on a wide desktop leaves planets tiny. Added a portraitFactor that scales the focus multiplier by 0.5 + 0.5 × min(aspect, 1.0).

The correction lives in focusCamera(), not in CSS. The camera's field of view is fixed at 60°, so the constraining dimension on portrait screens is width, not height.
Without the correction, a planet at the "right" distance for a 16:9 desktop would appear at roughly half its intended visual size on a 9:16 phone — the multiplier drops to ~78% on a typical phone, bringing the camera closer to compensate.

The focus distance is cameraDistance = max(extent × multiplier, 0.5), where extent is the maximum of the planet radius and the outermost moon's orbit distance plus its own radius. The 0.5 floor prevents the camera from going inside the body.

The result is a hybrid rendering approach: WebGL for the 3D scene, HTML for labels (the iOS app used a projectPoint() overlay in SwiftUI), CSS textures in the toolbar (planet strip thumbnails using background-image).
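Putting the aspect correction and the distance floor together, the focus math can be sketched in plain numbers. A minimal sketch: the function and parameter names are my assumptions; the portraitFactor formula and the max(extent × multiplier, 0.5) floor are the ones described above.

```javascript
// Sketch of the focus-distance math. Names are illustrative; the formulas
// (portraitFactor, the 0.5 floor) come from the build notes.
function focusDistance(planetRadius, moonOrbit, moonRadius, multiplier, aspect) {
  // Aspect correction: on portrait screens (aspect < 1) pull the camera closer.
  const portraitFactor = 0.5 + 0.5 * Math.min(aspect, 1.0);
  // Extent: whichever is bigger, the planet itself or its outermost moon's
  // orbit distance plus that moon's own radius.
  const extent = Math.max(planetRadius, moonOrbit + moonRadius);
  // The 0.5 floor keeps the camera outside the body.
  return Math.max(extent * multiplier * portraitFactor, 0.5);
}

// 16:9 desktop vs 9:16 phone: the phone factor lands at ~78% of desktop.
const desktop = focusDistance(1.0, 3.0, 0.2, 2.0, 16 / 9); // factor = 1.0
const phone   = focusDistance(1.0, 3.0, 0.2, 2.0, 9 / 16); // factor = 0.78125
```

For a 9:16 screen, min(aspect, 1) = 0.5625, so the factor is 0.5 + 0.28125 = 0.78125, matching the ~78% figure in the text.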
The mission system centres on a MissionManager class with a Moon-aligned waypoint frame, automatic rotation to match the Moon's actual orbital position, and CatmullRom smoothing for visual quality.

Scale was the first problem. The pow(ratio, 0.4) compression formula that makes the solar system visible squashes the 10,600 km between the Moon and the flyby point to 0.01 scene units — invisible. Flyby waypoints had to be slightly exaggerated so the trajectory visibly loops around the Moon. CatmullRom curves created kinks when waypoint coordinates reversed direction; keeping values monotonic within each leg solved this.

Two further issues emerged through iterative tuning: the Z-component amplification problem, where constant out-of-plane values made the trajectory appear to pass over the Moon's pole instead of behind the far side (fixed by reducing Z to near-zero at closest approach), and the flyby speed symmetry problem, where hand-crafted post-flyby waypoints initially showed 2.5x the approach speed. In a free-return trajectory, approach and departure speeds should be roughly symmetric — spacing the post-flyby waypoints wider in time and closer in distance corrected this.

The multi-vehicle architecture was designed with Apollo 11 in mind — CSM and LM separating at the Moon will use the same vehicles array with diverging waypoints.

Later refinements made missions look beautiful by default: the camera azimuth is set ~31° off the Sun direction (0.55 radians) so Earth and Moon appear two-thirds illuminated — a dramatic cinematic framing that required adding optional azimuth/elevation parameters to setCamera(). Event labels were changed from always-visible to timed (~3% of mission duration around each event), keeping the view clean during playback.
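The speed-symmetry check that exposed the 2.5x spike is simple per-leg arithmetic over timestamped waypoints. A minimal sketch: legSpeeds and the sample waypoints are illustrative; only the symmetry principle comes from the text.

```javascript
// Sketch: per-leg speed from timestamped waypoints, used to sanity-check that
// approach and departure speeds around a flyby are roughly symmetric.
// legSpeeds and the sample data are illustrative, not the app's real values.
function legSpeeds(waypoints) {
  const speeds = [];
  for (let i = 1; i < waypoints.length; i++) {
    const a = waypoints[i - 1], b = waypoints[i];
    const d = Math.hypot(b.x - a.x, b.y - a.y, b.z - a.z); // distance travelled
    speeds.push(d / (b.t - a.t));                          // units per hour
  }
  return speeds;
}

// Equal spacing in both time and distance keeps speed even through the flyby.
const wps = [
  { t: 0,  x: 0,  y: 0, z: 0 },
  { t: 24, x: 10, y: 0, z: 0 }, // approach leg
  { t: 48, x: 20, y: 0, z: 0 }, // departure leg: same spacing, same speed
];
const [approach, departure] = legSpeeds(wps);
```

Running a check like this over a hand-crafted trajectory catches the asymmetry immediately: a post-flyby leg that covers too much distance in too little time shows up as a speed spike.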
The bounding box calculation was made robust by computing trajectory extent at both Earth's start and end positions, preventing drift as Earth moves through its orbit.

Interplanetary missions introduced an anchorBody system: each waypoint names a planet, and at initialisation the code imports heliocentricPosition() from the orbital mechanics module to resolve each anchor to the planet's real computed position at that date. An autoTrajectory flag triggers _generateTransferArc(), which computes Hohmann-style elliptical arcs between consecutive anchor points, so missions don't need dozens of hand-placed intermediates.

The ISS was modelled as a procedural Three.js geometry — a central truss with solar panels, radiators, and habitat modules — added as an Earth satellite with a 92-minute orbital period. A dedicated Satellites menu separates artificial objects from natural moons.

A timeline slider below the trajectory lets users scrub through any mission's entire duration, with all UI elements (telemetry, event banners, planet positions, vehicle markers) updating live. A ?mission= URL parameter (e.g. ?mission=voyager1) allows deep-linking: it pre-selects and focuses any mission on page load.

The anchorBody system solved a subtle but critical problem: interplanetary waypoints defined with approximate coordinates (e.g. Jupiter at x=5.2 AU) wouldn't match the actual computed planet position at the encounter date, because Keplerian elements shift over centuries. A Voyager trajectory passing "near" Jupiter but missing by 0.3 AU in the scene looks wrong. By importing the same heliocentricPosition() function used for rendering planets, each anchored waypoint snaps exactly to the planet's rendered position at the flyby timestamp. The autoTrajectory generator then fills in smooth transfer arcs between these anchor points using elliptical interpolation, avoiding the need to manually plot hundreds of intermediate positions.
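The anchoring step can be sketched as a single pass over the waypoint list. A minimal sketch: resolveAnchors and the waypoint shape are my assumptions; heliocentricPosition() is the module function named above, stubbed here so the example is self-contained.

```javascript
// Sketch: snap anchored waypoints to the planet's computed position at their
// timestamp. resolveAnchors and the waypoint shape are illustrative; the real
// app imports heliocentricPosition() from its orbital mechanics module.
function resolveAnchors(waypoints, heliocentricPosition) {
  return waypoints.map(wp => {
    if (!wp.anchorBody) return wp; // hand-placed waypoint: leave untouched
    const p = heliocentricPosition(wp.anchorBody, wp.date);
    return { ...wp, x: p.x, y: p.y, z: p.z }; // exact rendered position
  });
}

// Stub standing in for the real orbital-mechanics function: the approximate
// hand-written x=5.2 gets replaced by the "computed" encounter position.
const stubHelio = (body, date) => ({ x: 5.21, y: 0.3, z: 0.01 });
const resolved = resolveAnchors(
  [{ date: "1979-03-05", anchorBody: "jupiter", x: 5.2, y: 0, z: 0 }],
  stubHelio
);
```

Passing the position function in as a parameter also makes the resolver trivial to test without loading the whole orbital mechanics module.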
The timeline slider presented a different challenge: when the simulation is paused, the normal animation loop doesn't run — but scrubbing needs to update everything. The solution overrides the simulation date during scrub and forces a complete render pass (positions, telemetry, labels, banners) even while paused, so the scene always reflects the slider position.

A third scale problem arose with Apollo 11's LEM descent: the real 45 km descent path compresses to just 0.035 scene units — completely invisible. The flyby exaggeration approach (inflating distances by 10–15%) doesn't work here because the vehicle needs to arrive at the Moon, not pass near it. The solution was runtime interpolation via the moonLanding vehicle property: during descent, the marker lerps from its orbital position to the Moon's actual computed scene position (via moonPosition() each frame), creating clear visible separation from the orbiting CSM. During surface operations it tracks the Moon exactly, and during ascent it lerps back.

Interplanetary speed consistency required a general principle: always add transition waypoints ~1,000–2,000 hours before and after anchored points to prevent speed spikes. This was applied across Cassini, New Horizons, and both Voyagers. A new anchorMoon system (the geocentric equivalent of anchorBody) resolves waypoints to the Moon's actual ecliptic position, used for Apollo 11's departure waypoints.

The mission camera evolved through several iterations into a lazy-follow system: on selection, the camera snaps to Earth's position + the trajectory's local center (using tight localRadius bounds from getMissionBounds() rather than the wide start+end bounding box). During playback, the camera target lerps toward Earth's current position each frame at lerp(0.02), keeping the trajectory centered while Earth drifts through space. User interaction breaks the follow by clearing activeMissionId.
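The lazy-follow update is a single exponential ease per frame. A minimal sketch: followTarget is an illustrative name; the 0.02 lerp factor is the one from the text.

```javascript
// Sketch: each frame, ease the camera target a small fraction of the way
// toward Earth's current position. followTarget is illustrative; the 0.02
// lerp factor matches the build notes.
function followTarget(target, earthPos, alpha = 0.02) {
  return {
    x: target.x + (earthPos.x - target.x) * alpha,
    y: target.y + (earthPos.y - target.y) * alpha,
    z: target.z + (earthPos.z - target.z) * alpha,
  };
}

// Over many frames the target converges on Earth without ever snapping:
// after n frames the remaining gap is (1 - alpha)^n of the original.
let target = { x: 0, y: 0, z: 0 };
const earth = { x: 100, y: 0, z: 0 };
for (let i = 0; i < 300; i++) target = followTarget(target, earth);
```

Because the step is proportional to the remaining distance, the follow is fast when Earth has drifted far and imperceptible when it is close, which is what keeps the trajectory feeling anchored during playback.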