mapv10 Next Work
Last audited: 2026-05-12, during MZ strategic zoom and physical LOD implementation.
This is the prioritized work queue. Pick from the top. Completed or stale items are kept in the "Recently resolved" section so future waves do not re-open old work by accident.
Column meaning:
- Type - Implement (new code) / Replace (delete + rewrite) / Improve (tune existing) / Fix (bug) / Investigate (read first, decide later) / Decide (needs direction before code)
- Goal - what "done" looks like, observable from outside
- Verify - concrete test, scenario, or gate that proves it landed
- Benefit - what the user / next wave gets after this lands
- Risk / Depends-on - what could go wrong; prerequisites
- Wave? - Yes means use wave-protocol.md; Direct means build directly per extending.md
Recently resolved / no longer open
These were in the old queue but the code has moved on.
| # | Area | Status | Evidence | Keep / watch |
|---|---|---|---|---|
| R-1 | Regional / realm skipped-public-LOD generator panic | Done | tile_pyramid.rs now supports hidden repeated 2:1 cascades when public LOD bands skip from sample step 1 -> 4 or 1 -> 8. Regional and realm generation completed cleanly in the latest check. | Do not reopen T0-1. Old text used stale CLI names (--preset, realm-draft); actual CLI is --scale-preset regional-slice / --scale-preset realm-slice. |
| R-2 | Iterative 2:1 mip-pyramid | Done | Terrain and border-SDF pyramids now cascade internally through repeated 2:1 downsample steps. | Optional future work: add more schema coverage assertions, but it is not an open architecture issue. |
| R-3 | Channel-appropriate downsampling | Done | Continuous channels use mean-style filters; discrete ID/class channels use mode-style filters. Border-SDF R is continuous, GBA nearest-id channels are discrete. | Keep tests around filter dispatch. |
| R-4 | Borders checkbox shader hook | Code done, scenario missing | Borders now route through uBorderOpacity / setBorderSdfEnabled; the UI layer state gates SDF borders. | T2-1 below still adds explicit visual scenarios. |
| R-5 | Browser verifier HTTP + default manifest | Done | verify-browser.mjs and heap-stability-check.mjs both use the plain-HTTP dev server and the canonical /mapv10/runs/continent-lod6/manifest.json fixture. | Keep browser-driving tools on the same canonical fixture path; --run-root is ad-hoc only. |
| R-6 | Political LUT 1xN texture overflow | Done before this audit | Current code uses square-packed 2D LUT/selection textures and validates against MAX_TEXTURE_SIZE. | Keep SamplerProbe gates. |
| R-7 | Visual exaggeration config plumbing (was T0-2) | Config layer done; visual goal moved to lighting cluster | viewer/src/config/TerrainVisualConfig.ts exports typed config + validator + factory; current defaults include exaggeration, sun azimuth/elevation, ambient/diffuse relief strength, relief contrast, and altitude tint thresholds. Setter plumbing keeps decoder-baked displacement and world-to-scene projection in sync. | Do not reopen R-7 for visual quality. R-14 now owns the first lighting fix; remaining terrain readability work is T0-2B/T0-2C/T0-2E. |
| R-8 | Source-cell Voronoi tessellation (was T0-3) | Done | political.rs now generates deterministic blue-noise seeds and half-plane-clipped Voronoi-style province/location cells; raster ownership and neighbor edges are derived from those polygons; generator tests prove determinism, coverage, and non-axis-aligned province boundaries. | Keep visual validation against graph-paper regressions. Future density policy can be explicit, but row/column subdivision is gone. |
| R-9 | Synchronous pick readback removal (was T1-2) | Done | Pointer hover now raycasts active terrain meshes and samples generated semantic locationId rasters on CPU; readRenderTargetPixels, pick-id FBO allocation, and the pick-id shader mode were deleted. The browser verifier also instruments WebGL/canvas readback APIs and records totalCalls: 0 in m10-lod6-continuity-proof.json. | Add an interaction hover-sweep scenario when scenario CLI work lands, but do not reintroduce GPU readback for picking. |
| R-10 | Canonical fixture and border-SDF contract drift | Done | architecture.md, roadmap.md, schema/README.md, mesh-product.schema.json, and the generator stage description now point at the canonical viewer/public/continent-lod6 proof and the current border-SDF terrain-shader path. | Keep T3-4 fixture distribution open; fresh clones still need an intentional way to obtain or build continent-lod6. |
| R-11 | Scale ruler + camera HUD (was T0-2D) | Done | ScaleRulerConfig, the shell HUD, renderer terrain/sea-plane raycast probe, and scale-ruler scenarios now provide snapped metre/km labels plus camera/terrain readouts. Browser proof covers the canonical fixture, and scenario evidence exercises continent/realm/location ruler bands without fake altitude retunes. | Keep the HUD probe in future scenario/baseline automation; scale chrome is no longer the next open work item. |
| R-12 | Strategic route graph generation (was T1-4) | Done | routes.rs now builds route candidates from generated Location adjacency, province hubs, height/slope cost, lake/river penalties, and river intersections instead of input-order chains. The regenerated continent-lod6 fixture has 11,000 route nodes, 14,300 edges/centerlines, 56 realm-visible strategic roads, 119 crossings, and 0 orphan nodes; 07-routes validation passes connected/no-orphan/one-centerline checks. | Route visual density still belongs to T1-3 LOD visual configuration; route foundation is no longer fake. |
| R-13 | Robust water polygon triangulation (was T2-5) | Done | meshes.rs now builds lake surfaces through earcutr from an outer ring plus optional hole rings. LakePolygon serializes holes, generated lakes emit holes: [], validation checks lake hole rings, and unit fixtures cover concave and holed lake meshes by area/centroid/index validity. The regenerated continent-lod6 fixture writes mapv10-lake-polygons-v2, has 2 lake meshes with 28 outline vertices and 26 triangles each, and browser proof fetches all 7 water assets. | Future lake generation can become concave or holed without re-opening the old triangle-fan foundation. |
| R-14 | Directional terrain relief lighting (was T0-2A) | Done | The unified terrain shader now computes relief from decoder-computed terrain normals using a world-space vTerrainWorldNormal varying and TerrainVisualConfig sun direction. __mapv10TerrainShadingProbe exposes { hasDirectionalLight, lightDirectionWorld, normalsAreComputed }; browser proof asserts the probe and records normalized light [-0.591, 0.469, 0.656], active terrain normals, relief uniforms, and world-normal shader use. | Continue the visual cluster with T0-2B/T0-2C. Lighting is no longer the blocker, but altitude color and atmospheric depth are still needed for Google-Earth-style readability. |
| R-15 | LOD visual configuration for routes, borders, labels (was T1-3) | Done | Route ribbon generation now uses explicit per-LOD width bands, so z4/z5 connectivity no longer inherits overview-corridor widths. The viewer has LodVisualConfig for route opacity/color, border SDF opacity, and label budgets by camera band/mode; debug state and npm run scenarios report the active band. Close-up proof: verification/scenarios/lod-visual-config/ shows geography z5 at routeOpacity=0.04, borderOpacity=0.18, labels capped at 42; routes z5 at routeOpacity=0.46, borderOpacity=0.14, labels capped at 28. | Keep future visual tuning inside typed config or generated mesh policy. Do not reintroduce scattered route/border/label constants. |
| R-16 | Elevation tint + aerial perspective (was T0-2B/T0-2C) | Done | TerrainVisualConfig now owns snow/rock altitude thresholds plus configurable aerial perspective color/start/density/max-opacity. The unified terrain shader blends geography/routes highlands toward rock/snow from vTerrainSceneY and blends terrain toward haze from camera-fragment distance using vViewPosition. Terrain uses this explicit shader haze instead of generic Three terrain fog; __mapv10TerrainShadingProbe and npm run scenarios report the active haze settings. Proof: verification/scenarios/aerial-perspective/ passes continent oblique, low-horizon, close overlay, and clean close scenarios with terrainMaterialUsesAerialPerspective=true, no page/probe errors, and close z5 p95 at 11.9-19.2ms. | The next terrain-quality blocker is close-detail micro-relief and material variety, not missing atmosphere. Tune haze only through TerrainVisualConfig plus scenario evidence. |
| R-17 | Close-detail normal-map micro-relief (was T0-2E) | Done | The unified terrain shader now samples a shared tileable RGBA normal texture by world-space UV, perturbs the mesh-normal lighting term at close range, fades the effect out by camera distance, and gates companion material grain through TerrainVisualConfig. SamplerProbe now records the 10-sampler terrain architecture; __mapv10TerrainShadingProbe and npm run scenarios report active micro-relief strength/scale/fade. Proof: verification/scenarios/micro-relief/ covers continent oblique, low-horizon oblique, z5 overlay, z5 clean highland, and a new z5 clean lowland target around 997 m; all passed with terrainMaterialUsesMicroReliefNormalMap=true and close z5 p95 at 8.2-18.7ms. Unit proof adds a per-pixel lateral-normal variance assertion on the generated normal texture. | The lighting/readability cluster is no longer the next blocker. Remaining highland palette/snowline tuning must stay config-driven and scenario-backed; do not replace micro-relief with geometry displacement hacks. |
| R-18 | Zoom-trace measurement foundation (first T2-7 slice) | Done | Mapv10ThreeRenderer now records Mapv10ZoomTraceReport samples with frame work, camera distance/altitude/zoom band, terrain/runtime residency, scheduler pending/in-flight work, cache hit/miss/eviction deltas, route/label counts, frame-budget pressure, and long-task markers. window.__mapv10StartZoomTrace / __mapv10StopZoomTrace expose manual tracing, and npm run scenarios writes *-zoom-trace.json for scenarios with zoomTrace. Proof: verification/scenarios/zoom-trace/ covers continent->realm->province->location clean lowland and realm->province->location overlay zooms; the overlay trace caught route-family pending at 577/601 total pending with max frame 110.1ms. | Performance work can now be evidence-driven. Next T2-7 work is choosing explicit budgets and fixing measured breaches, especially route/label rebuild and streaming spikes during close overlay zooms. |
| R-19 | First zoom performance budget + route-streaming fix | Done | mapv10_zoom_continent_to_location_clean_lowland and mapv10_zoom_realm_to_location_overlays_z5 now carry enforced performanceBudget.zoomTrace thresholds. npm run scenarios records performanceBudgetIssues and exits non-zero on budget breach. Route auxiliary streaming is capped to 64 missing route assets per frame while retaining loaded/pending routes, and route batches are sharded by route tile instead of one giant route material batch. Proof: verification/scenarios/performance-budget/ passed both zoom scenarios; overlay route pending dropped from the R-18 spike of 601 to 144, p95 was 14.7ms, max was 64.9ms, cache evictions stayed 0, and final z5 routes/labels were present. | The route fan-out cliff is no longer the first performance target. Remaining isolated long frames are now label/vector lifecycle and general commit spikes, not route request flooding. |
| R-20 | Lifecycle/hitch metrics + warn/fail budget schema (was T2-7 PR 1) | Done | Mapv10ZoomTraceSample now records render calls/triangles, route batch/draw/upload/build proxies, mesh/material/texture create-dispose counters, label create/remove/churn/resident/visible/fading counters, and fallback/omitted-slot counters. performanceBudget.zoomTrace accepts legacy flat hard-fail budgets or nested { fail, warn }; npm run scenarios writes performanceBudgetWarnings without failing CI. Proof: viewer/verification/scenarios/lifecycle-hitch-metrics/ passed both canonical zoom scenarios with 0 hard failures and 17 warnings. The warnings named the next owners: clean labels-off still peaked at 816 label/material creates and 916 texture creates; overlay zoom peaked at 672 label churn, 638 label removals, 8.06 MB route upload proxy, and 38.3 ms route build. | Next work is T2-7b label/vector frame-budgeted lifecycle; route adaptive batching remains measured but should follow label churn unless scenario evidence changes. |
| R-21 | Label lifecycle candidate gating + frame-budgeted residency (was T2-7b PR 2) | Done | Labels now run through candidate selection before resource allocation: layer state, zoom band, priority, projection, and collision estimates are evaluated before any sprite/material/texture is created. Selected labels are queued through the label frame-budget lane, and faded labels retire through the same capped lane. Markers no longer allocate while the markers layer is off. Proof: viewer/verification/scenarios/label-lifecycle-budget/ passed the two canonical zoom scenarios. Clean labels-off improved from R-20 max 93.6ms / 816 label creates / 916 texture creates to max 12.1ms / 0 label creates / 0 slow frames. Overlay zoom improved from R-20 max 119.9ms / 672 label churn to max 23.3ms / 16 label churn / 0 slow frames. | Keep label caps as UX controls, not perf crutches. Remaining warnings are route upload/build proxies, so route adaptive batching is now measurable but not blocking parent fallback unless it regresses. |
| R-22 | Strict parent fallback contract (was T1-2 PR 3) | Done | RenderResolver now emits a per-primary fallback ledger with direct, resident-ancestor, previous-rendered, and omitted sources; Mapv10ThreeRenderer records resolver-owned fallback metrics in zoomTrace instead of inferring them after render. Resolver tests cover pending/failed/canceled child absence, partial child residency, previous-rendered fallback, omitted-slot accounting, fallback-duration increment/reset, and transient empty commits preserving previous metadata. Planner tests cover predicted preload requests not counting as current residency gaps. Proof: viewer/verification/scenarios/parent-fallback-contract/ passed clean zoom, slow-network zoom, and overlay zoom. Slow-network recorded fallbackSlotCountMax=20, omittedSlotCountMax=0, fallbackDurationFramesMax=3, no page errors. Overlay stayed under frame gates with fallbackDurationFramesMax=13; only route upload/build warnings remain. | Parent fallback is now strict enough for T1-3 geometric-error SSE. Keep omittedSlotCountMax=0 as a hard gate on canonical zoom scenarios. |
| R-23 | Terrain geometric-error SSE upgrade (was T1-3 PR 4) | Done | VisibilitySet now chooses terrain LOD from measured geometricError using (geometricError * viewportHeight) / (2 * tan(fov/2) * distance) and keeps ssePixelsByKey as true terrain-error telemetry. Tile manifests/types/schemas now carry tileId, parentId, childIds, 3D bounds, measured generator geometricError, and cost hints; the loader normalizes older fixtures into the stricter runtime contract. __mapv10TerrainSelectionProbe and scenario results expose the selector state. Proof: viewer npm test passed 212 tests, npm run build passed, generator cargo check passed, cargo test -q stages::tile_pyramid passed, and npm run scenarios passed all 23 canonical scenarios with 0 hard budget failures. | SSE made close/low-angle scenarios select finer terrain intentionally. Watch the 12 warn-only budget owners from the proof: terrain mesh/texture create-dispose spikes, fallback-slot peak 28 > 24, and route upload/build proxies in the overlay zoom. These are follow-up pacing/baseline-ratchet work, not correctness blockers. |
| R-24 | SSE terrain lifecycle pacing + baseline ratchet (was T2-9) | Done | Terrain render commits are now staged across frames, old active terrain remains visible while the new commit fills in, terrain texture creation is capped per frame, and retired terrain node disposal is capped per frame. The clean zoom fallback warn baseline was ratcheted from 24 to 32 because R-23's strict resolver intentionally records a stable fallbackSlotCountMax=28 with omittedSlotCountMax=0. Proof: viewer npm test passed 212 tests, npm run build passed, and npm run scenarios passed all 23 canonical scenarios with 0 hard failures and only 2 route warnings. Clean zoom now records frameMaxMs=10.4, meshCreate=48, meshDispose=38, textureCreate=96, textureDispose=95, fallbackSlot=28, omitted=0. Slow-network records frameMaxMs=9.3, same lifecycle caps, fallbackSlot=28, omitted=0. Overlay records frameMaxMs=21.8; only routeUploadBytesProxyMax=8060160 and routeBuildMsMax=12.2 warn. | Terrain lifecycle is no longer the zoom owner. Next renderer work should target route upload/build pacing; Valenar export remains a deferred product lane until the user explicitly asks for runtime import packaging. |
| R-25 | Route adaptive batching + upload pacing (was T2-8) | Done | Route mesh decode no longer emits normals for unlit MeshBasicMaterial ribbons, one-member auxiliary batches draw their source geometry directly instead of copying through a merge buffer, oversized intermediate route aggregates (route.aggregate.z2/z3) normalize to the coarse z1 fallback when available, and never-drawn stale route batch work is canceled instead of filling the frame-budget queue. Planner tests cover aggregate fallback and missing-fallback behavior. Proof: viewer npm test passed 214 tests, npm run build passed, and npm run scenarios passed all 23 canonical scenarios with 0 hard failures and 0 warnings. Overlay zoom now records frameMaxMs=20.5, maxFrameBudgetPending=169, routeUploadBytesProxyMax=67584, routeBuildMsMax=4.6, routeBatchCountMax=24, and final z5 routes/labels present. | The canonical renderer hardening chain is no longer warning on terrain lifecycle, label churn, fallback, or route upload/build. Next work can move to the product lane and process/tooling gates unless new scenario evidence opens a fresh owner. |
| R-26 | Generated artifact server boundary | Done | The viewer now exposes generated runs under /mapv10/runs/<run-id>/... through viewer/server/mapv10ArtifactMiddleware.ts. The default manifest, browser verifier, heap check, scenario runner, and slow-network URL filters use /mapv10/runs/continent-lod6/manifest.json. Missing runs/artifacts return typed application/problem+json; binary artifacts support byte ranges; legacy /continent-lod6/... paths are intercepted so they cannot fall through to the Vite app shell. | Keep Vite responsible for the app shell and the artifact middleware responsible for generated run files. Do not add gameplay REST endpoints for immutable map products. |
| R-27 | Browser screenshot baselines (was T3-2) | Done | verify-browser.mjs now supports --update-baseline and --check-baseline for the canonical full browser proof. The baseline covers overview, slow zoom, constant-distance camera move, all current MapModeId values, close/detail, boundary, zoom-back, and required-failure UX screenshots; check mode reports changed-pixel ratio, RGB RMS, max channel delta, and diff PNGs. viewer exposes npm run verify:browser:baseline:update and npm run verify:browser:baseline; local baseline PNGs are ignored until LFS/artifact storage is chosen. | T3-3 can now wire the visual baseline gate into CI or a clean-fixture harness. Do not update baselines to hide a regression; update only after visual acceptance. |
| R-28 | Scenario CLI wrapper (was T2-6) | Done | viewer/package.json exposes npm run scenarios, backed by viewer/scripts/run-scenarios.mjs. It drives all scenario IDs from viewer/src/scenarios/mapv10_scenarios.json, writes screenshots and scenario-results.json, records load/frame/cache/LOD probes, and exits non-zero on hard budget issues. | Keep scenario poses in mapv10_scenarios.json; do not duplicate them in separate verifier scripts. |
| R-29 | Source-of-truth docs reconciliation (was T3-1) | Done | The portable packet now states source-of-truth boundaries, validation commands, current scenario warning status, implemented M0-M10 status, deferrals, and next queue; governance mirrors and bundle guidance point at the same packet and canonical manifest URL. | Future feature waves should reconcile docs first when source/doc drift appears; hidden source behavior is not architecture. |
| R-30 | LOD/Data Coherence Foundation chain (Waves 1-4.5 + MZ foundation) | Done | Wave 1 installed scenario gates that hard-FAIL on master before fixes landed: gate vocabulary now lists forbidZ0UnderlayAtPrimaryZ, ancestorZDistanceMax, sidecarReadyForActiveMeshes, noPlaceholderTextureBound, coverageHoleMax, labelTextUniqueness, labelTextMinDistinctCount, labelMaxDuplicatesForText (viewer/src/scenarios/scenarioTypes.ts, evaluated by viewer/scripts/scenario-gates.mjs). Wave 2 redesigned the resolver+sidecar contract after 3+ patches failed: MAX_ANCESTOR_Z_DISTANCE = 1 plus structural-parent walk plus isFullyResident AND-gate of CPU+GPU residency replaced bounds-containment-alone ancestor promotion and silent fallback (viewer/src/renderer/lod/RenderResolver.ts). Wave 3a split SSE telemetry by role × metric: primary/underlay × geometric/raster four-way budget (docs/performance-and-streaming-hardening-spec.md §7.3). MZ implements the accepted z6/z7 physical LOD decision: z0 is now a true coarse rung, z1 is repaired as macro-region, z6/z7 are generated physical residency levels, closeDetailNormal supplies generated close-detail truth, and semanticDisplayPolicy separates strategic semantic zoom from physical lodBand. Wave 4 replaced the 120-name location_name pool with a per-biome procedural namer (see docs/generator.md § "Naming"). Wave 4.5 completed the *Stats → *Channels rename (engine + benchmarks + tests). Latest committed scenario evidence is incomplete: viewer/verification/scenarios/latest/scenario-results.json declares 27 scenarios but stores 25 entries, missing mapv10_location_height_z5 and mapv10_location_label_uniqueness_z5; 24 stored entries have no hard issues, and the slow-network stored entry carries the old blocking omitted-slot issue with actual value 24 under remote RTT throttle. | Do not reopen these as new tier-0/tier-1 items. Follow-up work is full scenario evidence regeneration and direct polish; do not refresh visual baselines until foundation behavior is accepted. |
| R-31 | Slow-network strict-fallback contract under throttle | Decision recorded | docs/local-only-product-premise.md records the accepted policy: mapv10 is a local-only generated product, so remote/network-throttled evidence is future-resilience / cold-cache stress evidence, not a blocking product gate by itself. mapv10_continent_to_location_slow_network is retained, but the throttled omittedSlotCountMax check moves out of performanceBudget.zoomTrace.fail and into non-blocking warning evidence. Clean/local scenarios that model shipped local filesystem/decode/upload latency still own blocking omitted-slot regressions. Do not widen RenderResolver.MAX_ANCESTOR_Z_DISTANCE, raise the raster SSE target, or treat this as permission to hide real local-cache holes. | Regenerate full scenario evidence later; do not fabricate missing results or update visual baselines in this process wave. |
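The R-23 selector described above chooses terrain LOD from the standard screen-space-error projection of a tile's measured geometric error. A minimal TypeScript sketch of that formula follows; `TileNode`, `ssePixels`, and `shouldRefine` are illustrative names, not the real `VisibilitySet` API, and the pixel budget is an assumed parameter.

```typescript
// Illustrative tile shape: only the two fields the formula needs.
interface TileNode {
  geometricError: number; // worst-case simplification error from the generator (metres)
  distance: number;       // camera-to-tile distance in the same units
}

// Projected screen-space error in pixels:
// (geometricError * viewportHeight) / (2 * tan(fov/2) * distance)
function ssePixels(
  node: TileNode,
  viewportHeightPx: number,
  fovYRadians: number
): number {
  return (
    (node.geometricError * viewportHeightPx) /
    (2 * Math.tan(fovYRadians / 2) * node.distance)
  );
}

// Refine toward child tiles while the projected error exceeds the pixel budget.
function shouldRefine(
  node: TileNode,
  viewportHeightPx: number,
  fovYRadians: number,
  maxSsePx: number
): boolean {
  return ssePixels(node, viewportHeightPx, fovYRadians) > maxSsePx;
}
```

The useful property, matching the R-23 proof notes, is that close or low-angle views (small `distance`) inflate the projected error and therefore intentionally select finer terrain.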
Tier 0 - Blocks normal user-visible quality
This tier tracks the issues that make mapv10 look or feel wrong as a strategic map.
No open Tier 0 visual blocker is currently queued after R-14/R-16/R-17/R-18/R-19. The next work should move to broader performance budgets, baselines, fixture reproducibility, or the next product lane rather than reopening terrain readability without new scenario evidence.
Tier 1 - Architecture waves
Cross-cutting changes that should use the full wave protocol.
| # | Area | Type | What | Why | Goal | Verify | Benefit | Risk / Depends-on | Wave? |
|---|---|---|---|---|---|---|---|---|---|
| T1-1 | State-field producer transport | Implement | Build the runtime state-field producer that writes per-tick scalar fields into the existing world-space RT. The shader contract is already corrected: uStateField samples continent/world UV derived from per-fragment world km and uWorldBoundsKm, not tile-local vUv. | The RT, uniform binding, world-bounds uniform, and shader sampling contract exist; the missing piece is the cross-process producer/transport that fills the RT with live strategic fields. | Server publishes ticks; viewer updates uStateField at >= 1 Hz; overlays line up across tile boundaries and are governed by alpha. | Add a mocked gradient/checker state overlay scenario that crosses tile boundaries; fail if it repeats per tile. Add __mapv10Ready.stateField.connected = true once transport is live. | Turns mapv10 from a terrain viewer into a strategic-simulation surface for weather, corruption, economy, war intensity, etc. | Cross-process protocol design, backpressure, update cadence, and producer lifecycle. | Yes |
| T1-5 | Valenar WorldData export | Implement | First slice implemented: generator stage 15-valenar-worlddata writes run-local valenar_world gameplay JSON and valenar_world_mesh manifest products from mapv10 truth, with stable ids, derived Region/Area/Province/Location hierarchy, normalized facts, symmetric neighbors, anchors, content hash, and mesh artifact references. Next slice should package/import the generated files into Valenar content without replacing the dummy fixture by accident. | Valenar needs a narrow gameplay import contract, not the whole raw mapv10 artifact tree. The current dummy fixture is useful for fast tests, but production must be generated from the same continent pipeline as the viewer proof. | valenar/world-<seed>.json and .mesh.json are generated from a mapv10 run and registered in the run manifest. The remaining product bridge is an explicit Valenar content import/copy lane that runs the C# validators against the generated mapv10 export and preserves traceability back to the source run. Dummy data remains allowed as a fixture, never as hidden fallback. | Run cargo test in the generator, run the mapv10 export command, inspect stage 15-valenar-worlddata validation, then dotnet test tests/Valenar.Host.Tests/Valenar.Host.Tests.csproj. Add the next import/package test before treating generated mapv10 WorldData as the runtime default. Real-browser pass on the source map before declaring production export done. | Gives Valenar a stable production-world input while keeping mapv10's richer artifacts and viewer pipeline intact. It is now the product lane after the R-20 through R-25 renderer hardening chain went green. | Area hierarchy is currently derived deterministically from realm/province clusters during export; decide later whether areas become native mapv10 generator truth. Coordinate-system and hash semantics are locked enough for run-local products, but runtime import packaging still needs an explicit policy. | Yes |
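The T1-1 sampling contract can be mirrored host-side: the state-field UV is derived from per-fragment world kilometres and the world bounds, never from tile-local `vUv`. The sketch below is an assumption-level TS mirror of that math (`WorldBoundsKm` and `worldKmToStateFieldUv` are illustrative names, not real viewer types or the actual `uWorldBoundsKm` layout).

```typescript
// Assumed host-side shape for the world-bounds uniform (kilometres).
interface WorldBoundsKm {
  minXKm: number;
  minZKm: number;
  maxXKm: number;
  maxZKm: number;
}

// Map a world-space position (km) into a single [0,1]^2 UV over the whole
// continent. Because this depends only on world position, not on which tile
// a fragment belongs to, the sampled field is continuous across tile seams.
function worldKmToStateFieldUv(
  xKm: number,
  zKm: number,
  b: WorldBoundsKm
): [number, number] {
  const u = (xKm - b.minXKm) / (b.maxXKm - b.minXKm);
  const v = (zKm - b.minZKm) / (b.maxZKm - b.minZKm);
  return [u, v];
}
```

This continuity is exactly what the planned gradient/checker scenario checks: two fragments just either side of a tile boundary must map to nearly identical UVs instead of each tile restarting at UV 0.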
Tier 2 - Quality / polish
Direct builds with discipline, unless a row's scope grows larger during exploration.
| # | Area | Type | What | Why | Goal | Verify | Benefit | Risk / Depends-on | Wave? |
|---|---|---|---|---|---|---|---|---|---|
| T2-1 | Borders checkbox scenarios | Implement | Add explicit mapv10_continent_political_borders_off and _on scenarios, ideally same camera/mode with only layers.borders changed. | The shader toggle is fixed in code, but the scenario file has no direct on/off visual pair. | On screenshot shows SDF borders; off screenshot shows none. | Visual diff between the pair; fail if OFF has border darkening or ON has no borders. | Locks in the recent border-layer fix. | None. | Direct |
| T2-2 | Label scenarios and threshold tuning | Improve | Reframe the old label item: collision, priority sorting, and min/max zoom bands already exist. Add label-on scenarios per LOD and tune thresholds only through explicit config/data, not ad hoc visibility hacks. | Label code is ahead of next-work.md, but it lacks scenario coverage and acceptance criteria. | Continent shows realm labels; realm/province zoom shows province labels; location zoom shows locations/features without overlap. | Add mapv10_*_labels_on scenarios. Check visibleLabelsByBand in debug state and screenshot overlap. | Professional strategic-map readability. | Depends on label-anchor quality in the regenerated R-8 organic fixture. | Direct |
| T2-3 | Geometry validator agent prompt | Implement | Add a committed mapv10-validate-geometry agent prompt in the active agent surface and list it in wave-protocol.md Part 6's file map. It should use __mapv10TerrainGeometryProbe, oblique scenarios, console checks, and screenshot judgement. | The protocol mentions geometry validation through scenario-driven gates, but there is no dedicated geometry validator agent to dispatch. | Future terrain/render waves have a structured geometry validator. | Grep the active agent surface for mapv10-validate-geometry; run a mock validation prompt from it. | Prevents future "mountains exist in code but not on screen" false passes. | None. | Direct |
| T2-4 | Document browser-driving tool ownership | Fix / Investigate | The verifier and heap check now share HTTP and /mapv10/runs/continent-lod6/manifest.json; the remaining work is to document which tool owns scenario visuals, network/failure proof, and heap stability. | There are multiple browser-driving tools, and contributors still need an explicit ownership table. | extending.md documents which tool to use for scenario visuals, network/failure proof, and heap stability. | Run npm run verify:browser:continent; run heap check once; docs include a tool-purpose table. | Keeps performance and visual validation responsibilities clear. | Fixture bootstrap remains open under T3-4 because continent-lod6 is untracked. | Direct |
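For T2-1, the point is that the on/off pair shares every pose field except `layers.borders`. A hedged sketch of what that pair could look like; the field names follow the spirit of viewer/src/scenarios/mapv10_scenarios.json but are assumptions, not the real scenario schema.

```typescript
// Shared pose: identical camera and mode for both scenarios, so any visual
// diff between the pair is attributable to the borders layer alone.
const base = {
  mode: "political",
  camera: { zoomBand: "continent", azimuthDeg: 0, pitchDeg: 55 },
  layers: { borders: true, labels: false, routes: false },
};

const bordersOn = {
  ...base,
  id: "mapv10_continent_political_borders_on",
};

const bordersOff = {
  ...base,
  id: "mapv10_continent_political_borders_off",
  // Only this one flag differs between the pair.
  layers: { ...base.layers, borders: false },
};
```

Keeping the pair derived from one shared `base` object (rather than two hand-copied poses) makes the "only borders changed" invariant mechanically true instead of review-enforced.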
Tier 3 - Process / tooling
Infrastructure that improves future waves and review quality.
| # | Area | Type | What | Why | Goal | Verify | Benefit | Risk / Depends-on | Wave? |
|---|---|---|---|---|---|---|---|---|---|
| T3-3 | CI integration of scenarios | Implement | Add CI/local headless Playwright scenario runs after the baseline strategy is decided. | Scenarios currently run only manually. | CI fails on scenario errors, WebGL console errors, missing fixtures, or visual diff failures. | Push/branch with intentional regression; CI fails with the right scenario id. | Locks visual behavior into the build gate. | Headless WebGL can be environment-sensitive; prove locally first. | Direct (medium) |
| T3-4 | Fixture distribution / bootstrap | Implemented | Generator bootstrap is the selected fresh-clone path for now. npm run fixture:continent from examples/map/mapv10/viewer regenerates ignored public/continent-lod6 with the release Rust generator, then validates stage 15-valenar-worlddata and prints Valenar export counts, hashes, and byte sizes. | The generated fixture is untracked. Fresh clones need one explicit setup path before scenarios or verifier can run. | One documented command/script creates the required fixture and validates the run-local Valenar export. Valenar still defaults to world-42; generated exports stay side products until an explicit import/package step selects them. | Run npm run fixture:continent, then npm run fixture:continent:validate, npm run verify:browser:continent, or scenario commands. | Makes mapv10 reproducible outside one machine without committing the huge fixture. | Generator bootstrap can be slow; Git LFS/artifact download remains a later option if regeneration time becomes the bottleneck. | Direct |
| T3-5 | External checker as formal Step 3.5 | Decide -> Implement | Decide whether the external Codex-style review is a required wave step. If yes, document it in wave-protocol.md between validation and tenets check. | Recent work showed validators can still false-pass. A separate cross-check catches assumptions. | Protocol states when Step 3.5 runs, what it checks, and what evidence it must provide. | Run one wave dry-run and confirm the new step has clear inputs/outputs. | Reduces "agent said PASS but user sees broken map" risk. | More process overhead; keep scope tight. | Direct |
| T3-6 | Sub-agent prefix enforcement | Decide -> Implement | Optional: add lint/checking for mapv10 agent prompt names if transcript access is stable. | Naming convention exists in protocol, but enforcement is process-only. | Misnamed mapv10 dispatches are easy to detect. | Run hook/check on known-good and known-bad transcript samples. | Easier wave traceability. | Depends on transcript format stability. | Direct if accepted |
Tier 4 - Hygiene
| # | Area | Type | What | Why | Goal | Verify | Benefit | Risk / Depends-on | Wave? |
|---|---|---|---|---|---|---|---|---|---|
| T4-3 | Remove unused SSL dependency | Fix | Remove @vitejs/plugin-basic-ssl from viewer/package.json and lockfile if no longer used. | Vite config is plain HTTP. The dependency now implies a dev-server mode that does not exist. | Package files contain no unused SSL plugin dependency. | npm install/lockfile update, npm run typecheck, npm test, npm run build. | Less dependency drift and less confusion. | None. | Direct |
| T4-5 | Memory rules consolidation | Improve | Optional: consolidate stale project-memory entries outside the repo if they are slowing future sessions. | Old mapv5/mapv6/mapv8 memories can distract from current mapv10 architecture. | Active rules are lean; historical notes are archived rather than deleted. | Manual review of memory index before/after. | Lower context noise. | Outside repo; only do if user wants process cleanup. | Direct |
| T4-6 | Stale sample-run fixture cleanup | Decide | The on-disk fixture viewer/public/sample-run/labelAnchors.json and viewer/public/sample-run/locationPolygons.json still contain pre-Wave-4 "Cairnfield"-era labels from the deleted legacy 120-pool location_name function (superseded by the per-biome procedural namer in docs/generator.md § "Naming"). Commit a644e88 carried the message "drop sample-run" but left the directory on disk, and the fixture is still referenced by viewer/src/ui/__tests__/manifestUrl.test.ts (the manifest-URL test fixture path), viewer/server/mapv10ArtifactMiddleware.ts (legacy-path interception), and viewer/scripts/verify-browser.mjs (alternate fixture lookup). See docs/local-only-product-premise.md: the fixture's historical justification was partly "simulate a slow load," which is weakened once the local-only product premise is acknowledged; this affects how strong option (b) "remove entirely" looks vs option (c) "keep as documented historical." | The fixture is inconsistent with the Wave 4 namer, and the commit message implies it should already be gone, so its on-disk state diverges from doc and commit truth. | Pick one of: (a) regenerate sample-run with the Wave 4 procedural namer so the legacy "Cairnfield" labels are gone and the fixture stays internally consistent with the current naming contract; (b) confirm sample-run is genuinely unused outside test/middleware/verifier paths and remove the entire directory plus the three referencing usages in manifestUrl.test.ts, mapv10ArtifactMiddleware.ts, and verify-browser.mjs; or (c) explicitly document why the legacy fixture is retained as historical (tenet-disfavored; record the rationale on this row). Do not silently delete or auto-regenerate this fixture; the three referencing files may depend on its current shape. | After resolution: npm test and npm run verify:browser:continent from viewer/ pass; the three referencing files are either updated to the new fixture or have their sample-run references removed; no other code path silently falls back to sample-run content. | Removes a pre-Wave-4 source-of-truth drift item that future namer/label work would otherwise re-trip over. | Touches viewer test fixture, dev-server middleware, and browser verifier in lockstep; misordered changes break npm test. | Direct (after user decision a/b/c) |
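For T4-6 option (b), the "genuinely unused" claim is checkable mechanically. A minimal sketch of such a scan, assuming a Node/TypeScript helper run from the repo root (the function name and traversal rules are illustrative, not an existing script):

```typescript
// Hypothetical helper for T4-6 option (b): list files under a root directory
// that still mention "sample-run", so the three known referencers can be
// confirmed as the only ones before the fixture directory is deleted.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

export function findReferences(root: string, needle: string): string[] {
  const hits: string[] = [];
  const walk = (dir: string): void => {
    for (const entry of readdirSync(dir)) {
      // Skip vendored and VCS directories; they are not source-of-truth.
      if (entry === "node_modules" || entry === ".git") continue;
      const full = join(dir, entry);
      if (statSync(full).isDirectory()) {
        walk(full);
      } else if (readFileSync(full, "utf8").includes(needle)) {
        hits.push(full);
      }
    }
  };
  walk(root);
  return hits.sort();
}
```

Running it as `findReferences("viewer", "sample-run")` should, if option (b) is viable, surface only the test, middleware, and verifier files named in the row (plus the fixture itself until it is removed).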
Recommended order now
MZ is now the most recent foundation work on top of the LOD/Data Coherence chain. Latest committed scenario evidence (viewer/verification/scenarios/latest/scenario-results.json, 2026-05-11) is incomplete and predates MZ: 27 scenario definitions exist, the artifact declares 27, but it stores 25 result entries and omits mapv10_location_height_z5 plus mapv10_location_label_uniqueness_z5. Of the stored entries, 24 have no hard issues and the slow-network entry carries the old blocking omitted-slot issue under remote RTT throttle. The R-25 historical baseline showed 23 canonical scenarios green with 0 warnings; the gate vocabulary has since grown (Wave 1 + MZ) and the scenario count and budget targets are now stricter. Visual baselines under viewer/baselines/browser/continent-lod6/ were captured 2026-05-07 with the pre-Wave-2 fixture content-hash; recapture/update them only after foundation behaviour is accepted, never as part of process reconciliation.
- Regenerate and validate full MZ scenario evidence. The next proof must record all 27 definitions under the accepted local-only slow-network classification and the new semantic/close-detail gates.
- Inspect and accept/reject the MZ visual result before baseline work. Do not refresh visual baselines until the z6/z7 fixture, semantic policy, underlay, and close-detail product behavior are accepted.
- T3-3 CI scenario integration. Baselines now exist as local verifier artifacts; next wire the scenario/verifier suite from a clean fixture bootstrap, decide where CI obtains golden PNGs, and only commit baselines/browser/<run-id>/manifest.json (SHA-256 contract) while keeping PNG screenshots gitignored.
- Remaining direct polish: T2-1 borders on/off scenarios, T2-2 label scenarios, T2-4 browser-driving tool ownership, T4-3 unused SSL dependency cleanup.
- Deferred product lane: Valenar keeps the dummy fixture by default. Generated mapv10 WorldData stays produced as data only until an explicit import/package step is requested.
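The commit-only-the-manifest idea above can be sketched as a small verification step: CI regenerates the PNGs, then checks them against the committed digests. The manifest shape (a filename-to-SHA-256 map) is an assumption for illustration, not the locked contract:

```typescript
// Sketch of the T3-3 baseline contract: manifest.json maps baseline PNG
// filenames to SHA-256 hex digests, so CI can verify locally regenerated
// screenshots without the PNGs themselves ever being committed.
// The manifest shape here is an assumption, not the real schema.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";
import { join } from "node:path";

type BaselineManifest = Record<string, string>; // filename -> sha256 hex

export function verifyBaselines(dir: string, manifest: BaselineManifest): string[] {
  const mismatches: string[] = [];
  for (const [file, expected] of Object.entries(manifest)) {
    let actual: string;
    try {
      actual = createHash("sha256").update(readFileSync(join(dir, file))).digest("hex");
    } catch {
      mismatches.push(`${file}: missing`); // regenerated run never produced it
      continue;
    }
    if (actual !== expected) mismatches.push(`${file}: ${actual} != ${expected}`);
  }
  return mismatches; // empty means the regenerated PNGs match the committed contract
}
```

A non-empty return would be the CI failure signal; the gitignored PNGs stay local artifacts on both the capture and verify sides.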
Dependency sketch:
R-14/R-16/R-17 terrain readability cluster
-> R-18 zoom-trace measurement
-> R-19 first zoom performance budget + route-streaming fix
-> R-20 lifecycle/hitch metrics + warn/fail schema
-> R-21 label lifecycle candidate gating + frame-budgeted residency
-> R-22 strict parent fallback contract
-> R-23 terrain geometric-error SSE upgrade
-> R-24 SSE terrain lifecycle pacing
-> R-25 route adaptive batching + upload pacing
-> R-26 generated artifact server boundary
-> R-27 browser screenshot baselines
-> R-28 scenario CLI wrapper
-> R-29 source-of-truth docs reconciliation
-> T3-3 CI scenario integration
T3-4 fixture distribution
-> needed before CI/baselines are reliable on fresh clones
T1-5 Valenar WorldData export (deferred product lane; keep generating data, do not switch Valenar off dummy fixture yet)
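The "regenerate full MZ scenario evidence" step implies a completeness invariant on scenario-results.json: the declared count must match the stored entries, and no expected scenario id may be absent (the 2026-05-11 artifact violated both, dropping two ids). A minimal check, assuming a simplified artifact shape with `declaredCount` and `results` fields (the real field names may differ):

```typescript
// Completeness check for scenario evidence: declared scenario count must match
// stored result entries, and every expected scenario id must appear.
// The artifact field names below are assumptions about scenario-results.json.
interface ScenarioResults {
  declaredCount: number;
  results: { id: string }[];
}

export function missingScenarios(
  artifact: ScenarioResults,
  expectedIds: string[],
): string[] {
  const stored = new Set(artifact.results.map((r) => r.id));
  const missing = expectedIds.filter((id) => !stored.has(id));
  // Catch the count/entries mismatch even when no named id is absent,
  // e.g. duplicate entries masking a drop.
  if (artifact.declaredCount !== artifact.results.length && missing.length === 0) {
    missing.push("<declared count disagrees with stored entries>");
  }
  return missing;
}
```

Against the artifact described above (27 declared, 25 stored), this would report mapv10_location_height_z5 and mapv10_location_label_uniqueness_z5, which is exactly the gap the next evidence run must close.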
Open decisions
- Fixture distribution: generator bootstrap, Git LFS, or artifact download script?
- External checker: formal Step 3.5 in wave-protocol.md, or keep ad hoc?
- Performance thresholds: choose scenario budgets for p95/p99/long tasks/memory before optimizing.
- Highland palette policy: keep global snow/rock thresholds, or later make them biome/latitude conditional? Any retune must stay inside TerrainVisualConfig plus scenario evidence.
- Valenar WorldData hierarchy mapping: the first export derives Areas deterministically from realm/province clusters; decide later whether Areas become native mapv10 generator truth.
- Warn/fail budget ratchet policy: nested zoomTrace.{fail,warn} is now locked. Decide when current zero-warning baselines should graduate selected warn gates to hard-fail.
- SSE target pixel threshold: keep the default 1.5 px until new scenario evidence says the target, not lifecycle pacing, is the problem.
- Fallback-duration budget ratchet: omittedSlotCountMax=0 is hard-fail; the clean fallback-slot warn baseline is 32 after R-24; fallback duration is measured and should become a tighter fail gate after repeated stable baselines.
- Vectors share label state machine vs separate: recommend shared, with collapsed visible/fading_out for vectors; lock if vector churn reappears as an owner metric.
- Stale sample-run fixture (T4-6): viewer/public/sample-run/labelAnchors.json and viewer/public/sample-run/locationPolygons.json still carry pre-Wave-4 "Cairnfield" labels and survive on disk despite commit a644e88's "drop sample-run" message; referenced by viewer/src/ui/__tests__/manifestUrl.test.ts, viewer/server/mapv10ArtifactMiddleware.ts, and viewer/scripts/verify-browser.mjs. Decide between (a) regenerate with the Wave 4 namer, (b) remove the directory and the three referencing usages, or (c) keep as documented historical. Do not silently delete or auto-regenerate; tests/middleware/verifier depend on its current shape.
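The warn-to-fail ratchet decision can be made mechanical once a graduation rule is chosen, e.g. "a warn gate becomes hard-fail after N consecutive clean baselines." Only the nested zoomTrace.{fail,warn} naming and the omittedSlotCountMax=0 / fallback-slot-32 values come from this document; the rest of the shape below is an illustrative assumption:

```typescript
// Sketch of the warn/fail ratchet: a gate graduates from warn to fail once its
// baseline has been clean for a required number of consecutive runs. The
// zoomTrace.{fail,warn} nesting is from the doc; field contents are examples.
interface ZoomTraceBudgets {
  fail: Record<string, number>; // hard gates, e.g. omittedSlotCountMax: 0
  warn: Record<string, number>; // soft gates, e.g. fallbackSlotCount: 32
}

export function ratchet(
  budgets: ZoomTraceBudgets,
  cleanRunsPerGate: Record<string, number>,
  requiredCleanRuns: number,
): ZoomTraceBudgets {
  const fail = { ...budgets.fail };
  const warn: Record<string, number> = {};
  for (const [gate, limit] of Object.entries(budgets.warn)) {
    if ((cleanRunsPerGate[gate] ?? 0) >= requiredCleanRuns) {
      fail[gate] = limit; // stable warn gate graduates to hard-fail
    } else {
      warn[gate] = limit; // not yet proven stable; stays a warning
    }
  }
  return { fail, warn };
}
```

The open decision then reduces to picking requiredCleanRuns and which gates are eligible, rather than hand-editing budget files per wave.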