## Context
Follow-up to #46 / #57. Routes are now `React.lazy`, so each chunk loads on first navigation. The first-paint win is real, but every route now pays a chunk-load wait on its first visit in a session; whether that wait is negligible is TBD. We can close the gap by prefetching chunks in the background — keeping the first-paint win and adding a cached-on-arrival win.
Filing as an issue rather than implementing now; the suspense delay isn't currently a reported problem. Revisit if/when it becomes one.
## Option 1 — Idle `import()` prefetch
Fire the lazy loaders inside `requestIdleCallback` after first paint.
```ts
useEffect(() => {
  // Fall back to a short timeout where requestIdleCallback is unsupported (e.g. Safari).
  // Bind to window so the detached reference doesn't throw "Illegal invocation".
  const idle =
    window.requestIdleCallback?.bind(window) ??
    ((cb: () => void) => setTimeout(cb, 1));
  idle(() => {
    // Same module specifiers as the React.lazy loaders, so the promises are shared.
    import("@/protoFleet/features/dashboard/pages/Dashboard");
    import("@/protoFleet/features/fleetManagement/components/Fleet");
    // ...etc
  });
}, []);
```
### Pros
- Zero new deps. ~10 lines per `App.tsx`.
- `React.lazy` reuses the in-flight promise, so navigation resolves instantly if prefetch completed.
- Easy to scope: just call with the paths you want warmed.
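The shared-promise behavior can be made explicit with a tiny memoized loader that both `React.lazy` and the idle prefetch call. A sketch (the `once` helper and the route path in the comment are illustrative, not existing code):

```ts
// Memoizes a route loader so React.lazy and an idle prefetch share one promise.
// import() already dedupes per specifier; this just makes the sharing explicit.
export function once<T>(fn: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= fn());
}

// Usage (route path hypothetical):
//   const loadDashboard = once(() => import("@/protoFleet/features/dashboard/pages/Dashboard"));
//   const Dashboard = React.lazy(loadDashboard);
//   // later, during idle time:
//   loadDashboard();
```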
### Cons / considerations
- Fetches at script priority, not `prefetch` priority. Competes with real user requests more than option 2.
- Parses + executes module top-level code immediately on fetch. For our route modules that's cheap (imports + component definition, no side effects), but worth confirming per-route.
- Doesn't respect `Save-Data` / `prefers-reduced-data` out of the box.
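If we go with option 1, the `Save-Data` gap can be patched manually. A sketch (the Network Information API is Chromium-only, so an absent `connection` object is treated as permission to prefetch; `shouldPrefetch` is a hypothetical helper):

```ts
// Manual Save-Data guard for option 1. `navigator.connection` only exists in
// Chromium browsers, so a missing value means "go ahead and prefetch".
export type NavLike = { connection?: { saveData?: boolean } };

export function shouldPrefetch(nav: NavLike): boolean {
  return nav.connection?.saveData !== true;
}

// In the idle callback:
//   if (!shouldPrefetch(navigator as NavLike)) return;
```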
## Option 2 — Build-time `<link rel="prefetch">`
Inject `<link rel="prefetch" href="/assets/route-foo-HASH.js">` for async chunks into the entry HTML at build time (e.g. `vite-plugin-preload`, or a small custom plugin reading the rollup bundle manifest).
### Pros
- Browser-native low-priority fetch. Never competes with user-initiated requests.
- No JS cost; purely HTML hints.
- Browser respects `Save-Data` and network conditions automatically.
- Cache-populated before React even hydrates.
### Cons / considerations
- Adds a build plugin we need to vet / maintain (or ~30 lines of custom plugin code walking `this.bundle`).
- Harder to scope dynamically — by default warms every async chunk, wasting bandwidth on never-visited routes.
- Doesn't parse/execute; still pays module init cost on actual navigation. Usually cheap, but bigger chunks (logs, diagnostics) see slightly less benefit than option 1.
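For reference, the ~30-line custom plugin could look roughly like this. It is a sketch typed against minimal stand-ins rather than the real Vite/Rollup interfaces; the hook choice (`generateBundle`), `enforce` ordering, and the `"index.html"` asset key would all need verifying against our actual build:

```ts
// Injects <link rel="prefetch"> for every dynamically imported chunk into the
// entry HTML. Structural types are simplified stand-ins for Rollup's bundle types.
export type Chunk = { type: "chunk"; isDynamicEntry: boolean; fileName: string };
export type Asset = { type: "asset"; source: string };
export type Bundle = Record<string, Chunk | Asset>;

export function prefetchAsyncChunks() {
  return {
    name: "prefetch-async-chunks",
    apply: "build" as const,
    enforce: "post" as const, // run after Vite's HTML plugin has emitted index.html
    generateBundle(_opts: unknown, bundle: Bundle) {
      const links = Object.values(bundle)
        .filter((o): o is Chunk => o.type === "chunk" && o.isDynamicEntry)
        .map((c) => `<link rel="prefetch" href="/${c.fileName}">`)
        .join("\n    ");
      const html = bundle["index.html"];
      if (html && html.type === "asset" && links) {
        html.source = html.source.replace("</head>", `    ${links}\n  </head>`);
      }
    },
  };
}
```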
## Route-aware extension (applies to either option)
DOM scanning won't work — protoOS nav uses buttons + programmatic `navigate()` (not `<Link>` elements), and protoFleet's nav is partial. Need config-driven reachability instead. Both routers already centralize it:
- `client/src/protoFleet/config/navItems.ts`
- `client/src/protoOS/components/Navigation/AppNavigationItems.tsx`
- Onboarding flows: linear, hardcoded `navigate()` calls.
Three-tier policy:
- Global (always reachable) — sidebar routes. Prefetch immediately after first paint. ~6 chunks protoFleet, ~5 protoOS.
- Section (one click away) — Settings tabs when entering `/settings/`; KPI tabs + Logs/Diagnostics when entering `/miners/:id/`. Triggered in section `Layout` `useEffect`.
- Sequence (next step only) — onboarding steps prefetch only the next step. Skips steps user may never hit.
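The tier policy could hang off a small registry derived from the nav configs above. A sketch under stated assumptions — the registry shape and `prefetchTier` helper are hypothetical, not existing code:

```ts
// Registry-driven prefetch. In the app the registry would be derived from
// navItems.ts / AppNavigationItems.tsx, with each `load` being the same
// () => import("...") function handed to React.lazy.
export type Tier = "global" | "section" | "sequence";
export type PrefetchEntry = { tier: Tier; load: () => Promise<unknown> };

export function prefetchTier(registry: PrefetchEntry[], tier: Tier): number {
  let fired = 0;
  for (const entry of registry) {
    if (entry.tier !== tier) continue;
    // Swallow failures: a failed prefetch just means the route lazy-loads normally later.
    entry.load().catch(() => {});
    fired += 1;
  }
  return fired;
}
```

The global tier fires once after first paint; a section `Layout` would call `prefetchTier(registry, "section")` from a mount-time `useEffect`.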
## Recommendation when we pick this up
- Start with option 1, global + section tiers. Simplest, no new deps, measurable win.
- Consider option 2 only if we start seeing contention with real user requests, or want `Save-Data` respect for mobile/metered connections.
- Skip route-aware prefetch entirely if total async-chunk size stays small enough that warming everything is acceptable (currently ~500 KB gz across all routes — borderline on mobile).
## Acceptance
- First navigation to every route on a warm session has no visible suspense fallback.
- First paint time unchanged vs. current lazy-only state.
- No regression in Lighthouse "unused JS" / "main thread work" metrics.
Refs #46, #57