
2026-05-17 13:15:46

Instant Navigation in GitHub Issues: A Client-First Performance Overhaul

GitHub Issues achieves instant navigation by shifting to client-side caching with IndexedDB, a preheating strategy, and a service worker, reducing perceived latency by 70-80%.

From Latency to Flow: Rethinking GitHub Issues Performance

Every developer knows the frustration of a context switch caused by slow page loads. When you’re triaging a backlog—opening an issue, following a linked thread, then returning to the list—even a half-second delay breaks concentration. These small pauses, repeated throughout the day, add up to significant productivity loss. GitHub Issues wasn’t “slow” in isolation, but the architecture forced users to repeatedly pay the cost of redundant server requests, destroying flow at critical moments.

Source: github.blog

To solve this, we didn’t chase incremental backend gains. Instead, we reimagined how issue pages load from end to end, shifting work to the client and prioritizing perceived latency. The goal: render content instantly from locally available data, then refresh in the background. This led to three key components: a client-side caching layer powered by IndexedDB, a preheating strategy to maximize cache hits without overloading the network, and a service worker that makes cached data usable even during hard navigations.

The Speed of Thought: Web Performance in 2026

In 2026, “fast enough” is no longer a competitive bar—especially for developer tools. Latency directly impacts product quality. When someone is triaging multiple issues, reviewing a feature request, or reporting a bug, every avoidable wait shatters their flow. Modern local-first applications and aggressively optimized clients set a new standard: not “loads in a second,” but “feels instant.”

Users today benchmark GitHub Issues not against other web apps, but against the fastest experience they encounter daily. GitHub Issues serves millions weekly, forming the backbone of codebase management for teams worldwide. As it becomes the planning layer for AI-assisted work, perceived performance becomes even more critical—if the loop between intent and feedback is slow, the whole system feels sluggish.

We heard consistent feedback from internal teams and the community: Issues felt too heavy compared to tools built with speed as a first principle. The bottleneck wasn’t feature depth or correctness—it was the request lifecycle. Common navigation paths still triggered full server renders, network fetches, and client boot processes, even when the user only wanted to look at the next issue.

Rethinking the Architecture for Instant Navigation

Our approach was to treat client-side caching as the foundation, not an afterthought. By storing issue data locally in the browser, we could serve pages instantly on repeat visits, then revalidate in the background to keep content fresh. This reduces perceived latency to near zero for the most common actions.

Client-Side Caching with IndexedDB

We built a caching layer using IndexedDB, a browser-based database that persists across sessions. When a user opens an issue, the data is stored with a timestamp. On subsequent navigations to that same issue (or a related one), the page renders immediately from the cache. A background revalidation checks for updates, updating the cache silently if needed. This eliminated the need for redundant server fetches, cutting the time between click and content display dramatically.
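The cache-then-revalidate flow can be sketched as follows. This is an illustrative model, not GitHub's implementation: a `Map` stands in for the IndexedDB object store, `fetchIssue` is a placeholder network call, and the entry shape (`{ data, storedAt }`) is an assumption.

```javascript
const issueStore = new Map(); // stand-in for an IndexedDB object store

async function fetchIssue(id) {
  // placeholder network fetch; a real client would call the issues API
  return { id, title: `Issue #${id}` };
}

async function loadIssue(id, render) {
  const cached = issueStore.get(id);
  if (cached) {
    render(cached.data); // paint instantly from locally available data
  }
  // revalidate in the background and refresh the cache silently
  const fresh = await fetchIssue(id);
  issueStore.set(id, { data: fresh, storedAt: Date.now() });
  if (!cached) {
    render(fresh); // first visit: render once the network responds
  }
  return fresh;
}
```

On a repeat visit, `render` fires synchronously from the cached entry, so the time between click and content display no longer depends on the network round trip.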

Preheating Strategy

A cache is only effective if it has high hit rates. We implemented a preheating mechanism that anticipates likely navigations based on user behavior—for example, when viewing a list of issues, we proactively fetch and cache the first few items. This ensures that when a user taps on an issue, it’s often already in the local store, achieving instant loads without spamming the server with unnecessary requests.
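A minimal sketch of that list-driven preheating, under assumed names: when an issue list renders, warm the cache for the first few items that are not already stored. `PREHEAT_COUNT`, the `cache` map, and `fetchIssue` are illustrative, not part of the actual GitHub client.

```javascript
const PREHEAT_COUNT = 5;
const cache = new Map(); // stand-in for the persistent issue cache

async function fetchIssue(id) {
  return { id, title: `Issue #${id}` }; // placeholder network call
}

async function preheat(visibleIds) {
  const targets = visibleIds
    .filter((id) => !cache.has(id)) // skip anything already warm
    .slice(0, PREHEAT_COUNT);       // cap background requests
  await Promise.all(
    targets.map(async (id) => {
      cache.set(id, { data: await fetchIssue(id), storedAt: Date.now() });
    })
  );
  return targets; // ids actually fetched, useful for instrumentation
}
```

Filtering out warm entries before capping the batch is what keeps preheating from spamming the server: repeated renders of the same list cost nothing once the cache is populated.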

Service Worker for Hard Navigations

Traditional hard navigations (e.g., typing a URL directly or pressing refresh) bypass client-side caches, forcing a new fetch. To solve this, we introduced a service worker that intercepts network requests for issue data. The service worker can serve cached responses even on fresh page loads, ensuring that users who refresh or navigate via browser controls still experience low latency. It also handles offline scenarios gracefully, though that wasn’t the primary goal.
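The core decision a fetch handler like this makes can be modeled as a small pure function. This is a hedged sketch of the cache-versus-network choice only; the names and the `maxAgeMs` freshness window are assumptions, and a real worker would wire the result into `self.addEventListener("fetch", ...)` and the Cache API.

```javascript
// Decide how to answer a request for issue data given what the worker
// has cached. Serving from cache keeps hard navigations fast; the
// revalidate flag tells the worker to refresh stale entries in the background.
function planResponse(cachedEntry, now, maxAgeMs = 60_000) {
  if (!cachedEntry) {
    return { serve: "network", revalidate: false }; // cold cache: normal fetch
  }
  const stale = now - cachedEntry.storedAt > maxAgeMs;
  return { serve: "cache", revalidate: stale }; // instant paint, refresh if old
}
```

Keeping the decision pure makes it easy to unit test outside a worker context, which matters because service worker bugs (like serving stale data forever) are notoriously hard to debug in production.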


Real-World Results and Tradeoffs

What Improved

The primary metric we optimized was First Contentful Paint (FCP) for issue views, especially those reached from a list. In real-world usage, we saw a 70-80% reduction in perceived load time for repeat visits to issues. Cached pages appeared in under 100 milliseconds, compared to the previous 300-500ms. Background revalidation added a few hundred milliseconds, but since the content was already displayed, users didn’t notice the delay.

Tradeoffs and Challenges

This approach isn’t free. IndexedDB operations have overhead, and storing large amounts of data can impact device storage. We set limits on cache size and implemented eviction policies for older issues. Preheating increases background network usage, which could affect mobile users on slow connections—so we throttle preheating based on network quality. The service worker adds complexity; we had to carefully manage cache invalidation to prevent stale data from persisting too long.
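A size-capped eviction policy of the kind described above might look like the following sketch. `MAX_ENTRIES` and the oldest-first eviction rule are illustrative assumptions; a production policy would likely also weigh recency of access and entry size.

```javascript
const MAX_ENTRIES = 3; // deliberately tiny for illustration

// Insert an entry, then evict the oldest entries until the cap holds.
function putWithEviction(store, id, entry) {
  store.set(id, entry);
  while (store.size > MAX_ENTRIES) {
    let oldestId;
    let oldestAt = Infinity;
    for (const [k, v] of store) {
      if (v.storedAt < oldestAt) {
        oldestAt = v.storedAt;
        oldestId = k;
      }
    }
    store.delete(oldestId); // drop the stalest issue first
  }
  return store;
}
```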

Additionally, initial visits (first-time user, clear cache) still incur the full server load. We’re working on strategies like server-side rendering with streaming to improve that path as well. The tradeoff is acceptable for a product where most users interact with the same issues repeatedly.

Building for the Future

Our goal is to make “fast” the default across every entry point into Issues—whether from a link in an email, a notification, or a direct URL. We’re exploring partial prerendering for lists and using browser idle time to push updates. The service worker will eventually handle push notifications for real-time changes.

These improvements also pave the way for offline-first features, where users can browse and triage issues without an internet connection, syncing changes later. While not currently enabled, the architecture supports it.

Transferable Patterns for Web Developers

If you’re building a data-heavy web app, these patterns are directly transferable. Start by identifying common navigation paths where users revisit the same data. Implement a client-side cache (IndexedDB works well; localStorage or the Cache API are alternatives). Add a preheating layer that predicts user intent without over-fetching. Deploy a service worker to serve cached responses on hard navigations and to background-sync updates.

You don’t need a full rewrite. Even incremental gains—caching a few common items—can dramatically improve perceived performance. The key is to focus on user experience metrics (like time to display content) rather than raw server response times. By reducing perceived latency, you keep users in flow, which is the ultimate goal.