Wiring PostHog Into Next.js App Router Without Losing Pageviews
We added PostHog to twenty1-media.com after a pipeline incident where leads were silently dropping into a dead n8n webhook. We needed a second signal on form submissions so we'd catch write failures before they flatlined for a week. We also wanted real pageview data, which we'd never had.
Getting PostHog into a Next.js App Router project isn't plug-and-play. Here's what we ran into.
The Pageview Problem
App Router renders pages on the server but navigates like a SPA. When a visitor moves from / to /ai, the browser doesn't reload; React swaps the route on the client. PostHog's default capture_pageview: true fires a single $pageview when the SDK initializes, then goes quiet. Every subsequent navigation is invisible.
You have to turn off automatic pageview capture and track route changes manually:
```ts
import posthog from "posthog-js";

posthog.init(key, {
  api_host: "https://us.i.posthog.com",
  capture_pageview: false,
  capture_pageleave: true,
  person_profiles: "identified_only",
});
```
Then you write a component that listens to usePathname and useSearchParams and fires the event on each route change:
```tsx
import { useEffect } from "react";
import { usePathname, useSearchParams } from "next/navigation";
import posthog from "posthog-js";

function PageviewTracker() {
  const pathname = usePathname();
  const searchParams = useSearchParams();

  useEffect(() => {
    if (!posthog.__loaded) return;
    let url = window.origin + pathname;
    const qs = searchParams?.toString();
    if (qs) url += `?${qs}`;
    posthog.capture("$pageview", { $current_url: url });
  }, [pathname, searchParams]);

  return null;
}
```
Clean pageview data that matches actual navigation. Not just the first load.
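The URL assembly inside that effect is easy to get subtly wrong (a dangling `?`, a doubled query string). Factored out as a pure function — a hypothetical helper, not part of the original tracker — the logic is testable in isolation:

```typescript
// Hypothetical helper mirroring the tracker's URL assembly:
// origin + pathname, plus a query string only when one is non-empty.
export function buildPageviewUrl(
  origin: string,
  pathname: string,
  searchParams?: { toString(): string } | null
): string {
  let url = origin + pathname;
  const qs = searchParams?.toString();
  if (qs) url += `?${qs}`;
  return url;
}
```

The effect body then reduces to a single capture call with `buildPageviewUrl(window.origin, pathname, searchParams)` as `$current_url`.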
The Suspense Requirement
useSearchParams() inside App Router requires a Suspense boundary. Without it, Next.js throws during static generation and the build fails. The fix is to wrap the tracker component:
```tsx
import { Suspense } from "react";
import posthog from "posthog-js";
import { PostHogProvider as PHProvider } from "posthog-js/react";

export function PostHogProvider({ children }: { children: React.ReactNode }) {
  return (
    <PHProvider client={posthog}>
      <Suspense fallback={null}>
        <PageviewTracker />
      </Suspense>
      {children}
    </PHProvider>
  );
}
```
The error message you get without this isn't obvious: the build complains that useSearchParams() needs a Suspense boundary on the page, not that your provider is misconfigured. Worth knowing the pattern before you hit the wall.
Server Events via a Singleton
Client-side PostHog handles pageviews and the lead_submitted event, which fires when a form submission resolves successfully. We also wanted server confirmation, because a client event fires when the fetch resolves, while a server event fires when the data actually lands in Airtable. Two different moments, two different failure modes.
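To make the client-side moment concrete, here's a sketch (the route path, helper name, and payload shape are illustrative; `capture` and `doFetch` are injected so the function isn't bound to a live posthog-js instance):

```typescript
// Sketch of the client-side half. lead_submitted fires only after the fetch
// resolves OK -- it says nothing about whether the server's Airtable write landed.
async function submitLead(
  payload: Record<string, string>,
  capture: (event: string, props?: Record<string, unknown>) => void,
  doFetch: (url: string, init?: object) => Promise<{ ok: boolean }>
): Promise<boolean> {
  const res = await doFetch("/api/lead", { // illustrative route
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) return false;
  capture("lead_submitted"); // client signal: the request round-tripped
  return true;
}
```

In the real form handler, `capture` is `posthog.capture` and `doFetch` is the browser's `fetch`.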
For the server side, posthog-node needs a singleton. Next.js API routes share the same Node process across requests, so constructing a new client per request leaks connections:
```ts
import { PostHog } from "posthog-node";

let _client: PostHog | null = null;

export function getPostHogClient(): PostHog | null {
  const key = process.env.NEXT_PUBLIC_POSTHOG_KEY;
  if (!key) return null; // analytics stays optional: no key, no client
  if (!_client) {
    _client = new PostHog(key, { host: "https://us.i.posthog.com" });
  }
  return _client;
}
```
Each API route calls this after a successful Airtable write and fires a named event: lead_captured, audit_request_processed, and so on. The distinctId comes from a request header passed up by the client, so the server event attaches to the same person in PostHog as the client-side events.
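The shape of that capture step, sketched with an injected client so it isn't tied to a live posthog-node instance (the header name, fallback id, and helper name are illustrative, not our exact code):

```typescript
// Minimal slice of posthog-node's client that the routes depend on.
interface CaptureClient {
  capture(msg: {
    distinctId: string;
    event: string;
    properties?: Record<string, unknown>;
  }): void;
}

// Illustrative header name -- the real one is whatever the client attaches.
const DISTINCT_ID_HEADER = "x-posthog-distinct-id";

// Called only after the Airtable write succeeds. A null client (missing key)
// means analytics is skipped, never that the write path fails.
export function captureServerEvent(
  client: CaptureClient | null,
  getHeader: (name: string) => string | null,
  event: string,
  properties: Record<string, unknown> = {}
): boolean {
  if (!client) return false;
  const distinctId = getHeader(DISTINCT_ID_HEADER) ?? "server-anonymous";
  client.capture({ distinctId, event, properties });
  return true;
}
```

In a route handler this becomes something like `captureServerEvent(getPostHogClient(), (n) => req.headers.get(n), "lead_captured")`, after the write resolves.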
What We're Watching
Two signals per submission: client fires on form success, server fires on confirmed write. If those counts diverge, something in the write path is broken. We catch it before a week passes with no new leads in Airtable.
Pageview data is a side benefit. The real value is using two independent event sources as a cross-check on pipeline health.