Twenty One Media
May 12, 2026

We Used AI to Generate Our Website's Hero Video

When we rebuilt the Twenty1 Media homepage, the hero needed to feel alive. A static gradient mesh looked fine in screenshots but flat in person. We wanted the kind of background you see on high-end AI company sites: subtle motion, dark atmosphere, nothing that fights the headline.

The options were a stock license, a motion designer, or generating it ourselves. We generated it.

The Tool: Higgsfield with Seedance 2.0

Higgsfield is a text-to-video platform with direct access to Seedance 2.0, one of the better models for atmospheric, non-photorealistic generation. We used the text-to-video mode with this prompt:

Volumetric blue and violet light drifting slowly through dark space. Subtle holographic geometry in the background. Seamless loop, cinematic, no text, no faces, no objects.

One generation, two minutes of waiting, eight seconds of output. The result: soft light that moves without calling attention to itself. It fits the dark gradient mesh we have on the hero and doesn't compete with the headline or the CTAs.

The total cost was a few credits on a Higgsfield account we already had. No stock license, no designer invoice, no revision rounds.

The Technical Side

The raw video came out as an MP4. We compressed it to around 5 MB, which is acceptable for a background asset that loads in parallel with the rest of the page.

The HeroVideo component handles four things that matter in production:

Autoplay blocked. Mobile browsers often block autoplay even with muted and playsInline set. The component shows a poster frame (a screenshot of the first frame) if the video never starts. The hero looks fine either way.

Reduced motion. Users with prefers-reduced-motion enabled shouldn't see video backgrounds. We put Tailwind's motion-reduce:hidden class on the video element, and a static gradient takes over.

Slow connections. The video fades in on the playing event, not on mount. If the video is buffering, the poster stays visible. The fade-in is an 800ms opacity transition (sketched below), so even on a fast connection it doesn't pop.

Readability. The video sits at the bottom of the z-stack: behind the gradient mesh, behind the cursor glow, behind the content. A vignette gradient and a bottom fade keep the headline readable regardless of which frame is visible.
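As a rough picture of that stacking order: the sketch below is illustrative only. GradientMesh, CursorGlow, and HeroContent are placeholder names for whatever those pieces are actually called, and the overlay classes are not our exact values.

{/* Illustrative stacking order; later siblings paint above earlier ones */}
<section className="relative isolate overflow-hidden">
  <HeroVideo />    {/* bottom layer: the video plus its poster fallback */}
  <GradientMesh /> {/* semi-transparent mesh over the video */}
  <CursorGlow />   {/* pointer-tracking glow */}
  {/* vignette keeps the center readable on any frame */}
  <div
    aria-hidden="true"
    className="pointer-events-none absolute inset-0 bg-[radial-gradient(ellipse_at_center,transparent_40%,rgba(2,6,23,0.7))]"
  />
  {/* bottom fade into the next section */}
  <div
    aria-hidden="true"
    className="pointer-events-none absolute inset-x-0 bottom-0 h-40 bg-gradient-to-t from-slate-950 to-transparent"
  />
  <div className="relative">
    {/* headline, copy, CTAs sit above everything */}
    <HeroContent />
  </div>
</section>

The video element itself looks like this: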

<video
  ref={videoRef}
  src="/videos/hero-tech.mp4"
  autoPlay
  loop
  muted
  playsInline
  poster="/images/hero-poster.jpg"
  className="motion-reduce:hidden ..."
/>
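The elided classes do two jobs: size the video to fill the hero and run the fade. An illustrative version (not our exact class list) might look like this, with playing being the state flag covered next:

// Illustrative only: one possible shape for the elided className.
const videoClassName = [
  "absolute inset-0 h-full w-full object-cover", // fill the hero, crop any overflow
  "transition-opacity duration-[800ms]",         // the 800ms fade described above
  playing ? "opacity-100" : "opacity-0",         // invisible until the playing event fires
  "motion-reduce:hidden",                        // reduced-motion users keep the static gradient
].join(" ");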

The opacity transition is controlled by a playing state that flips on the video's playing event:

videoRef.current.addEventListener("playing", () => setPlaying(true), {
  once: true,
});

One listener, fires once, cleans itself up.
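For context, here's roughly how that wiring can sit inside the HeroVideo component; a sketch, not our exact code:

// Inside HeroVideo (useRef/useState/useEffect imported from react)
const videoRef = useRef<HTMLVideoElement>(null);
const [playing, setPlaying] = useState(false);

useEffect(() => {
  const video = videoRef.current;
  if (!video) return;

  const onPlaying = () => setPlaying(true);
  video.addEventListener("playing", onPlaying, { once: true });

  // If playback already started before the effect ran, flip the flag immediately.
  if (!video.paused && video.readyState >= 3) setPlaying(true);

  // Unmount cleanup, for the case where the event never fires (e.g. blocked autoplay).
  return () => video.removeEventListener("playing", onPlaying);
}, []);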

Why This Approach

Stock video sites have thousands of "tech background" clips, but they're generic and often overused. A custom generation matches your exact aesthetic because you describe it. The prompt above took three tries to get right: the first two were too bright, too particle-heavy. The third one was quiet enough to disappear behind the content.

The workflow is fast enough to iterate. Generate, preview, reject, adjust the prompt, regenerate. The whole session took under 20 minutes. That's faster than sourcing and licensing a stock clip you're only half-satisfied with.

We use Higgsfield for this same reason when building client sites that need atmospheric motion: events pages, restaurant landing pages, professional service heroes. The asset is custom, the cost is low, and it takes less time than briefing a designer and waiting for delivery.

The hero video is the kind of detail that most visitors won't consciously notice. They'll just feel like the site looks more alive. That's the goal.