Interfaces & UX · Emerging · New entry (March 2026)

Interesting and early. Worth a spike or exploration session.

Google Stitch

Free, fast UI generation from prompts and sketches. Useful for rapid prototyping but don't ship without a designer reviewing the output.

Multimodal · DevTool · Emerging

stitch.withgoogle.com

Our Take

What It Is

Google Stitch is an AI design tool from Google Labs that generates responsive UI designs from text prompts and image inputs. Upload a whiteboard sketch, wireframe, or screenshot and get a high-fidelity digital design back. It outputs Figma-compatible designs and production-ready HTML/CSS/React code. Standard mode uses Gemini 2.5 Flash (350 generations/month free), and Experimental mode uses Gemini 2.5 Pro (50 generations/month free). The December 2025 Stitch 2.0 release added UX prediction heatmaps and a VS Code extension.

Why It Matters

The sketch-to-UI pipeline is the standout feature. Multimodal input handling (photos of whiteboards, rough wireframes, existing screenshots) is meaningfully more useful than text-only competitors. The Figma integration matters because it slots into existing design team workflows rather than replacing them. And the price is hard to argue with: genuinely free, no credit card required, with enough monthly generations for real exploration. For rapid prototyping and design exploration, this removes significant friction.

Key Developments

  • Feb 2026: Enterprise rollout with SOC2 compliance, team collaboration features, and API endpoints.
  • Dec 2025: Stitch 2.0 launched with free React code generation, UX prediction heatmaps, and VS Code extension.
  • Dec 2025: Prototypes feature shipped for multi-screen designs with linked flows.
  • Oct 2025: Open-source Stitch Skills ecosystem for community-built React component converters.
  • May 2025: Launched at Google I/O 2025 as a Google Labs experiment.

What to Watch

Accessibility is the biggest gap. Generated designs regularly fail basic WCAG requirements: colour contrast, touch targets, and semantic HTML all need manual review. The visual sameness problem (everything has a Material Design-adjacent feel regardless of prompt specificity) limits its utility for teams needing distinct brand identity. Watch whether Stitch moves beyond Labs to a GA product (Google's track record with Labs experiments is mixed), and whether the accessibility tooling catches up to the generation quality.
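Those WCAG failures are cheap to catch before handoff. As a minimal sketch (not part of Stitch; just the standard WCAG 2.x formula), the contrast check for a generated colour pair looks like this:

```typescript
// WCAG 2.x contrast check for spot-reviewing generated colour pairs.
// Formula from the WCAG spec: relative luminance with sRGB linearization,
// then ratio (L_lighter + 0.05) / (L_darker + 0.05).

function srgbToLinear(channel: number): number {
  const s = channel / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const r = srgbToLinear((n >> 16) & 0xff);
  const g = srgbToLinear((n >> 8) & 0xff);
  const b = srgbToLinear(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal-size text.
const passesAA = (fg: string, bg: string): boolean =>
  contrastRatio(fg, bg) >= 4.5;
```

Black on white scores the maximum 21:1; the common AI-generated mid-grey `#777777` on white comes in just under 4.5:1 and fails AA, which is exactly the kind of near-miss a visual review tends to let through.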

Strengths

  • Zero cost for prototyping: 350 generations/month on Standard, 50 on Experimental. Genuinely free, no credit card required.
  • Sketch-to-UI pipeline: Upload whiteboard photos, wireframes, or screenshots and get high-fidelity digital designs. Standout multimodal input handling.
  • Figma integration: Standard mode designs paste directly into Figma for refinement and handoff. Slots into existing design workflows.
  • Code output quality: React and Tailwind CSS output from Stitch 2.0 is clean enough to use as a starting point. VS Code extension reduces context-switching.

Considerations

  • Accessibility gaps: Generated designs regularly fail basic WCAG requirements. Colour contrast, touch targets, and semantic HTML all need manual review.
  • Visual sameness: Outputs default to a narrow set of Material Design-adjacent layouts regardless of prompt specificity. Difficult to get distinct brand identity.
  • Complexity ceiling: Handles 2-3 screens well but struggles beyond that. Navigation elements drift between screens instead of staying consistent.
  • Google Labs risk: Still a Labs product, not GA. Google has a track record of sunsetting experiments. Don't build critical workflows around it without a fallback.

More in Interfaces & UX

Google Stitch · GitHub Spark · Video Generation · Vercel AI SDK