Developer Experience
Structured Output
A technique for constraining a language model's output to follow a specific format like JSON, XML, or a defined schema, ensuring the response can be reliably parsed by downstream code.
Why it matters
Getting reliable structured output from LLMs is essential for building production AI applications. Without it, you are left parsing free-form text with brittle regex — structured output makes AI responses machine-readable.
The problem
Language models generate free-form text by default. If your application needs to extract a product name, price, and rating from a review, you need the model to return those fields in a predictable format — not buried in a paragraph of prose.
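To make the goal concrete, here is a minimal sketch of what a structured response to that review-extraction task could look like. The field names and the sample JSON string are illustrative, not from any particular provider; the point is that a single `json.loads` replaces prose parsing.

```python
import json
from dataclasses import dataclass

# Hypothetical target shape for the review-extraction example.
@dataclass
class ReviewExtract:
    product_name: str
    price: float
    rating: float

# A structured response is one JSON object rather than a paragraph,
# so parsing it is a single call instead of regex over prose.
raw = '{"product_name": "Acme Kettle", "price": 34.99, "rating": 4.5}'
extract = ReviewExtract(**json.loads(raw))
print(extract.rating)  # 4.5
```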
Approaches
- Prompt-based — ask the model to respond in JSON and hope it complies. This often works, but compliance is not guaranteed.
- Schema-constrained — use API features (like OpenAI's response_format or Anthropic's tool use) that force the model to produce valid JSON matching a specific schema. Much more reliable.
- Tool use / Function calling — define functions with typed parameters. The model "calls" these functions by outputting structured arguments rather than free text.
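The tool-use approach above can be sketched roughly as follows. The tool definition mirrors the JSON Schema style most provider APIs use, but the names (`get_weather`, the argument fields) and the simulated model output are assumptions for illustration, not a real provider's wire format.

```python
import json

# Hypothetical tool definition in the JSON Schema style providers commonly use.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# Instead of free text, the model emits the function name plus JSON arguments
# (simulated here as a raw string standing in for the API response).
model_output = '{"name": "get_weather", "arguments": {"city": "Oslo", "unit": "celsius"}}'

call = json.loads(model_output)
assert call["name"] == get_weather_tool["name"]
args = call["arguments"]  # predictable, typed fields; no prose to parse
print(args["city"])  # Oslo
```

Your application then dispatches on the function name and runs the real lookup with those arguments; the model never executes anything itself.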
Best practices
- Always validate the output against your schema, even with constrained generation.
- Provide the schema in the prompt so the model understands the expected structure.
- Use few-shot examples showing the exact output format you want.
- Keep schemas simple — deeply nested structures increase error rates.
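The first practice, validating even constrained output, can be as simple as the stdlib-only sketch below. The function name and the required-field set are illustrative; in practice a schema library such as pydantic or jsonschema would do this checking.

```python
import json

def validate_review(payload: str) -> dict:
    """Minimal validation sketch: parse, then check required keys, types, and ranges."""
    data = json.loads(payload)  # malformed JSON raises a ValueError subclass here
    required = {"product_name": str, "price": (int, float), "rating": (int, float)}
    for key, typ in required.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], typ):
            raise ValueError(f"wrong type for {key}")
    if not 0 <= data["rating"] <= 5:
        raise ValueError("rating out of range")
    return data

ok = validate_review('{"product_name": "Acme Kettle", "price": 34.99, "rating": 4.5}')
print(ok["price"])  # 34.99
```

Even when the API guarantees syntactically valid JSON, checks like the rating range above catch semantically impossible values the schema alone cannot express.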
Related Terms
Prompt Engineering — The practice of designing and refining inputs to language models to elicit more accurate, useful, and consistent outputs.
Function Calling — The mechanism that allows LLMs to interact with external tools and APIs by outputting structured data — typically JSON — specifying which function to invoke and with what parameters.