LLM-Friendly Web
What Your Site Looks Like to an LLM — and How to Control It
Side-by-side comparison of how five AI systems parse the same HTML page. Seven concrete markup changes that control how LLMs cite, summarize, and retrieve your content.
Read article →01 Core Prompting Techniques
Each technique below targets a different failure mode. Choose based on what's going wrong — not what's popular.
| Technique | Best For | Expected Benefit | Complexity |
|---|---|---|---|
| Zero-shot | Simple, well-defined tasks | Fast; no examples needed | Low |
| Few-shot | Tasks with a specific output format | Consistent structure and tone | Low |
| Chain-of-thought | Multi-step reasoning, math, logic | Reduces errors on complex tasks | Medium |
| Role prompting | Domain-specific writing or analysis | Shifts vocabulary and perspective | Low |
| Self-consistency | High-stakes decisions | Majority-vote accuracy boost | Medium |
| Structured output | Downstream parsing (JSON, CSV) | Machine-readable responses | Medium |
Direct instruction with no examples. The starting point for all tasks.
Provide 2–5 examples before the real task. Dramatically improves format consistency.
Ask the model to reason step-by-step before giving an answer.
Specify JSON, XML, or CSV schema explicitly. Use with tool-calling APIs.
02 Five Rules That Always Apply
1. Lead with the task, not the context
State what you want in the first sentence. Models weight the beginning and end of a prompt more heavily than the middle — burying the task in background context is the most common prompting mistake.
2. Specify the output format explicitly
If you need JSON, say Respond in valid JSON with keys: name, summary, tags. If you need a table, say Format as a markdown table with columns: Feature, Price, Limit. Never assume the model will infer the right format.
3. Add "think step by step" for reasoning tasks
Chain-of-thought prompting — asking the model to reason before answering — measurably improves accuracy on arithmetic, logic, and multi-step tasks. The phrase Let's think step by step is a well-documented trigger across all major models.
4. Use delimiters to separate instruction from content
Wrap user-supplied content in triple backticks or XML-style tags to prevent prompt injection and clarify structural boundaries. Example:
Summarize the following article in three sentences:
```
{article_text}
```
5. Iterate — don't over-engineer the first draft
Start with the simplest prompt that could work. Add constraints only when the output fails a specific test. Over-engineered prompts are harder to debug and often perform worse than simple, clear ones.
03 Which Model Should You Use?
Model choice matters as much as prompt design. Different models respond differently to the same prompt — especially for instruction following, reasoning depth, and structured output reliability. See the model comparison page for a structured breakdown of context windows, strengths, and cost trade-offs across eight major LLMs.
For most production tasks: use the largest model you can afford for prototyping, then downgrade once the prompt is stable. Smaller, faster models often match larger ones when the prompt is well-crafted.
04 About This Site's Design
This site is itself an example of LLM-friendly web design. Every page includes:
- A
<meta name="description">that leads with the answer (≤160 chars) - JSON-LD structured data with the correct
@typefor each page - Full OpenGraph tags including
og:image - Machine-readable dates via
<time datetime="..."> - Structured data in
<table>elements — not prose lists - A llms.txt file describing the site for AI consumers
- A sitemap.xml with accurate
<lastmod>dates
View the source on GitHub or read the about page for the full rationale.
Last updated: