✍️ Prompt Engineering · Lesson 4.1

Core Prompting Techniques

Clarity, role prompting, few-shot, chain-of-thought, and XML structure.

25 min

Learning Objectives

  • Write clear, specific prompts that minimize ambiguity
  • Use role prompting and few-shot examples effectively
  • Structure prompts with XML tags for organization

Core Prompting Techniques

Prompt engineering is the practice of designing inputs to Claude that reliably produce the desired outputs. While it may sound informal, prompt engineering is the single most impactful lever available to architects building on the Anthropic API. The difference between a mediocre prompt and a well-engineered one can be the difference between a prototype that occasionally works and a production system that consistently delivers. This lesson covers the foundational techniques every Claude Certified Architect must master.

Clarity and Specificity

The most common prompting mistake is ambiguity. Claude is remarkably capable, but it cannot read your mind. A vague instruction like “summarize this” gives the model enormous latitude — it must guess the desired length, tone, audience, and level of detail. A specific instruction removes guesswork and increases consistency.

Principles of Clear Prompts

  • State the task explicitly: Open with a direct statement of what you want Claude to do. Avoid burying the actual instruction in the middle of a long paragraph of context.
  • Define the output format: If you need bullet points, say so. If you need exactly three paragraphs, say so. If you need JSON, specify the schema.
  • Specify constraints: Maximum length, required fields, topics to include or exclude, language, tone, and audience.
  • Provide context about purpose: Telling Claude why you need the output often improves quality. “This will be shown to a non-technical executive” produces different output than “This will be reviewed by a senior engineer.”

import anthropic

client = anthropic.Anthropic()

# Bad: Vague prompt
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this article."}]
)

# Good: Specific prompt with clear constraints
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": (
        "Summarize the following article in exactly 3 bullet points. "
        "Each bullet should be one sentence, written for a non-technical "
        "executive audience. Focus on business impact, not technical details.\n\n"
        "<article>\n{article_text}\n</article>"
    )}]
)

Role Prompting (System Prompts)

Role prompting uses the system parameter to establish Claude's persona, expertise, and behavioral constraints before the conversation begins. The system prompt is not just flavor text — it fundamentally shapes how Claude interprets every subsequent message.

When to Use Role Prompting

  • Domain expertise: When Claude needs to respond as a specialist (e.g., a medical professional, a legal analyst, a financial advisor).
  • Behavioral constraints: When Claude must follow specific rules throughout the conversation (e.g., always respond in JSON, never discuss competitor products, always cite sources).
  • Tone and style: When the output must match a brand voice or communication standard.
  • Safety guardrails: When Claude must refuse certain categories of requests or always include disclaimers.

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    system=(
        "You are a senior security auditor reviewing code for vulnerabilities. "
        "You have 15 years of experience in application security. "
        "For every issue you find, provide:\n"
        "1. The vulnerability type (e.g., SQL Injection, XSS)\n"
        "2. The severity (Critical, High, Medium, Low)\n"
        "3. The affected line(s) of code\n"
        "4. A concrete remediation with a code example\n\n"
        "If you find no vulnerabilities, explicitly state that the code "
        "passed your review and explain what you checked."
    ),
    messages=[{"role": "user", "content": (
        "Review the following Python function for security issues:\n\n"
        "<code>\n"
        "def get_user(user_id):\n"
        "    query = f\"SELECT * FROM users WHERE id = {user_id}\"\n"
        "    return db.execute(query)\n"
        "</code>"
    )}]
)

Exam Tip: The CCA-F exam tests whether you understand the distinction between system prompts and user messages. System prompts set persistent behavior and identity. User messages provide the specific task. Placing task-specific instructions in the system prompt wastes tokens on every turn of a multi-turn conversation.

Few-Shot Prompting

Few-shot prompting provides Claude with concrete examples of the desired input-output mapping. Instead of describing what you want in abstract terms, you show Claude what correct output looks like. This is one of the most effective techniques for tasks where the output format or reasoning style is non-obvious.

How Few-Shot Works

You include one or more example pairs in the prompt — each consisting of an input and the ideal output for that input. Claude uses pattern matching to generalize from these examples and apply the same logic to the actual input.

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": (
            "Classify the sentiment of the following customer review.\n\n"
            "Example 1:\n"
            "Review: \"The product arrived on time and works perfectly.\"\n"
            "Sentiment: POSITIVE\n\n"
            "Example 2:\n"
            "Review: \"Broke after two days. Complete waste of money.\"\n"
            "Sentiment: NEGATIVE\n\n"
            "Example 3:\n"
            "Review: \"It's okay. Nothing special but it gets the job done.\"\n"
            "Sentiment: NEUTRAL\n\n"
            "Now classify this review:\n"
            "Review: \"Shipping was slow but the quality exceeded my expectations.\"\n"
            "Sentiment:"
        )}
    ]
)

Best Practices for Few-Shot Examples

  • Include edge cases: Do not just show the easy examples. Include examples that demonstrate how to handle ambiguous or tricky inputs.
  • Use diverse examples: Cover the range of expected inputs. If you are classifying into five categories, show at least one example per category.
  • Order matters: The last example before the actual input has the strongest influence on the output, so place your most representative example closest to the query.
  • Keep examples realistic: Synthetic examples that do not match real-world data can mislead Claude.
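
In production, few-shot examples usually live in application data rather than hard-coded strings, so the prompt is assembled programmatically. A minimal sketch of this pattern (build_few_shot_prompt is an illustrative helper, not part of the Anthropic SDK):

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot classification prompt from (review, label) pairs."""
    parts = [task, ""]
    for i, (review, label) in enumerate(examples, start=1):
        parts.append(f"Example {i}:")
        parts.append(f'Review: "{review}"')
        parts.append(f"Sentiment: {label}")
        parts.append("")
    # The actual query goes last, mirroring the example format exactly.
    parts.append("Now classify this review:")
    parts.append(f'Review: "{query}"')
    parts.append("Sentiment:")
    return "\n".join(parts)

examples = [
    ("The product arrived on time and works perfectly.", "POSITIVE"),
    ("Broke after two days. Complete waste of money.", "NEGATIVE"),
    ("It's okay. Nothing special but it gets the job done.", "NEUTRAL"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of the following customer review.",
    examples,
    "Shipping was slow but the quality exceeded my expectations.",
)
```

The assembled string can then be passed as the user-message content, exactly as in the hand-written example above. Keeping examples in a data structure also makes it easy to swap in edge cases or per-category examples without rewriting the prompt.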

Chain-of-Thought Prompting

Chain-of-thought (CoT) prompting asks Claude to show its reasoning step by step before arriving at a final answer. This technique dramatically improves accuracy on tasks that require multi-step reasoning: math problems, logical deduction, complex classification, and planning tasks.

Explicit CoT

The simplest form of CoT is adding a phrase like “Think step by step” or “Show your reasoning before giving a final answer.” For production systems, a more structured approach is preferable.

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    messages=[{"role": "user", "content": (
        "A customer wants to return a laptop purchased 45 days ago. "
        "Our return policy allows returns within 30 days for a full refund, "
        "31-60 days for store credit, and no returns after 60 days. "
        "The laptop has a cracked screen.\n\n"
        "Determine the correct return action. Use this process:\n"
        "<steps>\n"
        "1. Identify the number of days since purchase\n"
        "2. Determine which policy tier applies\n"
        "3. Check for any damage-related exceptions\n"
        "4. State the final decision with justification\n"
        "</steps>"
    )}]
)

Exam Tip: Chain-of-thought prompting is distinct from Extended Thinking (covered in Lesson 4.4). CoT is a prompting technique where reasoning appears in the visible output. Extended Thinking uses a dedicated thinking block with a separate token budget. The exam may test whether you know when to use each approach.

XML Tag Formatting

Anthropic specifically recommends using XML tags to structure prompts. XML tags provide unambiguous delimiters that separate instructions from data, mark different sections of a prompt, and make it easy for Claude to parse complex inputs. This is a hallmark of professional prompt engineering with Claude.

Common XML Tag Patterns

  • Data delimiters: <document>, <article>, <code> — wrap input data to separate it from instructions.
  • Instruction sections: <instructions>, <rules>, <constraints> — group behavioral directives.
  • Output format: <output_format>, <example_output> — show Claude the expected structure.
  • Few-shot examples: <example>, <input>, <output> — wrap example pairs.

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    system=(
        "You are a technical documentation writer. Follow the rules exactly."
    ),
    messages=[{"role": "user", "content": (
        "<instructions>\n"
        "Convert the following raw API response into a user-friendly "
        "documentation entry. Include a description, parameters table, "
        "and a usage example.\n"
        "</instructions>\n\n"
        "<rules>\n"
        "- Use Markdown formatting\n"
        "- Keep the description under 50 words\n"
        "- Include type information for all parameters\n"
        "- The usage example must be in Python\n"
        "</rules>\n\n"
        "<api_response>\n"
        "{raw_api_json}\n"
        "</api_response>\n\n"
        "<output_format>\n"
        "## [Endpoint Name]\n"
        "[Description]\n\n"
        "### Parameters\n"
        "| Name | Type | Required | Description |\n"
        "| --- | --- | --- | --- |\n\n"
        "### Example\n"
        "```python\n"
        "[code]\n"
        "```\n"
        "</output_format>"
    )}]
)

Why XML Tags Work Well with Claude

Claude was trained with XML-tagged data and responds particularly well to this structure. Key advantages include:

  • Unambiguous boundaries: Unlike Markdown headers or dashes, XML tags have clear open/close semantics that eliminate parsing ambiguity.
  • Nestable structure: XML naturally supports hierarchical data, allowing you to nest examples inside instruction blocks.
  • Easy extraction: When Claude produces output wrapped in XML tags, your code can reliably parse it with simple string operations or XML parsers.

Key Takeaway: The five core prompting techniques — clarity, role prompting, few-shot examples, chain-of-thought, and XML tag formatting — are not mutually exclusive. Production prompts typically combine multiple techniques. A well-engineered prompt might use a system prompt for role definition, XML tags for structure, few-shot examples for format demonstration, and chain-of-thought for reasoning quality. The art is knowing which techniques to combine for a given task.

Prompt Structure Best Practices

Recommended Prompt Ordering

Anthropic recommends a specific ordering of prompt components for best results:

  1. System prompt: Role, persona, and global behavioral constraints.
  2. Context / data: The documents, code, or data Claude needs to work with, wrapped in XML tags.
  3. Task instructions: What Claude should do with the data.
  4. Output format specification: How the result should be structured.
  5. Examples (if using few-shot): Concrete demonstrations.

Placing long context (documents, code) before the instructions takes advantage of Claude's attention patterns. Claude tends to pay strong attention to the beginning and end of the prompt, so placing instructions after context ensures they are near the end where attention is highest.
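
This ordering can be enforced with a small assembly helper so the user message is always built context-first. A minimal sketch (assemble_prompt and its parameter names are illustrative, not an SDK API):

```python
def assemble_prompt(context_xml: str, instructions: str,
                    output_format: str, examples: str = "") -> str:
    """Build a user message in the recommended order:
    context/data first, then task instructions, then output format,
    then optional few-shot examples."""
    sections = [context_xml, instructions, output_format]
    if examples:
        sections.append(examples)
    return "\n\n".join(sections)

user_message = assemble_prompt(
    "<document>\n{report_text}\n</document>",
    "<instructions>Summarize the document in three bullet points.</instructions>",
    "<output_format>- [point 1]\n- [point 2]\n- [point 3]</output_format>",
)
```

The system prompt (component 1) goes in the system parameter of messages.create rather than in the assembled string; everything else lands in the user message with the long context at the top and the instructions near the end, where attention is highest.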

Exam Tip: The exam may present a poorly structured prompt and ask you to identify what is wrong. Common anti-patterns include: burying instructions inside data, mixing format specifications with task instructions, failing to delimit data boundaries, and using ambiguous separators like dashes or blank lines instead of XML tags.