Introduction to Agentic Systems
What makes a system 'agentic' and when to use agents vs. simple prompts.
Learning Objectives
- Define what makes a system agentic
- Distinguish between workflows and agents
- Identify when agentic complexity is warranted
Agentic AI systems represent a fundamental shift in how we deploy large language models. Rather than treating a model as a one-shot question-answering engine, agentic architectures allow models to take sequences of actions, make decisions over time, and interact with the world through tools — all in pursuit of a longer-horizon goal. Understanding what makes a system truly "agentic," and when that complexity is actually warranted, is the foundational knowledge tested in Domain 1 of the Claude Certified Architect exam.
What Makes a System "Agentic"?
A system is considered agentic when it exhibits three core properties:
- Autonomous decision-making: The model determines its own next action rather than simply responding to a fixed instruction. It can choose which tool to call, whether to ask a clarifying question, or whether the task is complete.
- Tool use: The model can interact with external systems — searching the web, reading files, calling APIs, executing code, or writing to databases — in order to gather information or produce effects in the world.
- Iterative execution: The model runs in a loop, using the results of previous actions to inform future ones. Each iteration narrows the gap between the current state and the desired goal.
Agentic systems are also frequently described as having planning capability — the ability to decompose a complex goal into sub-tasks and reason about the order in which those sub-tasks should be completed. Anthropic's documentation frames this simply: agents are systems "where LLMs dynamically direct their own processes and tool usage to accomplish longer-horizon tasks."
The Spectrum: Prompts → Workflows → Agents
Not every AI application needs to be an agent. Anthropic describes a spectrum of complexity, and the right position on that spectrum depends on the task at hand.
Simple Prompts
A single API call with a well-crafted prompt. The model receives input, generates output, and the interaction is complete. No tools, no loops, no state. This is the correct choice for the vast majority of NLP tasks: classification, summarization, translation, drafting, and extraction.
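For contrast, a simple prompt boils down to one request object. The sketch below builds that request in plain Python (the model name and prompt wording are illustrative, not mandated by the exam material); sending it with client.messages.create(**request) is the entire interaction.

```python
# A single, stateless call: input in, answer out.
# No tools, no loops, no state. The model name is illustrative.

def build_classification_request(review: str) -> dict:
    return {
        "model": "claude-opus-4-5",
        "max_tokens": 16,
        "messages": [{
            "role": "user",
            "content": (
                "Classify the sentiment of this review as positive, "
                f"negative, or neutral. Reply with one word.\n\n{review}"
            ),
        }],
    }

# Sending it is one API call, and then the interaction is complete:
#   client = anthropic.Anthropic()
#   response = client.messages.create(**build_classification_request(review))
request = build_classification_request("The battery life is superb.")
print(request["messages"][0]["content"])
```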
Workflows (Orchestrated Pipelines)
Multiple LLM calls chained together, often with deterministic logic between them. The control flow — what happens next — is defined by the developer, not by the model. Patterns include prompt chaining, routing, and parallelization (covered in lessons 1.2 and 1.3). Workflows are predictable, testable, and easier to debug than fully autonomous agents.
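As an illustration, a two-step prompt chain might look like the sketch below. Here call_model is a hypothetical stand-in for a single LLM API call; the point is that the developer hard-codes the sequencing and the gate between steps, so the model never chooses what happens next.

```python
# A two-step prompt chain: the developer, not the model, fixes the control flow.
# call_model is a hypothetical placeholder for one LLM API call.

def call_model(prompt: str) -> str:
    # In a real workflow this would be a single client.messages.create(...) call.
    return f"<model output for: {prompt[:40]}>"

def summarize_then_translate(document: str) -> str:
    # Step 1: summarize (first LLM call)
    summary = call_model(f"Summarize in three sentences:\n{document}")
    # Deterministic gate between steps, written by the developer
    if not summary.strip():
        raise ValueError("Empty summary; aborting the chain")
    # Step 2: translate the summary (second LLM call)
    return call_model(f"Translate into French:\n{summary}")

print(summarize_then_translate("Quarterly revenue grew 12% year over year."))
```

Because each step and each gate is fixed in code, the chain can be unit-tested and debugged like any other pipeline.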
Agents
The model itself directs the control flow. It decides which tools to call, in what order, and when to stop. Agents are appropriate when the task is genuinely open-ended, requires adaptive reasoning, or cannot be fully specified in advance by a human designer.
When Agentic Complexity Is Warranted
Anthropic's guidance is explicit: prefer the simplest solution that solves the problem. Agentic complexity introduces real costs — latency, token consumption, error propagation, and difficulty in debugging. Before reaching for an agent, ask:
- Can this task be solved with a single, well-engineered prompt?
- Is the control flow predictable enough to be hard-coded in a workflow?
- Would a human reviewing the output catch any errors before they have real-world impact?
Agentic complexity is warranted when:
- The task is too complex or long-horizon to fit in a single context window.
- The model genuinely needs to discover what steps are required, not just execute pre-defined ones.
- External tool calls are needed to gather real-time information or produce side effects.
- Parallel workstreams would benefit from independent sub-agents working simultaneously.
Anthropic's "Building Effective Agents" Framework
Anthropic's foundational guidance on agentic systems — published in their "Building Effective Agents" documentation — provides the conceptual backbone for the entire Domain 1 exam section. The key principles are:
- Start simple. Use augmented LLMs (models with retrieval, tools, and memory) as the base building block before adding orchestration layers.
- Prefer workflows over agents when the task allows it. Workflows trade flexibility for predictability — and predictability is valuable in production systems.
- Minimize agent footprint. Request only necessary permissions, avoid storing sensitive information beyond immediate needs, prefer reversible over irreversible actions, and err on the side of doing less when uncertain about intended scope.
- Human-in-the-loop checkpoints. For high-stakes or irreversible actions, build in the ability to pause and confirm with a human rather than proceeding autonomously.
- Instrument for observability. Agentic loops are harder to debug than single calls. Logging intermediate steps, tool inputs, and tool outputs is essential for diagnosing failures.
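Several of these principles can be enforced at a single choke point: the function that executes tool calls. The sketch below is one possible shape (the REVERSIBLE_TOOLS registry and execute_tool helper are illustrative, not part of any Anthropic API): it logs every tool input and output, and pauses for human confirmation before running any tool not marked as reversible.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent")

# Hypothetical registry: tools safe to run without human sign-off.
REVERSIBLE_TOOLS = {"web_search", "read_file"}

def execute_tool(name, args, tool_fn, confirm=input):
    """Log every tool call; require human confirmation for irreversible ones."""
    logger.info("tool call: %s(%s)", name, json.dumps(args))
    if name not in REVERSIBLE_TOOLS:
        answer = confirm(f"Allow irreversible tool '{name}' with {args}? [y/N] ")
        if answer.strip().lower() != "y":
            logger.info("tool call %s denied by human reviewer", name)
            return "Tool call denied by human reviewer."
    result = tool_fn(**args)
    logger.info("tool result: %s -> %r", name, result)
    return result
```

In production the confirm callback would route to a review UI rather than stdin, but the structure is the same: reversible actions proceed and are logged; irreversible ones block on a human checkpoint.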
The Agent Loop
The fundamental execution pattern of an agent is the agent loop. In its simplest form, the loop works as follows:
- Send the current conversation state (system prompt + messages + tool definitions) to the model.
- Receive a response. If the response contains tool calls, execute them.
- Append the tool results to the conversation history.
- Repeat until the model returns a final answer with no tool calls, or a stopping condition is met.
This loop is at the heart of every agentic application, from a simple research assistant to a multi-agent software engineering system. The key insight is that the conversation history acts as the agent's working memory — each iteration adds more context, and the model uses that accumulated context to decide what to do next.
Code Example: A Simple Agent Loop in Python
The following example demonstrates a minimal agent loop using the Anthropic Python SDK. The agent has access to a single tool — a web search stub — and will continue looping until the model stops issuing tool calls.
```python
import anthropic

client = anthropic.Anthropic()

# Define the tools available to the agent
tools = [
    {
        "name": "web_search",
        "description": "Search the web for current information on a topic.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query to execute."
                }
            },
            "required": ["query"]
        }
    }
]

def web_search(query: str) -> str:
    """Stub: replace with a real search API call."""
    return f"Search results for '{query}': [Simulated result 1] [Simulated result 2]"

def run_agent(user_message: str) -> str:
    """Run the agent loop until the model produces a final answer."""
    messages = [{"role": "user", "content": user_message}]
    while True:
        response = client.messages.create(
            model="claude-opus-4-5",
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )
        # Append the assistant's response to conversation history
        messages.append({"role": "assistant", "content": response.content})

        # If the model is done (no tool calls), return the final text
        if response.stop_reason == "end_turn":
            return "".join(
                block.text for block in response.content if hasattr(block, "text")
            )

        # Otherwise, execute any requested tool calls
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                if block.name == "web_search":
                    result = web_search(**block.input)
                else:
                    result = f"Unknown tool: {block.name}"
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result,
                })

        # Feed tool results back into the conversation
        messages.append({"role": "user", "content": tool_results})

# Run the agent
answer = run_agent("What are the latest developments in quantum computing?")
print(answer)
```
Notice the structure: the while True loop continues until stop_reason == "end_turn", meaning the model has decided it has enough information to answer. Tool results are fed back as a new user message containing tool_result blocks; this is the standard format required by the Anthropic API.
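One caveat: as written, the loop relies entirely on the model to stop. Production agents usually add an explicit stopping condition, most commonly a hard cap on iterations. A minimal sketch, with a hypothetical step callable standing in for one model-call-plus-tool-execution round from the loop above:

```python
MAX_TURNS = 10  # hypothetical cap; tune per application

def run_agent_bounded(step, user_message: str) -> str:
    """Agent loop with an iteration cap as an extra stopping condition.

    `step` is a hypothetical callable performing one model call plus tool
    execution; it returns a final answer (str) or None to keep looping.
    """
    for turn in range(MAX_TURNS):
        answer = step(user_message, turn)
        if answer is not None:
            return answer
    return "Stopped: reached the maximum number of agent turns."
```

A cap like this bounds worst-case latency and token spend, and turns a silent infinite loop into a diagnosable failure.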
Exam Tip: The exam tests whether you know when not to use agents. A common distractor question will describe a task that sounds complex but is actually solvable with a single prompt or a simple two-step workflow. If the control flow can be fully specified in advance by a developer, it is a workflow — not an agent. Look for keywords like "unpredictable steps," "open-ended exploration," or "the model must decide what to do next" to identify genuine agentic scenarios.
Key Takeaways
Agentic systems combine autonomous decision-making, tool use, and iterative execution. They exist on a spectrum from simple prompts through structured workflows to fully autonomous agents.
Prefer simplicity. Anthropic's guidance is to use the least complex architecture that solves the problem. Agents introduce latency, cost, and debugging complexity that workflows and single prompts do not.
The agent loop is the core execution primitive: send messages → receive response → execute tools → append results → repeat. The conversation history serves as the agent's working memory across iterations.
Minimize footprint and favor reversibility. Agents should request only necessary permissions, prefer reversible actions, and pause for human confirmation before taking high-stakes or irreversible steps.