Lesson 1: Use Cases, Models, and the LLM App Lifecycle
Learning Outcome
By the end of this lesson, you will be able to:
- Classify a problem as suitable for GenAI or traditional software
- Identify the core building blocks of a GenAI application
- Make an informed model selection based on requirements
- Build your first simple GenAI app with AgentFlow
Prerequisites
- Read LLM basics for engineers for foundational context
Concept: The LLM App Lifecycle
Before diving into specific problems, understand the typical lifecycle of a GenAI application:

1. Use case fit: decide whether the problem needs GenAI at all
2. Design: choose the model and core building blocks
3. Build: prompts, tools, and orchestration
4. Evaluate: measure quality against requirements
5. Deploy and monitor: ship, observe, and iterate

This lesson focuses on Phases 1 and 2—understanding when and how to use GenAI.
Concept: What Problems Fit GenAI?
Not every problem needs an LLM. Understanding fit is the first skill for building GenAI systems.
Decision Tree: Do You Need an LLM?
Ask, in order: (1) Does the task involve understanding or generating natural language? If no, use traditional software. (2) Is exact, deterministic output required? If yes, use traditional software. (3) Otherwise, an LLM is worth prototyping.
When to Use (and Not Use) LLMs
| Use LLM When | Don't Use LLM When |
|---|---|
| Natural language understanding needed | Precise calculations required |
| Flexible output format acceptable | Exact, deterministic output needed |
| Knowledge is broad or dynamic | Knowledge is fixed and small |
| Content generation required | Data transformation (use ETL) |
| Classification with context | Binary true/false logic |
| Summarization of text | Copying data between systems |
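The table above can be sketched as a rough pre-flight check. This helper is purely illustrative (the name `needs_llm` and its criteria are this lesson's framing, not any library's API):

```python
def needs_llm(
    natural_language_input: bool,
    flexible_output_ok: bool,
    deterministic_required: bool,
    precise_math_required: bool,
) -> bool:
    """Rough heuristic: does this problem justify an LLM?

    Mirrors the table: hard requirements for determinism or exact
    arithmetic rule an LLM out; otherwise, language-heavy problems
    with flexible outputs are good candidates.
    """
    if deterministic_required or precise_math_required:
        return False
    return natural_language_input or flexible_output_ok

# A summarizer fits; an invoice calculator does not.
print(needs_llm(True, True, False, False))   # True
print(needs_llm(False, False, True, True))   # False
```

The point is not the code itself but the ordering: disqualifiers first, capabilities second.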
Concept: Core Building Blocks
Every GenAI application has the same fundamental building blocks:
| Block | What It Is | Example |
|---|---|---|
| Model | The LLM that generates responses | GPT-4o, Claude, Gemini |
| Instructions | Prompts that guide behavior | System instructions, user messages |
| Tools | Functions the model can call | Calculator, search, database |
| State | What the system remembers | Conversation history, variables |
| Output | The response format | JSON, text, streaming tokens |
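One way to internalize the five blocks is to sketch them as a plain configuration object. Everything below (the class name and field names) is illustrative only and is not AgentFlow's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GenAIAppConfig:
    """Illustrative container for the five building blocks."""
    model: str                                           # e.g. "gpt-4o"
    instructions: str                                    # system prompt
    tools: list[Callable] = field(default_factory=list)  # callable functions
    state: dict = field(default_factory=dict)            # history, variables
    output_format: str = "text"                          # "text", "json", "stream"

app = GenAIAppConfig(
    model="gpt-4o",
    instructions="You are a helpful coding assistant.",
    output_format="json",
)
print(app.model, app.output_format)
```

Every framework you meet later (AgentFlow included) is, at heart, a more capable version of this structure.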
Concept: Model Selection Deep Dive
Model choice is an engineering decision. Consider these factors:
Provider Comparison
| Provider | Model | Context | Strengths | Best For |
|---|---|---|---|---|
| OpenAI | GPT-4o | 128K | Balanced, tool use | General purpose |
| OpenAI | GPT-4o Mini | 128K | Fast, cheap | High volume |
| Anthropic | Claude 3.5 Sonnet | 200K | Long context, reasoning | Complex tasks |
| Anthropic | Claude 3 Haiku | 200K | Fast, affordable | Speed-sensitive |
| Google | Gemini 1.5 Pro | 1M | Massive context | Long documents |
Model Selection Decision Matrix

| If Your Priority Is | Start With |
|---|---|
| Balanced quality and tool use | GPT-4o |
| Lowest cost at high volume | GPT-4o Mini |
| Complex reasoning, long context | Claude 3.5 Sonnet |
| Speed on simple tasks | Claude 3 Haiku |
| Very long documents (>200K tokens) | Gemini 1.5 Pro |
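For readers who prefer code to tables, the provider comparison can be folded into a rough selection helper. The thresholds, model strings, and ordering here are assumptions for illustration, not official guidance:

```python
def pick_model(
    max_context_tokens: int,
    cost_sensitive: bool,
    complex_reasoning: bool,
) -> str:
    """Map requirements from the comparison table to a starting model."""
    if max_context_tokens > 200_000:
        return "gemini-1.5-pro"      # only option with a 1M window
    if cost_sensitive:
        return "gpt-4o-mini"         # cheap, fast, 128K context
    if complex_reasoning or max_context_tokens > 128_000:
        return "claude-3-5-sonnet"   # 200K context, strong reasoning
    return "gpt-4o"                  # balanced default

print(pick_model(max_context_tokens=500_000, cost_sensitive=False, complex_reasoning=False))
print(pick_model(max_context_tokens=8_000, cost_sensitive=True, complex_reasoning=False))
```

In practice you would also weigh latency budgets and eval results, but encoding your priorities explicitly keeps the decision reviewable.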
Cost Estimation
```python
def estimate_monthly_cost(
    daily_requests: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    model: str = "gpt-4o",
) -> float:
    """Estimate monthly API costs."""
    costs = {
        "gpt-4o": {"input": 5.00, "output": 15.00},  # $ per 1M tokens
        "gpt-4o-mini": {"input": 0.15, "output": 0.60},
        "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
    }
    model_costs = costs.get(model, costs["gpt-4o"])
    daily_input_cost = (daily_requests * avg_input_tokens / 1_000_000) * model_costs["input"]
    daily_output_cost = (daily_requests * avg_output_tokens / 1_000_000) * model_costs["output"]
    return (daily_input_cost + daily_output_cost) * 30  # ~30 days per month
```
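To sanity-check the formula, here is the same arithmetic worked out for one hypothetical workload: 10,000 requests/day at 1,000 input and 500 output tokens each, on gpt-4o-mini:

```python
# Hypothetical workload on gpt-4o-mini ($0.15 in / $0.60 out per 1M tokens)
daily_requests, avg_in, avg_out = 10_000, 1_000, 500
input_rate, output_rate = 0.15, 0.60

daily_input_cost = daily_requests * avg_in / 1_000_000 * input_rate    # $1.50/day
daily_output_cost = daily_requests * avg_out / 1_000_000 * output_rate  # $3.00/day
monthly = (daily_input_cost + daily_output_cost) * 30                   # $135.00/month
print(monthly)  # 135.0
```

Running the same workload through the gpt-4o rates (roughly 30x higher) makes the quality-versus-cost tradeoff concrete before you write any application code.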
Example: Building Your First GenAI App
Here's how the building blocks come together in AgentFlow:
Step 1: Set Up Your Environment
```bash
# Install AgentFlow
pip install 10xscale-agentflow
```

```python
# Import the core components
from agentflow.core.graph import StateGraph, AgentState
from agentflow.core.state import Message
from agentflow.core.llm import OpenAIModel
```
Step 2: Define Your Model
```python
# Choose a model
model = OpenAIModel(
    "gpt-4o",
    temperature=0.7  # 0 = deterministic, 1 = creative
)
```
Step 3: Create the Agent Graph
```python
from agentflow.core.graph import StateGraph

# Create a state graph
builder = StateGraph(AgentState)

# Define a simple chat node
def chat(state: AgentState) -> AgentState:
    messages = state.get("messages", [])
    # Generate a response from the full conversation so far
    response = model.generate(
        system_instruction="You are a helpful coding assistant.",
        messages=[m.dict() for m in messages]
    )
    # Add the response to messages
    messages.append(Message(role="assistant", content=response))
    return {"messages": messages}

# Register the node and set entry/finish points
builder.add_node("chat", chat)
builder.set_entry_point("chat")
builder.set_finish_point("chat")

# Compile the graph
app = builder.compile()
```
Step 4: Run the App
```python
# Create an initial state
initial_state = {
    "messages": [
        Message(role="user", content="Hello! What is AgentFlow?")
    ]
}

# Invoke the agent
result = app.invoke(initial_state)

# Get the response
response = result["messages"][-1].content
print(response)
```
Step 5: Add Streaming (Better UX)
```python
# Stream responses for better perceived latency
for chunk in app.stream(initial_state):
    if hasattr(chunk, 'content'):
        print(chunk.content, end="", flush=True)
print()  # newline at the end
```
Complete Code
```python
from agentflow.core.graph import StateGraph, AgentState
from agentflow.core.state import Message
from agentflow.core.llm import OpenAIModel

# Initialize model
model = OpenAIModel("gpt-4o")

# Create graph
builder = StateGraph(AgentState)

def chat(state: AgentState) -> AgentState:
    messages = state.get("messages", [])
    response = model.generate(
        system_instruction="You are a helpful coding assistant.",
        messages=[m.dict() for m in messages]
    )
    messages.append(Message(role="assistant", content=response))
    return {"messages": messages}

builder.add_node("chat", chat)
builder.set_entry_point("chat")
builder.set_finish_point("chat")
app = builder.compile()

# Run
result = app.invoke({
    "messages": [Message(role="user", content="Hello!")]
})
print(result["messages"][-1].content)
```
Exercise: Classify Product Ideas
For each product idea, decide:
- No LLM — Traditional software
- LLM App — Single prompt + structured output
- Workflow — Sequential steps with LLM
- Agent — Dynamic tool use and decisions
Product Ideas
| # | Idea | Classification | Reasoning |
|---|---|---|---|
| 1 | Email spam classifier | | |
| 2 | Research paper summarizer | | |
| 3 | Automated customer support chatbot | | |
| 4 | Code review assistant | | |
| 5 | Daily news digest generator | | |
| 6 | Trading bot with live data | | |
| 7 | Meeting notes action extractor | | |
| 8 | Form auto-filler | | |
| 9 | Customer sentiment analyzer | | |
| 10 | Personal calendar scheduler | | |
Answer Key
| # | Classification | Reasoning |
|---|---|---|
| 1 | No LLM | Binary classification, can use traditional ML |
| 2 | LLM App | Summarization is a core LLM capability |
| 3 | Agent | Needs tools (KB, orders), dynamic responses |
| 4 | LLM App or Agent | Depends on complexity |
| 5 | LLM App | Summarization + formatting |
| 6 | Agent | Dynamic tool use (APIs), decision-making |
| 7 | LLM App | Extraction from text |
| 8 | No LLM | Form filling is deterministic |
| 9 | LLM App | Classification task |
| 10 | Workflow or Agent | Depends on complexity |
What You Learned
- Problem fit matters — Not every problem needs an LLM or agent
- GenAI apps have 5 building blocks — Model, instructions, tools, state, output
- Model selection is a tradeoff — Quality vs. speed vs. cost vs. capabilities
- AgentFlow provides StateGraph — Simple way to compose GenAI applications
- Start simple — Add complexity only when needed
Common Failure Mode
Starting with an agent when a workflow would work
Teams often over-engineer by jumping straight to multi-agent systems. Before reaching for agents, ask:

- Can the task be broken into a fixed sequence of steps known in advance?
- Does the model actually need to choose tools or actions at runtime?
- Would a single prompt with structured output already meet the requirement?
Agents add complexity. Make sure you need that complexity.
Next Step
Continue to Lesson 2: Prompting, context engineering, and structured outputs to learn how to make model outputs reliable.
Or Explore
- AgentFlow Architecture — How AgentFlow packages fit together
- StateGraph concepts — The core graph structure
- First Python Agent tutorial — Hands-on implementation