
Lesson 1: Use Cases, Models, and the LLM App Lifecycle

Learning Outcome

By the end of this lesson, you will be able to:

  • Classify a problem as suitable for GenAI or traditional software
  • Identify the core building blocks of a GenAI application
  • Make an informed model selection based on requirements
  • Build your first simple GenAI app with AgentFlow

Prerequisites


Concept: The LLM App Lifecycle

Before diving into specific problems, understand the typical lifecycle of a GenAI application:

This lesson focuses on Phases 1 and 2: understanding when and how to use GenAI.


Concept: What Problems Fit GenAI?

Not every problem needs an LLM. Understanding fit is the first skill for building GenAI systems.

Decision Tree: Do You Need an LLM?

When to Use (and Not Use) LLMs

| Use LLM When | Don't Use LLM When |
|---|---|
| Natural language understanding needed | Precise calculations required |
| Flexible output format acceptable | Exact, deterministic output needed |
| Knowledge is broad or dynamic | Knowledge is fixed and small |
| Content generation required | Data transformation (use ETL) |
| Classification with context | Binary true/false logic |
| Summarization of text | Copying data between systems |
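The rubric above can be sketched as a simple checklist function. This is purely illustrative (the criteria names and the scoring are mine, paraphrased from the table, not part of any library):

```python
def llm_fit_score(
    needs_natural_language: bool,
    output_must_be_exact: bool,
    knowledge_is_dynamic: bool,
    generates_content: bool,
) -> str:
    """Rough heuristic mirroring the table: count signals for and against an LLM."""
    # Precise, deterministic work with no language understanding: traditional software.
    if output_must_be_exact and not needs_natural_language:
        return "no-llm"
    pros = sum([needs_natural_language, knowledge_is_dynamic, generates_content])
    cons = int(output_must_be_exact)
    return "llm" if pros > cons else "no-llm"

# A spam classifier: no language generation, exact binary output -> traditional ML
print(llm_fit_score(False, True, False, False))  # no-llm
# A research-paper summarizer: language in, flexible text out -> LLM
print(llm_fit_score(True, False, True, True))    # llm
```

In practice the decision is rarely this mechanical, but writing the criteria down forces you to name which signals your problem actually exhibits.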

Concept: Core Building Blocks

Every GenAI application has the same fundamental building blocks:

| Block | What It Is | Example |
|---|---|---|
| Model | The LLM that generates responses | GPT-4o, Claude, Gemini |
| Instructions | Prompts that guide behavior | System instructions, user messages |
| Tools | Functions the model can call | Calculator, search, database |
| State | What the system remembers | Conversation history, variables |
| Output | The response format | JSON, text, streaming tokens |
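As a mental model, the five blocks fit in a plain data structure. The field names below are mine, chosen for illustration; they are not AgentFlow's API:

```python
from dataclasses import dataclass, field

@dataclass
class GenAIAppSpec:
    """The five building blocks every GenAI app combines."""
    model: str                                       # which LLM generates responses
    instructions: str                                # system prompt guiding behavior
    tools: list[str] = field(default_factory=list)   # functions the model may call
    state: dict = field(default_factory=dict)        # what the system remembers
    output_format: str = "text"                      # JSON, text, or streaming tokens

spec = GenAIAppSpec(
    model="gpt-4o",
    instructions="You are a helpful coding assistant.",
    tools=["search", "calculator"],
    output_format="json",
)
print(spec.model, spec.output_format)
```

Every framework you meet in this course, AgentFlow included, is essentially a way of wiring these five fields together.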

Concept: Model Selection Deep Dive

Model choice is an engineering decision. Consider these factors:

Provider Comparison

| Provider | Model | Context | Strengths | Best For |
|---|---|---|---|---|
| OpenAI | GPT-4o | 128K | Balanced, tool use | General purpose |
| OpenAI | GPT-4o Mini | 128K | Fast, cheap | High volume |
| Anthropic | Claude 3.5 Sonnet | 200K | Long context, reasoning | Complex tasks |
| Anthropic | Claude 3 Haiku | 200K | Fast, affordable | Speed-sensitive |
| Google | Gemini 1.5 Pro | 1M | Massive context | Long documents |

Model Selection Decision Matrix

Cost Estimation

```python
def estimate_monthly_cost(
    daily_requests: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    model: str = "gpt-4o",
) -> float:
    """Estimate monthly API costs."""
    costs = {
        "gpt-4o": {"input": 5.00, "output": 15.00},  # USD per 1M tokens
        "gpt-4o-mini": {"input": 0.15, "output": 0.60},
        "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
    }

    model_costs = costs.get(model, costs["gpt-4o"])

    daily_input_cost = (daily_requests * avg_input_tokens / 1_000_000) * model_costs["input"]
    daily_output_cost = (daily_requests * avg_output_tokens / 1_000_000) * model_costs["output"]

    return (daily_input_cost + daily_output_cost) * 30  # monthly
```
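To sanity-check the formula, here is the arithmetic worked by hand for an assumed workload of 10,000 requests/day at 500 input and 200 output tokens each, using the gpt-4o-mini rates from the table above ($0.15 / $0.60 per 1M tokens):

```python
# Worked example of the monthly-cost formula above (workload numbers are assumed).
daily_requests = 10_000
input_price, output_price = 0.15, 0.60  # USD per 1M tokens (gpt-4o-mini)

daily_input_cost = (daily_requests * 500 / 1_000_000) * input_price    # 5M tokens/day -> $0.75
daily_output_cost = (daily_requests * 200 / 1_000_000) * output_price  # 2M tokens/day -> $1.20
monthly = (daily_input_cost + daily_output_cost) * 30                  # $58.50
print(f"${monthly:.2f}/month")
```

Note how output tokens dominate at these rates despite being fewer: output pricing is typically several times input pricing, so verbose responses are the first thing to trim when costs climb.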

Example: Building Your First GenAI App

Here's how the building blocks come together in AgentFlow:

Step 1: Set Up Your Environment

```bash
# Install AgentFlow
pip install 10xscale-agentflow
```

```python
# Import the core components
from agentflow.core.graph import StateGraph, AgentState
from agentflow.core.state import Message
from agentflow.core.llm import OpenAIModel
```

Step 2: Define Your Model

```python
# Choose a model
model = OpenAIModel(
    "gpt-4o",
    temperature=0.7,  # 0 = deterministic, 1 = creative
)
```

Step 3: Create the Agent Graph

```python
from agentflow.core.graph import StateGraph

# Create a state graph
builder = StateGraph(AgentState)

# Define a simple chat node
@builder.node
def chat(state: AgentState) -> AgentState:
    messages = state.get("messages", [])

    # Generate a response from the full conversation history
    response = model.generate(
        system_instruction="You are a helpful coding assistant.",
        messages=[m.dict() for m in messages],
    )

    # Add the response to messages
    messages.append(Message(role="assistant", content=response))

    return {"messages": messages}

# Add the node and set entry/finish points
builder.add_node("chat", chat)
builder.set_entry_point("chat")
builder.set_finish_point("chat")

# Compile the graph
app = builder.compile()
```

Step 4: Run the App

```python
# Create an initial state
initial_state = {
    "messages": [
        Message(role="user", content="Hello! What is AgentFlow?")
    ]
}

# Invoke the agent
result = app.invoke(initial_state)

# Get the response
response = result["messages"][-1].content
print(response)
```

Step 5: Add Streaming (Better UX)

```python
# Stream responses for better perceived latency
for chunk in app.stream(initial_state):
    if hasattr(chunk, "content"):
        print(chunk.content, end="", flush=True)
print()  # newline at the end
```

Complete Code

```python
from agentflow.core.graph import StateGraph, AgentState
from agentflow.core.state import Message
from agentflow.core.llm import OpenAIModel

# Initialize model
model = OpenAIModel("gpt-4o")

# Create graph
builder = StateGraph(AgentState)

@builder.node
def chat(state: AgentState) -> AgentState:
    messages = state.get("messages", [])
    response = model.generate(
        system_instruction="You are a helpful coding assistant.",
        messages=[m.dict() for m in messages],
    )
    messages.append(Message(role="assistant", content=response))
    return {"messages": messages}

builder.add_node("chat", chat)
builder.set_entry_point("chat")
builder.set_finish_point("chat")

app = builder.compile()

# Run
result = app.invoke({
    "messages": [Message(role="user", content="Hello!")]
})
print(result["messages"][-1].content)
```

Exercise: Classify Product Ideas

For each product idea, decide:

  1. No LLM — Traditional software
  2. LLM App — Single prompt + structured output
  3. Workflow — Sequential steps with LLM
  4. Agent — Dynamic tool use and decisions

Product Ideas

| # | Idea | Classification | Reasoning |
|---|---|---|---|
| 1 | Email spam classifier | | |
| 2 | Research paper summarizer | | |
| 3 | Automated customer support chatbot | | |
| 4 | Code review assistant | | |
| 5 | Daily news digest generator | | |
| 6 | Trading bot with live data | | |
| 7 | Meeting notes action extractor | | |
| 8 | Form auto-filler | | |
| 9 | Customer sentiment analyzer | | |
| 10 | Personal calendar scheduler | | |

Answer Key

| # | Classification | Reasoning |
|---|---|---|
| 1 | No LLM | Binary classification, can use traditional ML |
| 2 | LLM App | Summarization is a core LLM capability |
| 3 | Agent | Needs tools (KB, orders), dynamic responses |
| 4 | LLM App or Agent | Depends on complexity |
| 5 | LLM App | Summarization + formatting |
| 6 | Agent | Dynamic tool use (APIs), decision-making |
| 7 | LLM App | Extraction from text |
| 8 | No LLM | Form filling is deterministic |
| 9 | LLM App | Classification task |
| 10 | Workflow or Agent | Depends on complexity |

What You Learned

  1. Problem fit matters — Not every problem needs an LLM or agent
  2. GenAI apps have 5 building blocks — Model, instructions, tools, state, output
  3. Model selection is a tradeoff — Quality vs. speed vs. cost vs. capabilities
  4. AgentFlow provides StateGraph — Simple way to compose GenAI applications
  5. Start simple — Add complexity only when needed

Common Failure Mode

Starting with an agent when a workflow would work

Teams often over-engineer by jumping straight to multi-agent systems. Before reaching for agents, ask whether a fixed sequence of LLM calls (a workflow) would solve the problem just as reliably.

Agents add complexity. Make sure you need that complexity.


Next Step

Continue to Lesson 2: Prompting, context engineering, and structured outputs to learn how to get reliable, structured outputs.
