GenAI Beginner Course

Build your first production-ready GenAI application with AgentFlow. This course takes you from "I know what an LLM is" to "I can ship a GenAI app with tools, memory, and evaluation."

What You'll Build

A small engineer-facing assistant that:

  • Answers questions using a curated knowledge source
  • Uses tools safely (calculator, search, file operations)
  • Accepts file or multimodal input
  • Returns structured output (JSON)
  • Supports thread continuity and memory
  • Streams responses to a client
  • Includes evaluation and a release checklist
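Several of these capabilities — structured JSON output in particular — can be previewed in plain Python before you touch AgentFlow. Here is a minimal sketch of validating a model reply against an expected shape; the `Answer` fields and `parse_answer` helper are illustrative, not AgentFlow API:

```python
import json
from dataclasses import dataclass

@dataclass
class Answer:
    """Illustrative shape for a structured assistant reply."""
    text: str
    sources: list[str]
    confidence: float

def parse_answer(raw: str) -> Answer:
    """Parse a JSON payload and validate it against the Answer shape.

    Raises ValueError when required fields are missing or mistyped,
    which is the failure mode structured-output schemas guard against.
    """
    data = json.loads(raw)
    if not isinstance(data.get("text"), str):
        raise ValueError("'text' must be a string")
    if not isinstance(data.get("sources"), list):
        raise ValueError("'sources' must be a list")
    return Answer(
        text=data["text"],
        sources=[str(s) for s in data["sources"]],
        confidence=float(data.get("confidence", 0.0)),
    )

raw = '{"text": "LLMs predict tokens.", "sources": ["lesson-1"], "confidence": 0.9}'
answer = parse_answer(raw)
print(answer.text)
```

Lesson 2 covers how to get the model to emit this kind of schema-conformant JSON in the first place; validation on the receiving side stays the same idea.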

What You'll Learn

| Lesson | Topic | Key Concept |
| --- | --- | --- |
| 1 | Use cases, models, and the LLM app lifecycle | Pick the right use case before building |
| 2 | Prompting, context engineering, and structured outputs | Build reliable outputs with schemas |
| 3 | Tools, files, and MCP basics | Extend the agent with safe tool use |
| 4 | Retrieval, grounding, and citations | Ground answers in real knowledge |
| 5 | State, memory, threads, and streaming | Build conversation-aware applications |
| 6 | Multimodal and client/server integration | Connect to frontends and handle files |
| 7 | Evals, safety, cost, and release | Ship with confidence |

Course Structure

Prerequisites

  • Python basics (functions, classes, async/await)
  • Comfortable with API request/response formats
  • No prior LLM or agent experience needed
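As a quick self-check on the async/await prerequisite, this snippet (plain Python, no AgentFlow involved) should read naturally — it runs two simulated requests concurrently, the way a streaming client would:

```python
import asyncio

async def fetch_reply(prompt: str) -> str:
    # Stand-in for a real model call; the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return f"echo: {prompt}"

async def main() -> list[str]:
    # Run both "requests" concurrently and collect the results in order.
    return await asyncio.gather(fetch_reply("hi"), fetch_reply("bye"))

replies = asyncio.run(main())
print(replies)
```

If `gather` and `asyncio.run` are unfamiliar, a short refresher on Python's asyncio module before Lesson 5 will pay off.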

Time Commitment

| Component | Time |
| --- | --- |
| 7 lessons | 30-45 min each |
| Capstone exercise | 1-2 hours |
| Total | ~5-6 hours |

How Each Lesson Is Structured

Every lesson includes:

  1. Concept — Brief explanation with diagrams
  2. Example — Complete, runnable AgentFlow code
  3. Exercise — Try it yourself with guidance
  4. What you learned — Key takeaways
  5. Next step — Where to go next

AgentFlow Concepts You'll Master

| Concept | Where It's Used |
| --- | --- |
| StateGraph | Lesson 1+ |
| Tools and validation | Lesson 3 |
| Structured outputs | Lessons 2 and 7 |
| Memory and stores | Lessons 4 and 5 |
| Checkpointing | Lesson 5 |
| Streaming | Lessons 5 and 6 |
| Client integration | Lesson 6 |
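AgentFlow's actual StateGraph API is introduced in Lesson 1; as a taste of the underlying idea, a graph of named steps threading shared state can be sketched in plain Python. All names below (`retrieve`, `answer`, `run_graph`) are illustrative, not AgentFlow identifiers:

```python
from typing import Callable

State = dict  # shared state passed from node to node

def retrieve(state: State) -> State:
    # Toy retrieval step: attach a "document" relevant to the question.
    state["context"] = f"notes about {state['question']}"
    return state

def answer(state: State) -> State:
    # Toy generation step: compose a reply from the retrieved context.
    state["reply"] = f"Based on {state['context']}, here is an answer."
    return state

def run_graph(nodes: list[Callable[[State], State]], state: State) -> State:
    # Execute nodes in sequence, threading the state through each one.
    for node in nodes:
        state = node(state)
    return state

result = run_graph([retrieve, answer], {"question": "tokenization"})
print(result["reply"])
```

A real StateGraph adds conditional edges, checkpointing, and streaming on top of this pattern, but the core loop — nodes reading and writing a shared state — is the same.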

Your Learning Path

Start Here

If you're new to AgentFlow, start with these shared foundations:

  1. LLM basics for engineers — What LLMs are
  2. Tokenization and context windows — Why tokens matter
  3. Prompt patterns cheatsheet — Reliable prompting

Then Continue With Lessons

Start with Lesson 1: Use cases, models, and the LLM app lifecycle

After This Course

After completing this course, you'll be ready for:

Coming from the Beginner Path?

If you've already completed the Beginner Path, this course goes deeper into the "why" and "when" of GenAI system design. The lessons will feel familiar, but with more depth and context.


Ready to start? Begin with Lesson 1: Use cases, models, and the LLM app lifecycle.