
Most engineers know how to prompt an LLM, but building reliable agentic systems requires a different mental model entirely. This is not for "vibe coders" but for serious engineers who want to genuinely leverage state-of-the-art tooling. The workshop is aimed at developers already using AI coding tools who want to go deeper, not "What is an LLM?" territory. We'll use Claude Code as our primary tool, but the frameworks apply to any serious agentic system.

What you'll learn:
You'll learn to think about AI agents as probability-shaping systems, not instruction-following assistants. We cover diagnostic frameworks that let you identify why an agent is failing and which lever to pull to fix it. These are skills that won't become outdated as models evolve.

Core concepts:
• The 3-tier progression: from default tools → custom experts → fully autonomous agents
• The Pit of Success mental model for context design
• Prompt maturity: from copy-paste templates to systems that adapt and improve
• Workflow design principles that fit your use case (not rigid patterns to copy)

What you'll leave with:
By the end of the workshop, you'll have configured a custom agent tailored to your workflow, plus the lasting intuition for how these systems actually work, not just recipes that expire with the next model release.