This episode covers a shift from prompt engineering to context engineering, with a practical emphasis on what context engineering adds beyond prompting.
The discussion frames prompt engineering as essential but constrained by the model's limited context window, while context engineering provides an environment and memory that enable sustained, task-focused interaction with AI. A key line captures the distinction: “prompt engineering is like the steering wheel that you guide the AI with it and the context engineering is the main engine that drives or gives you more.”
Core topics include the three pillars of context engineering: global context, dynamic context, and tool binding.
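As a concrete illustration of the first pillar, tools like Gemini CLI let you supply persistent global context through a project-level GEMINI.md file that is loaded into every session. The file below is a hypothetical sketch, not taken from the episode's demo:

```markdown
# GEMINI.md — global context for this project (illustrative example)

## Project
A small web app: Flask backend, vanilla JS frontend.

## Conventions
- Use Python 3.11+ and type hints everywhere.
- Keep functions under 40 lines; prefer composition over inheritance.

## Constraints
- Never add new dependencies without asking first.
- All API routes live in `app/routes/`.
```

Because this context is read on every run rather than pasted into each prompt, the agent retains project conventions across interactions, which is the core difference from one-off prompting.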
Adham Khaled walks through practical demos and shares insights into tools and workflows. The live demonstration showcases building a web app with Gemini CLI and Google Antigravity, illustrating the workflow from global context setup to a working implementation. The session highlights the advantages of context-based agents for reliability and efficiency, including persistent memory and team-like capabilities.
Explore the evolution of prompt engineering into the more advanced and persistent world of context engineering.
In this episode:
The fundamental differences between prompt and context engineering
Techniques for crafting effective prompts: role setting, step-by-step reasoning, negative instructions, and more
Handling AI context limitations and how persistent memory transforms interactions
Practical demos using Gemini CLI and Google Antigravity IDE for building an app
The significance of global and dynamic context in AI agents
Future of AI teamwork with multi-agent workflows and skill integrations
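The prompting techniques listed above (role setting, step-by-step reasoning, negative instructions) can be sketched as a small prompt-assembly helper. The `build_prompt` function and its parameters are illustrative assumptions, not an API from the episode:

```python
def build_prompt(role: str, task: str, avoid: list[str]) -> str:
    """Assemble a prompt using three techniques from the episode:
    role setting, step-by-step reasoning, and negative instructions.
    This helper is a hypothetical sketch, not a specific tool's API."""
    lines = [
        f"You are {role}.",  # role setting: give the model a persona
        f"Task: {task}",
        # step-by-step reasoning: ask for intermediate thinking
        "Think through the problem step by step before giving your final answer.",
    ]
    # negative instructions: state explicitly what the model must NOT do
    lines += [f"Do not {item}." for item in avoid]
    return "\n".join(lines)


prompt = build_prompt(
    role="a senior Python code reviewer",
    task="review the following function for bugs and style issues",
    avoid=["rewrite the entire function", "suggest unrelated libraries"],
)
print(prompt)
```

Composing prompts from named parts like this makes each technique explicit and easy to toggle when experimenting with what actually improves responses.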
Resources & Links:
Adham’s article “Stanford just killed Prompt Engineering with 8 words”
Note:
This episode covers advanced techniques and demos that require some familiarity with AI tools and prompt design. Practice with the resources and scripts provided to master context engineering.