4 Patterns of Agentic Reasoning Design
Today, most of us interact with Language Models (LMs) in what amounts to a non-agentic workflow. We prompt them, they generate an answer, and that’s it. Asking ChatGPT to write an essay this way is like asking someone to write it in a single pass, without ever revising it.
Surprisingly, existing LMs excel in this mode, but contrast it with an agentic workflow and you’ll see a world of difference.
In an agentic workflow, the process is more iterative. Imagine having an LM write an essay outline, conduct web research, draft the first version, evaluate its own work, and revise accordingly. This approach yields remarkably better results, a fact often overlooked.
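The outline → research → draft → critique → revise loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: `call_model` is a stand-in for whatever LM API you use, not a real library function, and the prompts are illustrative.

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LM API call; here it just echoes the prompt.

    In practice this would invoke ChatGPT or another model and return
    its generated text.
    """
    return f"[model response to: {prompt[:40]}...]"


def agentic_essay(topic: str, max_revisions: int = 2) -> str:
    """Iteratively outline, draft, self-critique, and revise an essay."""
    outline = call_model(f"Write an outline for an essay on: {topic}")
    draft = call_model(f"Write a first draft following this outline:\n{outline}")
    for _ in range(max_revisions):
        # The model evaluates its own work, then revises accordingly.
        critique = call_model(f"Critique this draft and list concrete fixes:\n{draft}")
        draft = call_model(
            f"Revise the draft to address this critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft


print(agentic_essay("agentic reasoning patterns"))
```

The key difference from a single prompt is the loop: each pass feeds the model’s own critique back in, so the output improves over several rounds rather than being fixed after one shot.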
To illustrate, consider the HumanEval coding benchmark. When tasked with coding problems using zero-shot prompting, LLMs perform reasonably well. However, when wrapped in an agentic workflow, even older models can surpass the latest models prompted zero-shot. This observation holds significant implications for how we approach AI applications.
The Emergence of Agentic Reasoning
Amidst the buzz around AI’s future, let’s look at the concrete trends and design patterns shaping agentic reasoning. The field is vast and chaotic, but four key patterns of agentic design stand out: