
Plan-and-Solve Prompting method in Language Model Reasoning

Angelina Yang
4 min read · Oct 12, 2023


Sharing one interesting concept today: the Plan-and-Solve (PS) prompting method from a recent paper, "Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models". The idea is to enhance the reasoning capabilities of language models with a better zero-shot prompting strategy.

What’s the problem?

The authors acknowledge the limitations of step-by-step reasoning in existing prompting methods (e.g., Zero-shot-CoT), particularly three issues: calculation errors, missing reasoning steps, and semantic misunderstandings.

To address these limitations, the authors propose plan-and-solve prompting strategies (PS and PS+ prompting). In essence, these are new zero-shot prompting methods that guide LLMs to first devise a plan that divides the entire task into smaller subtasks, and then carry out the subtasks according to the plan.
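To make this concrete, here is a minimal sketch of what the PS and PS+ trigger sentences look like as prompt templates. The wording below is paraphrased from the paper and the helper function is my own, so treat it as illustrative rather than the exact prompts:

```python
# Illustrative Plan-and-Solve prompt templates (paraphrased from the paper).
# {question} is a placeholder for the math or reasoning problem to solve.

PS_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)

# PS+ extends PS with more detailed instructions, aimed at reducing
# calculation errors and missing reasoning steps.
PS_PLUS_TRIGGER = (
    "Let's first understand the problem, extract relevant variables and their "
    "corresponding numerals, and devise a plan. Then, let's carry out the plan, "
    "calculate intermediate variables (paying attention to correct numerical "
    "calculation and commonsense), solve the problem step by step, and show the answer."
)

def build_ps_prompt(question: str, trigger: str = PS_TRIGGER) -> str:
    """Wrap a question in the zero-shot Plan-and-Solve format: Q: ... A: <trigger>."""
    return f"Q: {question}\nA: {trigger}"
```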

The Plan-and-Solve (PS) Prompting Method

Step 1: Prompting for Reasoning Generation

The PS Prompting method consists of two key components.

The first component involves explicitly asking the model to devise a plan to break down complex tasks into smaller, more manageable subtasks. The second component involves executing the subtasks according to the devised plan, enabling the language models to solve the problem incrementally. This approach allows the…
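Putting the two components together, an end-to-end run might look like the sketch below, reusing the PS_PLUS_TRIGGER and build_ps_prompt helpers from above. The call_llm function is a hypothetical stand-in for whatever completion API you use, and the final answer-extraction prompt follows the usual zero-shot CoT convention of appending an extraction instruction to the generated reasoning; the exact wording is an assumption on my part.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM completion API
    (e.g., an OpenAI client or a local model). Replace with a real call."""
    raise NotImplementedError

def plan_and_solve(question: str) -> str:
    # Step 1: reasoning generation. The PS+ trigger asks the model to devise
    # a plan (subtasks) and then execute the subtasks step by step.
    reasoning_prompt = build_ps_prompt(question, PS_PLUS_TRIGGER)
    reasoning = call_llm(reasoning_prompt)

    # Step 2: answer extraction. Feed the generated reasoning back and ask
    # for the final answer only (wording assumed, in the zero-shot CoT style).
    extraction_prompt = (
        f"{reasoning_prompt}\n{reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    return call_llm(extraction_prompt)
```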
