How to Measure Prompt Performance?

Angelina Yang
2 min read · Aug 17, 2023

There are plenty of explanations elsewhere; here I'd like to share some example questions and potential answers in an interview setting.

Question 1: In the setting of LLM applications, how do you measure whether your new prompt is better than the old one?

Question 2: Why does this matter?


Here are some tips for readers’ reference:

Question 1:

Introducing a new prompt often leads to varied outcomes across different scenarios, and the conventional way of judging a model's success in traditional machine learning doesn't map directly onto generative models. Metrics like accuracy (or the ones we talked about last week) may not apply cleanly, because correctness can be subjective and hard to quantify.

At a broader level, there are two key focal points to consider:

  1. Curate an Evaluation Dataset Incrementally: Develop a dataset tailored to your specific tasks. It will anchor prompt evaluation during development, and it should be built out incrementally as you discover new cases and failure modes (a minimal sketch of such an evaluation loop follows this list).
  2. Identify an Appropriate Metric or Framework for Evaluation: Select a suitable…
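
To make both points concrete, here is a minimal sketch of what such an evaluation harness might look like. Everything in it (the eval cases, the prompt templates, the `call_llm` stub, and the exact-match scorer) is an illustrative assumption rather than something from the article; swap in your own data, model call, and a task-appropriate metric.

```python
# A minimal sketch of an incremental prompt-evaluation loop, assuming a
# hypothetical `call_llm` wrapper and a toy exact-match metric. Replace
# the eval cases, templates, model call, and scorer with your own.

from typing import Callable, Dict, List

# Hand-curated evaluation cases, grown incrementally as new failure
# modes are discovered. Each case pairs an input with a reference answer.
EVAL_SET: List[Dict[str, str]] = [
    {"input": "What is 2 + 2?", "reference": "4"},
    {"input": "What is the capital of France?", "reference": "Paris"},
]

OLD_PROMPT = "Answer the question: {input}"
NEW_PROMPT = "Answer the question with only the answer, no explanation: {input}"


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (hosted API, local model, etc.).

    Returns a dummy string here so the script runs end to end.
    """
    return "dummy response"


def exact_match(prediction: str, reference: str) -> float:
    """Toy metric: 1.0 if the reference string appears in the model output."""
    return float(reference.strip().lower() in prediction.strip().lower())


def evaluate(prompt_template: str,
             scorer: Callable[[str, str], float] = exact_match) -> float:
    """Run one prompt template over the eval set and return the mean score."""
    scores = [
        scorer(call_llm(prompt_template.format(input=case["input"])),
               case["reference"])
        for case in EVAL_SET
    ]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    print(f"old prompt: {evaluate(OLD_PROMPT):.2f}")
    print(f"new prompt: {evaluate(NEW_PROMPT):.2f}")
```

The same loop works with any scorer you pass in, which is exactly where the second point comes in: the harness stays fixed while you experiment with metrics that actually fit your task.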
