A Simple Trick to Help ChatGPT Provide Correct Answers

How you ask ChatGPT for a solution can have a profound impact on the correctness of the answer it provides.

In DeepLearning.AI's newsletter, THE BATCH, author Andrew Ng shares an incredibly helpful insight into building better prompts for ChatGPT and other large language models.

From the newsletter (GOOD/BAD markers mine):

BAD: [Problem/question description] State the answer and then explain your reasoning.

GOOD: [Problem/question description] Explain your reasoning and then state the answer.

These two prompts are nearly identical, and the former matches the wording of many university exams. But the second prompt is much more likely to get an LLM to give you a good answer. Here’s why: An LLM generates output by repeatedly guessing the most likely next word (or token). So if you ask it to start by stating the answer, as in the first prompt, it will take a stab at guessing the answer and then try to justify what might be an incorrect guess. In contrast, prompt 2 directs it to think things through before it reaches a conclusion. This principle also explains the effectiveness of widely discussed prompts such as “Let’s think step by step.”
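
To make the difference concrete, here is a minimal sketch that sends both phrasings of the same question to ChatGPT via the OpenAI Python client. The example problem, the model name, and the client usage are illustrative assumptions on my part; they are not from the newsletter.

```python
# Minimal sketch comparing the two prompt orderings.
# Assumes the `openai` package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

# Hypothetical example problem; any short reasoning question works.
problem = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# BAD: asks for the answer first, so the model commits to a guess
# and then tries to justify it.
answer_first = f"{problem} State the answer and then explain your reasoning."

# GOOD: asks for the reasoning first, so the model works through the
# problem before committing to an answer.
reasoning_first = f"{problem} Explain your reasoning and then state the answer."

for label, prompt in [("answer first", answer_first),
                      ("reasoning first", reasoning_first)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Running both prompts side by side on a few trick questions like this one is an easy way to see the effect for yourself: the answer-first phrasing tends to lock the model into its initial guess, while the reasoning-first phrasing gives it room to work the problem before committing.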

Check out the full issue of the newsletter here: THE BATCH (August 23, 2023)

Sign up for the newsletter here: THE BATCH: What Matters in AI Right Now

All original code samples by Mike Wolfe are licensed under CC BY 4.0