Build Better Prompts with AI
Who better to help you improve your AI prompts than AI itself? This meta prompt will help you get better results from your most used prompts.
The age of AI and large language models (LLMs) has arrived.
The models have been improving at an astonishing pace. Much of the recent advancement comes from scaling the models up with ever more training data and parameters; GPT-4 is said to have over one trillion parameters. At some point, though, even the internet runs out of content.
Or, in Sam Altman's words, "I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways."
Future Model Improvements
One of the ways I expect future models to improve is in how forgiving they are of typical user prompts. As of today, the quality of your prompt has a huge impact on the quality of the model's response. This manifests itself in unusual ways, as I've written about previously.
So, while I expect future LLMs will make prompt engineering less important, that day has not yet arrived. For now, how you ask an LLM is just as important as what you ask an LLM.
Ad Hoc vs. Repeated Prompts
Some LLM interactions are one-off experiences.
For these, it makes no sense to agonize over getting the perfect prompt. One of the great things about LLMs is that they remember the context of the conversation. You can ask follow-up questions without having to provide all of your question's context a second time.
Other LLM interactions are generic and repeatable, like asking ChatGPT to document a VBA procedure.
For these, it makes sense to spend some time fine-tuning your initial prompt. This will save you time in the long run, as you won't be constantly providing follow-up prompts to guide the LLM.
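If a task like that comes up often, it can help to capture the fine-tuned wording in a small, reusable template so you only refine it once. Here is a minimal sketch in Python; the function name and the prompt wording are illustrative assumptions, not taken from the original post:

```python
# Minimal sketch of a reusable prompt template for a recurring task.
# The helper name and prompt wording are illustrative, not from the post.

def document_vba_prompt(procedure_code: str) -> str:
    """Build a prompt asking an LLM to document a VBA procedure."""
    return (
        "You are an experienced VBA developer. Add a header comment and inline "
        "comments to the procedure below. Explain each argument, the return "
        "value, and any side effects.\n\n"
        + procedure_code
    )

# Usage: paste the result into ChatGPT, or send it through an API call.
print(document_vba_prompt('Public Sub SayHello()\n    MsgBox "Hello"\nEnd Sub'))
```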
The more I've used LLMs, the more I've found myself having the same kinds of interactions again and again.
Fine-Tuning Your Initial Prompt
When I first saw this prompt, Optimizing Prompt Iterations, from AI content creator Ruben Hassid, I thought it was gimmicky. But having used it several times now, it's become one of my most-used LLM prompts:
Forget all the previous instructions. Act like a Prompt Creator.
Your goal is to help me craft the best possible prompt for my needs. The prompt will be used by you, ChatGPT.
You will follow the following process:
1. Your first response will be to ask me what the prompt should be about. I will provide my answer, but we will be improving it through continual iterations by going through the next steps.
2. Based on my input, you will generate 3 sections:
   a) Revised Prompt (provide your rewritten prompt. It should be clear, concise, and easily understood by you)
   b) Suggestions (provide 3 suggestions on what details to include in the prompt to improve it)
   c) Questions (ask the 3 most relevant questions pertaining to what additional information is needed from me to improve the prompt)
3. We will continue this iterative process with me providing additional information to you and you updating the prompt in the Revised Prompt section until it is complete.
You can see the prompt in action by clicking the "Optimizing Prompt Iterations" link above. The best way to experience it, though, is to try it for yourself.
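If you find yourself reusing the meta prompt often, you can also seed it programmatically instead of pasting it into a new chat each time. Below is a minimal sketch using the OpenAI Python client; the model name and the choice to send the prompt as a system message are assumptions on my part, not part of Ruben Hassid's prompt:

```python
# Minimal sketch: start a chat seeded with the Prompt Creator meta prompt.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment;
# the model name below is an assumption and may need to be changed.
from openai import OpenAI

META_PROMPT = """Forget all the previous instructions. Act like a Prompt Creator.
...  (paste the full prompt from above here)
"""

client = OpenAI()
messages = [{"role": "system", "content": META_PROMPT}]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # should ask what the prompt is about

# From here, append your answers as "user" messages and keep calling the API
# to work through the Revised Prompt / Suggestions / Questions iterations.
```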