More on LLM Prompting
September, 2023
Generative AI needs clear instructions from humans
to perform well. The text instructions given to a large
language model (LLM) are called "prompts". Prompt
strategies form the discipline of prompt engineering.
Prompt engineering
Zero-shot learning: give the text to the model
and ask for results, without providing any examples.
# A prompt example
A brown cow walked over the moon.
# Another prompt
If I have 20 apples, eat three of them, and sell three more, how many do I have left?
Few-shot learning: give a few example
input-output pairs, and ask for the next. Some hints:
choose examples similar to the task at hand, choose
diverse and representative examples, and keep the
examples in random order.
# Few-shot prompt example
You rock; positive.
It's amazing; positive.
It sucks; negative.
Yikes;
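The example above can be assembled programmatically. A minimal sketch, assuming the `input; label.` formatting of the example; the shuffle follows the hint about keeping examples in random order:

```python
import random

def build_few_shot_prompt(examples, query, seed=None):
    """Assemble a few-shot prompt from (input, label) pairs plus a new query."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)  # hint: keep the examples in random order
    lines = [f"{text}; {label}." for text, label in examples]
    lines.append(f"{query};")  # the model completes the missing label
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("You rock", "positive"), ("It's amazing", "positive"), ("It sucks", "negative")],
    "Yikes",
    seed=0,
)
```

The resulting string ends with `Yikes;`, leaving the label for the model to fill in.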
Instruction prompting fine-tunes a
pretrained model with high-quality sets of task
instructions, inputs, and outputs. Some hints: describe
the task in detail, and be specific and precise.
# Instruction
Describe LLMs to a 5 year old.
# More structure
A role: You are a web developer.
An instruction/task: Convert the following HTML to React.
A question: Can you explain what you will do?
Context: React and HTML development.
Examples: ...
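The structured pieces above can be concatenated into a single prompt. A sketch with hypothetical field names (the labels and their order are assumptions for illustration):

```python
def build_structured_prompt(role, task, question=None, context=None, examples=()):
    """Concatenate role, task, context, examples, and question into one prompt."""
    parts = [f"Role: {role}", f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    for example in examples:
        parts.append(f"Example: {example}")
    if question:
        parts.append(f"Question: {question}")
    return "\n".join(parts)

prompt = build_structured_prompt(
    role="You are a web developer.",
    task="Convert the following HTML to React.",
    question="Can you explain what you will do?",
    context="React and HTML development.",
)
```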
Chain-of-Thought (CoT) prompting:
give step-by-step reasoning that eventually leads to the
answer.
Few-shot CoT prompts the model with a
few examples, each comprising a reasoning
chain.
Zero-shot CoT asks the model to generate
a reasoning chain first, and then prompts for the answer.
Question: When we add the ages of Alice and Bob, we get 14. Alice is 5 years old; how old is Bob?
Answer: We subtract 5 from 14 and get 9, which is Bob's age.
Question: When we add the grades of Claire and Alice, we get 111. Claire got 51 on the exam. How much did Alice get?
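A few-shot CoT prompt keeps each worked example's reasoning chain in its answer. A sketch that prepends the age example above to the new grades question:

```python
def build_cot_prompt(worked_examples, new_question):
    """Few-shot CoT: each example pairs a question with a reasoned answer."""
    blocks = [f"Question: {q}\nAnswer: {a}" for q, a in worked_examples]
    blocks.append(f"Question: {new_question}\nAnswer:")  # model continues reasoning
    return "\n\n".join(blocks)

prompt = build_cot_prompt(
    [("When we add the ages of Alice and Bob, we get 14. "
      "Alice is 5 years old; how old is Bob?",
      "We subtract 5 from 14 and get 9, which is Bob's age.")],
    "When we add the grades of Claire and Alice, we get 111. "
    "Claire got 51 on the exam. How much did Alice get?",
)
```

Shown the subtraction pattern, the model is expected to reason 111 − 51 = 60.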
Tree-of-Thought (ToT) prompting:
explore multiple reasoning paths at each step.
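A minimal sketch of the ToT idea, assuming a hypothetical `propose` function that extends a partial reasoning path with candidate next thoughts and a hypothetical `score` function that rates a path; a beam search keeps only the best paths at each step:

```python
def tree_of_thought(question, propose, score, width=2, depth=3):
    """Beam search over reasoning paths: expand, score, keep the best `width`."""
    paths = [[]]  # each path is a list of thought strings
    for _ in range(depth):
        candidates = [p + [t] for p in paths for t in propose(question, p)]
        candidates.sort(key=score, reverse=True)
        paths = candidates[:width]  # prune the weak branches
    return paths[0] if paths else []
```

In a real system, `propose` and `score` would each be LLM calls; here they are placeholders for whatever generation and evaluation mechanism is available.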
Automatic prompt design: use
algorithms or machine learning models to generate
prompts.
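A toy sketch of the search variant of this idea: try several candidate prompt templates, score each on a small labeled dev set, and keep the best. `model` and `metric` are hypothetical callables standing in for an LLM and an evaluation function:

```python
def best_prompt(templates, dev_set, model, metric):
    """Score each candidate prompt template on a labeled dev set; keep the best.

    Assumed interfaces: model(prompt) -> answer, metric(answer, gold) -> score.
    """
    def score(template):
        return sum(metric(model(template.format(x=x)), y) for x, y in dev_set)
    return max(templates, key=score)
```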
One can also
augment models with
reasoning capabilities, or with the ability to use
external tools.
Retrieval over a knowledge base can
incorporate the retrieved content into the
prompt.
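A sketch of this retrieval-augmented pattern, using naive word overlap as the relevance score (a real system would use embeddings or a search index):

```python
import re

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    return sorted(documents, key=lambda d: len(words(query) & words(d)),
                  reverse=True)[:k]

def build_rag_prompt(query, documents, k=2):
    # Paste the retrieved content into the prompt ahead of the question.
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```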
Internal retrieval asks the model itself to generate
relevant knowledge first, and then includes that
generated text in the prompt; this can help too.
External API calls may help the model
give better answers. Text-to-text calls, for example, can
return some text to render as part of the response.
External APIs like calculators, Q&A systems, search
engines, translation systems, or calendars can help the
LLM prepare a better response.
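A sketch of the calculator case: the model's draft output contains a tool-call marker, the program executes it, and the result is spliced back into the text. The `CALC(...)` marker syntax is an assumption for illustration:

```python
import re

def run_calculator_calls(text):
    """Replace CALC(expr) markers with the evaluated arithmetic result."""
    def evaluate(match):
        expr = match.group(1)
        # Only allow digits and basic operators before eval'ing.
        if not re.fullmatch(r"[\d+\-*/(). ]+", expr):
            raise ValueError(f"unsafe expression: {expr!r}")
        return str(eval(expr))  # assumption: arithmetic-only, vetted input
    return re.sub(r"CALC\(([^)]*)\)", evaluate, text)

draft = "Alice's grade is CALC(111 - 51)."
result = run_calculator_calls(draft)  # -> "Alice's grade is 60."
```

The same pattern extends to search, translation, or calendar APIs: detect the marker, call the tool, and substitute the tool's answer into the response.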