AWS Basic Prompt Techniques
When crafting and manipulating prompts, you can use specific techniques to achieve the response you want from AI models.
In this lesson, you learn how to use various prompt engineering techniques to help you use generative AI applications effectively for your unique business objectives.
Zero-shot prompting
Zero-shot prompting is a technique where you present a task to an LLM without giving the model further examples.
With zero-shot prompting, you expect the model to perform the task without any prior examples, or "shots," of the task.
Modern LLMs demonstrate remarkable zero-shot performance.
The following tips can help you use zero-shot prompting:
- Larger LLMs are more likely to produce effective results with zero-shot prompts.
- Instruction tuning can improve zero-shot learning. You can use reinforcement learning from human feedback (RLHF) to scale instruction tuning, aligning modern LLMs to better fit human preferences.
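As a minimal sketch (the helper name here is hypothetical, not part of any AWS API), a zero-shot prompt is simply a task instruction followed by the input, with no worked examples for the model to imitate:

```python
def build_zero_shot_prompt(instruction: str, text: str) -> str:
    """Compose a zero-shot prompt: a task instruction plus the input,
    with no worked examples for the model to learn from."""
    return f"{instruction}\n\nText: {text}\nAnswer:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the following text as Positive, Negative, or Neutral.",
    "Don't miss the electric vehicle revolution!",
)
print(prompt)
```

The model receives only the instruction and the text to classify; everything else is left to its pretrained capabilities.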
Zero-shot prompt example
Don't miss the electric vehicle revolution! AnyCompany is ditching muscle cars for EVs, creating a huge opportunity for investors.
Note: This prompt did not provide any examples to the model. However, the model was still effective in deciphering the task.
Few-shot prompting
Few-shot prompting is a technique where you provide the model with contextual information about requested tasks. You include examples of both the task and the output you want, helping the model follow your guidance closely.
When using few-shot prompting, consider:
- Labels in a few-shot prompt don't need to be correct to improve performance. Random labels often perform better than no labels at all. However, the label space and distribution of input text are important.
- If you have many examples, use techniques to comply with token limits and dynamically populate prompt templates. An example selector based on semantic similarity can help.
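A sketch of the second tip, under simplifying assumptions: real example selectors typically rank candidates by semantic similarity to the query using embeddings, but a greedy selector with a character budget (a rough stand-in for a token limit) illustrates the idea of dynamically populating a prompt template. All names here are hypothetical:

```python
def build_few_shot_prompt(examples, query, max_chars=600):
    """Greedily add labeled examples until a character budget is hit
    (a rough stand-in for a model's token limit), then append the
    unlabeled query for the model to complete."""
    parts, used = [], len(query)
    for text, label in examples:
        block = f"{text}\nAnswer: {label}"
        if used + len(block) > max_chars:
            break  # stop before exceeding the budget
        parts.append(block)
        used += len(block)
    parts.append(f"{query}\nAnswer:")
    return "\n\n".join(parts)

examples = [
    ("Research firm fends off allegations of impropriety.", "Negative"),
    ("Offshore windfarms continue to thrive.", "Positive"),
]
few_shot_prompt = build_few_shot_prompt(
    examples, "Manufacturing plant is the latest target in investigation."
)
```

Shrinking `max_chars` drops trailing examples first, so the prompt degrades gracefully toward zero-shot rather than overflowing the context window.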
Few-shot prompt example
Research firm fends off allegations of impropriety over new technology.
Answer: Negative
Offshore windfarms continue to thrive as vocal minority in opposition dwindles.
Answer: Positive
Manufacturing plant is the latest target in investigation by state officials.
Answer:
Note: The Amazon Titan Text model was used in this example.
Chain-of-thought prompting
Chain-of-thought (CoT) prompting breaks down complex reasoning tasks through intermediary reasoning steps. You can use both zero-shot and few-shot techniques with CoT prompts.
CoT prompts are specific to a problem type. Use the phrase "Think step by step" to invoke CoT reasoning.
Tip: Use CoT prompting when the task involves several steps or requires a series of reasoning steps.
CoT zero-shot prompt example
The total cost of vehicle A is $40,000, and it requires a 30 percent down payment.
The total cost of vehicle B is $50,000, and it requires a 20 percent down payment.
Which vehicle requires the larger down payment? (Think step by step)
The down payment for vehicle A is 30 percent of $40,000, which is (30/100) * 40,000 = $12,000.
The down payment for vehicle B is 20 percent of $50,000, which is (20/100) * 50,000 = $10,000.
Because $12,000 is greater than $10,000, vehicle A needs a larger down payment than vehicle B.
Note: This prompt did not provide examples to the model, yet the model was still effective in deciphering the task.
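The intermediate steps the model is expected to produce can be checked with plain arithmetic. A quick sketch, mirroring the reasoning in the example above:

```python
# Verify each reasoning step of the down-payment comparison.
down_a = 30 * 40_000 // 100  # vehicle A: 30 percent of $40,000
down_b = 20 * 50_000 // 100  # vehicle B: 20 percent of $50,000

print(down_a, down_b)  # the two intermediate results
larger = "A" if down_a > down_b else "B"  # the final comparison step
```

Laying out intermediate values like this is exactly the structure CoT prompting asks the model to produce in text.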
CoT few-shot prompt example
CoT prompting becomes more powerful when combined with few-shot prompting:
Monday: 6,500 viewers
Tuesday: 6,400 viewers
Wednesday: 6,300 viewers
Question: How many viewers can we expect on Friday?
Answer: Based on the numbers given and without any more information, there is a daily decrease of 100 viewers. If we assume this trend will continue, we can expect 6,200 viewers on Thursday, and therefore 6,100 viewers on Friday.
Question: How many viewers can we expect on Saturday? (Think step by step)
Answer:
This prompt provides both few-shot context (a question-and-answer example) and CoT prompting ("Think step by step").
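The reasoning the model is expected to follow, assuming a constant daily change continues, can be sketched in code (the function name is hypothetical, used only for illustration):

```python
def extrapolate_viewers(history, days_ahead):
    """Assume the constant daily change seen in the history continues,
    mirroring the step-by-step reasoning in the prompt above."""
    daily_change = (history[-1] - history[0]) // (len(history) - 1)
    return history[-1] + daily_change * days_ahead

history = [6_500, 6_400, 6_300]            # Monday through Wednesday
friday = extrapolate_viewers(history, 2)   # two days after Wednesday
saturday = extrapolate_viewers(history, 3)
```

Under the linear-trend assumption stated in the example's answer, Friday works out to 6,100 viewers, and extending the same chain of reasoning one more day gives the Saturday figure the prompt asks for.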