AWS Model-Specific Prompt Techniques
In this section of the course, you will learn how to engineer prompts for three models available as part of the Amazon Bedrock service.
Amazon Bedrock is a fully managed service that makes foundation models from Amazon and leading AI companies available through a single API.
Amazon Titan FMs
Amazon Titan Foundation Models (FMs) are pretrained on large datasets, making them powerful, general-purpose models.
Use them as is, or customize them with your own data for a particular task, without annotating large volumes of data.
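To make the API concrete, here is a minimal sketch of building an `invoke_model` request for a Titan text model. The model ID and request-body schema are assumptions based on the Bedrock documentation and should be verified against the current API reference; the parameter names match the Titan list later in this section.

```python
import json

def build_titan_request(prompt: str) -> dict:
    """Build an invoke_model request for an Amazon Titan text model (sketch)."""
    return {
        "modelId": "amazon.titan-text-express-v1",  # assumed model ID
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {
                "temperature": 0.2,      # lower = more focused output
                "topP": 0.9,
                "maxTokenCount": 256,
                "stopSequences": [],
            },
        }),
    }

request = build_titan_request("Summarize what Amazon Bedrock provides.")
# With AWS credentials configured, send it with boto3:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(**request)
# print(json.loads(response["body"].read())["results"][0]["outputText"])
```

Keeping the request body as plain JSON makes it easy to adjust one parameter at a time while testing.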
Amazon Nova
Amazon Nova is a new generation of FMs.
With the ability to process text, image, and video as prompts, you can use Amazon Nova-powered generative AI applications to understand videos, charts, and documents, or generate videos and other multimedia content.
Anthropic Claude
Claude is an AI chatbot built by Anthropic, which you can access through chat or through an API in the developer console.
Claude handles conversation, summarization, search, creative writing, coding, question answering, and more.
Claude is designed to respond conversationally and can adjust its persona, tone, and behavior to suit your output needs.
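A Claude request on Bedrock can be sketched using the parameter names listed for Anthropic models later in this section. This is the older text-completions format (newer Claude models use a messages-based schema instead), and the model ID is an assumption.

```python
import json

def build_claude_request(user_message: str) -> dict:
    """Build a Claude text-completions request body (sketch, legacy format)."""
    return {
        "modelId": "anthropic.claude-v2",  # assumed model ID
        "body": json.dumps({
            # Claude's completion format expects Human/Assistant turn markers.
            "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
            "temperature": 0.7,
            "top_p": 0.9,
            "top_k": 250,
            "max_tokens_to_sample": 300,
            # Stop before the model starts inventing the next user turn.
            "stop_sequences": ["\n\nHuman:"],
        }),
    }

req = build_claude_request("Rewrite this sentence in a formal tone: hey, the report is late.")
```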
AI21 Jurassic-2
Jurassic-2 is trained specifically to handle instruction-only prompts with no examples, also called zero-shot prompts.
Using only instructions in the prompt can be the most natural way to interact with large language models.
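A zero-shot prompt is just the instruction itself, with no worked examples. The sketch below uses the AI21 parameter names listed later in this section; the model ID is an assumption to check against the Bedrock model catalog.

```python
import json

def build_jurassic_request(instruction: str) -> dict:
    """Build a zero-shot Jurassic-2 request body (sketch)."""
    return {
        "modelId": "ai21.j2-mid-v1",  # assumed model ID
        "body": json.dumps({
            "prompt": instruction,  # instruction only, no examples
            "temperature": 0.3,
            "topP": 0.9,
            "maxTokens": 200,
            "numResults": 1,
        }),
    }

zero_shot = build_jurassic_request(
    "Summarize the benefits of managed foundation model services in one sentence."
)
```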
Parameters
Configure prompt parameters to customize results. Adjust one parameter at a time, as results vary by LLM. Not all parameters are available with all models.
Determinism parameters
Lower values yield more focused, factual results; higher values yield more diverse, creative results.
- Temperature: Controls randomness. Lower values concentrate probability on the most likely tokens; higher values flatten the distribution, adding diversity.
- Top_p: "Nucleus sampling." Samples from the smallest set of tokens whose cumulative probability reaches p; lower values give more exact answers, higher values more varied responses.
- Top_k: Limits sampling to the k highest-probability tokens.
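To build intuition for how temperature reshapes the token distribution, here is a small self-contained sketch using plain softmax arithmetic (independent of any Bedrock API; the logits are made-up scores for three candidate tokens):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
low = softmax_with_temperature(logits, 0.5)   # sharper: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: more diversity

# Lower temperature concentrates probability on the most likely token.
assert low[0] > high[0]
```

The same intuition applies to top_p and top_k: all three parameters trade off between sampling only the most probable tokens and sampling from a wider pool.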
Token count
- MinTokens: Minimum tokens to generate per response.
- MaxTokenCount: Maximum tokens before stopping.
Stop sequences
StopSequences: A list of strings; the model stops generating when it produces any of them.
Number of results
numResults: How many responses to generate per prompt.
Penalties (Jurassic)
- FrequencyPenalty: Penalty that grows with how often a token has already been generated, discouraging repetition.
- PresencePenalty: Penalty applied to any token that has already appeared in the generated text, regardless of how often.
- CountPenalty: Penalty proportional to a token's count in the generated response.
Comparing parameters
| Model Provider | Model Name | Parameters |
|---|---|---|
| Amazon | Amazon Titan | temperature, topP, maxTokenCount, stopSequences |
| Amazon | Amazon Nova | maxTokens, temperature, topP, topK, stopSequences, toolConfig |
| Anthropic | Claude | temperature, top_p, top_k, max_tokens_to_sample, stop_sequences |
| AI21 Labs | Jurassic-2 | temperature, topP, topKReturn, maxTokens, stopSequences, numResults, minTokens, frequencyPenalty, presencePenalty, countPenalty |
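These naming differences are a common source of request errors. The mapping below is a hypothetical helper (not part of any AWS SDK) for validating that a request body only uses parameter names its target model supports:

```python
# Parameter names per model, taken from the comparison table above.
SUPPORTED_PARAMS = {
    "titan": {"temperature", "topP", "maxTokenCount", "stopSequences"},
    "nova": {"maxTokens", "temperature", "topP", "topK", "stopSequences",
             "toolConfig"},
    "claude": {"temperature", "top_p", "top_k", "max_tokens_to_sample",
               "stop_sequences"},
    "jurassic-2": {"temperature", "topP", "topKReturn", "maxTokens",
                   "stopSequences", "numResults", "minTokens",
                   "frequencyPenalty", "presencePenalty", "countPenalty"},
}

def unsupported_params(model: str, params: dict) -> set:
    """Return the keys in `params` that the given model does not accept."""
    return set(params) - SUPPORTED_PARAMS[model]

# `maxTokens` is a Nova/Jurassic-2 name; Titan calls it `maxTokenCount`.
assert unsupported_params("titan", {"temperature": 0.5, "maxTokens": 100}) == {"maxTokens"}
assert unsupported_params("claude", {"top_k": 250}) == set()
```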
Parameters can interact in complex ways.
Results depend on the specific model and model version, so adjust parameters one at a time and test thoroughly to achieve the best results.