Earlier this year, we launched Amazon Bedrock Prompt Management in preview to simplify the creation, testing, versioning, and sharing of prompts. Today, we’re announcing its general availability and adding several key new features.

First, you can now easily run prompts stored in your AWS account: the Amazon Bedrock Runtime Converse and InvokeModel APIs support executing a prompt by its identifier. Next, when creating and storing prompts, you can specify a system prompt, multiple user/assistant messages, and a tool configuration, in addition to the model choice and inference configuration available in preview. This lets advanced prompt engineers use the function calling capabilities offered by certain model families, such as the Anthropic Claude models. You can now store prompts for Amazon Bedrock Agents in addition to foundation models, and you can compare two versions of a prompt to quickly review the differences between them. Finally, you can store custom metadata (such as author, team, or department) with your prompts through the Amazon Bedrock SDK to meet your enterprise prompt management needs.
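For example, with the AWS SDK for Python (Boto3) you can run a stored prompt by passing its ARN as the modelId in a Converse call. The following is a minimal sketch; the prompt ARN, Region, and the genre variable are placeholders for your own values:

```python
import boto3

# Amazon Bedrock Runtime client (Region is a placeholder)
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Pass the stored prompt's ARN as the modelId. promptVariables fills in
# any variables defined in the prompt; "genre" is a hypothetical example.
response = client.converse(
    modelId="arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT1234",
    promptVariables={"genre": {"text": "science fiction"}},
)

print(response["output"]["message"]["content"][0]["text"])
```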
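Similarly, a prompt with a system prompt, chat messages, and custom metadata can be created through the Amazon Bedrock Agent client. This sketch assumes the CHAT template type and uses placeholder values for the prompt name, model, and metadata keys; a toolConfiguration block could be added alongside the messages to enable function calling:

```python
import boto3

# Prompt Management operations live on the Amazon Bedrock Agent client
client = boto3.client("bedrock-agent", region_name="us-east-1")

response = client.create_prompt(
    name="story-writer",  # hypothetical prompt name
    variants=[
        {
            "name": "variant-one",
            "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
            "templateType": "CHAT",
            "templateConfiguration": {
                "chat": {
                    # System prompt plus user/assistant messages
                    "system": [{"text": "You are a concise writing assistant."}],
                    "messages": [
                        {"role": "user",
                         "content": [{"text": "Write a short {{genre}} story."}]},
                    ],
                    "inputVariables": [{"name": "genre"}],
                    # A toolConfiguration for function calling could go here
                },
            },
            "inferenceConfiguration": {
                "text": {"temperature": 0.7, "maxTokens": 512}
            },
            # Custom metadata stored with the prompt (example keys)
            "metadata": [
                {"key": "author", "value": "jane-doe"},
                {"key": "team", "value": "content-platform"},
            ],
        }
    ],
)
print(response["id"], response["arn"])
```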
Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other foundation models (FMs) from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon, through a single API.
Learn more in our documentation and read our launch blog post.
Source: Amazon AWS