Knowledge Bases for Amazon Bedrock securely connects foundation models (FMs) to internal company data sources for Retrieval Augmented Generation (RAG), delivering more relevant and accurate responses. We are excited to announce that Guardrails for Amazon Bedrock is now integrated with Knowledge Bases. Guardrails allows you to implement safeguards customized to your RAG application requirements and responsible AI policies, leading to a better end-user experience.

Guardrails provides a comprehensive set of policies to protect your users from undesirable responses and interactions with a generative AI application. First, you can customize a set of denied topics to avoid within the context of your application. Second, you can filter content across prebuilt harmful categories such as hate, insults, sexual content, violence, misconduct, and prompt attacks. Third, you can define a set of offensive and inappropriate words to be blocked in your application. Finally, you can filter user inputs containing sensitive information (for example, personally identifiable information) or redact confidential information in model responses based on your use case. Guardrails can be applied both to the input sent to the model and to the content generated by the foundation model.

This capability within Knowledge Bases is now available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), US East (N. Virginia), and US West (Oregon) Regions. To learn more, refer to the Knowledge Bases for Amazon Bedrock documentation. To get started, visit the Amazon Bedrock console or use the RetrieveAndGenerate API, as sketched below.
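As a minimal sketch of how a guardrail can be attached to a RetrieveAndGenerate call with boto3, the example below passes a guardrail ID and version alongside the knowledge base configuration. The knowledge base ID, guardrail ID and version, model ARN, and Region shown are placeholders, not values from this announcement; substitute your own resources and check the current API reference for the exact request shape.

```python
# Sketch: query a knowledge base with a guardrail applied to generation.
# All IDs and ARNs below are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What is our internal refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "YOUR_KB_ID",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            "generationConfiguration": {
                # Attach the guardrail so its policies are enforced on the
                # user input and the generated response.
                "guardrailConfiguration": {
                    "guardrailId": "YOUR_GUARDRAIL_ID",  # placeholder
                    "guardrailVersion": "1",
                }
            },
        },
    },
)

print(response["output"]["text"])
```

If the guardrail blocks the prompt or the response, the output text is replaced by the blocked-message wording configured on the guardrail, so the application can surface a consistent refusal to the end user.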
Source:: Amazon AWS