Amazon Bedrock Guardrails announces the general availability of industry-leading image content filters

Amazon Bedrock Guardrails announces the general availability of image content filters – offering industry-leading text and image content safeguards that help customers block up to 88% of harmful multimodal content. This new capability removes the heavy lifting required for customers to build their own safeguards for image content, or to spend cycles on manual content moderation that can be error-prone and tedious. Bedrock Guardrails provides configurable safeguards to detect and block harmful content and prompt attacks, deny specific topics, redact personally identifiable information (PII), block specific words, perform contextual grounding checks that detect model hallucinations and assess the relevance of model responses, and identify, correct, and explain factual claims in model responses using Automated Reasoning checks. Guardrails can be applied to any foundation model – including models hosted on Amazon Bedrock, self-hosted models, and third-party models outside Bedrock – using the ApplyGuardrail API, providing a consistent user experience and helping to standardize safety and privacy controls.

Image content filters can be applied to all categories within the content filter policy of Bedrock Guardrails: hate, insults, sexual, violence, misconduct, and prompt attack. With this new capability, customers can apply filters to image content, text content, or both, and build safe generative AI applications that adhere to their responsible AI policies.
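As a rough illustration, the request shape for evaluating mixed text and image content through the ApplyGuardrail API (via boto3's `bedrock-runtime` client) looks something like the sketch below. The guardrail ID, version, and image bytes are placeholders, and the actual service call is left commented out since it requires AWS credentials and a configured guardrail.

```python
import json

def build_apply_guardrail_request(guardrail_id, guardrail_version, text, image_bytes):
    """Assemble the content blocks ApplyGuardrail expects for evaluating
    user input that mixes text and an image."""
    return {
        "guardrailIdentifier": guardrail_id,   # placeholder ID
        "guardrailVersion": guardrail_version,
        "source": "INPUT",  # "INPUT" checks user prompts; "OUTPUT" checks model responses
        "content": [
            {"text": {"text": text}},
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
        ],
    }

request = build_apply_guardrail_request(
    "gr-example123", "1", "Describe this picture.", b"\x89PNG..."  # dummy bytes
)

# Print the request envelope (content omitted, since raw bytes aren't JSON-serializable):
print(json.dumps({k: v for k, v in request.items() if k != "content"}, indent=2))

# With credentials configured, the request would be sent as:
#   client = boto3.client("bedrock-runtime")
#   response = client.apply_guardrail(**request)
# response["action"] is "GUARDRAIL_INTERVENED" when the guardrail blocks content.
```

Because the same request works regardless of which model (if any) generated the content, this is how guardrails extend to self-hosted and third-party models outside Bedrock.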

This new capability is generally available in the US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Tokyo) AWS Regions.

To learn more, see the blog, technical documentation, and the Bedrock Guardrails product page.

Source: Amazon AWS