With the launch of the Automated Reasoning checks safeguard in Amazon Bedrock Guardrails, AWS becomes the first and only major cloud provider to integrate automated reasoning into its generative AI offerings. Automated Reasoning checks help detect hallucinations and provide verifiable proof that a large language model (LLM) response is accurate. Automated Reasoning tools do not guess or predict accuracy. Instead, they rely on sound mathematical techniques to definitively verify compliance with expert-created Automated Reasoning Policies, improving transparency in the process.

Organizations increasingly use LLMs to improve user experiences and reduce operational costs by enabling conversational access to relevant, contextualized information. However, LLMs are prone to hallucinations, and because LLMs generate fluent, compelling answers, these hallucinations are often difficult to detect. The possibility of hallucinations, combined with an inability to explain why they occurred, slows generative AI adoption for use cases where accuracy is critical.
With Automated Reasoning checks, domain experts can more easily build specifications called Automated Reasoning Policies that encapsulate their knowledge in fields such as operational workflows and HR policies. Users of Amazon Bedrock Guardrails can validate generated content against an Automated Reasoning Policy to identify inaccuracies and unstated assumptions, and explain why statements are accurate in a verifiable way. For example, you can configure Automated Reasoning checks to validate answers on topics defined in complex HR policies (which can include constraints on employee tenure, location, and performance) and explain why an answer is accurate with supporting evidence.
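Because the checks run as part of a guardrail, generated content can be validated through the standard ApplyGuardrail API. The snippet below is a minimal sketch using boto3; it assumes a guardrail has already been created with an Automated Reasoning Policy attached, and the guardrail ARN, version, and sample answer are placeholder values. The structure of the Automated Reasoning findings in the assessments is simplified here for illustration.

```python
import boto3

# Sketch only: assumes an existing guardrail configured with an
# Automated Reasoning Policy. IDs and text below are placeholders.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# An LLM answer about an HR policy that we want to validate.
llm_answer = (
    "Employees with at least two years of tenure are eligible to work "
    "remotely from any location."
)

# Evaluate the model output against the guardrail; Automated Reasoning
# checks run as part of the guardrail's OUTPUT evaluation.
response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="arn:aws:bedrock:us-east-1:123456789012:guardrail/EXAMPLE_ID",
    guardrailVersion="1",
    source="OUTPUT",
    content=[{"text": {"text": llm_answer}}],
)

# "GUARDRAIL_INTERVENED" means one or more checks flagged the content;
# the assessments carry the per-policy findings and explanations.
print(response["action"])
for assessment in response.get("assessments", []):
    print(assessment)
```

Inspecting the returned assessments is what surfaces the verifiable explanation: findings indicate which policy rules the statement satisfied or violated, and which assumptions were left unstated.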
Contact your AWS account team to request access to Automated Reasoning checks in Amazon Bedrock Guardrails in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Bedrock Guardrails page and read the AWS News Blog.
Source: Amazon AWS