Announcing new deployment guardrails for Amazon SageMaker Inference endpoints

Amazon SageMaker Inference now supports new deployment options for updating your machine learning models in production. Using the new deployment guardrails, you can switch from the current model in production to a new one in a controlled way. This launch introduces canary and linear traffic shifting modes, giving you granular control over how traffic shifts from your current model to the new one during the update. With built-in safeguards such as auto-rollbacks, you can catch issues early and automatically take corrective action before they cause significant production impact.
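As a rough sketch of how these options surface in the SageMaker API, the snippet below builds a `DeploymentConfig` for an endpoint update that shifts traffic in canary steps and rolls back on a CloudWatch alarm. The endpoint, config, and alarm names are placeholders, and the exact field values are illustrative assumptions, not recommendations.

```python
# Hypothetical deployment-guardrail configuration for updating a SageMaker
# endpoint. Names like "my-endpoint" and "HighModelErrorRate" are placeholders.
deployment_config = {
    "BlueGreenUpdatePolicy": {
        "TrafficRoutingConfiguration": {
            # "CANARY" shifts a small slice first; "LINEAR" shifts in equal steps.
            "Type": "CANARY",
            # Route 10% of capacity to the new fleet as the canary slice.
            "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": 10},
            # Bake time between the canary step and the full shift.
            "WaitIntervalInSeconds": 300,
        },
        # Keep the old fleet around briefly after the shift completes.
        "TerminationWaitInSeconds": 120,
    },
    "AutoRollbackConfiguration": {
        # If this CloudWatch alarm fires during the update, SageMaker
        # automatically rolls traffic back to the old fleet.
        "Alarms": [{"AlarmName": "HighModelErrorRate"}]
    },
}

# With boto3 installed and credentials configured, the update call would be:
#
# import boto3
# sm = boto3.client("sagemaker")
# sm.update_endpoint(
#     EndpointName="my-endpoint",
#     EndpointConfigName="my-endpoint-config-v2",
#     DeploymentConfig=deployment_config,
# )

print(deployment_config["BlueGreenUpdatePolicy"]["TrafficRoutingConfiguration"]["Type"])
```

Switching `"Type"` to `"LINEAR"` (with a `LinearStepSize` instead of `CanarySize`) would shift traffic in equal increments rather than a single canary slice.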

Source: Amazon AWS