Amazon SageMaker Pipelines, the first purpose-built continuous integration and continuous delivery (CI/CD) service for machine learning (ML), now supports registering and deploying SageMaker inference pipelines with the model registry. The model registry is a central repository for cataloging production models, managing model versions, associating metadata with models, managing model approval status, and automating model deployment with CI/CD.

An inference pipeline is a SageMaker model composed of a linear sequence of two to fifteen containers that process inference requests on data. Previously, the model registry supported only models served from a single container. Customers can now register inference pipelines in the model registry as well: each model package version jointly tracks all of the pipeline's containers, and an approved model package version can then be deployed as an inference pipeline hosted on a SageMaker inference endpoint with CI/CD.
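As a rough sketch of what registering such a pipeline looks like, the payload below targets the SageMaker `CreateModelPackage` API, whose `InferenceSpecification.Containers` field accepts the pipeline's container sequence. All names, image URIs, and S3 paths here are hypothetical placeholders; the dict is what you would pass as keyword arguments to `boto3.client("sagemaker").create_model_package(...)`.

```python
# Hypothetical example: registering a two-container inference pipeline
# (a preprocessor followed by a scorer) as a single model package version.
# Account IDs, repository names, and S3 URIs are placeholders.
request = {
    "ModelPackageGroupName": "fraud-detection-pipeline",  # assumed group name
    "ModelPackageDescription": "Preprocessor + scorer inference pipeline",
    "ModelApprovalStatus": "PendingManualApproval",
    "InferenceSpecification": {
        # A linear sequence of 2-15 containers; inference requests flow
        # through them in order. All containers are tracked jointly under
        # this one model package version.
        "Containers": [
            {
                "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest",
                "ModelDataUrl": "s3://example-bucket/preprocess/model.tar.gz",
            },
            {
                "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/scorer:latest",
                "ModelDataUrl": "s3://example-bucket/scorer/model.tar.gz",
            },
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
}

# In a real workflow you would now call:
#   import boto3
#   boto3.client("sagemaker").create_model_package(**request)
print(len(request["InferenceSpecification"]["Containers"]))
```

Once a version in the group is approved (for example by flipping `ModelApprovalStatus` to `Approved`), a CI/CD trigger can deploy it as a multi-container SageMaker model behind an inference endpoint.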
Source:: Amazon AWS