Reduce ML inference costs on PyTorch with Amazon Elastic Inference

By GIXnews

You can now use Amazon Elastic Inference to accelerate inference and reduce inference costs for PyTorch models in Amazon SageMaker, Amazon EC2, and Amazon ECS. Enhanced PyTorch libraries for Elastic Inference are available automatically in Amazon SageMaker, the AWS Deep Learning AMIs, and AWS Deep Learning Containers, so you can deploy your PyTorch models in production with minimal code changes. Elastic Inference supports TorchScript-compiled models on PyTorch: to use Elastic Inference with PyTorch, you must convert your PyTorch model to TorchScript and use the Elastic Inference API for inference. With this release, PyTorch joins TensorFlow and Apache MXNet as a deep learning framework supported by Elastic Inference.
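As a minimal sketch of the conversion step, the snippet below traces a small toy model (`SmallNet` is a hypothetical example, not part of any AWS library) into TorchScript with `torch.jit.trace` and saves it; the commented-out block shows, per the AWS Elastic Inference documentation, how the EI-enabled PyTorch build loads and runs such a model against an accelerator device.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy model standing in for your real PyTorch model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = SmallNet().eval()

# Trace the model with a representative example input to produce TorchScript.
example_input = torch.rand(1, 4)
traced = torch.jit.trace(model, example_input)

# Save the TorchScript artifact; this file is what you deploy for inference.
traced.save("smallnet_traced.pt")

# On an EI-enabled instance, the Elastic Inference PyTorch build runs the
# saved model against the accelerator roughly like this (sketch, assuming
# the API described in the AWS Elastic Inference docs; not runnable here):
#
# loaded = torch.jit.load("smallnet_traced.pt")
# with torch.jit.optimized_execution(True, {"target_device": "eia:0"}):
#     output = loaded(torch.rand(1, 4))
```

Models with data-dependent control flow may need `torch.jit.script` instead of `torch.jit.trace`, since tracing only records the operations executed for the given example input.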

Source: Amazon AWS