Introducing Amazon EC2 Inf1 Instances, high performance and the lowest cost machine learning inference in the cloud
Today, we are announcing the general availability of Amazon EC2 Inf1 instances, built from the ground up to support machine learning inference applications. Inf1 instances feature up to 16 AWS Inferentia chips, high-performance machine learning inference chips designed and built by AWS. In addition, we’ve coupled the Inferentia chips with the latest custom 2nd Gen Intel® Xeon® Scalable processors and up to 100 Gbps networking to enable high-throughput inference. This powerful configuration enables Inf1 instances to deliver up to 3x higher throughput and up to 40% lower cost per inference than Amazon EC2 G4 instances, which were already the lowest-cost instances for machine learning inference available in the cloud.
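To make the "cost per inference" metric concrete, here is a minimal sketch of how such a comparison is computed from an instance's hourly price and sustained throughput. The prices and throughput figures below are hypothetical placeholders chosen only to mirror the announcement's "3x throughput, 40% lower cost" relationship; they are not actual AWS pricing or benchmark results.

```python
# Illustrative cost-per-inference comparison.
# All numbers are hypothetical, NOT actual AWS pricing or benchmarks.

def cost_per_million_inferences(hourly_price_usd: float,
                                inferences_per_sec: float) -> float:
    """Cost in USD to serve one million inferences at a sustained rate."""
    inferences_per_hour = inferences_per_sec * 3600
    return hourly_price_usd / inferences_per_hour * 1_000_000

# Hypothetical figures: an Inf1-class instance at 3x the throughput of a
# G4-class instance, at 1.8x the hourly price -> 40% lower cost per inference.
g4_cost = cost_per_million_inferences(hourly_price_usd=0.75,
                                      inferences_per_sec=1000)
inf1_cost = cost_per_million_inferences(hourly_price_usd=1.35,
                                        inferences_per_sec=3000)

print(f"G4-class:   ${g4_cost:.4f} per million inferences")
print(f"Inf1-class: ${inf1_cost:.4f} per million inferences")
print(f"Savings:    {(1 - inf1_cost / g4_cost):.0%}")
```

The takeaway is that cost per inference depends on the ratio of price to throughput: a higher hourly price can still yield a lower cost per inference when throughput grows faster than price.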
Source: Amazon AWS