The most efficient, performant, and capable Llama models to date, Llama 3.2, are now available in Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models and built-in algorithms to help you quickly get started with ML. You can deploy and use Llama 3.2 models—90B, 11B, 3B, 1B, and Llama Guard 3 11B Vision—with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK. You can also easily fine-tune Llama 3.2 1B and 3B models with SageMaker JumpStart today.
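As a minimal sketch of programmatic deployment, the snippet below uses the SageMaker Python SDK's `JumpStartModel` class. The model identifier shown is an assumption based on JumpStart naming conventions; verify the exact ID in SageMaker Studio before use. Running this requires AWS credentials, a SageMaker execution role, and acceptance of the Llama EULA, and the deployed endpoint incurs charges until deleted.

```python
from typing import Any

# Assumed JumpStart model ID for Llama 3.2 3B; confirm the exact
# identifier in SageMaker Studio or the JumpStart model catalog.
MODEL_ID = "meta-textgeneration-llama-3-2-3b"


def deploy_and_query(prompt: str) -> Any:
    """Deploy a Llama 3.2 model from SageMaker JumpStart and run one request.

    Requires AWS credentials and a SageMaker execution role; deploying
    an endpoint incurs charges until it is deleted.
    """
    # Imported here so the module loads even without the sagemaker package.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=MODEL_ID)
    # Llama models are gated; accept_eula=True acknowledges the license.
    predictor = model.deploy(accept_eula=True)
    try:
        return predictor.predict({"inputs": prompt})
    finally:
        # Clean up the endpoint to stop incurring charges.
        predictor.delete_endpoint()
```

Fine-tuned or multimodal variants (for example the 11B and 90B vision models) follow the same pattern with a different `model_id` and an appropriately sized instance type.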
Llama 3.2 models are offered in various sizes: small and medium-sized multimodal models (11B and 90B parameters) capable of sophisticated reasoning tasks, including multimodal support for high-resolution images; lightweight text-only 1B and 3B parameter models suitable for edge devices; and the Llama Guard 3 11B Vision model, which is designed to support responsible innovation and system-level safety. Llama 3.2 is the first Llama model to support vision tasks, with a new model architecture that integrates image encoder representations into the language model. Llama 3.2 models can help you build and deploy generative AI applications that ignite new innovations, such as image reasoning, and are more accessible for edge applications. The new models are also designed to be more efficient for AI workloads, with reduced latency and improved performance, making them suitable for a wide range of applications.
Llama 3.2 models are initially available in SageMaker JumpStart in the US East (Ohio) AWS Region. To get started, see the documentation and blog.
Source: Amazon AWS