
Amazon SageMaker JumpStart now supports fine-tuning of Foundation Models with domain adaptation

Starting today, Amazon SageMaker JumpStart provides the capability to fine-tune a large language model, in particular a text generation model, on a domain-specific dataset. Customers can now fine-tune models with their own custom datasets to improve performance in specific domains. For example, this blog describes how to use domain adaptation to fine-tune a GPT-J 6B model on publicly available financial data from the Securities and Exchange Commission (SEC) so that the model generates more relevant text for financial services use cases. Customers can fine-tune Foundation Models such as the GPT-J 6B and GPT-J 6B FP16 models for domain adaptation in JumpStart, either through the UI inside Amazon SageMaker Studio or through the SageMaker Python SDK.
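For readers who prefer the SDK route, below is a minimal sketch of what a programmatic fine-tuning job might look like using the SageMaker Python SDK's JumpStartEstimator. The model identifier, instance type, hyperparameter values, S3 paths, and example prompt are illustrative assumptions, not values taken from this announcement.

```python
# Minimal sketch of a JumpStart domain-adaptation fine-tuning job via the
# SageMaker Python SDK. Identifiers and paths below are assumptions.
from sagemaker.jumpstart.estimator import JumpStartEstimator

# GPT-J 6B as listed in JumpStart (assumed model identifier).
model_id = "huggingface-textgeneration1-gpt-j-6b"

estimator = JumpStartEstimator(
    model_id=model_id,
    instance_type="ml.g5.12xlarge",  # assumed GPU instance for a 6B-parameter model
)

# Hyperparameters vary per model; epochs is a common one.
estimator.set_hyperparameters(epochs="3")

# Point the training channel at a plain-text domain corpus in S3
# (hypothetical bucket and prefix shown).
estimator.fit({"training": "s3://my-bucket/sec-filings/train/"})

# Deploy the fine-tuned model to a real-time endpoint and query it.
predictor = estimator.deploy()
print(predictor.predict({"inputs": "This Form 10-K report shows that"}))
```

The same workflow can be driven from the JumpStart UI in SageMaker Studio by selecting the model card and pointing the training job at the dataset in S3.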

Source: Amazon AWS
