With Amazon Bedrock Model Distillation, customers can use smaller, faster, and more cost-effective models that deliver use-case-specific accuracy comparable to that of the most capable models in Amazon Bedrock.
Today, fine-tuning a smaller, cost-efficient model to increase its accuracy for a customer's use case is an iterative process: customers need to write prompts and responses, refine the training dataset, ensure that the dataset captures diverse examples, and adjust the training parameters.
Amazon Bedrock Model Distillation automates the process of generating synthetic data from the teacher model, trains and evaluates the student model, and then hosts the final distilled model for inference. To remove some of the burden of iteration, Model Distillation may apply different data synthesis methods best suited to your use case to create a distilled model that approximately matches the advanced model for that specific use case. For example, Bedrock may expand the training dataset by generating similar prompts, or it may generate high-quality synthetic responses using customer-provided prompt-response pairs as golden examples.
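To make the workflow concrete, the sketch below shows roughly how a distillation job could be started programmatically with boto3 (Python). This is a minimal illustration under stated assumptions, not definitive API usage: the model identifiers, role ARN, S3 URIs, job names, and the exact shape of the customizationConfig field are placeholders to verify against the current Amazon Bedrock documentation.

```python
import boto3

# Minimal sketch of starting a Bedrock Model Distillation job.
# All identifiers, ARNs, and S3 URIs below are illustrative placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="example-distillation-job",
    customModelName="example-distilled-model",
    roleArn="arn:aws:iam::111122223333:role/ExampleBedrockRole",
    # Student: the smaller, cost-efficient model to be fine-tuned.
    baseModelIdentifier="meta.llama3-1-8b-instruct-v1:0",
    customizationType="DISTILLATION",
    # Prompts (or prompt-response pairs used as golden examples) in S3.
    trainingDataConfig={"s3Uri": "s3://example-bucket/distillation/prompts.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/distillation/output/"},
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                # Teacher: the larger, more capable model that generates
                # the synthetic training responses.
                "teacherModelIdentifier": "meta.llama3-1-70b-instruct-v1:0",
                "maxResponseLengthForInference": 1000,
            }
        }
    },
)
print("Started distillation job:", response["jobArn"])
```

Once the job completes, the resulting distilled model can be evaluated and hosted for inference like any other custom model in Amazon Bedrock.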
Learn more in our documentation and blog.
Source: Amazon AWS