Distillation and fine-tuning of SLMs for strong task-specific performance

February 27, 2025 · Santiago

SLM Distillation and Fine-tuning

How to compress large language models via distillation and fine‑tuning, with code, performance metrics, data format, and a call‑center case study.
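The talk's materials are members-only, so as orientation, here is a minimal sketch of the core idea behind distillation: train a small student model to match the temperature-softened output distribution of a large teacher, using the KL-divergence loss from Hinton et al.'s formulation. The function names and example logits below are illustrative, not taken from the talk.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # softer distribution that exposes the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# A student that reproduces the teacher's logits incurs zero loss;
# any mismatch produces a positive loss to minimize during training.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))            # 0.0
print(distillation_loss(teacher, [1.0, 1.0, 1.0]))    # > 0
```

In practice this soft-label loss is combined with the usual cross-entropy on hard labels, and the student is a much smaller model (an SLM) fine-tuned on task-specific data, as in the call-center case study the talk covers.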
