SLM Distillation and Fine-tuning
How to compress large language models via distillation and fine‑tuning, with code, performance metrics, data format, and a call‑center case study.
This process compresses a large-scale language model into an efficient version optimized for a specific task, often improving both performance on that task and adaptability.
I will present code, an example of the resulting performance, and an example of the training data format.
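As a taste of what the code walk-through covers, here is a minimal sketch of soft-label distillation in PyTorch with Hugging Face Transformers. The model names, the JSONL record schema, and the temperature are illustrative assumptions, not the talk's actual setup; the teacher and student are taken from the same model family so they share a tokenizer and their logits align token-for-token.

```python
# Minimal sketch of soft-label distillation, assuming a PyTorch / Hugging Face
# setup. Model names, JSONL schema, and temperature are illustrative
# assumptions, not the talk's actual code.
import json

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical training-data format: one JSON object per line (JSONL),
# pairing a prompt with a target completion (e.g. generated by the teacher).
record = json.loads(
    '{"prompt": "Summarize the call: ...", '
    '"completion": "Customer asked about billing."}'
)

TEACHER = "Qwen/Qwen2.5-7B-Instruct"    # assumed large teacher
STUDENT = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed small student, same vocabulary

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER).eval()
student = AutoModelForCausalLM.from_pretrained(STUDENT)

def distillation_loss(prompt: str, completion: str,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    inputs = tokenizer(prompt + completion, return_tensors="pt")
    with torch.no_grad():  # the teacher stays frozen
        teacher_logits = teacher(**inputs).logits
    student_logits = student(**inputs).logits
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2  # standard temperature correction

loss = distillation_loss(record["prompt"], record["completion"])
loss.backward()  # an optimizer step on the student's parameters would follow
```

In practice this soft-label term is usually mixed with the ordinary next-token cross-entropy on the ground-truth completion, weighting the two losses against each other.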
Related projects
Mining opportunities
Santiago
This talk demonstrates how an AI platform uses GPT-4 to analyze public procurement data, assess supplier capabilities, and…
Code Fixer
Santiago
The session explains how Code Fixer’s multi‑agent system integrates into the software development lifecycle to automate debugging, improve…
Deep Learning sin farándula
Santiago
Explores the fundamentals of deep neural networks: architecture, the training process, learning algorithms, and the technical principles…
Creando IA desde Latam con Impacto Global
Santiago
Learn how Latin American product architects convert AI into scalable, globally adopted solutions, emphasizing talent, innovation, and execution…
Entrena tu propio modelo sin morir en el intento: Optimización de recursos para LLMs
Bogotá
This talk shows how to train and fine-tune LLMs with limited resources, optimizing memory and GPU usage with techniques such as…
ExpertFinder - Harvard E115 (Spring) Final Project
Bogotá
Demonstrating how to build an end‑to‑end expert search system using embeddings, RAG, ChromaDB, DVC versioning, and GKE deployment…
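To make the retrieval pattern behind the ExpertFinder project concrete, here is a minimal sketch of embedding-based search using ChromaDB's default embedding function. The collection name, expert profiles, and query are hypothetical, and the DVC versioning and GKE deployment pieces are omitted.

```python
# Minimal sketch of embedding search with ChromaDB; the data here is
# hypothetical, not the ExpertFinder project's actual corpus.
import chromadb

client = chromadb.Client()  # in-memory client; persistence omitted
experts = client.create_collection(name="experts")

# Index short expert profiles; ChromaDB embeds them with its default model.
experts.add(
    ids=["e1", "e2"],
    documents=[
        "Maria: distributed systems, Kubernetes, GKE operations.",
        "Luis: NLP, retrieval-augmented generation, vector databases.",
    ],
)

# Retrieve the closest profile for a natural-language query; in a full RAG
# system the hits would be passed to an LLM to compose the final answer.
hits = experts.query(query_texts=["Who knows about RAG pipelines?"], n_results=1)
print(hits["documents"][0])
```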