Transformer Lab: Tune LLMs Locally
Live demo of Transformer Lab: practical, code-free fine-tuning and evaluation of LLMs locally on a MacBook Air, including a real-time experiment with an unconventional dataset.
Transformer Lab is an open-source platform that allows anyone to build, tune, and run Large Language Models locally, without writing code.
We imagine a world where every software developer will incorporate large language models into their products. Transformer Lab allows users to do this without needing to know Python or have prior experience with machine learning.
I will demo the platform, showing how to fine-tune LLMs and run evals on my MacBook Air, with no other hardware needed.
Open-source desktop application for local LLM/Diffusion model training and evaluation.
Related projects
Transformer Lab
Waterloo
Live demo of Transformer Lab showing how to install, run, train, tune, evaluate, perform RAG and quantize LLMs…
All the Trainingz, No Codez
Toronto
This talk demonstrates how to train, finetune, and preference tune large language models on a home computer using…
A Transformer from Scratch in Go
Toronto
Explore a Go implementation of Karpathy's llm.c, covering transformer architecture, code structure, and unit tests for practical parameter…
LLM.f90 - Minimal Large Language Model Inference Framework
Toronto
A low‑dependency Fortran framework for LLM inference, showing zero‑dependency implementation, matrix operations, and support for Llama, Phi, and…
From Haikus to Helper - Wrangling Agentic LLMs
Montreal
This talk covers building an LLM-powered product for data transformation and enrichment, detailing architecture, SQL and vector retrieval,…
Deploying fine-tuned language models: From start to finish, in nothing but Python
Toronto
Learn how to fine‑tune an LLM, then deploy it as an API service using Covalent to orchestrate cloud…