Members-Only
Recent talks and demos are available to active AI Tinkerers members only.
In-Context Learning for Personalization
This talk explores an iterative system that uses in-context learning with few-shot examples to continuously personalize a large language model's outputs for individual users.
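The core idea can be sketched without any model API: a user's previously accepted (input, output) pairs are prepended to the prompt as few-shot demonstrations, so the model imitates that user's style on the next request. This is a minimal illustration, not the speaker's implementation; the function name and prompt layout are assumptions.

```python
# Hypothetical sketch of few-shot in-context personalization:
# past (input, preferred_output) pairs for one user become
# demonstrations that precede the new input in the prompt.

def build_personalized_prompt(user_examples, new_input):
    """Assemble a few-shot prompt from a user's accepted examples."""
    parts = []
    for inp, out in user_examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # The trailing "Output:" cues the model to continue in the
    # same style as the demonstrations above.
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

# Example: a user who prefers terse, bullet-style replies.
examples = [
    ("Summarize: meeting moved to 3pm", "- Meeting now at 3pm"),
    ("Summarize: budget approved", "- Budget approved"),
]
prompt = build_personalized_prompt(examples, "Summarize: launch delayed a week")
print(prompt)
```

In the iterative setting described above, each output the user accepts is appended to `user_examples`, so personalization improves with use.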
Related projects
Better Insights into Team Activity
Los Angeles
The session demonstrates a Jupyter Notebook proof‑of‑concept that combines usage‑tracking dashboards, forecasting models, and a retrieval‑augmented LLM to…
Thinking LLMs
Los Angeles
This talk explains how to generate synthetic data for training custom o1-style language models using methods from…
Building an LLM Email Assistant
Orange County
Learn how to build an email assistant using OpenAI's LLM: system architecture, prompt design, integration steps, and a…
Concrete use cases where AI agents perform better than a single LLM call
Los Angeles
Live demonstration comparing multi‑agent AI workflows to single LLM calls, showing how specialized agents improve accuracy and efficiency…
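The intuition behind that comparison can be shown with a library-free toy: a router dispatches each task to a narrow specialist rather than asking one generalist to do everything. The agent functions and routing rule here are illustrative assumptions, not the demo's actual workflow.

```python
# Toy sketch (no real LLM calls): specialized "agents" each do one
# thing well, and a router picks the right one for the task.

def math_agent(task):
    # Specialist: evaluates simple arithmetic expressions.
    return str(eval(task, {"__builtins__": {}}))

def summary_agent(task):
    # Specialist: crude extractive "summary" (keeps the first sentence).
    return task.split(".")[0] + "."

def route(task):
    """Dispatch the task to the specialist whose skill matches it."""
    if any(ch.isdigit() for ch in task) and any(op in task for op in "+-*/"):
        return math_agent(task)
    return summary_agent(task)

print(route("2 + 3 * 4"))                                  # -> 14
print(route("Agents split work. Each does one thing well."))
```

A single generalist call would need one prompt covering both skills; splitting the work lets each agent stay simple and accurate, which is the trade-off the demo examines.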
Controllable AI Video Generation: Wan 2.1 & ComfyUI
Los Angeles
Learn how Wan 2.1’s control‑code system and ComfyUI integration enable precise, multimodal video generation and collaborative prototyping for…
Lessons from building an LLM-first framework
London
Explore how Tonk enables non‑coders to quickly update multiplayer applets by limiting context to the frontend and using…