Open WebUI
Learn how to use Open‑WebUI to integrate multiple local or API LLMs with a RAG pipeline, reducing costs and enabling multi‑user, web‑based deployment.
How to leverage multiple LLMs (local or API endpoints) within a polished framework
A live demo of the arXiv RAG pipeline I've been developing, built on a Pipelines solution
Self-hosted AI platform supporting Ollama, OpenAI, RAG, and Docker deployment.
This LlamaIndex RAG pipeline retrieves full papers using PGVector and Mistral.
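The Pipelines demo above plugs into Open WebUI's pipeline interface, where a `Pipeline` class exposes a `pipe` method that the server calls on each chat turn. A minimal sketch under that assumption; the `retrieve_papers` helper is a hypothetical placeholder for the actual LlamaIndex/PGVector retrieval step, not the demo's real code:

```python
from typing import Generator, Iterator, List, Union


def retrieve_papers(query: str) -> List[str]:
    """Hypothetical stand-in for the real retrieval step, which would
    query a PGVector store of arXiv papers through LlamaIndex."""
    return [f"[stub context for: {query}]"]


class Pipeline:
    """Minimal Open WebUI-style pipeline: inject retrieved context,
    then hand the augmented prompt back as the response."""

    def __init__(self):
        self.name = "arXiv RAG (sketch)"

    async def on_startup(self):
        # A real pipeline would open DB connections / load indexes here.
        pass

    async def on_shutdown(self):
        pass

    def pipe(
        self,
        user_message: str,
        model_id: str,
        messages: List[dict],
        body: dict,
    ) -> Union[str, Generator, Iterator]:
        context = "\n".join(retrieve_papers(user_message))
        # Returning a string makes the framework treat it as the reply;
        # a real pipeline would instead call the LLM (e.g. Mistral)
        # with this context-augmented prompt.
        return f"Context:\n{context}\n\nQuestion: {user_message}"
```

Because the retrieval and model call are stubbed, the class runs with no database or API key, which makes it easy to iterate on the pipeline's prompt shaping before wiring in PGVector.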
Related projects
BetaTester - Bot that tests the UI/UX of your web app
San Francisco
This talk covers BetaTester, an open-source tool using LLMs and Playwright to automatically test web app UI/UX flows,…
AI ∞ UI: A Versatile Web Interface for Seamless Interaction with LLM APIs
Seattle
A walkthrough of AI ∞ UI, showing model switching, system message and temperature controls, conversation management, markdown/LaTeX support, and layered…
Controllable AI Video Generation: Wan 2.1 & ComfyUI
Los Angeles
Learn how Wan 2.1’s control‑code system and ComfyUI integration enable precise, multimodal video generation and collaborative prototyping for…
Building an LLM Email Assistant
Orange County
Learn how to build an email assistant using OpenAI's LLM: system architecture, prompt design, integration steps, and a…
CorpusKeeper - Talk To Data
Seattle
Demonstrating how to integrate LLMs, RAG, and function calls to create an agency‑style interface that designs, scripts, and…
LLM drives a web browser
New York City
This talk demonstrates an open-source interface that enables large language models to interact with web pages through a…