Assistant Shell: Client-Side AI Chat
Learn how I built a client-side AI chat PWA using only AI assistants, without writing any backend code, with a live demo and development insights.
What if you could build an entire AI-powered web chat app without writing a single line of code yourself?
That's exactly what I did!
Assistant Shell is a sleek, no-install AI chat app that lets you talk to every major AI assistant: OpenAI's GPTs, Anthropic's Claude, Google's Gemini, local models via Ollama, and more.
You just drop in your API keys, tweak your settings, and boom: you're chatting with AI on your terms.
But here's the real kicker: I built the whole thing using only AI assistants.
Every line of Next.js, TypeScript, and Tailwind CSS code was written with ChatGPT, Claude, Gemini, Copilot, Continue, and Cline.
Furthermore, the app has no backend: no server storing your conversations, no data collection of any kind. Just pure, client-side AI magic.
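To illustrate the backend-free approach, here is a minimal TypeScript sketch of how a browser-only chat app could call the OpenAI Chat Completions API directly, with the key kept in local storage. The function names (`buildChatRequest`, `sendChat`) and the storage key are hypothetical, not Assistant Shell's actual code.

```typescript
// Hypothetical sketch: a browser-only chat call with no backend in between.
// The API key is read from local storage and sent straight to the provider.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Pure helper that builds the provider request; easy to inspect and test.
function buildChatRequest(apiKey: string, messages: ChatMessage[], model = "gpt-4o") {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The key never touches any third-party server, only the provider.
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// In the browser, this goes client -> provider directly.
async function sendChat(messages: ChatMessage[]): Promise<string> {
  const apiKey = (globalThis as any).localStorage?.getItem("openai_api_key") ?? "";
  const { url, init } = buildChatRequest(apiKey, messages);
  const res = await fetch(url, init);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Separating the pure request builder from the `fetch` call keeps the privacy-sensitive part (where the key goes) trivially auditable.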
Last but not least, this web app is also a PWA, so you can "install" it with one click on any device!
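For context on the one-click "install": a web app becomes installable as a PWA when it serves a web app manifest (plus a registered service worker). A minimal manifest might look like this; all names, colors, and icon paths below are illustrative, not the app's actual configuration:

```json
{
  "name": "Assistant Shell",
  "short_name": "AsstShell",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#111111",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

With `display: standalone`, the installed app opens in its own window without browser chrome, which is what makes it feel native on desktop and mobile.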
Assistant Shell provides a command-line interface for AI assistant interaction.
It is a browser PWA that unifies multiple AI models, keeps your keys local, switches models instantly, and requires no subscriptions.
- Next.js: The React framework for production, developed by Vercel. It supports hybrid rendering (Server-Side Rendering, Static Site Generation, and Incremental Static Regeneration) for speed and SEO, plus React Server Components, Server Actions, the App Router with nested layouts, and Rust-based tooling (Turbopack, Speedy Web Compiler) for fast builds.
- TypeScript: Microsoft's open-source superset of JavaScript that adds a static type system, catching errors at compile time rather than runtime (critical for large-scale applications). The compiler (tsc) transpiles to clean, standards-based JavaScript (ES3 or newer) that runs in any browser or host environment such as Node.js.
- Tailwind CSS: A utility-first CSS framework. You build custom designs by composing low-level classes (e.g. `flex`, `pt-4`, `bg-blue-500`) directly in markup, without writing custom CSS, which keeps a project's design system consistent and speeds up development. Its Just-In-Time (JIT) engine scans your code and generates only the styles you actually use, keeping the production CSS bundle tiny.
- OpenAI API: Authenticated, programmatic access to OpenAI's generative models via REST endpoints and official Python and Node.js libraries, covering text generation (GPT-4o), image creation (DALL-E 3), and speech-to-text transcription (Whisper).
- Ollama: Runs open-source LLMs (Llama 3, Mistral, Gemma 2, DeepSeek-R1) locally on macOS, Linux, and Windows; think of it as Docker for AI models. A simple CLI and REST API give immediate access, keeping data private and avoiding cloud API costs, with quantization for efficient execution on consumer hardware.
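The instant switching between cloud and local models can be sketched as a small provider table: under the hood, each provider is just a different endpoint and auth header. This is an illustrative sketch, not the app's implementation, and it simplifies reality: real providers also differ in request and response shape (Anthropic, for example, uses an `x-api-key` header and a `/v1/messages` route).

```typescript
// Illustrative sketch of multi-provider switching in a client-side chat app.

type Provider = "openai" | "ollama";

interface Endpoint {
  url: string;
  headers: Record<string, string>;
}

// Map each provider to its endpoint and the headers it expects.
function endpointFor(provider: Provider, apiKey = ""): Endpoint {
  switch (provider) {
    case "openai":
      return {
        url: "https://api.openai.com/v1/chat/completions",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`,
        },
      };
    case "ollama":
      // Default local Ollama server; no API key required.
      return {
        url: "http://localhost:11434/api/chat",
        headers: { "Content-Type": "application/json" },
      };
  }
}
```

Because the table is keyed by a union type, adding a provider is a compile-checked change: TypeScript flags any `switch` that forgets the new case.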
Related projects
Beyond the Prompt: Building Chatbots That Control Your UI
Hamburg
Learn how to integrate a chatbot that directly manipulates a frontend, using in-chat UI components and automatic page…
AI-curated WhatsApp memories
Cologne
Learn how to extract, rank, and store the most memorable WhatsApp messages using a Python script, Gemini 2.5…
1nterface ai
Zürich
The talk explores an AI-driven context collection tool designed for laptops, demonstrating how it understands user behavior to…
Finally, an AI support agent that asks you when it doesn't know
Munich
This talk covers building AI support agents that recognize their limits, ask for help when uncertain, and prioritize…
Meal Planner
Hamburg
Learn how to build a meal-planning system with LangGraph, combining AI agents, chains, and persistent memory to adapt…
ProductBar: Using AI to identify ideas in customer feedback
Hamburg
A walkthrough of ProductBar, an AI system that categorizes customer feedback, demonstrates its analysis workflow, and explores implementation…