Vibe Coding 101
This talk demonstrates how to use git worktree, Claude Code, and Uttertype together to program multiple features in parallel across branches.
I’d like to present a practical setup for vibe coding effectively and efficiently, without needless impediments. The goal of the presentation is to show how you can multiply your engineering output just by using existing tools cleverly.
Tools I’ll present:
git worktree
Claude Code
Uttertype (a transcription app I wrote)
Combining all three lets you work on multiple features across multiple branches at the same time.
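The core of the workflow is giving each feature branch its own checkout, so separate Claude Code sessions never step on each other. A minimal sketch of the git worktree commands involved (repo and branch names are placeholders):

```shell
# Start from an existing clone, e.g. ~/code/myrepo
cd myrepo

# One worktree (and branch) per feature; each gets its own directory,
# so a separate Claude Code session can run in each one side by side.
git worktree add ../myrepo-feature-a -b feature-a
git worktree add ../myrepo-feature-b -b feature-b

# See every checkout at a glance
git worktree list

# When a feature is merged, tear its worktree down
git worktree remove ../myrepo-feature-a
git branch -d feature-a
```

All worktrees share one object store and ref namespace, so commits made in any of them are immediately visible to the others.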
Uttertype is a Python project that demonstrates real-time dictation using Whisper, MLX, and Gemini.
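To give a feel for the transcription side, here is a minimal dictation sketch, not Uttertype's actual code: record a clip, split it into chunks, and transcribe each with a local Whisper model. It assumes the third-party `sounddevice` and `openai-whisper` packages are installed; the function names are illustrative.

```python
# Minimal dictation sketch (illustrative, not uttertype's real pipeline).
# Assumes the third-party `sounddevice` and `openai-whisper` packages.
import numpy as np

SAMPLE_RATE = 16000  # Whisper models expect 16 kHz mono audio


def record(seconds: float) -> np.ndarray:
    """Record mono float32 audio from the default microphone."""
    import sounddevice as sd  # lazy import: needs audio hardware
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()  # block until the recording finishes
    return audio.squeeze()


def chunk_audio(audio: np.ndarray, chunk_seconds: float = 5.0) -> list:
    """Split a waveform into fixed-length chunks for incremental transcription."""
    n = int(chunk_seconds * SAMPLE_RATE)
    return [audio[i:i + n] for i in range(0, len(audio), n)]


def transcribe(audio: np.ndarray, model_name: str = "base") -> str:
    """Transcribe a float32 waveform with a local Whisper model."""
    import whisper  # lazy import: downloads model weights on first use
    model = whisper.load_model(model_name)
    return model.transcribe(audio)["text"].strip()


if __name__ == "__main__":
    clip = record(10.0)
    for chunk in chunk_audio(clip):
        print(transcribe(chunk))
```

A real dictation app would stream chunks continuously and cache the loaded model rather than reloading it per chunk; this sketch keeps each step explicit instead.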
- GPT-4: OpenAI's large multimodal model, accepting both text and image inputs and delivering human-level performance on professional and academic benchmarks. It scored in the top 10% on a simulated bar exam (GPT-3.5 scored in the bottom 10%), handles nuanced instructions and long-form content, and supports context windows up to 32,768 tokens (about 25,000 words) in the 32K model.
- Claude-3: Anthropic's multimodal model family (Opus, Sonnet, Haiku). Opus, the flagship, excels at complex reasoning, outperforming peers on key benchmarks (MMLU, GPQA) and supporting a 200,000-token context window. Sonnet balances performance and cost for enterprise workloads, running 2x faster than Claude 2.1. Haiku is the fastest and most cost-effective, processing a 10,000-token research paper (including charts) in under three seconds. All three feature strong vision capabilities for analyzing charts, diagrams, and PDFs alongside text.
- Llama-2: Meta AI's LLM family, released for free research and commercial use, with pre-trained foundation models and instruction-tuned 'Chat' variants from 7 billion (7B) to 70 billion (70B) parameters. Compared with Llama 1, it was trained on 2 trillion tokens (40% more data) and doubles the context length to 4,096 tokens; the chat models were aligned with Reinforcement Learning from Human Feedback (RLHF).
- OpenAI API: Authenticated, programmatic access to OpenAI's generative models via REST endpoints and official libraries (Python, Node.js), covering text generation (GPT-4o), image creation (DALL-E 3), and speech-to-text transcription (Whisper). The platform is engineered for scale, supporting millions of daily requests.
- Transformers: The neural network architecture introduced in the landmark 2017 paper "Attention Is All You Need." It replaced the sequential processing of recurrent neural networks with parallelizable multi-head self-attention, enabling significantly faster training on modern hardware, and underpins BERT (encoder-only), the generative GPT series (decoder-only), and all modern LLMs.
Related projects
Vibe Coding an MCP Server
Amsterdam
Learn how to implement and test an MCP server from scratch, covering protocol basics, server setup, and common…
Vibe Coding AI Agents
Chicago
Learn how to build and deploy production‑grade AI agents quickly using Mastra AI, a TypeScript framework, with practical…
Vibe Coding with yourturn
Boston
We'll demonstrate a web framework for game development, using a new template system to live-code a simple game…
Semi-Technical Vibe Coding!
Chicago
Live demo shows how to build an AI recipe generator and extend it to meal planning, using simple…
How to vibe code a presentation
Boston
Learn how to build a React web application that generates slides, replacing Google Sheets and Keynote, using AI…
Vibe Cooking - your next meal, sorted ✨
Toronto
Learn how Vibe Cooking uses agentic AI, prompt engineering, and API workflows to generate personalized meal recipes with…