SpecStory Cursor: Save AI chat history
The talk demonstrates how SpecStory Cursor automatically saves Composer chat logs, creates versioned .cursorrules, and enables searchable histories for AI‑generated code.
I’ll quickly show off the SpecStory Cursor extension (which is completely free) that auto-saves your Composer chat logs (for versioning / future referencing) and automatically derives .cursorrules for you.
The SpecStory extensions save Copilot, Cursor, and Claude Code chat history locally.
SpecStory also integrates with the Claude, Gemini, and Codex AI coding agents for development.
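To make the auto-save idea concrete, here is a minimal sketch of what "save chat history locally, versioned by timestamp" looks like. The directory layout, filenames, and markdown format are illustrative assumptions, not SpecStory's actual implementation.

```python
import datetime
import pathlib
import tempfile

def save_chat_log(messages, history_dir):
    """Write a chat transcript to a timestamped markdown file,
    mimicking how an auto-save extension might version chat history.
    The ``.md`` layout here is a hypothetical example format."""
    root = pathlib.Path(history_dir)
    root.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    path = root / f"{stamp}-chat.md"
    # One markdown section per turn, so the file stays greppable and diffable.
    sections = [f"## {m['role']}\n\n{m['content']}\n" for m in messages]
    path.write_text("\n".join(sections), encoding="utf-8")
    return path

log = save_chat_log(
    [
        {"role": "user", "content": "Refactor the parser."},
        {"role": "assistant", "content": "Here is the refactored code..."},
    ],
    history_dir=tempfile.mkdtemp(),
)
```

Because each session lands in its own timestamped file, the history can be committed to git and searched later, which is the "versioning / future referencing" benefit the talk highlights.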
- GPT-4: OpenAI’s large multimodal model, accepting both text and image inputs and delivering human-level performance on complex professional and academic benchmarks. It demonstrates a significant capability leap over its predecessor, scoring in the top 10% on a simulated bar exam (GPT-3.5 scored in the bottom 10%). The model handles nuanced instructions and long-form content, supporting context windows up to 32,768 tokens (the 32K model), enough to process roughly 25,000 words in a single, complex prompt, and is engineered for enhanced reliability, steerability, and advanced reasoning across diverse tasks.
- Claude-3: Anthropic’s state-of-the-art multimodal model family (Opus, Sonnet, Haiku), setting industry benchmarks for intelligence, speed, and vision. Opus, the flagship, excels in complex reasoning, outperforming peers on key benchmarks (MMLU, GPQA) and supporting a 200,000-token context window. Sonnet balances performance and cost for enterprise workloads, running 2x faster than its predecessor, Claude 2.1. Haiku is the fastest and most cost-effective option, able to process a 10,000-token research paper (including charts) in under three seconds. All three models feature strong vision capabilities for analyzing charts, diagrams, and PDFs alongside text, enabling advanced data extraction and analysis.
- Llama-2: Meta AI’s openly accessible LLM family, released for free research and commercial use. The collection includes both pre-trained foundation models and instruction-tuned 'Chat' variants, scaling from 7 billion (7B) to 70 billion (70B) parameters. Key upgrades over Llama 1 include training on 2 trillion tokens (40% more data) and a doubled context length of 4,096 tokens. The Llama-2-chat models were aligned using Reinforcement Learning from Human Feedback (RLHF), making them a top-tier openly available option for developers building generative AI solutions.
- LangChain: an open-source framework for building data-aware LLM applications. It simplifies connecting models (such as GPT-4 or Claude) to external data, computation, and APIs, and provides a modular set of components (Chains, Agents, Tools, and Memory) for building workflows like Retrieval-Augmented Generation (RAG) pipelines and conversational agents. Its Python and JavaScript libraries, combined with the LangChain Expression Language (LCEL), offer a standardized interface for rapid prototyping and moving applications to production.
- PyTorch: Meta AI’s open-source deep learning framework, widely used in both research and production. Its core is a NumPy-like tensor library optimized for GPU acceleration, delivering 50x or greater speedups for complex computations. Its key differentiator is a 'Pythonic' design with a dynamic computation graph (eager execution), allowing rapid prototyping and simpler debugging than static-graph frameworks. Its Autograd system handles automatic differentiation for training computer vision and NLP models, and companies such as Tesla (Autopilot) and Microsoft use PyTorch for critical AI applications.
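The context windows quoted above can be converted into rough word budgets with the common ~0.75 words-per-token rule of thumb. The ratio is an assumption (it varies by tokenizer and by text), but it reproduces the "32K tokens ≈ 25,000 words" figure cited for GPT-4:

```python
# Rough words-per-token ratio for English text.
# This is a heuristic assumption; real ratios depend on the tokenizer.
WORDS_PER_TOKEN = 0.75

# Context window sizes quoted in the list above.
context_tokens = {
    "GPT-4 (32K)": 32_768,
    "Claude 3 Opus": 200_000,
    "Llama 2": 4_096,
}

# Approximate number of English words that fit in each window.
word_budget = {name: int(t * WORDS_PER_TOKEN) for name, t in context_tokens.items()}

for name, words in word_budget.items():
    print(f"{name}: ~{words:,} words")
```

The GPT-4 32K estimate comes out to 24,576 words, consistent with the "up to 25,000 words" claim, while Llama 2's 4,096-token window holds only about 3,000 words, which is why window size matters so much for long-document workflows.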
Related projects
Coder
Seattle
This talk explains how GPT-4 and Roslyn are combined to generate reliable C# code with over 90% success…
AI Decision-Making in Low / No Trainable Data Domains
DC
This talk explores using expert-created rules of thumb to guide large language models in specialized domains with little…
Building Working Code Live: Documentation-First AI Development
Seattle
We'll demonstrate a documentation‑first prompt workflow using Claude models to generate a CLI tool, covering specification writing, parallel…
Next level AI-driven development with Cursor - .cursorrules, Notepads and MCP services
Boston
Learn to use Cursor’s Agent Mode, configure .cursorrules with JSON, leverage Notepads, and integrate Model Context Protocol services…
Stevens: a hackable AI assistant using a single SQLite table and a handful of cron jobs
DC
Learn how to build a personal AI assistant using one SQLite table for memories, simple cron jobs for…
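The Stevens teaser describes an architecture worth sketching: all assistant state lives in one SQLite table, with cron jobs inserting rows on a schedule. The table name, columns, and helper functions below are hypothetical illustrations of that pattern, not code from the talk:

```python
import sqlite3

# A single table holds every memory; in the described design,
# cron jobs (calendar, weather, etc.) would INSERT rows on a schedule.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id INTEGER PRIMARY KEY,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP,
        source TEXT,   -- e.g. 'calendar-cron', 'weather-cron', 'chat'
        content TEXT
    )
""")

def remember(source, content):
    """Append one memory row; this is all a cron job needs to do."""
    conn.execute(
        "INSERT INTO memories (source, content) VALUES (?, ?)",
        (source, content),
    )
    conn.commit()

def recall(limit=10):
    """Fetch the most recent memories to include in the assistant's prompt."""
    return conn.execute(
        "SELECT source, content FROM memories ORDER BY id DESC LIMIT ?",
        (limit,),
    ).fetchall()

remember("calendar-cron", "Dentist appointment Friday 9am")
remember("chat", "User prefers short answers")
recent = recall()
```

The appeal of the single-table design is that "hackability" reduces to plain SQL: any new data source is just another cron job calling `remember`, and the assistant's entire context is one `SELECT`.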
An agentic autonomous prompt optimization and model selection tool/gateway
DC
The talk demonstrates an autonomous agent that examines LLM execution traces, suggests optimal models and usage patterns, and…