Artificial Societies: LLM Agent Simulation
Demonstrates an LLM-driven simulation of a LinkedIn audience, showing how model-generated agents predict and explain the performance of recent posts in real time.
Artificial Societies uses LLMs to simulate large groups of people and how they react to information. Here, we demo a simulation of the host Luke Harries' LinkedIn audience and see whether we can accurately simulate how his recent posts perform.
The goal is to optimize content against a simulated audience network, turning predicted reactions into data-driven insights before anything is published.
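The loop described above — a population of simulated audience members, each reacting to a draft post, with reactions aggregated into a predicted engagement score — can be sketched in a few lines. This is a hypothetical illustration, not Artificial Societies' actual system: the `AGENTS` personas, the `react` scorer, and the `simulate` helper are all made up for this sketch, and `react` is a stub standing in for what would really be an LLM call per agent.

```python
import statistics

# Toy simulated audience. In a real system each agent would be an
# LLM-backed persona (e.g. prompted via the OpenAI API); here we keep
# only the structure so the sketch runs offline.
AGENTS = [
    {"persona": "startup founder", "interests": {"ai", "fundraising"}},
    {"persona": "ml engineer",     "interests": {"ai", "gpu"}},
    {"persona": "recruiter",       "interests": {"hiring"}},
]

def react(agent: dict, post_topics: set) -> float:
    """Stand-in for an LLM call: return a 0..1 score for how likely
    this simulated audience member is to engage with the post."""
    overlap = len(agent["interests"] & post_topics)
    return min(1.0, overlap / max(len(post_topics), 1))

def simulate(post_topics: set) -> float:
    """Average predicted engagement across the simulated audience."""
    return statistics.mean(react(a, post_topics) for a in AGENTS)

print(simulate({"ai"}))      # 2 of 3 agents match this topic
print(simulate({"hiring"}))  # 1 of 3 agents matches
```

Ranking candidate drafts by their `simulate` score is the "optimize content against a simulated audience" idea in miniature; the interesting engineering is in making the agents faithful to the real audience.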
- GPT-4: OpenAI's large multimodal model, accepting both text and image inputs and delivering human-level performance on complex professional and academic benchmarks. It marks a significant capability leap over its predecessor, scoring in the top 10% on a simulated bar exam (GPT-3.5 scored in the bottom 10%). With context windows up to 32,768 tokens (32K model) — roughly 25,000 words in a single prompt — it handles nuanced instructions and long-form content, and is engineered for improved reliability, steerability, and reasoning.
- LangChain: the open-source framework for building data-aware LLM applications. It connects models such as GPT-4 or Claude to external data, computation, and APIs through modular components (Chains, Agents, Tools, Memory), making it straightforward to assemble Retrieval-Augmented Generation (RAG) pipelines and conversational agents. Its Python and JavaScript libraries, together with LangChain Expression Language (LCEL), offer a standardized interface from rapid prototyping through to production.
- Transformers: the deep learning architecture introduced in the landmark 2017 paper "Attention Is All You Need." By replacing the sequential processing of Recurrent Neural Networks (RNNs) with a parallelizable multi-head self-attention mechanism, it enables significantly faster training (up to 10x) on modern hardware. This efficiency made large-scale pre-trained models possible — BERT (encoder-only) and the generative GPT series (decoder-only) — and the architecture is now foundational to all modern Large Language Models (LLMs).
- PyTorch: Meta AI's open-source deep learning framework, favored in both research and production. Its core is a Python-first tensor library (NumPy-like) with GPU acceleration delivering 50x or greater speedups for complex computations. Its dynamic computation graph (eager execution) allows rapid prototyping and simpler debugging than static-graph frameworks, and its Autograd system provides automatic differentiation for training models in computer vision and NLP; companies such as Tesla (Autopilot) and Microsoft use PyTorch for critical AI applications.
- OpenAI API: programmatic access to OpenAI's suite of generative models. Developers use REST endpoints and official libraries (Python, Node.js) to integrate text generation (GPT-4o), image creation (DALL-E 3), and speech-to-text transcription (Whisper). The platform is engineered for scale, supporting millions of daily requests for tasks from complex reasoning to real-time customer support agents.
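The self-attention mechanism named in the Transformers entry above fits in a few lines. Below is a minimal single-head sketch in NumPy — toy sizes, no learned projections, no multi-head split — of the scaled dot-product attention formula softmax(QKᵀ/√d_k)·V from "Attention Is All You Need"; the function names here are illustrative, not from any library.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.
    Single head, no learned projection matrices."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)      # (seq, seq) pairwise similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # 4 tokens, model dimension 8
out, w = self_attention(x, x, x)
print(out.shape, w.shape)                # (4, 8) (4, 4)
```

Because every token attends to every other token in one matrix multiply, the whole sequence is processed in parallel — this is the property that removed the RNN bottleneck and enabled the training speedups cited above.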
Related projects
The next evolution of AI - Neurosymbolic
London
Demonstration of a lightweight neurosymbolic platform that maps data, lets you define business logic in plain English, and…
AI-Powered Prompt Engineering: Enhancing LLM Performance with PromptLab and IQ
Boston
Live demo shows PromptLab and IQ optimizing Apple Intelligence prompts, demonstrating real-time prompt refinement across multiple LLMs for…
Twitter '95 (and Fireside Chat)
New York City
A prototype of 1995‑style Twitter built with LLM‑generated posts and historically grounded content, demonstrating early‑web aesthetics and AI‑driven…
Human machine
Seattle
This talk explores a code editor and runtime that integrates multiple AIs for prototyping, enabling coding with voice…
SMELL: A Framework For Aligning LLM Evaluators To Human Feedback
London
The talk introduces SMELL, a practical framework that aligns large language model evaluators with human feedback, detailing methodology,…
Conversational AI recruiting
New York City
This talk demonstrates using conversational AI and LLMs to automate candidate phone screens by generating questions, conducting calls,…