DeepSeek R1: Temporal Agent Workflow
Learn how DeepSeek R1 agentic workflows, orchestrated with Temporal, use careful prompting to filter, rank, and retrieve the most relevant NVIDIA GTC sessions efficiently.
With over 1200 sessions at NVIDIA’s GTC conference, selecting the most relevant talks can feel like searching for a needle in a haystack. Traditional search methods often fall short, returning a flood of keyword matches and marketing fluff. In this talk, I’ll share how we tackled this challenge by leveraging agentic workflows with DeepSeek R1 to filter and rank sessions efficiently.
I’ll walk through how I crafted the prompts in detail, and why Temporal was technically necessary to solve this problem (and others). I’ll also discuss how other models, such as Gemini or OpenAI’s o1, could be used instead of DeepSeek-R1.
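The filter-and-rank step described above can be sketched as a prompt-and-parse pair. This is a minimal illustration, not the talk's actual implementation: the prompt wording, the JSON score format, and the helper names (`build_ranking_prompt`, `parse_scores`, `rank_sessions`) are all assumptions for the sketch; R1-style models often emit reasoning text before the answer, which the parser skips over.

```python
import json
import re

def build_ranking_prompt(query: str, sessions: list[dict]) -> str:
    """Ask the model to score each session's relevance to the query (0-10)."""
    listing = "\n".join(
        f"[{s['id']}] {s['title']}: {s['abstract'][:200]}" for s in sessions
    )
    return (
        f"You are selecting GTC sessions relevant to: {query!r}.\n"
        "Score each session 0-10 for relevance; ignore marketing fluff.\n"
        "Reply with a single JSON object mapping session id to score.\n\n"
        f"{listing}"
    )

def parse_scores(model_reply: str) -> dict[str, float]:
    """Extract the JSON score map; R1-style replies may prepend reasoning text."""
    match = re.search(r"\{[^{}]*\}", model_reply, re.DOTALL)
    return json.loads(match.group(0)) if match else {}

def rank_sessions(sessions: list[dict], scores: dict[str, float], top_k: int = 5) -> list[dict]:
    """Return the top_k sessions by model-assigned relevance score."""
    return sorted(sessions, key=lambda s: scores.get(s["id"], 0), reverse=True)[:top_k]
```

In practice the model reply would come from an API call to DeepSeek R1 (or a drop-in like Gemini or o1, as the talk notes); the prompt/parse/rank functions stay the same either way.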
CentML agentic workflow for GTC session selection via Temporal orchestration.
- GPT-4: OpenAI’s large multimodal model, accepting both text and image inputs and delivering human-level performance on complex professional and academic benchmarks. It demonstrates a significant capability leap over its predecessor, scoring in the top 10% on a simulated bar exam (GPT-3.5 scored in the bottom 10%). The model handles nuanced instructions and long-form content, with context windows up to 32,768 tokens (32K model), enough to process roughly 25,000 words in a single, complex prompt. GPT-4 is engineered for enhanced reliability, steerability, and advanced reasoning across diverse tasks.
- Llama-2: Meta AI's openly accessible family of large language models (LLMs), released for free research and commercial use. The collection includes both pre-trained foundation models and instruction-tuned 'Chat' variants, scaling from 7 billion (7B) up to 70 billion (70B) parameters. Key technical upgrades over Llama 1 include training on 2 trillion tokens (40% more data) and a doubled context length of 4096 tokens. The Llama-2-chat models were aligned using Reinforcement Learning from Human Feedback (RLHF), positioning them as a top-tier, openly available option for developers building advanced generative AI solutions.
- Python: a high-level, general-purpose language that prioritizes clear, readable syntax (via significant indentation), enabling rapid development. Its ecosystem is massive: web frameworks like Django and Flask, data-science libraries such as Pandas and NumPy, and thousands of community-contributed modules on the Python Package Index (PyPI) covering everything from network programming to GUI creation. The language is actively maintained by the Python Software Foundation (PSF), with the stable release at Python 3.14.0 as of November 2025.
- Temporal: a Durable Execution platform for orchestrating complex, long-running workflows as code. Developers define stateful workflows directly in Go, Java, Python, TypeScript, and other languages using the Temporal SDKs. The platform automatically persists application state and handles retries, timeouts, and failures, so a workflow always picks up exactly where it left off, eliminating thousands of lines of custom error-handling logic. Enterprises like NVIDIA and Salesforce use Temporal to keep their transactions, deployments, and AI agents running reliably at scale.
- DeepSeek R1: an open-source, reinforcement learning (RL)-driven LLM from the Chinese startup DeepSeek, launched in January 2025. Its core innovation is an efficient Mixture of Experts (MoE) architecture: 671 billion total parameters, of which only 37 billion are activated per forward pass, drastically cutting computational overhead. The model excels at reasoning, math (e.g., AIME, MATH), and coding benchmarks, often rivaling or surpassing top proprietary models like OpenAI's o1 at a fraction of the operational cost. Released under the MIT license, R1 democratizes access to advanced reasoning capabilities.
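Why does an orchestrator like Temporal matter here? Scoring 1200+ sessions means many LLM calls that can individually time out or fail; running each batch as a durable activity lets a failed batch retry without redoing the rest. The shape of that pipeline can be sketched in plain Python. This is an illustrative map-reduce skeleton under stated assumptions, not the CentML workflow itself: `filter_and_rank`, `score_batch`, and the threshold are invented for the sketch, and the per-batch call is where a Temporal activity boundary would sit.

```python
from typing import Callable, Iterable

def batched(items: list, size: int) -> Iterable[list]:
    """Yield fixed-size chunks so each LLM call stays within context limits."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def filter_and_rank(
    sessions: list[dict],
    score_batch: Callable[[list[dict]], dict[str, float]],
    batch_size: int = 50,
    threshold: float = 8.0,
) -> list[dict]:
    """Map: score each batch (in a Temporal deployment, one durable activity
    per batch, so a failure retries only that batch).
    Reduce: keep sessions above the threshold, ranked by score."""
    scores: dict[str, float] = {}
    for batch in batched(sessions, batch_size):
        scores.update(score_batch(batch))  # activity boundary under Temporal
    kept = [s for s in sessions if scores.get(s["id"], 0.0) >= threshold]
    return sorted(kept, key=lambda s: scores[s["id"]], reverse=True)
```

With the Temporal Python SDK, `score_batch` would become an `@activity.defn` invoked via `workflow.execute_activity` with a retry policy; the reduce step stays ordinary workflow code.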
Related projects
Developing AI Solutions for Busy Parents
Toronto
A demo of a custom email summarizer that extracts key details and dates, with notebooks covering model selection,…
Observability that drives true ROI
Toronto
This talk introduces AI managers that monitor and summarize agent behavior, moving beyond dashboards to quickly show what…
JetBrains Long Code Arena
Toronto
Exploring JetBrains Long Code Arena benchmarks, we'll demonstrate project‑wide code completion and library‑based generation, discuss context strategies, and…
JobsYo: Building an AI-based but Human-driven Job Search, Research and Apply Ecosystem
Toronto
See a live demo of an AI job search platform featuring multi-model API routing, context engineering, agentic job…
From Local Prototyping to Distributed Clusters: An Open Source Platform for ML Research Teams
Toronto
See a demo scaling ML training from a local notebook to a GPU cluster, covering checkpoint recovery, hyperparameter…
Personalized AI Tutor
Toronto
Demonstrating a Flask/JavaScript web app that uses a local open‑source LLM to assess users and generate personalized learning…