Members-Only
Recent Talks & Demos are available to active AI Tinkerers members only.
Pruna: 2x faster diffusion
This talk demonstrates how to use pruna to compress text-to-image diffusion models in three lines of code, doubling inference speed without quality loss.
In a few lines of code, pruna lets you compress any text-to-image GenAI model. We will show how to generate an image and how to speed up that generation without any quality loss.
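The compression flow described above can be sketched roughly as follows, assuming pruna's documented `SmashConfig`/`smash` API and a Hugging Face `diffusers` pipeline. The model name, the `deepcache` algorithm choice, and the prompt are illustrative assumptions, not details from the talk; running this requires a GPU and model downloads.

```python
import torch
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash  # pruna's compression entry points

# Load an ordinary text-to-image pipeline (model name is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The "three lines": build a config, pick a compression algorithm, smash.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"  # step caching; algorithm choice is illustrative
smashed_pipe = smash(model=pipe, smash_config=smash_config)

# The compressed pipeline is used exactly like the original one.
image = smashed_pipe("a watercolor painting of a lighthouse").images[0]
image.save("lighthouse.png")
```

The design point is that compression is a wrapper step: the smashed pipeline keeps the original call signature, so existing generation code does not change.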
Related projects
Generate precise and coherent images on your laptop with Stable Diffusion
Paris
This talk demonstrates controlling poses, styles, and character consistency using Stable Diffusion and ComfyUI for customized, inclusive image…
Agents for building bespoke machine learning models
Paris
Exploring how agents build custom machine learning models from customer data, covering toolkits, orchestration layer, template framework, and…
NuExtract - a foundation model for Structured Extraction
Paris
Learn how NuExtract, an open‑source foundation model, extracts complex information into JSON, covering data generation, distillation, hallucination mitigation,…
Training custom controlnets for virtual staging
Paris
Exploring custom controlnet training for virtual staging: methods, architecture variations, and comparative results on converting empty rooms into…
The Unreasonable Power of Structured Outputs
Paris
This talk demonstrates three applications using structured outputs: a PowerPoint generator, a smart search bar, and a skin…
Group Partner Healthtech & CTO Theodo France
Paris
Learn how AI and an AST parser mapped a 500k+ Express monolith to a modular Hono architecture, producing…