Stable Diffusion: Precise image control
This talk demonstrates controlling poses, styles, and character consistency using Stable Diffusion and ComfyUI for customized, inclusive image generation workflows.
This demo will show how to control poses, styles, and character consistency for use cases such as yoga pose visualization (drawing on my ongoing work for my yoga website, www.yogarkana.com), using locally run Stable Diffusion models and advanced ComfyUI workflows.
Learn how to customize Stable Diffusion workflows to meet your own creative, commercial, or inclusive design needs.
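For readers curious what such a workflow looks like in practice, here is a minimal sketch of a ComfyUI workflow graph (API/prompt JSON format) that conditions Stable Diffusion generation on a pose reference image via a ControlNet node. The checkpoint, ControlNet, and image filenames are placeholders, not the ones used in this talk, and real workflows typically apply the ControlNet to the positive conditioning only, as done here.

```json
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd15_model.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a person in a yoga pose, studio photo",
                   "clip": ["1", 1]}},
  "3": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "blurry, deformed, extra limbs",
                   "clip": ["1", 1]}},
  "4": {"class_type": "LoadImage",
        "inputs": {"image": "pose_reference.png"}},
  "5": {"class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "control_openpose.safetensors"}},
  "6": {"class_type": "ControlNetApply",
        "inputs": {"conditioning": ["2", 0], "control_net": ["5", 0],
                   "image": ["4", 0], "strength": 0.9}},
  "7": {"class_type": "EmptyLatentImage",
        "inputs": {"width": 512, "height": 768, "batch_size": 1}},
  "8": {"class_type": "KSampler",
        "inputs": {"model": ["1", 0], "positive": ["6", 0],
                   "negative": ["3", 0], "latent_image": ["7", 0],
                   "seed": 42, "steps": 25, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal",
                   "denoise": 1.0}},
  "9": {"class_type": "VAEDecode",
        "inputs": {"samples": ["8", 0], "vae": ["1", 2]}},
  "10": {"class_type": "SaveImage",
         "inputs": {"images": ["9", 0],
                    "filename_prefix": "pose_controlled"}}
}
```

Each node references upstream outputs as `["node_id", output_index]`; swapping the pose reference image while keeping the prompt fixed is one common way to vary poses with consistent style.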
GenAI-Lamp provides ComfyUI workflows for Stable Diffusion text-to-image (T2I) and image-to-image (I2I) generation.
Related projects
Dream It, Generate It, Animate It: Open-Source Personal Video Creation
Paris
Learn how to combine FLUX1.dev image generation with a text‑to‑video diffusion model, covering pipeline architecture, code walkthrough, scaling,…
Training custom controlnets for virtual staging
Paris
Exploring custom controlnet training for virtual staging: methods, architecture variations, and comparative results on converting empty rooms into…
x2 faster diffusion model in 3 lines of code
Paris
This talk demonstrates how to use pruna to compress text-to-image diffusion models in three lines of code, doubling…
Stable Diffusion - Transform your casual photos into professional portraits.
Singapore
Learn how to use ComfyUI and an open‑source Stable Diffusion model to convert casual photos into polished portrait…
Generating "stories" from your digital trace data
Paris
The talk explores generating visual stories from Google Takeout data using open-weight transformer models, highlighting data pipelines, ML…
StableGen - diffusion powered texturing within Blender
Prague
Live demo of StableGen Blender plugin: from untextured model to AI‑generated textures using prompts, control maps, ComfyUI workflow,…