AI Art vs Human Art
Exploring how traditional artists incorporate AI, comparing human‑created and AI‑enhanced works, and sharing hands‑on student projects that blend both approaches.
In this demo, Helen, the speaker, will pose a thought-provoking question: how do works by human artists compare with works by AI artists? She will share her experience as a traditional-medium artist who uses AI to enhance her work, and discuss her community initiatives with students focused on AI creation.
- Canva: a visual communication platform, launched in 2013 by founders Melanie Perkins and Cliff Obrecht, built around a drag-and-drop interface and an extensive template library. Over 220 million monthly active users create everything from social media graphics and presentations to websites and videos without specialized training, aided by features like Magic Write (AI-powered copywriting) and the Brand Kit (for Enterprise clients). Its freemium model makes Canva a go-to tool for rapid, on-brand content creation across business functions.
- Suno: a generative AI music platform developed by Suno, Inc. of Cambridge, Massachusetts. Users input a text prompt, and the system generates a complete, realistic song, including vocals, instrumentation, and lyrics, often in under a minute. Since its wide release in December 2023 and a partnership with Microsoft Copilot, the platform has evolved rapidly; the latest V4 model (November 2024) can produce full four-minute tracks. The technology lets anyone create commercial-quality pop, electronic, or blues tracks without musical expertise.
- LLM: large language models are deep learning models built on the Transformer architecture (introduced in 2017) that process and generate human-quality text and code at scale. These foundation models, often with billions to trillions of parameters, use the Transformer's self-attention mechanism to predict the next token in a sequence. Trained on vast datasets such as Common Crawl's 50-billion-plus web pages, models like GPT-4, Gemini, and Claude acquire predictive power over syntax and semantics, enabling content generation, language translation, and automated code completion (e.g., GitHub Copilot) while generalizing across diverse tasks with minimal task-specific fine-tuning.
- Diffusion models: generative AI models that synthesize high-fidelity data (e.g., images, audio) by learning to reverse a fixed, step-by-step noise-addition process. The forward process systematically corrupts training data, such as a clean image, by adding Gaussian noise over hundreds or thousands of steps until only pure noise remains; a neural network, typically a U-Net, is then trained to master the reverse process, iteratively predicting and removing that noise. Starting from a random noise seed, this denoising capability generates entirely new, high-quality samples. Commercial examples like OpenAI's DALL-E 2 and Stability AI's Stable Diffusion use this core technology for state-of-the-art text-to-image synthesis.
- CapCut: a free, all-in-one video editor from ByteDance (TikTok's parent company), designed for high-quality, short-form content creation across mobile, desktop, and web. With over a billion downloads on the Google Play store, it offers standard editing (trim, merge, speed control), advanced tools (keyframe animation, chroma key, optical-flow slow motion), and AI features (auto captions, text-to-speech, background removal). It is optimized for rapid production and one-click sharing to platforms like TikTok and Instagram Reels, with exports up to 4K HDR.
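The next-token mechanism described in the LLM entry rests on scaled dot-product self-attention. As an illustrative sketch only (not material from the talk), a single attention head can be written in a few lines of NumPy; the random token embeddings and weight matrices here are stand-ins for learned parameters:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    Every position attends to every other position; the output mixes
    value vectors according to softmax-normalized query-key scores.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                       # (seq, seq) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))                   # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)                       # same shape as x
```

In a real Transformer, many such heads run in parallel per layer, and the final layer's output feeds a softmax over the vocabulary to score the next token.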
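The forward noising process in the diffusion-models entry has a convenient closed form: after t steps of Gaussian corruption, x_t can be sampled directly from the clean signal x_0. A minimal NumPy sketch follows; the linear beta schedule and the toy 1-D "image" are assumptions for demonstration, not part of any production model:

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    """Sample x_t directly from x_0 via the closed-form forward process:
    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]                 # cumulative signal scale
    eps = np.random.randn(*x0.shape)                  # fresh Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Toy 1-D "image" and a linear noise schedule over 1000 steps.
x0 = np.linspace(-1.0, 1.0, 16)
betas = np.linspace(1e-4, 0.02, 1000)

x_early = forward_diffusion(x0, t=10, betas=betas)    # still close to x0
x_late = forward_diffusion(x0, t=999, betas=betas)    # nearly pure noise
```

Generation runs this in reverse: a trained U-Net predicts the noise component at each step so it can be subtracted, walking from pure noise back to a clean sample.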
Related projects
AI Architect
Hong Kong
This talk explores how AI transforms architectural design by streamlining processes, enhancing creativity, and delivering personalized, efficient solutions…
Generative Art for Advertising
Hong Kong
Explore the creative process and technical strategies used to develop a generative art advertisement for HKEX, offering practical…
Virtual Try-On: Video Model Magic
Hong Kong
The talk demonstrates how video‑based models combine AI fitting algorithms with real‑time rendering to enable accurate, on‑screen clothing…
AI for Cultural Preservation: Bridging Generative AI and Classical Methods to Decode East Asian Archives
Hong Kong
Discover how we convert millions of East Asian archival texts into structured, searchable databases using layout extraction, domain‑specific…
Tech Noir I-Ching
Hong Kong
Learn how to create short video loops using Sora and Midjourney, apply Jet Set Radio prompts, and avoid…
Creative AI Off-Roading: Unconventional Workflows for Imaginative Collaboration
Prague
Explore unconventional AI workflows that prioritize creativity and experimentation, demonstrating how blended AI tools can generate imaginative art…