Members-Only
Recent Talks & Demos are available only to active AI Tinkerers members.
Bodhi App: Local LLMs
Learn how Bodhi App runs open‑source LLMs locally, offering privacy and cost savings without requiring API, CLI, or development expertise.
Bodhi App lets you run LLMs locally, cutting costs and keeping your data completely private.
While Ollama offers similar capabilities, Bodhi targets a wider audience: it does not assume familiarity with APIs, the CLI, or frontend/backend development, unlocking the power of open-source LLMs for everyone with a device.
Bodhi App runs local GGUF LLMs using llama.cpp and exposes OpenAI-compatible APIs.
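Because the server speaks OpenAI-compatible APIs, any standard HTTP client can talk to it. A minimal sketch using only the Python standard library; the base URL, port, and model alias below are illustrative assumptions, not documented Bodhi App defaults — check your local instance's settings:

```python
import json
import urllib.request

# Assumed local endpoint for illustration; any OpenAI-compatible
# server exposes the same /v1/chat/completions route.
BASE_URL = "http://localhost:1135/v1"  # hypothetical host/port

def build_chat_request(prompt: str, model: str = "llama3:instruct"):
    """Prepare an OpenAI-style chat completion request (not sent here)."""
    payload = {
        "model": model,  # model alias is an assumption; servers list models via GET /v1/models
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Why run LLMs locally?")
print(req.full_url)  # http://localhost:1135/v1/chat/completions

# To actually send the request against a running local instance:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

The request is built but not sent, so the sketch works without a running server; swapping `BASE_URL` is all it takes to point an existing OpenAI-style client at a local backend.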
- Bodhi App: Run open-source LLMs locally with complete data privacy and no API costs. Bodhi App executes GGUF-format models via llama.cpp and exposes OpenAI-compatible APIs, while its approachable interface unlocks local LLMs for users who are not familiar with APIs, the CLI, or frontend/backend development.
- Ollama: Deploy and run open-source Large Language Models (LLMs) like Llama 3 and Mistral locally on your machine, achieving private, cost-effective AI via a simple command-line interface. Ollama is the essential tool for running LLMs locally: consider it the Docker for AI models. It packages complex models and dependencies into a single, easy-to-use application for macOS, Linux, and Windows. You get immediate access to models like Gemma 2 and DeepSeek-R1 via a straightforward CLI or REST API. This local-first approach keeps data private, eliminating cloud dependency and high API costs. Ollama also optimizes performance on consumer hardware using techniques like quantization, ensuring efficient execution even on standard desktops.
- API: The Application Programming Interface (API) is the digital contract that allows two separate software systems to communicate and exchange data, typically JSON, securely over a network. An API defines the methods (GET, POST, DELETE) and the data structures (often JSON) for two distinct software applications to interact. This interface acts as a secure intermediary, managing authentication (via API keys or OAuth 2.0) and ensuring only authorized data is exchanged between client and server. For example, the Stripe API handles billions of dollars in payments by exposing a single endpoint for a charge request, while the Google Maps API lets a third-party application request and display complex map data, saving development effort and enabling rapid feature deployment across the modern web.
- CLI: The Command Line Interface (CLI) is your direct, text-based terminal for executing commands and automating system operations. It bypasses the overhead of a Graphical User Interface (GUI) for faster, scriptable workflows. Shells like Bash, Zsh, and PowerShell interpret typed commands (e.g., `ls -l`, `git commit -m`) to manage files, execute programs, and control hardware. The core advantage is automation: complex, multi-step tasks can be chained and executed instantly via scripts, delivering significant time savings for repetitive operations.
- Open-source LLMs: Open-source LLMs (like Meta's Llama 3 and Mistral AI's models) provide developers with full model weights, enabling complete customization, on-premises security, and zero vendor lock-in. Releasing the model weights and architecture allows developers to deploy and modify the technology directly. This offers critical advantages over proprietary APIs: full transparency, enhanced data security via on-premises deployment, and reduced operational costs. Key players include Meta's Llama 3 (available in 8B and 70B parameter sizes) and Mistral AI, which drives innovation with efficient, high-performance models for enterprise and edge use cases. This shift empowers teams to fine-tune models for specific domain tasks, moving past the limitations of closed-source black boxes.
Related projects
Running llama3 locally without a GPU
Dubai
This talk demonstrates running Llama3 locally on an NPU laptop without a GPU, explores its limitations and opportunities,…
LLM Gateways, a Software Engineering Perspective
Amman
Explore LLM Gateways like LiteLLM, examining engineering concepts like load balancing, moderation, and abstraction for production inference deployments.
Run Local, open source AI
Singapore
Learn how to run open-source models like Llama3, Mistral, and Gemma locally using Jan.ai and Cortex.so, with practical…
Alert creation and debugging using AI
Bengaluru
Learn to convert English alert descriptions into PromQL queries and use statistical analysis with LLMs to debug alerts,…
Beyond One-Size-Fits-All: Building Intelligent LLM Selection Systems
Sydney
Learn how to deploy an OpenAI‑compatible LLM router that classifies prompts, selects the appropriate model, and optimizes cost,…
Amharic Llama and Llava (open source multimodal llm for low resource language)
New York City
Presentation covers an open‑source multimodal LLM for Amharic, its architecture, training pipeline, and a data‑augmentation technique to overcome…