Dobr.AI: AI Interview Architecture
This talk explores Dobr.AI's interview engine architecture, focusing on multi-model integration, a unique memory system, and orchestration for leading technical interview conversations.
A deep dive into Dobr.AI’s interview engine architecture, where we’ll explore how we’ve built a domain-expert AI that leads technical interviews rather than simply responding to prompts. We’ll walk through our approach to combining the capabilities of multiple models, a unique memory system, and a specialized orchestration architecture designed to lead conversations.
AI voice agents automate technical interviews, delivering dynamic, project-based skill assessments.
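The architecture sketched above — multiple models, a memory system, and an orchestrator that drives the agenda — can be illustrated with a toy loop. This is a hypothetical sketch, not Dobr.AI's actual implementation: every class and function name here is an assumption, and the model calls are stubbed with canned strings.

```python
# Hypothetical sketch of the interview-loop shape described in the talk: an
# orchestrator that routes each turn between two models, consults a memory
# store, and *leads* the conversation by planning the next topic.
# All names are illustrative assumptions, not Dobr.AI's API.
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Rolling store of facts extracted from the candidate's answers."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def summary(self) -> str:
        return "; ".join(self.facts) or "no facts yet"

def reasoning_model(topic: str, memory: Memory) -> str:
    # Stand-in for a stronger model used to plan probing questions.
    return f"Given [{memory.summary()}], ask a deeper question about {topic}."

def dialogue_model(plan: str) -> str:
    # Stand-in for a faster model that phrases the plan conversationally.
    return f"Interviewer: {plan}"

class Orchestrator:
    """Leads the interview instead of only reacting to prompts."""
    def __init__(self, topics: list[str]):
        self.topics = topics
        self.memory = Memory()

    def next_turn(self, candidate_answer: str) -> str:
        self.memory.remember(candidate_answer)      # update memory
        topic = self.topics.pop(0)                  # drive the agenda
        plan = reasoning_model(topic, self.memory)  # plan with model A
        return dialogue_model(plan)                 # phrase with model B

agent = Orchestrator(["caching", "concurrency"])
print(agent.next_turn("I built a Redis-backed cache."))
```

The point of the split is that the agent owns the topic list, so the conversation follows the interviewer's plan rather than the candidate's last message.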
- Dobr: A specialized data recovery platform designed to restore corrupted or deleted forensic artifacts from encrypted storage volumes. Engineered for high-stakes digital forensics, Dobr utilizes a proprietary carving engine to reconstruct fragmented files across APFS and NTFS systems. The toolkit bypasses standard file system limitations to recover over 450 specific file types (including SQLite databases and localized metadata), even after a full factory reset. By leveraging multi-threaded scanning, Dobr reduces indexing time by 40 percent compared to legacy recovery tools, ensuring rapid, bit-perfect extraction for investigators and data recovery professionals.
- LLMs: Large Language Models are Transformer-architecture deep learning systems trained on massive text corpora to generate, summarize, and reason over human language at scale. Trained on vast, multi-trillion-token datasets, they use billions of parameters to learn complex linguistic patterns (syntax, semantics). This scale enables emergent capabilities: few-shot learning, code generation, and complex reasoning. Key examples include OpenAI's GPT-4, Google's Gemini, and Meta's Llama 3. LLMs power applications from conversational AI (ChatGPT) to automated content creation, fundamentally shifting how machines handle unstructured language.
- Memory system: The tiered subsystem that manages the data storage hierarchy, enabling the CPU to access instructions and active data at nanosecond speeds. At the top, Level 1 and Level 2 SRAM caches provide the fastest access (sub-nanosecond) for the CPU’s most immediate needs. Main memory, typically high-speed DDR5 DRAM, holds the active working set of the operating system and applications, offering high bandwidth (e.g., 51.2 GB/s for DDR4-3200 in dual-channel) and capacity (up to 64 GB per module). Below this lies non-volatile storage (SSD/Flash), which ensures data persistence. The system’s primary job is to minimize latency and maximize throughput across these tiers, preventing the CPU from stalling on data fetch operations.
- Orchestration: The coordinated execution of multiple automated tasks and systems, sequencing complex workflows across disparate domains for a unified, end-to-end result. Orchestration moves beyond simple automation: it is the centralized management layer that coordinates automated tasks across multiple systems, applications, and services. Think of it as the conductor for an entire IT workflow, ensuring individual automations (like a provisioning script) execute in the correct sequence across different domains (network, storage, compute). Tools such as Kubernetes handle container orchestration; Jenkins manages CI/CD pipelines; a comprehensive solution like Red Hat Ansible Automation Platform then integrates these tools, eliminating manual handoffs and reducing provisioning errors. This delivers faster application deployment, consistent configuration, and significant operational efficiency at scale.
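The orchestration idea in the last entry — executing steps in the correct sequence across network, storage, and compute domains — can be sketched with nothing beyond Python's standard library. The step names below are illustrative assumptions; a real platform would invoke playbooks or operators instead of the stub function.

```python
# Minimal sketch of workflow orchestration: a dependency graph of automated
# steps across domains, executed in topological order by the orchestrator
# rather than by manual handoffs.
from graphlib import TopologicalSorter

# Each step maps to the set of steps it depends on (hypothetical names).
steps = {
    "provision_network": set(),
    "provision_storage": set(),
    "provision_compute": {"provision_network", "provision_storage"},
    "deploy_app":        {"provision_compute"},
}

def run(step: str) -> str:
    # Stand-in for invoking the real automation (script, playbook, operator).
    return f"{step}: ok"

# static_order() yields a valid execution sequence respecting dependencies.
order = list(TopologicalSorter(steps).static_order())
for step in order:
    print(run(step))
```

The orchestrator guarantees only the ordering constraints, so independent steps like network and storage provisioning could just as well run in parallel.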
Related projects
AI agents to conduct recruiting interviews
Bengaluru
This talk explores building modular AI agents that conduct dynamic, unbiased recruiting interviews using LLMs and cognitive architectures…
BPOs in the future post Agentic AI Era
Delhi
Explains how cloud‑native autonomous voice systems replace traditional BPOs, delivering scalable, multilingual customer experience and why India can…
PitchPerfect
Bengaluru
This talk demonstrates building an AI agent for dating on Hinge using ADB for automation, OpenCV for UI…
Agents: Thinking Fast and Slow
Bengaluru
Learn how agents dynamically retrieve and structure context, use iterative and ontology‑guided reasoning, and orchestrate complex SaaS workflows…
A smart file browser
Bengaluru
This talk covers a desktop file browser with smart folder triggers, AI file creation, and offline TTS/STT using…
Trabuli beauty
Bengaluru
Demonstrating a prototype that uses stable diffusion and computer vision to generate synthetic faces wearing brand lipsticks, enabling…