Tendermind - Tender Analyser
Demo of Tendermind, an LLM‑driven tender analyzer that extracts structured KPIs, uses Pydantic for JSON handling, and showcases agentic AI components.
I started working on this at a hackathon with my team; they left at some point due to other commitments, so I am building it further on my own. Tendermind analyses tender documents with the help of LLMs and outputs a dashboard of important KPIs. It currently works well with simple tenders: for example, given a tender's information, it shows KPIs such as feasibility and deadlines. It is a completely working app, hosted locally for the moment, with the code on GitHub.
Tendermind: a Flask app that extracts tender data using Cohere and LangChain.
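The KPI-extraction step described above can be sketched with Pydantic, which the demo uses for JSON handling. This is a minimal illustration, assuming Pydantic v2; the model and field names (`TenderKPIs`, `feasibility`, etc.) are invented for the example and not taken from the actual codebase, and the LLM call is simulated with a hard-coded JSON string:

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError


class TenderKPIs(BaseModel):
    # Schema the LLM is asked to fill; field names are illustrative only.
    title: str
    deadline: str = Field(description="Submission deadline as an ISO date")
    feasibility: str  # e.g. "high" / "medium" / "low"
    estimated_value_eur: Optional[float] = None


# Simulated raw JSON from the LLM (no API call in this sketch).
llm_output = (
    '{"title": "Road Maintenance 2024",'
    ' "deadline": "2024-09-30", "feasibility": "high"}'
)

try:
    # Pydantic v2: parse and validate the JSON string in one step.
    kpis = TenderKPIs.model_validate_json(llm_output)
    print(kpis.feasibility, kpis.deadline)
except ValidationError as err:
    # Malformed LLM output is caught here instead of crashing the dashboard.
    print(err)
```

Validating the model's raw JSON this way means a malformed or incomplete response surfaces as a structured error rather than a broken KPI dashboard.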
- Tendermind: an assistive technology app that delivers personalized visual schedules and task management for individuals who learn or think differently. The platform provides a customized, image-based planner designed to manage daily routines and complex tasks. The technology adapts to the user's specific thinking and ability, offering multiple display options and a library of thousands of images (including PCS® symbols) or user-uploaded photos for activities. Its core mission is clear: increase user independence and overall well-being by simplifying schedule comprehension and task management, ultimately leaving more energy for meaningful activities.
- GitHub: hosts Git repositories and enables massive-scale collaboration (pull requests, issue tracking) for over 100 million developers. GitHub is the world's dominant web-based platform for Git repository hosting and collaborative software development. Built on Linus Torvalds' Git version control system, the platform facilitates 'social coding' by providing essential tools like pull requests, forking, and issue tracking. It currently serves over 100 million developers, managing a massive ecosystem of public and private codebases. Microsoft acquired the company in 2018 for $7.5 billion, solidifying its role as the central hub for open-source and enterprise-level version control.
- Pydantic: Python's most-used data validation library, enforcing data schemas using standard type hints, with a Rust core for exceptional speed. Pydantic is the premier data validation and parsing library for Python. It mandates data structure using pure, canonical Python type annotations, drastically reducing boilerplate code. With over 360M monthly downloads, Pydantic is battle-tested: all FAANG companies and major frameworks (FastAPI, SQLModel, LangChain) rely on it for robust data handling. Its core validation logic is written in Rust, ensuring high performance. Pydantic models also generate JSON Schema, facilitating seamless integration and documentation for API development.
- LLMs: Large Language Models are Transformer-architecture deep learning systems (e.g., GPT-4, Llama 3) trained on massive text corpora to generate, summarize, and reason over human language at scale. LLMs are advanced deep learning models, specifically Generative Pre-trained Transformers (GPTs), designed to process and generate human-like text. They are trained on vast, multi-trillion-token datasets, giving them billions of parameters to learn complex linguistic patterns (syntax, semantics). This scale enables emergent capabilities: few-shot learning, code generation, and complex reasoning. Key examples include OpenAI's GPT-4, Google's Gemini, and Meta's Llama 3. LLMs power applications from conversational AI (ChatGPT) to automated content creation, fundamentally shifting how machines handle unstructured language.
- RAG: Retrieval-Augmented Generation is the GenAI framework that grounds LLMs (like GPT-4) on external, verified data, drastically reducing model hallucinations and providing verifiable sources. RAG is a critical GenAI architecture: it solves the LLM 'hallucination' problem by inserting a retrieval step before generation. A user query is vectorized, then used to query an external knowledge base (e.g., a Pinecone vector database) for relevant document chunks (typically 512-token segments). These retrieved facts augment the original prompt, providing the LLM (e.g., Gemini or Llama 3) the specific, current, or proprietary context required. This process ensures the final response is accurate and grounded in domain-specific data, avoiding the high cost and latency of full model retraining.
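The retrieval step that the RAG entry above describes (vectorize the query, find the closest chunks, prepend them to the prompt) can be shown end-to-end with a toy example. This sketch substitutes a bag-of-words cosine similarity for real dense embeddings and a vector database, and all chunks and queries are invented sample data:

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production RAG uses dense model embeddings.
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# Pretend knowledge base: pre-chunked passages from a tender document.
chunks = [
    "The submission deadline is 30 September 2024.",
    "Bidders must hold ISO 9001 certification.",
    "Payment terms are net 60 days after delivery.",
]

query = "When is the submission deadline?"
q_vec = embed(query)

# Retrieval: pick the chunk most similar to the query.
best_chunk = max(chunks, key=lambda c: cosine(q_vec, embed(c)))

# Augmentation: the retrieved chunk grounds the prompt before the LLM call.
prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {query}"
print(best_chunk)
```

The grounded `prompt` is what gets sent to the model, which is why the answer can cite a verifiable source instead of hallucinating one.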
Related projects
RAGistotle - a multi-modal RAG
Dubai
This talk demonstrates a multi-modal RAG system handling text, audio, video, images, data files, and YouTube links, integrating…
Demo: Manus — The AI Agent That Thinks Before It Acts
Cincinnati
A live demonstration of Manus, an AI agent that visualizes its reasoning steps, showing how it plans and…
AllyTime – AI supported psychological coaching
Hamburg
This talk demonstrates an AI-powered psychological coaching platform featuring voice transcription, summarization, interactive chats, and privacy-focused AI tools…
Tender Bidding: Building a Multi-Doc Tender Agent
Amman
See a production system automating tender analysis and offer generation via scraping, RAG, vector databases, and prompt engineering,…
Summarise.live
Amsterdam
Learn how Summarise.live creates concise, accurate summaries of long videos and podcasts, using personalized algorithms to retain key…
CRM AI Agent to handle and respond to customer emails
Hamburg
Learn how to build an AI email agent that classifies inquiries, retrieves knowledge, generates replies or tickets, and…