
April 01, 2025 · Lausanne

inlook.ai: Visual Statistical Search

Learn how a visual search engine transforms statistical queries into trusted, multilingual results with instant interactive charts and tables, handling both specific and ambiguous requests.

Overview
Tech stack
  • TypeScript
    TypeScript is an open-source, high-level language developed by Microsoft. As a superset of JavaScript, it adds a static type system that catches errors at compile time rather than at runtime, a critical benefit for large-scale applications. The TypeScript compiler (tsc) transpiles code to clean, standards-based JavaScript (ES3 or newer), ensuring compatibility with any browser or host environment such as Node.js.
  • React
    React is an open-source, component-based JavaScript library developed by Meta (Facebook) for building fast, declarative user interfaces. It enforces one-way data flow and uses a virtual DOM to make UI updates efficient and predictable. Developers compose small, encapsulated components into complex UIs, which promotes reuse and simplifies state management; JSX, a syntax extension, embeds HTML-like markup directly in JavaScript. React targets both the web (React DOM) and native mobile platforms (React Native).
  • Next.js
    Next.js is a full-stack React framework developed by Vercel. It supports hybrid rendering (Server-Side Rendering, Static Site Generation, and Incremental Static Regeneration) for speed and SEO, along with React Server Components, Server Actions for running server-side code directly, and the App Router for nested layouts and advanced routing. Rust-based tooling such as Turbopack and the Speedy Web Compiler (SWC) keeps builds fast.
  • Python
    Python is a high-level, general-purpose language that prioritizes clear, readable syntax (via significant indentation), enabling rapid development. Its ecosystem is vast: web frameworks such as Django and Flask, and data-science libraries such as Pandas and NumPy. The Python Package Index (PyPI) offers thousands of community-contributed packages for tasks from network programming to GUI creation. The language is maintained by the Python Software Foundation (PSF), with the stable release at Python 3.14.0 as of November 2025.
  • LLM API
    An LLM API is a standardized interface, typically a RESTful HTTP endpoint, for integrating large language models such as OpenAI's GPT-4o or Google's Gemini 2.5 Pro into an application. It abstracts away the model infrastructure behind core operations: real-time text generation, document summarization (with context windows of up to one million tokens on some models), and function calling for agentic workflows. Developers authenticate with an API key and pay per token, which keeps costs predictable and scalable; fast models such as Gemini 2.5 Flash keep latency low for high-frequency tasks.
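The compile-time checking described under TypeScript can be shown with a minimal sketch. The StatRow shape and the latest helper below are invented for illustration and are not part of inlook.ai's code:

```typescript
// A typed row of statistical data; the interface documents the shape
// and lets the compiler reject malformed rows before the code ever runs.
interface StatRow {
  indicator: string;
  year: number;
  value: number;
}

// Return the most recent row, or undefined for an empty input.
function latest(rows: StatRow[]): StatRow | undefined {
  // Sort a copy descending by year so the newest row comes first.
  return [...rows].sort((a, b) => b.year - a.year)[0];
}

const rows: StatRow[] = [
  { indicator: "population", year: 2020, value: 8_670_000 },
  { indicator: "population", year: 2023, value: 8_960_000 },
];

const newest = latest(rows);
// Passing { year: "2023" } above would fail at compile time, not at runtime.
console.log(newest?.year); // 2023
```

Because the types are erased during transpilation, the emitted JavaScript carries no runtime overhead for these checks.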
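The API-key-plus-REST flow described under LLM API can be sketched in TypeScript. The endpoint URL, model name, and response shape follow OpenAI's chat-completions convention and are assumptions for illustration, not details confirmed by the talk:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for a chat-completion request.
function buildRequest(
  model: string,
  query: string
): { model: string; messages: ChatMessage[] } {
  return {
    model,
    messages: [
      { role: "system", content: "Answer statistical queries concisely." },
      { role: "user", content: query },
    ],
  };
}

// POST the request to an OpenAI-style endpoint; cost scales with
// tokens sent and received.
async function complete(apiKey: string, query: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildRequest("gpt-4o", query)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Keeping request construction in a pure function like buildRequest makes the payload easy to test without touching the network.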

Related projects