finciples (Financial Principles)
This talk explores finciples.ai, an AI platform combining multiple LLMs and investment philosophies to deliver personalized stock analysis and recommendations.
finciples.ai: An AI-powered platform that enhances investment decisions by combining traditional market analysis with personalized, principles-based insights from legendary investors. The platform offers:
• Analysis based on the investment philosophies of Warren Buffett, Peter Lynch, Ray Dalio, and more.
• The ability to incorporate your own custom investment principles and checklists.
• Integration of traditional fundamental, technical, and sentiment analysis.
• Personalized recommendations based on data and investment principles.
• Educational resources to help investors learn from successful strategies.
• AI-powered application of these principles using multiple LLMs, including OpenAI, Llama, Mistral, Gemini, and others.
• Built on AI frameworks including LangChain, Langflow, and CrewAI, along with RAG patterns.
The platform leverages multiple LLMs to score stocks against predefined investment principles.
- OpenAI API: Authenticated, programmatic access to OpenAI's generative models (GPT-4o, DALL-E 3, Whisper). Developers use REST endpoints and official Python and Node.js libraries to integrate text generation, image creation, and speech-to-text transcription, on a platform engineered to support millions of daily requests.
- Llama: Meta's open-weights LLM family, optimized for local deployment and custom fine-tuning across 8B to 405B parameter scales. Llama 3.1's flagship 405B model was trained on 15 trillion tokens and supports a 128k context window, suited to large datasets and long-form documents, while local hosting preserves data sovereignty. The ecosystem includes the Llama Stack for agentic workflows and optimized 8B and 70B weights for consumer hardware and enterprise clusters.
- Mistral: Frontier models from the Paris-based startup founded in April 2023 by ex-Google DeepMind and Meta researchers (Arthur Mensch, Guillaume Lample, Timothée Lacroix). Its technology, including the 123B-parameter Mistral Large 2 and sparse Mixture of Experts (MoE) architectures, targets state-of-the-art results at significantly lower cost, with enterprise offerings (Mistral AI Studio, Le Chat) for custom deployment, fine-tuning, and full data control; a $14 billion valuation reflects its rapid growth.
- Gemini: Google's natively multimodal model, engineered to understand and combine text, code, audio, image, and video inputs. It runs efficiently from data centers to mobile devices and ships in three sizes: Ultra (highly complex tasks), Pro (broad scaling), and Nano (efficient on-device tasks). Developers access it via the Gemini API.
- LangChain: An open-source framework for building data-aware LLM applications, connecting models (such as GPT-4 or Claude) to external data, computation, and APIs. Modular components (Chains, Agents, Tools, and Memory) support workflows like Retrieval-Augmented Generation (RAG) pipelines and conversational agents; Python and JavaScript libraries plus the LangChain Expression Language (LCEL) provide a standardized interface from prototyping to production.
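The multi-LLM scoring idea above can be sketched in a few lines of Python. This is a minimal illustration, not the platform's actual implementation: the `Principle` class, the `Scorer` interface, and the mock scorer functions standing in for real OpenAI/Llama/Mistral API calls are all hypothetical names chosen for this example. Each principle is scored by every model, the scores are averaged across models, and the per-principle results are combined by weight.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

@dataclass
class Principle:
    name: str      # e.g. "Buffett: durable competitive moat"
    weight: float  # relative importance in the combined score

# A Scorer stands in for one LLM provider (OpenAI, Llama, Mistral, ...):
# given a ticker and a principle, it returns a score in [0, 1].
Scorer = Callable[[str, Principle], float]

def score_stock(ticker: str, principles: list[Principle],
                scorers: list[Scorer]) -> float:
    """Average each principle's score across all models, then take the
    weight-normalized sum over principles."""
    total_weight = sum(p.weight for p in principles)
    weighted = sum(
        p.weight * mean(scorer(ticker, p) for scorer in scorers)
        for p in principles
    )
    return weighted / total_weight

# Mock scorers in place of real LLM API calls, for illustration only.
def mock_openai(ticker: str, p: Principle) -> float:
    return 0.8

def mock_llama(ticker: str, p: Principle) -> float:
    return 0.6

principles = [
    Principle("Buffett: durable competitive moat", weight=2.0),
    Principle("Lynch: understandable business", weight=1.0),
]
print(round(score_stock("AAPL", principles, [mock_openai, mock_llama]), 2))
# → 0.7
```

In a real system each scorer would wrap a provider call (e.g. an OpenAI chat-completion prompt asking the model to rate the stock against the principle's checklist), and averaging across providers hedges against any single model's bias.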
Related projects
Artecon - A hotspot for AI
Seattle
Learn how to run CPU‑based ML models with low latency, using small public models and post‑processing, then bundle…
AI Startup Scout
Seattle
A sub‑agent scans Hacker News, flags potential startup stories, adds founder data from external sources, and generates concise…
Tables and AI
Seattle
Learn how to use spreadsheet tables to define, prioritize, and evaluate LLM rules and prompts, enabling stakeholders to…
Demo: Dendron - AI-Powered Analysis for Technology/AI Adoption
Miami
Live demo shows multi‑agent LLMs mapping business processes and technology landscapes, maintaining context across analysis and quantifying confidence…
AI APIs and Output Wrangling
Atlanta
This talk explores best practices for interacting with AI APIs, covering prompt design, idempotence, schema management, error detection,…
Wrangling Cline for Coding
Atlanta
This talk explores practical techniques for effectively using agentic coding tools like Cline to enhance programming efficiency and…