

LLMio
React.js
Vercel AI SDK
Supermemory
MCP
Search
This open-source chat application works with any model provider, integrates with multiple web search services, and includes memory features that retain information across sessions for whichever LLM you select. It also offers MCP support.
LLMio

Description
LLMio is a cutting-edge open-source chat application designed to change the way you interact with large language models (LLMs). It serves as an all-in-one platform for managing conversations, executing tasks, and exploring information. With LLMio, you can create unique chat experiences, compare the capabilities of different LLMs, and use advanced memory features to retain information across sessions.
Note: LLMio is a fork of intern3.chat, an ambitious project that is no longer maintained. I plan to add more features and improve the existing ones, so I forked it and will maintain it going forward to keep it up to date with the latest LLMs and features.
- This project is still in the early stages of development, and I will keep adding features and improving the existing ones, so there may be some bugs and issues. If you find any, please report them on the GitHub repository.
Features
- Multi-model support (Gemini, OpenAI, Claude, Groq, and more): Choose the best model per chat or message with a pluggable provider layer and easy switching/fallbacks (see the provider sketch after this list).
- BYOK API key system (Native → OpenRouter → In-house credits): Prioritized key resolution per provider and workspace; keys are scoped and never stored server-side by default (see the key-resolution sketch below).
- In-house credits for select models (no API key required): Try LLMio instantly using hosted credits with rate limits; ideal for demos, onboarding, and quick tests.
- Custom AI prompt configuration: Tune system prompts, temperature, max tokens, tools, and safety settings per thread; save reusable presets.
- Web Search integration (Brave Search + Firecrawl): Blend live search with site crawling and summarization; returns citations and source snippets for grounded answers (see the search-tool sketch below).
- Image generation (fal.ai and GPT-Image-1): Text-to-image and image-to-image with size/quality controls, upscaling, and inline previews.
- HTML/Mermaid/React artifact previews: Render structured outputs safely in a sandbox with copy/export support for code and diagrams.
- Native voice input (Groq ASR): Record directly in the input box with fast Whisper-based transcription, VAD, and multi-language support (see the transcription sketch below).
- MCP over HTTP/SSE: Connect to Model Context Protocol tools and data sources with automatic tool discovery and permission prompts (see the MCP client sketch below).
- Supermemory integration: Persist important facts across sessions; entity/intent extraction with controls to review, pin, or forget memories.
- File attachments (code, text, PDFs, images): Drag-and-drop multi-file uploads with chunking, OCR for PDFs/images, inline previews, and RAG-style retrieval.
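
A minimal sketch of how a pluggable provider layer could look on top of the Vercel AI SDK. The registry shape, model IDs, and fallback handling are illustrative assumptions, not LLMio's actual implementation.

```ts
// Map a user-selected provider name onto an AI SDK model factory.
import { streamText, type CoreMessage } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";
import { groq } from "@ai-sdk/groq";

const providers = {
  openai: (id: string) => openai(id),
  anthropic: (id: string) => anthropic(id),
  google: (id: string) => google(id),
  groq: (id: string) => groq(id),
} as const;

export function streamChat(
  provider: keyof typeof providers,
  modelId: string,
  messages: CoreMessage[]
) {
  // Resolve the requested model; a real provider layer would also pick a
  // fallback model here when the primary choice is unavailable.
  const model = providers[provider](modelId);
  return streamText({ model, messages });
}
```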

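The Native → OpenRouter → in-house credits priority could be resolved roughly like this; the config shape and function names are hypothetical.

```ts
// Hypothetical BYOK resolution: prefer the user's native provider key,
// then an OpenRouter key, then fall back to hosted credits.
type KeySource = "native" | "openrouter" | "credits";

interface WorkspaceKeys {
  native?: Record<string, string>; // provider -> API key, kept client-side
  openrouter?: string;
}

export function resolveKey(
  provider: string,
  keys: WorkspaceKeys,
  creditsAvailable: boolean
): { source: KeySource; apiKey?: string } {
  const native = keys.native?.[provider];
  if (native) return { source: "native", apiKey: native };
  if (keys.openrouter) return { source: "openrouter", apiKey: keys.openrouter };
  if (creditsAvailable) return { source: "credits" }; // server-held key, rate limited
  throw new Error(`No API key or credits available for ${provider}`);
}
```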

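A search tool exposed to the model might be registered with the AI SDK's tool helper along these lines. The Brave endpoint, response shape, and env variable name below are assumptions to verify against Brave's API docs; Firecrawl-based crawling would plug in the same way.

```ts
// Sketch of a web-search tool the model can call for grounded answers.
import { tool } from "ai";
import { z } from "zod";

export const webSearch = tool({
  description: "Search the web and return result titles, URLs, and snippets",
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    const res = await fetch(
      `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(query)}`,
      { headers: { "X-Subscription-Token": process.env.BRAVE_API_KEY! } }
    );
    const data = await res.json();
    // Return a compact result list the model can cite from.
    return (data.web?.results ?? []).slice(0, 5).map((r: any) => ({
      title: r.title,
      url: r.url,
      snippet: r.description,
    }));
  },
});
```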

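Voice input could send the recorded audio to Groq's OpenAI-compatible transcription endpoint, roughly as sketched here; the base URL and model name are assumptions, not LLMio's confirmed implementation.

```ts
// Sketch: transcribe a recorded audio blob with Groq's Whisper-based ASR.
import OpenAI from "openai";

const groqAsr = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1", // OpenAI-compatible endpoint (assumed)
});

export async function transcribe(recording: Blob): Promise<string> {
  const file = new File([recording], "input.webm", { type: "audio/webm" });
  const result = await groqAsr.audio.transcriptions.create({
    file,
    model: "whisper-large-v3", // assumed model ID
  });
  return result.text;
}
```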

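Connecting to an MCP server over SSE with the official TypeScript SDK might look like the following; the server URL and client metadata are placeholders, and the import paths should be checked against the SDK version in use.

```ts
// Sketch: discover tools from a remote MCP server over SSE.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

export async function listMcpTools(serverUrl: string) {
  const transport = new SSEClientTransport(new URL(serverUrl));
  const client = new Client({ name: "llmio", version: "0.1.0" });

  await client.connect(transport); // performs the MCP initialize handshake
  const { tools } = await client.listTools(); // automatic tool discovery
  return tools.map((t) => ({ name: t.name, description: t.description }));
}
```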

Experience features
- Resumable streams for reliable message delivery
- Edit & Regenerate any message
- Copy messages with one click
- Thread management with sidebar navigation
- Folder organization for chat threads
- CMD+K search bar for quick chat navigation
Customization & UI
- Beautiful, modern interface with multiple theming options
- Normal/Wide chat view options
- Responsive design for all devices
Notes
- Only a limited number of models offer free usage without an API key, and these are subject to rate limits. For optimal performance and access to more models, it's advisable to provide your own API keys for the specific models you wish to use.