AI-Native from Day One
Not bolted on. Built in. Semantic search, embeddings, RAG, and multi-provider AI — all native FLIN functions. No libraries, no SDKs, no API wrappers.
AI as a Language Feature
Other frameworks bolt AI on. FLIN bakes it in. Six AI capabilities, zero configuration.
Semantic Text Type
Add semantic before any text field and FLIN auto-generates embeddings on save. One keyword makes your data AI-searchable.
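A minimal sketch of the keyword in use; the entity and field names here are illustrative, not part of FLIN's standard library:

```flin
// Hypothetical entity — adding `semantic` is the only AI-specific step
entity Article {
    title: text @required
    body: semantic text   // embeddings generated and indexed on every save
}
```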
Vector Search
search "query" in Entity — find records by meaning, not keywords. Returns results ranked by semantic similarity, not string matching.
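Assuming an entity Document with a semantic text field, a query might look like this sketch:

```flin
// Returns records ranked by semantic similarity, not string overlap —
// a match doesn't need to contain the literal words "password reset"
results = search "password reset not working" in Document
```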
RAG
Retrieval-Augmented Generation. Semantic search finds relevant documents, injects them as context, and streams an AI answer — all in native FLIN code.
ask_ai()
Call any LLM provider with one function. Pass a prompt, get an answer. Provider, model, and parameters — all configurable, all optional.
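A sketch of both call styles; the bare form with no options map is inferred from "all optional" and is an assumption:

```flin
// Simplest form — default provider and model
answer = ask_ai("Explain semantic search in one sentence")

// Explicit provider and model
answer = ask_ai("Explain semantic search in one sentence", [
    "provider": "claude",
    "model": "claude-sonnet-4-20250514"
])
```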
AI Gateway
Built into the admin console at /_flin. Monitor provider stats, check API key configuration, and test prompts — all in one dashboard.
Keyword + Semantic
Side-by-side comparison of keyword and semantic search. Best of both worlds: exact matches when you know the term, meaning-based results when you don't.
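A sketch of the two modes against the same data, assuming an entity Document with a semantic text content field:

```flin
// Exact string match: term, entity, field, result limit
exact = keyword_search("SSL", "Document", "content", 10)

// Meaning-based match — can surface documents about SSL
// even if the literal term never appears in them
related = search "securing web traffic with certificates" in Document
```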
See It in Action
From data model to AI-powered search in 15 lines of FLIN.
// 1. Mark a field as semantic — embeddings auto-generated on save
entity Document {
    title: text @required
    content: semantic text   // ← This is the magic
    category: text
}

// 2. Search by meaning — not keywords
results = search "how to configure SSL certificates" in Document

// 3. Ask any LLM — one function, 8 providers
answer = ask_ai("Summarize these search results", [
    "provider": "claude",
    "model": "claude-sonnet-4-20250514"
])

// 4. Or use keyword search for exact matches
exact = keyword_search("SSL", "Document", "content", 10)
Auto-Embeddings
Every time you save an entity with a semantic text field, FLIN generates and indexes embeddings automatically.
8 Providers
OpenAI, Claude, Gemini, Mistral, DeepSeek, Groq, Ollama, OpenRouter. Switch providers with one config change.
Local or Cloud
Use Ollama for fully local embeddings and inference, or connect to any cloud provider. Your data, your choice.
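A hedged sketch of a fully local call; "ollama" as a provider value follows the naming pattern of the other providers, and the model name is illustrative:

```flin
// Assumes "ollama" is the provider key for local inference — no API key,
// no data leaves your machine; "llama3" stands in for any installed model
answer = ask_ai("Summarize this document", [
    "provider": "ollama",
    "model": "llama3"
])
```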
8 AI Providers. One Function.
ask_ai() supports every major LLM provider. Switch between them without changing your application code.
OpenAI
GPT-4o, GPT-4o Mini, o1, o3
Embeddings + Chat
Claude
Claude Opus, Sonnet, Haiku
Chat
Gemini
Gemini 2.0 Flash, Pro
Chat
Mistral
Mistral Large, Medium, Small
Chat
DeepSeek
DeepSeek V3, R1
Chat
Groq
LLaMA, Mixtral — ultra-fast
Chat
Ollama
Any local model — fully offline
Embeddings + Chat
OpenRouter
200+ models via one API key
Chat
RAG Without the Plumbing
In other frameworks, RAG means stitching together a vector database, an embedding API, a chunking strategy, and an LLM client. In FLIN, it's a page.
// RAG in FLIN: search → context → stream
entity KnowledgeBase {
    title: text @required
    content: semantic text
}

query = ""

fn askWithContext() {
    // Find the 5 most relevant documents
    docs = search query in KnowledgeBase

    // Ask the AI with those documents as context
    answer = ask_ai(query, [
        "provider": "claude",
        "context": docs
    ])
}

<form submit={askWithContext()}>
    <input bind={query} placeholder="Ask anything..." />
    <button>Ask</button>
</form>
Build AI-Powered Apps Today
Add semantic text to any field. Call ask_ai() from any page. Ship AI features in minutes, not weeks.
curl -fsSL https://flin.sh | bash