If you're running local LLMs through Ollama, finding the right model is annoying. The official model page scrolls forever, capability tags are inconsistent, and there's no way to sort by context window or size without doing it in your head.
So I built Ollama Models Explorer — a small Next.js + Tailwind app that pulls the full Ollama model catalog into a single fast table. Search by name, filter by capability (chat, vision, embedding), and click a column header to sort by name, size, or context length. Dark-themed because that's where I live.
- Live demo: https://ollama-models-explorer.vercel.app/
- Repo: https://github.com/p32929/ollama_models_explorer
Stack
- Next.js (app router) + TypeScript
- Tailwind CSS + shadcn/ui for the table and inputs
- Lucide for icons
- Model data loaded from a local JSON file, so the deploy is fully static and instant
One non-obvious thing
The capability filter ANDs the pills together instead of ORing them. Most "filter by tag" UIs default to OR ("any of these tags") which is almost never what you want when you're hunting for a model that does both vision AND chat. Small detail, surprisingly different feel.
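The AND semantics boil down to one predicate. Here's a minimal sketch (the `Model` shape and function names are hypothetical, not the app's actual code), assuming each model record carries a list of capability tags:

```typescript
// Hypothetical shape: each catalog entry carries its capability tags.
interface Model {
  name: string;
  capabilities: string[];
}

// AND semantics: a model passes only if it has EVERY selected pill.
// With no pills selected, `every` on an empty array is true, so
// everything shows — which is the behavior you want for an empty filter.
function filterByCapabilities(models: Model[], selected: string[]): Model[] {
  return models.filter((m) =>
    selected.every((cap) => m.capabilities.includes(cap))
  );
}
```

Switching to OR would just mean swapping `every` for `some` — one word, completely different hunting experience.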
Also: sorting context window numerically meant parsing strings like 128k, 1M, 32768 into a consistent unit before comparing. Looks trivial, but the Ollama catalog mixes formats freely.
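The normalization step looks roughly like this — a sketch, not the app's exact code, and it assumes the `k`/`M` suffixes are the 1024-based shorthands typical for token counts (128k → 131072):

```typescript
// Hypothetical helper: normalize mixed context-window strings
// ("128k", "1M", "32768") into raw token counts for numeric sorting.
function parseContext(raw: string): number {
  const match = raw.trim().match(/^([\d.]+)\s*([kKmM])?$/);
  if (!match) return 0; // unparseable values sort to the bottom

  const value = parseFloat(match[1]);
  const unit = (match[2] ?? "").toLowerCase();
  // Assumption: k and M are powers of two, as is conventional for tokens.
  const factor = unit === "m" ? 1024 * 1024 : unit === "k" ? 1024 : 1;
  return Math.round(value * factor);
}
```

Then the column comparator is just `parseContext(a) - parseContext(b)` instead of a lexicographic string compare, which would otherwise put "1M" before "32768".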
Try it
It's bare-bones on purpose — would love feedback on what filters or columns you'd actually use day-to-day. Stars / forks welcome if it's useful.
Open to building with sharp teams + solo founders — DMs and email open.
— Fayaz (github.com/p32929)