
Originally published by Dev.to
Four weeks in. Here's what we shipped for devs:
5 lines from install to production retrieval:

```python
from hydra import Hydra

h = Hydra(api_key=...)
h.ingest(docs)    # vector + graph, auto
h.retrieve(query) # tuned, not stitched
```
No embedding pipeline to maintain. No Neo4j schema to babysit. No cron job backfilling stale entities.
What that replaces in your repo:
- The Pinecone client + your chunker + your reranker + your eval harness
- The Neo4j driver + your entity extractor + your graph update job
- The "we'll tune this later" Notion doc that's been open for 8 months
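For a sense of what that hand-rolled stack looks like in miniature, here's a hedged sketch: fixed-size chunking plus brute-force cosine retrieval. Everything in it (`chunk`, `embed`, `retrieve`) is illustrative, not Hydra's API, and `embed` is a toy stand-in for a real embedding model:

```python
# Minimal sketch of a DIY retrieval pipeline: naive chunker + brute-force
# cosine similarity. All names are illustrative, not any product's API.
import math

def chunk(text, size=40):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Toy embedding: character-frequency vector over a-z (stand-in for a model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:k]

docs = "graph retrieval joins entities across documents; vector search finds nearby chunks"
top = retrieve("vector search", chunk(docs), k=1)
```

Every piece of this (chunk size, the embedding model, the similarity metric, reranking on top) becomes something someone on your team owns and tunes, which is the point of the list above.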
Numbers that matter:
- 1–2M tokens/min ingestion. Benchmark it yourself; we'll give you an eval account.
- BEIR results publishing soon. Spoiler: we like how they look.
- Multi-tenant isolation by design. No "oops we leaked your tenant's docs."
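One common pattern behind that kind of guarantee (not necessarily Hydra's implementation; `TenantStore` and its methods are hypothetical) is keying every stored document by tenant and scoping every read to a single tenant's namespace, so cross-tenant reads are unreachable by construction:

```python
# Illustrative sketch of namespace-scoped storage for tenant isolation.
# TenantStore is hypothetical, not Hydra's actual implementation.
class TenantStore:
    def __init__(self):
        self._data = {}  # (tenant_id, doc_id) -> text

    def ingest(self, tenant_id, doc_id, text):
        self._data[(tenant_id, doc_id)] = text

    def retrieve(self, tenant_id, query):
        # Reads iterate only keys belonging to this tenant; another
        # tenant's docs cannot appear in results from this call path.
        return [text for (tid, _), text in self._data.items()
                if tid == tenant_id and query in text]

store = TenantStore()
store.ingest("acme", "d1", "acme quarterly report")
store.ingest("globex", "d1", "globex quarterly report")
hits = store.retrieve("acme", "quarterly")
```

The design point is that isolation lives in the key structure, not in a filter someone can forget to apply.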
The part we've been burying:
BYOC. Full stack runs in your AWS account. One `terraform apply`, your endpoint. If our cloud dies, your retrieval doesn't. When compliance asks "where does the data live?" the answer is "the VPC you're looking at."
What we're still honest about:
- Graph structure is good, not yet best-possible. Active research sprint right now.
- We don't do ACID and we're not planning to.
- If your problem fits in 10k docs and a single index, you don't need us. Go ship.
You should actually care if:
- You've built RAG, it works on 1k docs, falls apart at 1M.
- You've wired Pinecone + Neo4j yourself and know exactly how much of your Tuesday that costs.
- Your product is not search, but you keep becoming a search team anyway.
