You've been doing this for a while now:
```python
import json
from openai import OpenAI  # or any OpenAI-compatible client pointed at your provider

client = OpenAI()
response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": f"Extract the following fields from this text: {fields}. Text: {text}"}],
)
data = json.loads(response.choices[0].message.content)
proposal = Proposal(**data)
```
Parse the response. Hope the JSON is valid. Add a retry. Add a fallback. Add validation. Repeat for every model in your app.
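The boilerplate that pattern accumulates looks roughly like this. A stdlib-only sketch: `call_llm` is a hypothetical stand-in for your real provider call, and the retry logic is illustrative, not any library's actual code.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real provider call;
    # returns a canned JSON reply for illustration.
    return '{"client": "Tesla", "budget": 45000.0}'

def extract_with_retries(prompt: str, required: set, retries: int = 3) -> dict:
    """Parse the model's reply, retrying when the JSON is invalid
    or required fields are missing."""
    last_error = None
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as e:
            last_error = e
            continue
        missing = required - data.keys()
        if missing:
            last_error = KeyError(f"missing fields: {missing}")
            continue
        return data
    raise RuntimeError(f"extraction failed: {last_error}")

data = extract_with_retries("Extract client and budget ...", {"client", "budget"})
```

And that is before you handle markdown-fenced JSON, partial objects, or type coercion.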
There's a better way.
Meet exomodel
exomodel is an open-source Python framework that turns your Pydantic models into autonomous agents. Instead of writing prompts that produce objects, you define the object, and it fills itself.
The paradigm shift:
| Old way | exomodel way |
|---|---|
| Write prompt → parse response → validate | Define schema → call `.create()` |
| Manual JSON extraction | Native Pydantic validation |
| One prompt per model | Provider-agnostic, reusable |
| Fragile string parsing | Structured output, always |
Let's build something.
Prerequisites
- Python 3.9+
- An API key from any supported provider (Google, Anthropic, OpenAI, Cohere)
Install
```bash
pip install "exomodel[google]"
# or: exomodel[anthropic] | exomodel[openai] | exomodel[cohere] | exomodel[all]
```
Create a .env file:
```
MY_LLM_MODEL=google:gemini-2.0-flash
GOOGLE_API_KEY=your-key-here
```
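The `MY_LLM_MODEL` value packs the provider and model name into one `provider:model` string. If you ever need to read it yourself, it splits cleanly; a minimal sketch (the variable name comes from the example above, the parsing helper is my own illustration):

```python
import os

def parse_model_setting(value: str) -> tuple[str, str]:
    """Split a 'provider:model' string into its two parts."""
    provider, _, model = value.partition(":")
    if not model:
        raise ValueError(f"expected 'provider:model', got {value!r}")
    return provider, model

os.environ.setdefault("MY_LLM_MODEL", "google:gemini-2.0-flash")
provider, model = parse_model_setting(os.environ["MY_LLM_MODEL"])
```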
The 10 lines
```python
from exomodel import ExoModel

class Proposal(ExoModel):
    client: str = ""
    project_title: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0
    summary: str = ""

p = Proposal.create("Draft a proposal for Tesla - AI dashboard integration, 6 weeks, $45,000 budget")
print(p.to_ui(format="markdown"))
```
That's it. Run it.
```
## Proposal
**Client:** Tesla
**Project Title:** AI Dashboard Integration
**Budget:** 45000.0
**Timeline (weeks):** 6
**Summary:** A 6-week engagement to design and integrate an AI-powered...
```
exomodel sent your natural language input to the LLM, mapped the response to your schema, validated it with Pydantic, and returned a typed Python object. No prompt engineering. No JSON parsing.
Add business rules with RAG
What if your proposals need to follow company rules: a minimum budget, forbidden industries, mandatory margins?
Create a proposal_rules.md file:
```markdown
# Proposal Rules
- Minimum project budget is $10,000.
- Every proposal must include a 10% safety margin in pricing.
- We do not work with companies in the tobacco industry.
```
Now attach it to your model:
```python
class Proposal(ExoModel):
    client: str = ""
    project_title: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0
    summary: str = ""

    @classmethod
    def get_rag_sources(cls):
        return ["proposal_rules.md"]
```
The model now has context. You can validate against your own rules:
```python
p = Proposal.create("Draft a 5k proposal for Philip Morris")
print(p.run_analysis())
# This proposal violates company policy: budget below the $10,000 minimum,
# and the client operates in the tobacco industry.
```
The LLM grounded its reasoning in your document, not its training data.
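Conceptually, the retrieval step is simple: split the rules file into chunks and pull the chunks most similar to the request into the prompt. Here is a toy keyword-overlap sketch of that idea; real implementations (exomodel included, per its docs above) use a vector store, and every name below is illustrative:

```python
def chunk(text: str) -> list[str]:
    """Split a rules document into one chunk per bullet line."""
    return [line.strip("- ").strip()
            for line in text.splitlines()
            if line.strip().startswith("-")]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query
    (a crude stand-in for embedding similarity)."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

rules = """# Proposal Rules
- Minimum project budget is $10,000.
- Every proposal must include a 10% safety margin in pricing.
- We do not work with companies in the tobacco industry.
"""
context = retrieve("5k budget proposal for a tobacco company", chunk(rules))
```

The retrieved chunks are then prepended to the structured prompt, which is what lets the model cite your policy instead of improvising one.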
Update fields with natural language
Already created a proposal but the client changed the scope?
```python
p.update_object("Increase the budget by 20% and extend the timeline to 8 weeks")
print(p.budget)          # 54000.0
print(p.timeline_weeks)  # 8
```
Or update a single field:
```python
p.update_field("summary", "Make it more formal and concise")
```
Bulk creation with ExoModelList
Need to generate multiple structured objects at once?
```python
from exomodel import ExoModel, ExoModelList

class LineItem(ExoModel):
    name: str = ""
    quantity: int = 0
    unit_price: float = 0.0

class Invoice(ExoModelList[LineItem]):
    pass

invoice = Invoice()
invoice.create_list("10 MacBook Pros at 2499, 5 Dell monitors at 599, 3 mechanical keyboards at 189")
print(invoice.to_csv())
```
```
name,quantity,unit_price
MacBook Pro,10,2499.0
Dell Monitor,5,599.0
Mechanical Keyboard,3,189.0
```
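The CSV step itself needs nothing exotic: once the LLM has filled the typed items, serializing them is plain stdlib work. A sketch of what a `to_csv`-style helper might do with a list of field dicts (the helper name and shape are my illustration, not exomodel's internals):

```python
import csv
import io

def rows_to_csv(rows: list[dict]) -> str:
    """Serialize homogeneous dicts to CSV, using the
    first row's keys as the header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

items = [
    {"name": "MacBook Pro", "quantity": 10, "unit_price": 2499.0},
    {"name": "Dell Monitor", "quantity": 5, "unit_price": 599.0},
]
print(rows_to_csv(items))
```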
How it works under the hood
When you call .create(), exomodel:
- Introspects your Pydantic schema (field names, types, defaults)
- If `get_rag_sources()` is defined, chunks and indexes those documents into an in-memory vector store
- Builds a structured prompt with your schema and any RAG context
- Sends it to your configured LLM provider
- Validates the response against your Pydantic model
- Returns a typed instance, with usage tracking built in
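The first step, schema introspection, can be sketched with nothing but type hints. Here is a simplified stand-in using a dataclass in place of a Pydantic model (the prompt wording and helper name are illustrative):

```python
from dataclasses import dataclass, fields

@dataclass
class Proposal:
    client: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0

def schema_prompt(cls) -> str:
    """Turn a dataclass schema into a field list the LLM must fill."""
    lines = []
    for f in fields(cls):
        # f.type is a string under `from __future__ import annotations`,
        # otherwise the actual type object.
        type_name = f.type if isinstance(f.type, str) else f.type.__name__
        lines.append(f"- {f.name}: {type_name}")
    return "Return JSON with exactly these fields:\n" + "\n".join(lines)

print(schema_prompt(Proposal))
```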
Everything goes through LangChain under the hood, so provider-switching is a one-line .env change.
Expose methods as agent tools
Need the LLM to call methods on your object, not just fill fields? Use @llm_function:
```python
from exomodel import ExoModel, llm_function

class Proposal(ExoModel):
    client: str = ""
    budget: float = 0.0
    discount: float = 0.0

    @llm_function
    def apply_discount(self, percentage: float):
        """Apply a percentage discount to the budget."""
        self.discount = percentage
        self.budget = self.budget * (1 - percentage / 100)

p = Proposal.create("Draft a 50k proposal for Tesla")
p.master_prompt("Apply a 15% discount for a long-term partnership")
print(p.budget)    # 42500.0
print(p.discount)  # 15.0
```
`master_prompt` lets the LLM autonomously decide which tool to call; no routing logic needed.
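Mechanically, a decorator like that usually just registers the method in a tool table that the LLM (or a router) can invoke by name. A minimal stdlib sketch of the pattern; the `llm_tool` registry and the canned tool call below are my illustration, not exomodel's actual code:

```python
TOOLS = {}

def llm_tool(fn):
    """Register a method so it can be invoked by name from a tool call."""
    TOOLS[fn.__name__] = fn
    return fn

class Proposal:
    def __init__(self, budget: float):
        self.budget = budget
        self.discount = 0.0

    @llm_tool
    def apply_discount(self, percentage: float):
        """Apply a percentage discount to the budget."""
        self.discount = percentage
        self.budget *= 1 - percentage / 100

# Pretend the LLM replied with this structured tool call:
call = {"name": "apply_discount", "arguments": {"percentage": 15.0}}
p = Proposal(budget=50_000.0)
TOOLS[call["name"]](p, **call["arguments"])
```

The framework's job is the part this sketch fakes: getting the model to emit a well-formed tool call in the first place.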
Token usage
```python
print(p.get_usage())
# {'prompt_tokens': 312, 'completion_tokens': 87, 'total_tokens': 399}
```
What's next
- Docs: https://exomodel.ai
- GitHub: https://github.com/exomodel-ai/exomodel
- PyPI: `pip install exomodel`
If this saved you from writing another prompt parser, give the repo a star; it helps more developers find it.
Have a use case you'd like to see covered? Drop it in the comments.