
How I Built an AI Morning Briefing That Runs Itself Every Day

Originally published by Dev.to

The Problem

I'm a solopreneur. My work depends on knowing what's happening in tech: new frameworks, Product Hunt launches, funding rounds, AI tools. But I was spending 45 minutes every morning jumping between Hacker News, Reddit, Product Hunt, Substack, and Twitter. By the end of the day, I'd consumed a lot and acted on little.

I needed something that filtered the noise and told me: "this is what you need to know today, and this is why it matters for your business."

Not a link aggregator. An editor.

The Architecture (v5.0)

The system is called Tavily Intel Pulse. It runs as a cron job at 7 AM (Colombia time) and delivers a structured briefing in 5 sections:

  1. In 2 minutes β€” executive summary
  2. What really matters β€” top 3 news items with 3-part LLM analysis: The Fact / Why It Matters / Opportunity for You
  3. The ecosystem moves β€” 5 additional signals with quick impact
  4. An idea to steal β€” cross-signal insight connecting 2-3 signals
  5. Question of the day β€” provocation to reflect on your own work

Resilient 4-Phase Pipeline

PHASE 1: COLLECTION        → tmp/f1_items_YYYYMMDD.json
PHASE 2: LLM ANALYSIS      → tmp/f2_enriched_YYYYMMDD.json
PHASE 3: NOTION DELIVERY   → tmp/f3_notion_id_YYYYMMDD.txt
PHASE 4: NOTIFICATION      → tmp/f4_telegram_sent_YYYYMMDD.txt

Key: Each phase saves its output before moving to the next. If the pipeline fails, the next run resumes from the failed phase, not from zero.
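A minimal sketch of that checkpoint pattern (the run_phase* functions are hypothetical stand-ins; the file names match the diagram above):

import json
from datetime import date
from pathlib import Path

TMP = Path.home() / ".hermes" / "scripts" / "tmp"
TMP.mkdir(parents=True, exist_ok=True)
TODAY = date.today().strftime("%Y%m%d")

def checkpointed(path, produce):
    # Skip the phase entirely if today's output file already exists.
    if path.exists():
        return json.loads(path.read_text())
    result = produce()
    path.write_text(json.dumps(result))  # persist before the next phase runs
    return result

def run_phase1():  # hypothetical stand-in for the real collector
    return [{"title": "example", "url": "https://example.com"}]

items = checkpointed(TMP / f"f1_items_{TODAY}.json", run_phase1)
# If Phase 2 crashed yesterday, this line is where the next run resumes:
# enriched = checkpointed(TMP / f"f2_enriched_{TODAY}.json", lambda: run_phase2(items))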

Phase 1 β€” Collection

Uses Tavily API (1,000 credits/month free tier) to search 20 daily sources:

  • Product Hunt launches
  • Hacker News front page
  • Reddit (r/AI_Agents, r/webdev, r/SaaS)
  • GitHub trending
  • Tech news (TechCrunch, The Verge)
  • Funding rounds (Crunchbase signals)
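As a sketch, collection with the tavily-python SDK might look like this (the queries here are illustrative, not my actual source list):

import os
from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

QUERIES = [
    "site:producthunt.com launches today",
    "Hacker News front page AI agents",
    "reddit r/SaaS new tools this week",
]

def collect():
    items = []
    for q in QUERIES:
        resp = client.search(query=q, search_depth="basic", max_results=5)
        for r in resp["results"]:
            items.append({"title": r["title"], "url": r["url"],
                          "content": r["content"]})
    return items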

Phase 2 β€” LLM Analysis with Scoring

Each item goes through a scoring engine that extracts real metrics from content:

+45  MRR ≥ $5K/mo          +35  Upvotes ≥ 100         +30  Stars ≥ 1000
+25  Funding ≥ $10M        +20  Users ≥ 10K           +25  Mentions SaaS
+20  Mentions agent/MCP    +15  Mentions solopreneur  -12  Generic content

Caps prevent runaway scores: 100 points maximum. Items scoring below the 20/100 threshold are dropped.
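A condensed sketch of those rules (the metric-extraction regexes are simplified assumptions about how numbers get pulled from item text):

import re

def score_item(text: str) -> int:
    points = 0
    mrr = re.search(r"\$(\d+(?:\.\d+)?)k\s*/?\s*mo", text, re.I)
    if mrr and float(mrr.group(1)) >= 5:
        points += 45
    upvotes = re.search(r"(\d+)\s+(?:upvotes|points)", text, re.I)
    if upvotes and int(upvotes.group(1)) >= 100:
        points += 35
    stars = re.search(r"(\d[\d,]*)\s+stars", text, re.I)
    if stars and int(stars.group(1).replace(",", "")) >= 1000:
        points += 30
    # Funding and user-count rules follow the same extract-and-compare pattern.
    if re.search(r"\bSaaS\b", text):
        points += 25
    if re.search(r"\b(agent|MCP)\b", text, re.I):
        points += 20
    if re.search(r"\bsolopreneur\b", text, re.I):
        points += 15
    if re.search(r"top 10 (tips|tools)", text, re.I):  # assumed "generic" signal
        points -= 12
    return max(0, min(points, 100))  # cap at 100, floor at 0

Adjusting a weight means changing one integer, which is the whole point of keeping the rules out of the model.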

The top 3 items are analyzed with GPT-4o-mini (cheap, fast, and good enough for editorial analysis), fed by Tavily's deep extraction (extract_depth="advanced").
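Roughly, the enrichment step looks like this (the prompt wording is an assumption; the calls are tavily-python's extract and the OpenAI chat completions API):

import os
from openai import OpenAI
from tavily import TavilyClient

tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyze(item: dict) -> str:
    page = tavily.extract(urls=[item["url"]], extract_depth="advanced")
    raw = page["results"][0]["raw_content"][:8000]  # keep the prompt cheap
    prompt = (
        "You are my tech-intel editor. For the item below, write three "
        "short sections: The Fact / Why It Matters / Opportunity for You "
        "(I'm a solopreneur building SaaS).\n\n" + raw
    )
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content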

Phase 3 β€” Notion Delivery

The briefing gets saved to a Notion database with:

  • Structured headline
  • Agent summary
  • Source, date, relevance, expected impact
  • Child blocks with the 5 formatted sections
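A sketch using the official notion-client package; the property names stand in for my actual database schema and are assumptions here:

import os
from notion_client import Client

notion = Client(auth=os.environ["NOTION_TOKEN"])
DB_ID = os.environ["NOTION_DB_ID"]

def deliver(briefing: dict) -> str:
    page = notion.pages.create(
        parent={"database_id": DB_ID},
        properties={
            "Name": {"title": [{"text": {"content": briefing["headline"]}}]},
            "Source": {"rich_text": [{"text": {"content": briefing["source"]}}]},
        },
        children=[
            # One heading block per formatted section; real blocks also
            # carry paragraph children with the section body.
            {"object": "block", "type": "heading_2",
             "heading_2": {"rich_text": [{"text": {"content": sec["title"]}}]}}
            for sec in briefing["sections"]
        ],
    )
    return page["id"]  # saved to tmp/f3_notion_id_YYYYMMDD.txt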

Phase 4 β€” Telegram Notification

Short Markdown message:

πŸ“° *Morning Briefing: YYYY-MM-DD*

*{headline}*

❓ {question_of_the_day}

β†’ [View in Notion](https://notion.so/{page_id})

What I Learned Building It

1. Phase architecture is non-negotiable

In v4.0, if the LLM failed, everything crashed. In v5.0, each phase leaves a temp file. If Phase 2 fails, Phase 1 is still saved. The next run resumes from Phase 2. This saved me when Tavily rotated its API key at 6 AM.

2. Scoring must be transparent

It's not "magic AI that decides." It's a point system with clear rules I can adjust. If I want MRR to weigh more, I change a number. No model retraining.

3. Editorial format matters more than data volume

An aggregator gives you 50 links. An editor tells you: "This agent framework has 4,000 upvotes and solves exactly the problem you have with your Notion pipeline." The difference is analysis, not collection.

4. Cost is ridiculously low

  • Tavily: free tier (1,000 credits/month)
  • OpenAI GPT-4o-mini: ~$0.15 per briefing
  • Notion API: free
  • Telegram Bot: free

Total: less than $5/month for a daily personalized briefing.

The Skill That Orchestrates It

All of this is documented as a Hermes Agent skill (tavily-intel-pulse). The skill includes:

  • Full Python script (morning_briefing.py)
  • Cron job configuration
  • Notion database schema
  • Scoring system with caps
  • Error handling per phase
  • API key rotation
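For reference, the cron side is a single entry; a sketch assuming a cron that honors CRON_TZ (cronie does; otherwise set the system timezone) and my script path:

# m h dom mon dow  command
CRON_TZ=America/Bogota
0 7 * * * /usr/bin/python3 "$HOME/.hermes/scripts/morning_briefing.py" >> "$HOME/.hermes/scripts/tmp/cron.log" 2>&1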

File structure:

~/.hermes/scripts/
β”œβ”€β”€ morning_briefing.py          # Main script v5.0
β”œβ”€β”€ data/
β”‚   └── dedup_history.json        # URL hash seen (3-day window)
└── tmp/
    β”œβ”€β”€ f1_items_YYYYMMDD.json
    β”œβ”€β”€ f2_enriched_YYYYMMDD.json
    β”œβ”€β”€ f3_notion_id_YYYYMMDD.txt
    └── f4_telegram_sent_YYYYMMDD.txt
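The dedup logic is a hash set with an expiry window; a sketch consistent with the file layout above:

import hashlib
import json
import time
from pathlib import Path

HISTORY = Path.home() / ".hermes" / "scripts" / "data" / "dedup_history.json"
WINDOW = 3 * 24 * 3600  # 3-day window, in seconds

def dedup(items: list[dict]) -> list[dict]:
    seen = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
    now = time.time()
    seen = {h: t for h, t in seen.items() if now - t < WINDOW}  # expire old hashes
    fresh = []
    for item in items:
        h = hashlib.sha256(item["url"].encode()).hexdigest()
        if h not in seen:
            seen[h] = now
            fresh.append(item)
    HISTORY.parent.mkdir(parents=True, exist_ok=True)
    HISTORY.write_text(json.dumps(seen))
    return fresh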

Why This Matters for Solopreneurs

As independent builders, we don't have market intelligence teams. But we make product, pricing, and stack decisions every day. We need context, not just information.

This system gives me context in 2 minutes. It tells me which Product Hunt launch could be a competitor. It flags when an agent framework gains traction. It asks me things like: "Could your current product benefit from an MCP server?"

It doesn't replace thinking. It accelerates it.

Question of the Day

How much time do you spend every morning consuming information you don't act on?
