Michael Tromba

Can AI run a daily newsletter with zero human input?

A fully autonomous AI newsroom that researches, writes, fact-checks, and publishes a daily biotech digest with no human in the loop.

This one started with a conversation at Capital Factory, a co-working space in downtown Austin. I was talking to a biotech investor who wanted a way to get an automated daily digest of the biggest news and deals in the space. I'd built email newsletters before and was getting deep into AI automation at the time, so the question stuck with me — could I build a system that handles the entire editorial pipeline? Research, curation, writing, delivery. No human touching any of it.

That question turned into something much bigger than I expected. What started as a simple automation grew into a ten-stage pipeline that runs an entire newsroom autonomously every morning. It fans out eleven parallel research queries across anchor publications — STAT News, Endpoints, FierceBiotech, Nature — plus targeted sweeps for FDA activity, M&A deals, and R&D breakthroughs. An AI editor performs entity resolution across all the results, deduplicating stories that appeared in multiple sources, then curates the day's top stories while checking the last seven days of coverage to avoid repeating itself. Each curated story gets deep expansion research — targeted follow-up queries running in parallel to gather clinical trial details, analyst reactions, deal terms, whatever the story needs — then passes through a quality gate before moving forward.
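The fan-out and deduplication steps can be sketched in miniature. This is not the production code; `run_query`, the `Story` shape, and the title-key dedup are illustrative stand-ins (a real entity-resolution pass would match on more than a normalized title), but the structure is the same: launch the research queries concurrently, flatten the results, collapse duplicates, and filter against recent coverage.

```python
import asyncio
from dataclasses import dataclass

@dataclass(frozen=True)
class Story:
    title: str
    source: str
    url: str

async def run_query(query: str) -> list[Story]:
    # Placeholder for a real research-provider call.
    await asyncio.sleep(0)
    return []

async def fan_out(queries: list[str]) -> list[Story]:
    # Launch all research queries concurrently and flatten the batches.
    batches = await asyncio.gather(*(run_query(q) for q in queries))
    return [story for batch in batches for story in batch]

def deduplicate(stories: list[Story], recent_titles: set[str]) -> list[Story]:
    # Entity-resolution stand-in: collapse stories sharing a normalized
    # title key, and skip anything covered in the last seven days.
    seen: dict[str, Story] = {}
    for s in stories:
        key = " ".join(s.title.lower().split())
        if key in recent_titles:
            continue
        seen.setdefault(key, s)
    return list(seen.values())
```

The same-key-wins-once rule is what keeps a story that ran in STAT, Endpoints, and FierceBiotech from appearing three times in one digest.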

From there, the system generates a full original article for each story — 800 to 1,800 words — and creates a featured image using Gemini, with OCR validation that rejects any images containing text and retries until it gets a clean one. The articles publish to a media hub at biotechmorning.com with full SEO, RSS, and Google News sitemap support.
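The reject-and-retry loop for images is simple to sketch. Here `generate_image` and `ocr_text` are placeholders for the Gemini call and the OCR pass; the stub OCR pretends the first two renders contain text, just to show the retry behavior.

```python
def generate_image(prompt: str, seed: int) -> bytes:
    # Placeholder for a real Gemini image-generation call.
    return f"image-{seed}".encode()

def ocr_text(image: bytes) -> str:
    # Placeholder for a real OCR pass; this stub pretends the
    # first two renders came back with baked-in text.
    return "BIOTECH" if image in (b"image-0", b"image-1") else ""

def generate_clean_image(prompt: str, max_attempts: int = 5) -> bytes:
    # Regenerate until OCR finds no embedded text, or give up.
    for attempt in range(max_attempts):
        image = generate_image(prompt, seed=attempt)
        if not ocr_text(image).strip():
            return image
    raise RuntimeError(f"no text-free image after {max_attempts} attempts")
```

Capping the attempts matters: without `max_attempts`, a prompt that reliably produces text would stall the whole morning run.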

The trickiest problem was trust — if AI is writing the content, how do you know it's accurate? After each article is generated, the system extracts every specific factual claim, generates independent verification queries, runs them through a separate research provider, evaluates whether each claim is substantiated, and revises the article to correct or strip anything it can't verify. The full trail — every claim, every verification query, every verdict — gets persisted to the database. Then when the newsletter is synthesized from the finished articles, the entire fact-checking process runs again on the newsletter copy, because summarization can introduce errors the originals didn't have. Two layers of independent verification, fully automated.
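The verification loop, reduced to its skeleton: extract claims, verify each one independently, strip what can't be substantiated, and keep the full trail. The claim extractor and the `lookup` callable here are crude stand-ins (the real system uses an LLM and a separate research provider), but the flow is the one described above.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    query: str
    substantiated: bool

def extract_claims(article: str) -> list[str]:
    # Stand-in extractor: treat any sentence containing a number as a
    # specific factual claim. A real system would use an LLM here.
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    return [s for s in sentences if any(c.isdigit() for c in s)]

def verify(claim: str, lookup) -> Verdict:
    # `lookup` stands in for an independent research provider.
    query = f"verify: {claim}"
    return Verdict(claim, query, substantiated=lookup(claim))

def fact_check(article: str, lookup) -> tuple[str, list[Verdict]]:
    # Check every claim; strip sentences that can't be substantiated.
    # The verdicts are returned so the caller can persist the trail.
    verdicts = [verify(c, lookup) for c in extract_claims(article)]
    bad = {v.claim for v in verdicts if not v.substantiated}
    kept = [s.strip() for s in article.split(".")
            if s.strip() and s.strip() not in bad]
    return ". ".join(kept) + ".", verdicts
```

Running the same function a second time on the synthesized newsletter copy gives the two-layer setup: errors introduced by summarization get caught by the same machinery that checked the originals.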

The whole pipeline runs on durable execution — every stage is idempotent and resumable, so if anything crashes mid-run, it picks up exactly where it left off. The system is a template now — swap out "biotech" for any industry and the same pipeline runs. That's what makes it interesting to me beyond this one newsletter.