Grant Discovery Agent

AI-powered funder research for the S.S. Jeremiah O'Brien — a Pivotal IQ pro-bono case study

100K+
Foundations Scanned
90
Prospects Found
$0
Total Cost
2–3 days
Dev Time
99
AI-Gen Tickets

What we built

We built a tool that helps a small nonprofit find the right funders. The client is the S.S. Jeremiah O'Brien — a WWII ship museum in San Francisco, one of the last two seaworthy Liberty Ships from the war. They need grant funding but have no staff to research who might fund them.

The problem is that there are over 100,000 private foundations in the US. Most tools for finding funders either cost thousands per year or rely on keyword searches that return bad results — a search for "maritime" turns up ocean conservation foundations, not organizations that fund ship museums.

What makes it different

We found that the best way to predict whether a foundation will fund you isn't reading their mission statement — it's looking at who they've already funded. Every private foundation files a tax return (Form 990-PF) listing every grant they made that year. If a foundation already gives money to ship museums, veteran memorials, and history museums, they're a much better prospect than one that just mentions "maritime" on their website.

We used AI to read through those grant lists and figure out which recipients are similar to our client. A human doing this work would need months. The AI did it in hours.

How it works, step by step

  1. Gather the data. A Python program downloads foundation records from free government databases — IRS filings, federal grant archives, museum and maritime grant histories. About 9 GB of raw data, all from public sources that anyone can access.
  2. Filter down. Out of 100,000+ foundations, most have no useful information or are clearly irrelevant. After filtering for data quality, we're left with about 3,800 that have enough grant history to analyze.
  3. Find similar ones. We use a technique called embedding — converting text descriptions into numbers that a computer can compare. This finds the ~500 foundations whose profiles are most similar to our client's mission. This step runs on the developer's own computer, no cloud services needed.
  4. AI reads the grant lists. For those 500 foundations, Claude (Anthropic's AI) reads each one's list of past grant recipients and classifies them: is this grantee a maritime org? A veterans group? A museum? This is where AI adds the most value — it has the general knowledge to recognize that "Swords to Plowshares" is a veterans service organization without being told.
  5. Score and rank. Foundations that already fund organizations like ours get ranked highest. About 90 foundations make it through as strong prospects.
  6. Write draft proposals. For the top prospects, the AI identifies what gaps our client should address before applying, then writes tailored draft proposals for each one.
  7. Deliver. The final output is a document the client's board can read — organized by tier, with a summary of each prospect and why they're a good fit.
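The similarity filter in step 3 can be sketched in a few lines. This toy version uses 3-dimensional vectors in place of real Nomic embeddings (768 dimensions in practice), and the function and foundation names are hypothetical, not the project's actual code:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_similar(mission_vec, foundation_vecs, k=500):
    """Rank foundations by similarity to the client's mission embedding
    and keep the k closest -- the cheap, broad filter in step 3."""
    scored = [(cosine(mission_vec, vec), fid) for fid, vec in foundation_vecs.items()]
    scored.sort(reverse=True)
    return [fid for _, fid in scored[:k]]

# Toy vectors stand in for real embeddings.
mission = [1.0, 0.2, 0.0]
foundations = {
    "maritime_museum_fund": [0.9, 0.3, 0.1],   # similar direction -> high score
    "ocean_conservation":   [0.0, 1.0, 0.0],   # different direction -> low score
}
print(top_k_similar(mission, foundations, k=1))  # -> ['maritime_museum_fund']
```

Because this step is pure arithmetic on locally computed vectors, it runs on one machine at no cost, which is exactly why it makes sense as the broad first-pass filter.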

How much it cost

Nothing. Zero dollars.

How it was built

An important detail: this work was done before the client even agreed to a pro-bono engagement. The PM reached out to the nonprofit, got the team on a call together, and facilitated the needs discovery conversation. The developer was in the room — hearing the client's constraints directly, picking up on things like the reality that there was no budget for ongoing costs. After the call, the team debriefed, and that was enough. There wasn't a formal requirements handoff or a feedback loop. The developer just had enough context to start exploring.

The PM's role was being the bridge — having the relationship, getting the right people in the same room, and making sure the technical person heard what mattered straight from the client. No spec, no wireframes. Just enough signal for a developer who knew how to move fast with AI.

The developer used AI tools for all of the coding.

In about 2–3 days of background coding, the system went from nothing to a working tool that processed 100,000 foundations and produced board-ready output, all before word came back on whether the client would even say yes.

What happened when the client declined

The nonprofit ultimately told us they lacked the resources to work with us on a longer engagement. In a traditional consulting model, that would have been a painful outcome — three days of intense developer effort with nothing to show for it, and the kind of burnout and resentment that makes teams reluctant to do pro-bono work again.

But because AI handled the heavy lifting, the team still had energy left. We reformatted the web dashboard into a print-friendly board report and dropped it off — a fully usable deliverable at no cost to the client, even though the engagement never formally started.

And the bigger win: the pipeline isn't locked to one client. It's driven by a config file — swap in a different nonprofit's details and it works for them too. What started as speculative pre-engagement work became a reusable software asset for the consulting firm.
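As an illustration of what config-driven reuse means here, the per-client settings might look something like the sketch below. The field names, values, and the second client are all hypothetical, not the project's actual config format:

```python
from dataclasses import dataclass, field

@dataclass
class ClientConfig:
    """Per-client settings the pipeline reads at startup.
    Field names are illustrative -- the real config file may differ."""
    name: str
    mission: str
    peer_categories: list[str] = field(default_factory=list)

jeremiah_obrien = ClientConfig(
    name="S.S. Jeremiah O'Brien / National Liberty Ship Memorial",
    mission="Preserve and operate a WWII Liberty Ship as a museum, "
            "memorial, and educational institution in San Francisco.",
    peer_categories=["maritime heritage", "military history", "museums"],
)

# Re-targeting the pipeline is just a different config object:
food_bank = ClientConfig(
    name="Example Food Bank",  # hypothetical second client
    mission="Distribute groceries to food-insecure families.",
    peer_categories=["hunger relief", "human services"],
)
print(food_bank.peer_categories[0])  # -> hunger relief
```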


How It Works

A detailed look at the technology, methodology, and results

Project Brief

The Grant Discovery Agent was built in ~2–3 days as a Pivotal IQ pro-bono engagement for the S.S. Jeremiah O'Brien, a WWII Liberty Ship museum in San Francisco. It narrows 100,000+ US private foundations to ~90 high-confidence funding prospects by combining free IRS data, local vector embeddings, and AI-powered analysis. The entire system cost $0 to build and $0 to run.

1. The Problem

The National Liberty Ship Memorial operates the S.S. Jeremiah O'Brien — one of only two seaworthy WWII Liberty Ships. It's simultaneously a ship, a museum, a war memorial, and an educational institution. That multi-domain identity makes grant discovery hard: the organization is relevant to funders across maritime heritage, military history, museum education, and historic preservation, but it doesn't fit neatly into any one category.

There are 102,000+ private foundations in the US, and three compounding difficulties: scale (reviewing one foundation per minute would take roughly 70 days of nonstop work), data fragmentation (the relevant records are spread across 7+ independent sources), and false positives (keyword matching wastes staff time on bad leads).

Key fact: The client had zero budget for tools and zero capacity for ongoing technical support. Every design decision was shaped by this constraint.

2. The Insight — Peer Salience

A foundation's grant history reveals more about its real priorities than its mission statement does.

IRS Form 990-PF grantee lists are a behavioral signal. The key question: does this foundation already fund organizations like ours? "Like ours" is defined across three peer categories: maritime organizations, military and veterans groups, and museums.

A foundation that gave $50K to the USS Intrepid Museum and $30K to the National WWII Museum is a far stronger prospect than one whose mission statement mentions "maritime." The LLM knows "Swords to Plowshares" is a veteran services org without being told — keyword search cannot do this.
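As a sketch of the idea, a peer-salience score can be as simple as the share of a foundation's grantmaking dollars that already go to peer-category recipients. The category labels, data layout, and function name here are illustrative, not the project's actual code:

```python
from collections import Counter

# Hypothetical peer categories for this client; the real pipeline
# derives them from the client's mission.
PEER_CATEGORIES = {"maritime", "military", "museum"}

def peer_salience(classified_grants):
    """Score a foundation by how much of its grantmaking already goes
    to organizations like the client. `classified_grants` is a list of
    (category, amount) pairs produced by the LLM classification step."""
    totals = Counter()
    for category, amount in classified_grants:
        totals[category] += amount
    peer_dollars = sum(totals[c] for c in PEER_CATEGORIES)
    all_dollars = sum(totals.values())
    return peer_dollars / all_dollars if all_dollars else 0.0

# A foundation giving $50K to a maritime museum and $30K to a
# veterans group, out of $100K total, scores 0.8.
grants = [("maritime", 50_000), ("military", 30_000), ("unrelated", 20_000)]
print(round(peer_salience(grants), 2))  # -> 0.8
```

The hard part is not this arithmetic but the classification feeding it, which is exactly where the LLM's general knowledge earns its keep.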

Key fact: This is a domain-specific innovation — applying AI reasoning to a data quality problem, not adapting a known grantwriting practice.

3. The Approach

Retrieval-then-rerank: cheap, broad filtering first, then expensive deep analysis on survivors.

Stage 1 — Local embeddings ($0): Nomic 1.5 running locally computes vector similarity. Narrows thousands to hundreds in minutes.

Stage 2 — LLM analysis (via Max subscription): Claude evaluates the filtered set through five BAML functions:

Function              Model    What It Does
ClassifyPeerSalience  Haiku    Categorizes grantees as maritime/military/museum/unrelated
ScoreGrant            Sonnet   Rates mission fit, eligibility, competitive position, effort/reward
AnalyzeGaps           Sonnet   Identifies blocking/important/nice-to-have gaps
DraftProposal         Sonnet   Generates funder-aligned proposal sections
SynthesizeBriefing    Sonnet   Board-level executive briefing

Scoring is categorical, not numeric — the LLM picks "Direct / Adjacent / Tangential / None" for mission fit. Auditable and tunable.

Key fact: Haiku handles the highest-volume classification (~10x cheaper). Sonnet handles reasoning. All via Max subscription — $0 marginal cost.

4. The $0 Architecture

Category        How
LLM inference   Routed through Max subscription via BAML-to-CLI adapter
Data sources    100% free public data — IRS, Grants.gov, IMLS, NPS
Embeddings      Nomic 1.5 runs locally
Infrastructure  SQLite on one machine
Development     Zero hand-coding — Claude did all implementation
Delivery        Print-friendly webpage, no ongoing hosting

Key fact: The Max subscription isn't just a chat tool — it became the inference backend for a production data pipeline.

5. The Creative Solution

BAML + CLI adapter. BAML defines type-safe LLM function schemas. Its pluggable client architecture let us swap API calls for claude CLI invocations, routing all inference through the Max subscription. Hundreds of dollars in API costs → $0.
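A rough sketch of the adapter idea in Python: wrap the claude CLI in a function so a pipeline can call it like any other inference backend. The real project wires this in through BAML's pluggable client layer, and the exact CLI flags here are an assumption, not verified against the project:

```python
import subprocess

def build_command(prompt: str, model: str) -> list[str]:
    # Assumed CLI shape: `claude -p` prints a completion to stdout,
    # and `--model` selects Haiku (cheap classification) or Sonnet
    # (deeper reasoning).
    return ["claude", "-p", prompt, "--model", model]

def call_claude_cli(prompt: str, model: str = "haiku") -> str:
    """Route one inference call through the local `claude` CLI instead
    of the metered API, so usage bills against the flat subscription.
    (Sketch only -- the real system goes through a BAML client.)"""
    result = subprocess.run(build_command(prompt, model),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(build_command("Classify this grantee: Swords to Plowshares", "haiku"))
```

The point of the pattern is substitution: the rest of the pipeline sees a function that takes a prompt and returns text, and never knows whether an API or a subscription-backed CLI answered it.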

Lovable → Claude → lisa. Lovable scaffolded the React dashboard. Claude built everything functional. Lisa (a DAG-based scheduler for Claude Code) orchestrated 99 tickets across 9 dependency waves.

6. The Result

100,000+ foundations (IRS master file)
  → 3,800 with usable grant data
    → 500 with embedding similarity
      → 180 with mission-relevant peer grantees
        → 90 top prospects
          → 20 strong leads with draft proposals
Key fact: 23,400 lines of code across 213 source files. 8 data integrations, 5 AI functions, 15 API endpoints, 11 dashboard pages. Built in ~2–3 days.

7. The Process

No code was written by hand. The PM facilitated client need discovery. The developer used Lovable for UI scaffold, then directed Claude for everything else. Claude generated all 26 stories and 99 tickets following the RDSPI workflow (Research → Design → Structure → Plan → Implement → Review).

Lisa orchestration: 99 tickets in 9 waves, 2 Claude agents running in parallel. Artifact-driven phase detection, crash recovery, commit serialization.
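Wave-based scheduling of dependent tickets can be sketched as repeatedly "peeling" a dependency graph: each wave is the set of tickets whose prerequisites have all completed. This toy scheduler and its ticket names are illustrative, not lisa's actual implementation:

```python
def waves(deps):
    """Group tickets into dependency waves. `deps` maps each ticket to
    the set of tickets that must finish before it can start. Tickets in
    the same wave have no ordering between them and can run in parallel."""
    done, order = set(), []
    remaining = dict(deps)
    while remaining:
        ready = sorted(t for t, pre in remaining.items() if pre <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        order.append(ready)
        done.update(ready)
        for t in ready:
            remaining.pop(t)
    return order

# Toy ticket graph: schema first, then loader and API in parallel, then UI.
tickets = {
    "schema": set(),
    "loader": {"schema"},
    "api":    {"schema"},
    "ui":     {"api"},
}
print(waves(tickets))  # -> [['schema'], ['api', 'loader'], ['ui']]
```

With 99 tickets and 2 parallel agents, grouping work into waves like this is what keeps the agents busy without letting either one start a ticket whose prerequisites aren't committed yet.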

Key fact: Human complaints drove the stories; Claude translated them into work items. 99 tickets with full RDSPI artifacts = auditable AI development at scale.



Common Questions

Q: Why not just use a paid grant database like Foundation Directory Online?
Paid databases rely on keyword matching, which generates false positives (a search for "maritime" turns up ocean conservation, not ship museums). Peer salience analysis looks at actual grant histories — behavioral data, not descriptions — and finds funders that keyword search misses entirely. And it costs $0.
Q: How accurate are the results?
The scoring is categorical and auditable. The AI picks from constrained categories (Direct / Adjacent / Tangential / None) for mission fit, rather than generating opaque numeric scores. Every prospect in the final briefing includes a rationale you can verify.
Q: Does this really cost nothing to run?
Yes. All data sources are free public records. The AI processing runs through an existing subscription (not per-query billing). The system runs on a single laptop — no cloud hosting or servers.
Q: Can this work for our organization?
The pipeline is config-driven — swap in a different nonprofit's mission and peer categories, and it produces a tailored analysis. The architecture was designed for reuse from the start.
Q: What do we get at the end?
A board-ready briefing document with prioritized prospects organized by tier, each with a summary of why they're a fit, key contacts, and recommended next steps. No ongoing software to maintain.

Quick Reference

100K+
Foundation Universe
3,800
After Quality Filter
500
Embedding Match
180
Peer Classified
90
Top Prospects
20
With Drafts
Layer           Technology
Data pipeline   Python 3.13, pandas, lxml
Data sources    IRS (EO BMF, SOI, 990-PF), Grants.gov, IMLS, NPS
Integration     CSV file (34 columns)
Embeddings      Nomic 1.5 (local, $0)
LLM framework   BAML (structured extraction)
LLM inference   Claude Sonnet + Haiku via CLI → Max subscription
Database        SQLite (Drizzle ORM)
API server      Hono (Node.js)
Frontend        React 18, shadcn/ui, Recharts
Orchestration   lisa (Rust/Zellij DAG scheduler)

Detailed Reference

In-depth coverage of each aspect of the project

For a deeper dive into any topic from the case study or methodology, see the sections below.

Architecture Diagram