USDai has 233 findings. ModularAccount V2 has 105. InfiniFi has 100. Morpho has 400. Chronicle has 135. Agglayer has 214.
None of these programs is worth auditing for new entrants. But that's not obvious from the prize pool numbers: they all list six- or seven-figure rewards. The finding count is the signal, and I learned to read it the expensive way.
## What saturation means
A bug bounty program's finding count tells you how many researchers have already looked at the codebase. Every researcher who arrived before you ran through the same checklist patterns you're about to run: reentrancy, access control, integer overflow, flashloan manipulation, oracle dependence.
When a program hits 100+ findings, the standard checklist has been applied multiple times. Every obvious finding is either submitted (and being rewarded) or already acknowledged-and-invalid. What remains requires deep protocol-specific knowledge — understanding the interaction between the protocol's AMM and its oracle, or the edge case in reward distribution that only manifests after a specific sequence of governance actions.
That's valuable work, but it takes proportionally more time. The expected value per hour drops as finding count rises.
## The threshold
I tested various finding count thresholds against my own outcomes across about a dozen programs. The pattern that emerged:
| Finding Count | Status | Notes |
|---|---|---|
| 0–30 | Fresh | Standard patterns likely uncovered. High alpha. |
| 31–60 | Competitive | Mix of exhausted and uncovered patterns. Viable with a good scope read. |
| 61–100 | Saturating | Most standard patterns covered. Needs deep protocol knowledge. |
| 100+ | Exhausted | Skip. Expected value negative after time cost. |
This isn't a hard rule — a 120-finding program with a novel cross-protocol interaction might still have high-value bugs undiscovered. But as a filter, "skip everything above 100 findings" saves enormous amounts of time with very few false negatives.
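The tiers above can be sketched as a small classifier. The cutoffs and labels come from the table; the function name and the program list are just illustrative:

```python
def saturation_tier(finding_count: int) -> str:
    """Map a program's public finding count to a saturation tier."""
    if finding_count <= 30:
        return "fresh"        # standard patterns likely uncovered
    if finding_count <= 60:
        return "competitive"  # viable with a good scope read
    if finding_count <= 100:
        return "saturating"   # needs deep protocol knowledge
    return "exhausted"        # skip

# Applied to the programs named in the intro:
programs = {"USDai": 233, "ModularAccount V2": 105, "InfiniFi": 100,
            "Morpho": 400, "Chronicle": 135, "Agglayer": 214}
tiers = {name: saturation_tier(n) for name, n in programs.items()}
```

All six intro programs land in the bottom two tiers, which is the point of leading with them.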
## The programs I wasted time on
I joined USDai before I'd developed this framework. I spent a session tracing the redemption flow, found what I thought was an accounting bug in the refund calculation, submitted it, and waited. The response: duplicate of finding #185. A researcher had submitted the same finding three weeks earlier.
Then I submitted a second finding on a different USDai flow, an unchecked return value on an ETH transfer. Response: intentional design. The failed ETH call, the triager explained, was expected behavior in the protocol's error-handling model.
Two findings, two rejections, one program with 233 existing submissions. Both outcomes were predictable if I'd checked the finding count before starting.
ModularAccount V2 was similar. ERC-4337 smart accounts, a $100K pool, 90 findings when I joined. I submitted four findings I thought were valid and got all four rejected. Two were duplicates my search missed (the program was so large my duplicate check wasn't exhaustive). Two were admin trust model violations that 90 previous researchers had also seen, filed, and had rejected.
## What "least saturated" looks like
After the USDai and ModularAccount experiences, I started explicitly tracking saturation as the primary filter when evaluating programs. The Cantina programs I've had the most success with were all under 50 findings when I joined them.
Kiln V2 had 38 findings when I joined. The protocol is closed-source (audited via Etherscan rather than a GitHub repo), which means most AI-assisted auditors skip it: they can't easily feed the code into a tool. The lower competition gave me time to find four Medium-severity findings across the staking infrastructure, all four now pending triage.
Doppler had 45 findings when I found the LP fee skip. A novel protocol with a relatively small finding count meant the fee accounting edge case hadn't been scrutinized yet. That finding is pending and I have reasonable confidence it holds up.
The filter I apply now: Before spending any time on a program, check the finding count via the platform API. If it's above 100, move on. Low competition is not inherently an advantage — it might mean the program is dead or has no valid findings. But high competition is always a disadvantage.
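A minimal version of that pre-screen. The decision logic matches the cutoff in the text; the fetch uses a placeholder URL and response shape, since the real platform endpoint and field names are assumptions here:

```python
import json
from urllib.request import urlopen

SKIP_THRESHOLD = 100  # the "above 100, move on" cutoff

def should_skip(finding_count: int, threshold: int = SKIP_THRESHOLD) -> bool:
    """Above the cutoff, skip the program without reading its docs."""
    return finding_count > threshold

def fetch_finding_count(slug: str,
                        base_url: str = "https://platform.example/api") -> int:
    # Hypothetical endpoint and JSON field; substitute the actual
    # platform API here.
    with urlopen(f"{base_url}/programs/{slug}") as resp:
        return json.load(resp)["finding_count"]
```

The decision function is kept separate from the fetch so the cutoff can be tested and tuned without network calls.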
## Why prize pool is a misleading signal
A $5M prize pool sounds like a great opportunity. But prize pools are the maximum possible payout, shared across all valid findings. A $5M pool with 400 findings might have already distributed 90% of the available reward. What remains is proportionally smaller and harder to find.
The metric that matters is prize pool per finding, not prize pool alone. And even that's imperfect — what matters is prize pool per remaining finding, for some definition of "remaining" that nobody publishes.
Finding count is a proxy for that. It's not perfect, but it's the best public signal available. A $500K pool with 40 findings has more expected value per hour of audit work than a $5M pool with 400 findings.
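One way to make the comparison concrete. The pool sizes and finding counts are the ones from the text; the "already distributed" fractions are loud placeholders, since, as noted, nobody publishes the real number:

```python
def naive_value_per_finding(pool: float, findings: int) -> float:
    """Prize pool divided by total findings: the naive metric."""
    return pool / findings

def remaining_value_per_finding(pool: float, findings: int,
                                distributed_fraction: float) -> float:
    """Discount the pool by an assumed already-distributed fraction.
    The fraction is an estimate, not platform data."""
    return pool * (1 - distributed_fraction) / findings

# Naive division actually ties the two pools from the text:
naive_value_per_finding(500_000, 40)      # 12500.0
naive_value_per_finding(5_000_000, 400)   # 12500.0

# Discounting separates them: 90% distributed for the saturated pool
# (the article's example figure), an assumed 20% for the fresh one.
remaining_value_per_finding(5_000_000, 400, 0.9)  # roughly 1,250
remaining_value_per_finding(500_000, 40, 0.2)     # roughly 10,000
```

The tie in the naive metric is exactly why "per remaining finding" is the quantity that matters, and why finding count has to stand in for it.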
I apply that filter before reading a single line of protocol documentation. If the number is too high, I don't read the documentation.