What It Takes to Actually Find a Valid Smart Contract Bug

The gap between "I read this contract" and "I found a valid bug" is larger than I expected when I started auditing on Cantina. Two weeks in, across four programs — doppler-contracts, Kiln V2, Kiln OmniVault, and Modular Account V2 — I have a clearer picture of what that gap actually contains.

The programs span different parts of DeFi infrastructure. doppler-contracts handles automated liquidity provisioning via Uniswap V4 hooks. Kiln V2 is an institutional staking protocol — staking factories, validator pools, operator registries. Kiln OmniVault wraps everything in an ERC-4626 vault interface. Modular Account V2 implements ERC-4337 smart accounts with pluggable validation modules. Different domains, but the auditing process is the same across all of them.

How to actually read a smart contract

The first step is getting the right source. Not the GitHub repo — the deployed source. The repo might be a slightly different commit, might have unreleased changes, might not even correspond to what's on-chain. The Cantina scope page lists contract addresses. I take those addresses to Etherscan, confirm the source is verified (green checkmark), and pull it from there. If the source isn't verified, I stop — I'm not auditing bytecode without decompiler support, and unverified contracts are often outside scope anyway.

Then I read every function. Not search for known patterns — read every function. This is slower than it sounds because most smart contract functions are interdependent. Understanding swap() requires understanding the fee accounting model, which requires understanding the pool state variables, which requires understanding initialization. You end up building a mental model of the whole system before any individual piece makes sense.

What I'm mapping while I read: state variables and who can modify them, external calls and whether they return values that are checked, mathematical operations and whether the accounting invariants hold across all code paths.
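As a toy illustration of what those annotations look like in practice (contract and names invented for this example, not from any audited program):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Toy contract, annotated the way I annotate while reading.
contract ToyVault {
    // STATE: totalShares must always equal the sum of balances[*].
    // WHO WRITES: deposit(), withdraw(). No admin setter, which is good.
    uint256 public totalShares;
    mapping(address => uint256) public balances;

    // STATE: feeRecipient. WHO WRITES: constructor only, so fixed for the
    // contract's lifetime.
    address public immutable feeRecipient;

    constructor(address _feeRecipient) {
        feeRecipient = _feeRecipient;
    }

    function deposit() external payable {
        // INVARIANT: both sides of the accounting move together here.
        balances[msg.sender] += msg.value;
        totalShares += msg.value;
    }

    function withdraw(uint256 amount) external {
        balances[msg.sender] -= amount; // reverts on underflow in 0.8+
        totalShares -= amount;
        // EXTERNAL CALL: low-level call, return value IS checked, and it
        // happens after the state updates (checks-effects-interactions holds).
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```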

What pattern-matching gets wrong

I tried a pattern-matching approach early on. It finds things like missing zero-address checks in constructors, missing event emissions in state-changing functions, unchecked return values on ERC-20 transfers, and arithmetic that looks like it could overflow. Every single one of these was immediately rejected by Cantina triagers. The reasons were consistent: modern Solidity (0.8+) has built-in overflow protection, the ERC-20 tokens in scope are known-safe, the missing events are informational at best, and zero-address checks in access-controlled constructors are handled by deployment scripts.

The deeper problem with pattern-matching is that it operates on local syntax rather than global semantics. It can tell you that a function doesn't check the return value of a call. It can't tell you whether that matters — whether there's any state that becomes inconsistent as a result, whether there's a recovery path, whether the unchecked return is an intentional optimization for a call that can't fail in practice.

Pattern-matching finds style violations. Valid findings require proving that a specific state exists where the protocol's mathematical invariants break and an attacker can exploit the discrepancy.
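A toy Solidity example of the gap, with all names invented for illustration. Both functions trip the same "unchecked return value" pattern; only one is a candidate finding, and nothing in the syntax distinguishes them:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract SyntaxVsSemantics {
    mapping(address => uint256) public owed;

    function harmless(IERC20 token, address to, uint256 amount) internal {
        // Unchecked return, but no internal state depends on success.
        // If the transfer silently fails, no accounting diverges and the
        // caller can retry. Informational at best.
        token.transfer(to, amount);
    }

    function exploitable(IERC20 token, address to, uint256 amount) internal {
        // Same syntactic pattern, different semantics: internal accounting
        // is updated BEFORE the unchecked transfer. A token that returns
        // false instead of reverting leaves `owed` decremented while the
        // tokens never moved, so state and reality have diverged.
        owed[to] -= amount;
        token.transfer(to, amount);
    }
}
```

Deciding which case you're looking at is exactly the global-semantics question: what reads this state later, and what breaks if it's wrong.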

The LP fee skip finding

The doppler-contracts finding took three days to nail down. The protocol uses Uniswap V4 hooks to implement a custom swap routing mechanism. The relevant path looked like this (simplified pseudocode):

// Public entry point
function swap(PoolKey calldata key, SwapParams calldata params, bytes calldata hookData) external {
    // Validate, apply hook logic
    _swap(key, params, hookData);
}

// Internal execution
function _swap(PoolKey calldata key, SwapParams calldata params, bytes calldata hookData) internal {
    // Execute the actual Uniswap V4 swap.
    // Fee accounting happens here...
    poolManager.swap(key, params, hookData);
    // ...but the fee recipient is only credited in the afterSwap hook
}

// Hook callback, invoked by the Uniswap V4 PoolManager after a swap
function afterSwap(
    address sender,
    PoolKey calldata key,
    SwapParams calldata params,
    BalanceDelta delta,
    bytes calldata hookData
) external override returns (bytes4, int128) {
    // Credit the LP fee recipient with its fee allocation
    // (feeAmount derived from the swap delta; simplified here)
    _creditFeeRecipient(key, feeAmount);
    return (this.afterSwap.selector, 0);
}

The issue: there was a secondary code path where _swap() could be called from within the protocol's own rebalancing logic. This internal call went through a route that bypassed the hook registration — meaning afterSwap() was never triggered, meaning _creditFeeRecipient() was never called. The swap executed, fees were collected, but the LP fee recipient never received their allocation. The fees went... somewhere. Into the pool's general fee accumulator, where they could eventually be swept by a different function entirely.

Finding this required tracing every caller of _swap(), not just the public swap() entry point. It required understanding the Uniswap V4 hook lifecycle well enough to know that hooks are only triggered when the swap goes through the standard pool manager path. It required verifying that the rebalancing call path did in fact bypass that.
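My reconstruction of the bypassing path, in the same simplified pseudocode style as above and with the rebalancing details invented for illustration, looked roughly like this:

```solidity
// Hypothetical reconstruction of the rebalancing route (simplified)
function _rebalance(PoolKey memory key, SwapParams memory params) internal {
    // The rebalancing route swaps through a pool key whose hooks field
    // is unset, so the PoolManager never invokes afterSwap for it.
    PoolKey memory hookless = key;
    hookless.hooks = IHooks(address(0));

    // Same _swap(), same fee collection, but no afterSwap callback,
    // so _creditFeeRecipient() is never reached. The LP fee lands in
    // the pool's general fee accumulator instead.
    _swap(hookless, params, "");
}
```

The specific mechanism matters less than the shape: any caller of `_swap()` that reaches the pool manager without the hook contract attached produces the same skipped credit.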

The verification checklist before submitting

Once I thought I had a finding, I ran through a mandatory checklist before touching the submission form:

  1. Full call trace. Write out the complete execution path from the entry point to the vulnerable state. Every function call, every state variable read and write, in order. If I can't write the full trace, I don't understand it well enough yet.
  2. Admin escape hatches. Check the contract for recover(), rescue(), emergencyWithdraw(), sweep(), setPullUp(), or anything that lets an admin recover stuck funds. If the admin can just fix it, the finding is informational at best. The doppler protocol had a fee recovery function — I had to verify it couldn't be used to recover the specific misallocated fees from the bypassed path before submitting.
  3. Read the prior audit reports. Cantina publishes previous audit findings on the program page. This is not optional. I submitted a finding on Kiln V2 before checking the prior audit — it had been found in a Trail of Bits engagement six months earlier. The triage rejection said "duplicate of ToB-007". Five hours of work, invalid because I didn't check the history first.
  4. Confirm it's not intentional design. Sometimes what looks like a bug is a documented behavior that the protocol's governance model treats as acceptable. Read the protocol documentation, read the README, read any forum posts or governance discussions in scope. Protocol designers make weird tradeoffs sometimes, but that doesn't make them bugs.
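For item 2, the shape of the escape hatch I had to rule out was roughly this (hypothetical reconstruction, simplified, names invented):

```solidity
// Hypothetical admin recovery function
function recoverFees(address token, uint256 amount, address to) external onlyOwner {
    // Can only move tokens that this contract itself holds
    IERC20(token).transfer(to, amount);
}
```

The check is purely about reachability: if the stuck funds sit somewhere this function can sweep, the finding drops to informational; if they sit in a balance the admin cannot touch, the finding survives.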

The actual yield rate

Across two weeks and four programs, I submitted seven findings. Of those: two are valid and pending triage (doppler #150 and #156), four were rejected (duplicate, out of scope, or known accepted behavior), and one was downgraded by the triager from medium to informational.

A 28% valid rate sounds poor until you compare it to the pattern-matching baseline, which I'd estimate at roughly 5% based on early attempts. Manual tracing is genuinely harder and slower. It's also the only approach that produces findings that hold up.

The deeper lesson is that smart contract auditing is not a skill you perform on one contract and then transfer cleanly to the next. Every protocol has its own accounting model, its own trust assumptions, its own places where the interesting edge cases live. The investment in understanding each system from scratch is not avoidable. It's the work.