The Admin Trust Problem: Why Most DeFi Access Control Findings Are Invalid

I spent three weeks auditing DeFi protocols and had finding after finding rejected. The rejections weren't for sloppy work — I had traced execution paths, identified missing checks, and written detailed impact analyses. The pattern in the rejection messages was always the same: "Admin trust model."

It took me longer than it should have to understand what that phrase actually meant. Once I did, I realized I'd been thinking about smart contract security wrong. Here's what I learned.

The counterintuitive rule

In traditional software security, missing access controls are almost always valid findings. An API endpoint that lets any user delete another user's data — that's a bug. Full stop.

DeFi protocols operate under a different security model. Most of them are built with an explicit assumption: the admin is trusted. This isn't an oversight or a lazy design choice. It's a deliberate tradeoff between decentralization ideals and operational reality.

When a protocol's documentation says "owner can update fee parameters" or "admin can pause the contract," they're not exposing a vulnerability. They're describing a trust relationship that users accept when they deposit funds. Auditors who report these admin capabilities as bugs are filing noise.

Here's where it gets subtle: the same missing access control can be a valid critical finding in one protocol and a non-finding in another, depending entirely on the trust model the protocol has documented and deployed.

What gets rejected

Let me show you the patterns I kept hitting. Each of these looked like a finding. None of them were.

Pattern 1: The missing setter

contract StakingPool {
    address public token;
    address public treasury;

    constructor(address _treasury) {
        treasury = _treasury;
    }

    // No setTreasury() function
    function _collectFees(uint256 amount) internal {
        IERC20(token).transfer(treasury, amount);
    }
}

My finding: "Treasury address is set at deployment and cannot be updated. If the treasury address is compromised or needs migration, fees are permanently misdirected."

Rejected. Why? Because immutability is often intentional. The protocol team explicitly chose not to include a setter to prevent admin rug pulls. A missing setter isn't a stuck-funds vulnerability — it's a design choice that makes the protocol more trustless. The correct framing would have been to ask: did they intend this to be immutable? If yes, it's not a bug.
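When immutability is the intent, Solidity's `immutable` keyword makes the choice explicit in the bytecode rather than leaving it to be inferred from a missing setter. A minimal sketch of how that version might look (contract name and constructor shape are my own, not from the protocol):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract StakingPoolImmutable {
    // `immutable` signals on-chain that these can never change:
    // there is no storage slot an admin could overwrite.
    address public immutable treasury;
    IERC20 public immutable token;

    constructor(address _treasury, IERC20 _token) {
        treasury = _treasury;
        token = _token;
    }

    function _collectFees(uint256 amount) internal {
        token.transfer(treasury, amount);
    }
}
```

An auditor who sees `immutable` (rather than a plain storage variable that merely lacks a setter) can be far more confident the omission is deliberate, which is exactly the question the finding should have asked.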

Pattern 2: Missing factory revocation

contract Nexus {
    mapping(address => bool) public authorizedFactories;

    function addFactory(address factory) external onlyOwner {
        authorizedFactories[factory] = true;
    }

    // No removeFactory() function
    function deployPool(bytes calldata params) external {
        require(authorizedFactories[msg.sender], "Unauthorized");
        // ...
    }
}

My finding: "Once a factory is authorized, it cannot be revoked. A compromised factory retains permanent pool deployment rights."

Rejected. The admin can already deploy arbitrary pools directly. The factory is a convenience wrapper operating under the same trust level as the admin. Revoking factory access would require the admin to want to do so — and if the admin is trusted, they can address factory compromise through other means (like pausing the protocol). This finding adds no new attack surface beyond what the admin already has.
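The triager's point can be made concrete: if the owner can pass the same gate the factory passes, an unrevocable factory is a strict subset of existing admin power. A hypothetical sketch (the `owner` plumbing is assumed, not from the original snippet):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Nexus {
    address public owner;
    mapping(address => bool) public authorizedFactories;

    modifier onlyOwner() {
        require(msg.sender == owner, "Not owner");
        _;
    }

    function addFactory(address factory) external onlyOwner {
        authorizedFactories[factory] = true;
    }

    function deployPool(bytes calldata params) external {
        // The owner can trivially satisfy this check by authorizing any
        // address they control, so a permanently-authorized factory grants
        // no capability the trusted admin does not already hold.
        require(
            authorizedFactories[msg.sender] || msg.sender == owner,
            "Unauthorized"
        );
        // ...
    }
}
```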

Pattern 3: Emergency withdrawal without access control

contract VaultConnector {
    address public asset;

    function forceWithdraw(uint256 amount) external {
        // Withdraw from underlying protocol regardless of state
        _withdrawFromProtocol(amount);
        IERC20(asset).transfer(msg.sender, amount);
    }
}

This one I was more confident about. No access control on a withdrawal function looks terrible. But the triager's response: "This is an emergency mechanism for compliance-related asset freezes. Anyone should be able to withdraw their own proportional share."

The function was designed to handle regulatory edge cases — situations where the connected protocol might freeze assets, and users need a direct withdrawal path that bypasses normal accounting. The "anyone can call it" was deliberate. I should have traced the economic impact (could a user withdraw more than their share?) rather than just flagging the missing modifier.
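The question worth tracing, then, is not the missing modifier but the accounting: does `forceWithdraw` cap a caller at their own proportional share? A hedged sketch of what that cap might look like (the `shares`/`totalShares` bookkeeping is my assumption; the underlying-protocol withdrawal is elided):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20 {
    function balanceOf(address) external view returns (uint256);
    function transfer(address, uint256) external returns (bool);
}

contract VaultConnector {
    IERC20 public asset;
    mapping(address => uint256) public shares;
    uint256 public totalShares;

    function forceWithdraw(uint256 amount) external {
        // Cap the caller at the slice of held assets their shares
        // entitle them to; this is what makes "anyone can call it" safe.
        uint256 held = asset.balanceOf(address(this));
        uint256 maxOut = held * shares[msg.sender] / totalShares;
        require(amount <= maxOut, "Exceeds proportional share");

        // Burn shares pro rata before transferring
        // (checks-effects-interactions).
        uint256 sharesToBurn = totalShares * amount / held;
        shares[msg.sender] -= sharesToBurn;
        totalShares -= sharesToBurn;

        asset.transfer(msg.sender, amount);
    }
}
```

If a check like this is absent and a caller can pull out more than their share, the finding survives — but the write-up must show that arithmetic, not the missing modifier.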

The actual signal: economic impact

The rejections shared a common thread. In each case, I had identified a capability that existed — an admin could do something, or anyone could call something — but I hadn't demonstrated that this capability could be used to extract value from other users without their consent.

The real question in DeFi security isn't "who can call this function?" It's "can someone use this function to take money that isn't theirs?"

The correct framing: Admin trust model findings are only valid when the admin's capabilities exceed what users should reasonably expect, or when an unprivileged user can leverage a function to steal from other users. Missing access control is noise unless you can trace a path from the missing check to economic harm.

What does hold up

Once I understood the admin trust model, I got better at recognizing findings that survive it. Here's a table of the patterns that matter:

| Pattern | Verdict | Why |
| --- | --- | --- |
| Admin can set fee to 100% | Usually Invalid | Users trust admin with fee param. Covered by front-running disclosure. |
| Anyone can call liquidate(user) before threshold | Valid | Unprivileged user steals collateral from healthy position. Real economic harm. |
| Admin can drain all funds immediately | Context-dependent | Valid if protocol markets itself as non-custodial. Invalid if admin custody is documented. |
| Missing reentrancy guard on withdraw() | Valid | Any user can exploit, not just admin. Classic Checks-Effects-Interactions violation. |
| No event for admin parameter change | Invalid | Missing events are informational. No economic impact. |
| Integer overflow in fee calculation | Valid | Can redirect user funds regardless of admin intent. Math bugs survive trust model. |
| Whitelist check missing on mint | Context-dependent | Check audit history. May be intentional design (see: CapyFi Coinspect CAPY-02). |
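The "Valid" rows share a shape: an unprivileged caller plus a missing check that reaches other users' funds. The premature-liquidation row, for example, reduces to a single health check that must gate the call. A simplified sketch (the health-factor math and seizure logic are elided):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

abstract contract LendingPool {
    // Health factor of 1.0 in 18-decimal fixed point.
    uint256 public constant LIQUIDATION_THRESHOLD = 1e18;

    // Fixed-point health factor (collateral value / debt); elided here.
    function healthFactor(address user) public view virtual returns (uint256);

    function liquidate(address user) external {
        // Without this require, any caller could seize collateral from a
        // perfectly healthy position: unprivileged user, traceable harm.
        require(healthFactor(user) < LIQUIDATION_THRESHOLD, "Position healthy");
        // ... seize collateral, repay debt ...
    }
}
```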

The audit report shortcut

The fastest way to avoid wasted work is to read the protocol's existing audit reports before writing a single finding.

I learned this after spending two hours documenting what I thought was a whitelist bypass vulnerability in a lending protocol. Then I downloaded their audit PDF and found it on page 7: "The access restriction on mint is an intentional design decision that dictates who can supply liquidity." Two auditors, six months prior, had flagged it, been told it was intentional, and documented the decision. I was the third person to "find" it.

Protocol with 2+ audits = low marginal alpha. The obvious patterns — access control, zero-address validation, event emissions — have all been seen. If they survived, they survived intentionally. Your time is better spent on protocol-specific logic: the interaction between two modules, the accounting invariant that only holds under specific conditions, the edge case in reward distribution that nobody tested.

The five-question checklist

Before writing up any access control finding, I now run through these:

  1. Can a non-admin user exploit this? If only the admin can trigger the behavior, and the protocol documents admin trust, stop here.
  2. What's the maximum economic impact? If you can't put a dollar figure on it, you don't understand the impact yet.
  3. Is there an escape hatch? Check for recover(), rescue(), emergencyWithdraw(), sweep(). These signal that "stuck funds" is an intended non-issue.
  4. Is this in the audit history? Download every audit PDF. If it was acknowledged-and-won't-fix, submitting it again is spam.
  5. Does the protocol documentation contradict this? "Admin can update parameters" in the docs is not a bug report. It's a feature description.

The finding that survives: Unprivileged user + missing check + traceable path to other users' funds + not in audit history + not documented as intentional design. All five conditions must hold.

Where the real bugs live

After internalizing all of this, my focus shifted. Access control findings are a red herring most of the time. The findings that hold up tend to be in less obvious places:

Cross-module interactions. When Protocol A's accounting assumptions don't match Protocol B's behavior, you get state divergence. No individual module has a bug — the bug is in the composition. These are harder to find and rarely appear in audit checklists.

Reward distribution edge cases. LP fee calculations, reward accrual, and share-based accounting all have cliff behaviors at boundary conditions. What happens when the first depositor withdraws before any yield? What happens at zero liquidity? Auditors often test the happy path and miss the boundary.
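The first-depositor question has a concrete shape in share-based vaults. A sketch of the usual exchange-rate accounting (names hypothetical) shows exactly where the boundary lives:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

abstract contract ShareVault {
    uint256 public totalShares;
    uint256 public totalAssets; // grows as yield accrues

    function sharesFor(uint256 amount) public view returns (uint256) {
        // Boundary condition: the very first deposit has no exchange rate
        // yet. Mispricing this branch enables the classic share-inflation
        // attack, where an early depositor donates assets to skew the
        // rate against everyone who deposits after them.
        if (totalShares == 0) return amount;
        return amount * totalShares / totalAssets;
    }
}
```

What happens when `totalAssets` is donated to without minting shares, or when the last withdrawal leaves `totalShares == 0` with assets remaining, is precisely the kind of boundary the happy path never exercises.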

Incorrect assumptions about external protocols. A DeFi protocol that integrates Aave assumes Aave's reserve factor is bounded. A protocol integrating Uniswap V3 assumes tick math won't overflow at extreme prices. These assumptions break. The bugs are real but require deep knowledge of the external system's behavior.
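One defensive pattern is to assert the external assumption at the call site rather than trust it silently. The interface and bound below are hypothetical, purely to illustrate the shape:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical external protocol exposing a parameter our accounting
// assumes is bounded.
interface IExternalPool {
    function reserveFactorBps() external view returns (uint256);
}

contract Integrator {
    IExternalPool public pool;
    uint256 public constant MAX_RESERVE_FACTOR_BPS = 5_000; // assumed bound

    function _checkedReserveFactor() internal view returns (uint256) {
        uint256 rf = pool.reserveFactorBps();
        // Fail loudly if the external protocol violates the invariant our
        // math depends on, instead of letting state silently diverge.
        require(rf <= MAX_RESERVE_FACTOR_BPS, "External assumption broken");
        return rf;
    }
}
```

A finding that shows the assumption can actually break (governance can raise the parameter, tick math can reach the extreme) is the real bug; the missing `require` is just where it lands.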

Precision loss with economic consequences. Floor-vs-ceiling rounding in fee calculations can silently accumulate value on one side. The critical word is "with economic consequences" — rounding that favors the protocol over users at scale is a bug. Rounding that has zero accumulated effect is not.
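Rounding direction is auditable line by line. A sketch of the two directions for a basis-point fee (library name is my own):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library FeeMath {
    // Floor division: rounds the fee down, favoring the user.
    function feeDown(uint256 amount, uint256 feeBps)
        internal pure returns (uint256)
    {
        return amount * feeBps / 10_000;
    }

    // Ceiling division: rounds the fee up, favoring the protocol.
    function feeUp(uint256 amount, uint256 feeBps)
        internal pure returns (uint256)
    {
        return (amount * feeBps + 10_000 - 1) / 10_000;
    }
}
```

Neither direction is wrong in isolation; the finding is when the chosen direction lets one side accumulate dust at scale, or when deposits round one way and withdrawals the other.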

The bottom line

The admin trust model isn't a loophole that protocols hide behind. It's a fundamental architectural decision about what security properties a protocol offers its users. Understanding it doesn't make you less effective as an auditor — it makes you faster, because you stop chasing patterns that won't pay out and focus on the logic that actually matters.

The best DeFi security researchers I've observed submit fewer findings with higher hit rates. Volume without calibration is noise. Having built that calibration the hard way, I can say it's the only approach that makes sense.