We’re launching V12, our autonomous Solidity auditor. Unlike existing tools, V12 consistently finds real bugs: high- and critical-severity vulnerabilities. It does so by combining LLMs with traditional static analysis techniques. We tested V12 in live Zellic audits, in audit competitions, and in the wild, producing 39 findings. Meanwhile, V12’s performance saturates our internal benchmark suite, which is built from real, impactful bugs we found in past audits.
While not a substitute for audits, V12’s current bug-finding ability matches or exceeds that of most junior auditors and some auditing firms. Throughout this post, we’ll explore examples of bugs that V12 found. Many of these bugs were found in live audit competitions or were missed by previous auditors. V12 currently supports Solidity, but we will add support for additional languages and L1s in the future, including Rust on Solana.
Beyond a powerful bug-finding engine, V12 includes a fully featured, intuitive, and familiar frontend. In this interface, you can launch audits and review discovered bugs, with a workflow similar to Linear or GitHub Issues. You can also export selected issues as a Markdown, JSON, or PDF report. This makes it easy for teams to adopt V12 into existing workflows or scan third-party projects they’re planning to integrate.
V12 is already trusted by the best teams in the space: our design partners include LayerZero, Starkware, Axiom, Avantis, Initia, Alkimiya, and Succinct. Today, we’re launching a closed beta for V12—which you can access here↗. After this initial phase, we will fully release V12 to the public for free.
Why V12?
We founded Zellic because audits sucked. In 2021, audits were expensive and slow. Major firms constantly missed obvious, surface-level bugs, like missing access control or reentrancy. Our mission was simple: deliver actually good audits—better, faster, cheaper, no missed bugs.
While we’ve made good on that mission at Zellic (and recently, Code4rena↗ and Zenith↗), there are still no good solutions for teams who need small, quick reviews. This includes teams seeking continuous security—an audit for every pull request—or teams shipping small or incremental changes. There are also no good solutions for teams evaluating third-party contracts (e.g. tokens) for potential integration.
Earlier this year, we noticed some audit providers now underperform frontier LLMs. In general, low-quality auditors (1) suck and miss obvious bugs, (2) have unacceptable turnaround times, or (3) lack a streamlined, consistent customer experience. Meanwhile, these providers often charge $1,000s or $10,000s for small, simple reviews.
LLMs excel at finding surface-level coding mistakes, but they struggle to find deeper vulnerabilities like protocol design or business logic errors. Based on Zellic’s internal statistics from over 1,000 audits, roughly 70% of all bugs are coding mistakes. Drilling down to just critical and high-severity vulnerabilities, the proportion remains similar. Therefore, we hypothesized that it would be possible to build an AI auditing tool that outperforms low-quality audit firms on finding simple but important bugs, while acknowledging that it will never be able to find all bugs or outperform the best providers.
In short: teams want a consistent, good security provider that reliably finds important, straightforward bugs. They want some assurance on short notice, even though it will not be perfect or at the level of a high-quality audit.
This is our problem statement for V12. We don’t expect V12 to be perfect, but we want it to be at least as good as the worst auditing firms. We want teams to have a cheap, self-serve experience that makes security feel abundant and constantly accessible, while recognizing that it doesn’t replace a proper audit by a high-quality provider.
High-quality audits are still necessary. AI cannot find all bugs and the best humans still far outperform even the best AI systems. In crypto, even a single vulnerability can lead to a catastrophic, billion-dollar hack. Thus, teams should still have their code professionally audited by trusted providers. We just want to help teams—especially bootstrapped ones—reduce reliance on low-quality audits.
We will integrate V12 in:
- A free, standalone app↗ that anyone can use
- Zellic’s and Zenith’s audits, as an additional layer of quality assurance
- Code4rena’s audit competitions, where all V12-discovered issues will be marked out-of-scope as known issues—deterring AI spam, incentivizing wardens to submit higher quality issues, and ultimately saving clients time and money
- Code4rena’s bug bounties, in a similar fashion as in audit competitions
- A GitHub action for easy CI/CD integration
We hope you enjoy using V12. In the rest of this blog post, we’ll dive into how V12 performs in the real world, highlighting a number of illustrative case studies and bugs discovered by V12.
Evaluation
We believe that V12’s performance speaks for itself. Though we plan to improve V12 even more through additional funding, research, and development, V12 today is already an impressively powerful bug-finding tool. In this section, we’ll explore examples and case studies of how V12 outperforms existing tools, solutions, and providers.
V12 finds missed bugs that led to hacks
V12 consistently discovers surface-level vulnerabilities missed by auditing firms. While this is not true of all firms, some consistently miss surface-level, easy-to-spot vulnerabilities, and because these bugs are relatively straightforward, the misses often lead to hacks. Here, we present a representative sample of bugs from real-world exploits that V12 detects but past audits missed. All bugs were found independently by V12 with no special prompting or human assistance.
Bug 1: 1inch (March 2025)
The bug is an integer overflow in an unchecked pointer addition in inline assembly, ultimately resulting in memory corruption. In total, $5 million USDT was stolen. According to rekt.news, the vulnerability was missed despite multiple past audits↗. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The contract writes a struct to memory at offset, which is calculated as ptr + interactionOffset + interactionLength. However, because interactionLength is fully attacker-controlled and the addition is unchecked, the attacker can set offset to essentially any value. This lets the attacker redirect the write’s location and supply a fake instance of the struct in its place. The fake struct then ultimately enables the attacker to steal tokens from a victim contract. The vulnerable code is reproduced below:
function _settleOrder(bytes calldata data, address resolver, uint256 totalFee, bytes memory tokensAndAmounts) private {
// [...]
assembly {
// [...]
let interactionLengthOffset := calldataload(add(data.offset, 0x40))
let interactionOffset := add(interactionLengthOffset, 0x20)
let interactionLength := calldataload(add(data.offset, interactionLengthOffset))
// [...]
let offset := add(add(ptr, interactionOffset), interactionLength) // <-- BUG:
// Offset is controllable, enabling arbitrary writes (memory corruption)
mstore(add(offset, 0x04), totalFee)
mstore(add(offset, 0x24), resolver)
mstore(add(offset, 0x44), calldataload(add(order, 0x40))) // takerAsset
mstore(add(offset, 0x64), rateBump)
mstore(add(offset, 0x84), takingFeeData)
let tokensAndAmountsLength := mload(tokensAndAmounts)
memcpy(add(offset, 0xa4), add(tokensAndAmounts, 0x20), tokensAndAmountsLength)
mstore(add(offset, add(0xa4, tokensAndAmountsLength)), tokensAndAmountsLength)
// [...]
}
}
V12 reported this vulnerability with the following description:
In _settleOrder, two offsets (interactionLengthOffset and interactionOffset) are loaded directly from user-supplied calldata without any bounds checks. These values are then used in unchecked arithmetic to compute memory write locations (via mstore, calldatacopy, and custom memcpy). A malicious caller can supply out-of-range offsets to cause writes outside of the intended memory region, resulting in arbitrary memory corruption.
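For illustration, here is a minimal sketch of the kind of calldata bounds check that blocks this class of memory corruption. This is our own sketch under our own assumptions, not 1inch’s actual patch; the contract and function names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical illustration, not the 1inch fix: validate attacker-supplied
// offsets and lengths against the calldata size before using them in
// pointer arithmetic.
contract BoundsCheckSketch {
    function settle(bytes calldata data) external pure {
        assembly {
            // Require enough calldata to hold the fields we are about to read.
            if lt(data.length, 0x60) { revert(0, 0) }
            let interactionLengthOffset := calldataload(add(data.offset, 0x40))
            // Bound the raw offset first so later additions cannot wrap around.
            if gt(interactionLengthOffset, data.length) { revert(0, 0) }
            let interactionLength := calldataload(add(data.offset, interactionLengthOffset))
            if gt(interactionLength, data.length) { revert(0, 0) }
            // Reject regions that would extend past the end of `data`.
            if gt(add(add(interactionLengthOffset, 0x20), interactionLength), data.length) { revert(0, 0) }
            // Any pointer later derived from these values now stays within the
            // expected memory region instead of being attacker-steerable.
        }
    }
}
```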
Bug 2: SIR Trading (April 2025)
The bug is a misuse of transient storage (TSTORE/TLOAD) in the protocol’s Uniswap V3 swap callback. In total, $335K USD was stolen. According to rekt.news, this vulnerability was missed in the protocol’s audit↗. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
Uniswap writes the pool address to transient storage slot 1, which the contract uses for access control. However, at the end of the callback, the contract reuses the same transient storage slot to store the amount of tokens it minted. The attacker exploits this by calling the swap callback directly from an attack contract that poses as a Uniswap pool, draining the vault. The attack contract’s address coincides with the number of tokens minted, found through brute force. For example, if the contract mints 95759995883742311247042417521410689 tokens, the attacker would need to deploy their contract at 0x00000000001271551295307acc16ba1e7e0d4281. The vulnerable code is reproduced below:
function uniswapV3SwapCallback(int256 amount0Delta, int256 amount1Delta, bytes calldata data) external {
// Check caller is the legit Uniswap pool
address uniswapPool;
assembly {
uniswapPool := tload(1)
}
require(msg.sender == uniswapPool);
// Decode data
(
address minter,
address ape,
SirStructs.VaultParameters memory vaultParams,
SirStructs.VaultState memory vaultState,
SirStructs.Reserves memory reserves,
bool zeroForOne,
bool isETH
) = abi.decode(
data,
(address, address, SirStructs.VaultParameters, SirStructs.VaultState, SirStructs.Reserves, bool, bool)
);
// Retrieve amount of collateral to deposit and check it does not exceed max
(uint256 collateralToDeposit, uint256 debtTokenToSwap) = zeroForOne
? (uint256(-amount1Delta), uint256(amount0Delta))
: (uint256(-amount0Delta), uint256(amount1Delta));
// If this is an ETH mint, transfer WETH to the pool asap
if (isETH) {
TransferHelper.safeTransfer(vaultParams.debtToken, uniswapPool, debtTokenToSwap);
}
// Rest of the mint logic
require(collateralToDeposit <= type(uint144).max);
uint256 amount = _mint(minter, ape, vaultParams, uint144(collateralToDeposit), vaultState, reserves);
// Transfer debt token to the pool
// This is done last to avoid reentrancy attack from a bogus debt token contract
if (!isETH) {
TransferHelper.safeTransferFrom(vaultParams.debtToken, minter, uniswapPool, debtTokenToSwap);
}
// Use the transient storage to return amount of tokens minted to the mint function
assembly {
tstore(1, amount) // <-- BUG
}
}
V12 reported this vulnerability with the following description:
The callback uses non-standard assembly calls tload(1) to read the pool address and later tstore(1, amount) with the same slot index to write the minted amount. This reuses the same temporary storage slot, corrupting the stored pool address and enabling future calls from unintended msg.sender addresses.
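One way to avoid this class of bug is to never reuse the access-control slot for unrelated data. Below is our own sketch under our own assumptions; it is not SIR Trading’s actual patch, and the names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.25; // tload/tstore in inline assembly require the Cancun EVM version

// Hypothetical sketch, not the SIR Trading fix: keep the trusted pool address
// and the callback's return value in separate transient storage slots so the
// access-control value is never overwritten mid-transaction.
contract TransientSlotSketch {
    uint256 private constant POOL_SLOT = 1;   // written before the swap is initiated
    uint256 private constant AMOUNT_SLOT = 2; // written only by the callback

    function uniswapV3SwapCallback(int256, int256, bytes calldata) external {
        address uniswapPool;
        assembly {
            uniswapPool := tload(POOL_SLOT)
        }
        require(msg.sender == uniswapPool, "caller is not the pool");

        uint256 amount = 0; // ... minting logic elided ...

        assembly {
            tstore(AMOUNT_SLOT, amount) // no longer clobbers POOL_SLOT
        }
    }
}
```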
Bug 3: MonoX (November 2021)
The bug is a lack of validation when performing a swap that allows the same asset to be used as both tokenIn and tokenOut, causing an incorrect token price to be stored. This resulted in $31 million USD being stolen across two chains. According to rekt.news↗, this simple bug was missed by multiple audit firms. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The swapOut function does not validate that tokenIn and tokenOut are different. This allows an attacker to call swapOut with the same token for both parameters, causing the second call to _updateTokenInfo to overwrite the first call’s state changes and distort the token’s price.
// swap from tokenIn to tokenOut with fixed tokenOut amount.
function swapOut(address tokenIn, address tokenOut, address from, address to,
uint256 amountOut) internal lockToken(tokenIn) returns(uint256 amountIn) {
uint256 tokenInPrice;
uint256 tokenOutPrice;
uint256 tradeVcashValue;
(tokenInPrice, tokenOutPrice, amountIn, tradeVcashValue) = getAmountIn(tokenIn, tokenOut, amountOut);
address monoXPoolLocal = address(monoXPool);
amountIn = transferAndCheck(from,monoXPoolLocal,tokenIn,amountIn);
// uint256 halfFeesInTokenIn = amountIn.mul(fees)/2e5;
uint256 oneSideFeesInVcash = tokenInPrice.mul(amountIn.mul(fees)/2e5)/1e18;
// trading in
if(tokenIn==address(vCash)){
vCash.burn(monoXPoolLocal, amountIn);
// all fees go to buy side
oneSideFeesInVcash = oneSideFeesInVcash.mul(2);
}else {
_updateTokenInfo(tokenIn, tokenInPrice, 0, tradeVcashValue.add(oneSideFeesInVcash), 0);
}
// trading out
if(tokenOut==address(vCash)){
vCash.mint(to, amountOut);
// all fees go to sell side
_updateVcashBalance(tokenIn, oneSideFeesInVcash, 0);
}else{
if (to != monoXPoolLocal) {
IMonoXPool(monoXPoolLocal).safeTransferERC20Token(tokenOut, to, amountOut);
}
_updateTokenInfo(tokenOut, tokenOutPrice, tradeVcashValue.add(oneSideFeesInVcash), 0,
to == monoXPoolLocal ? amountOut:0 );
}
if(pools[tokenIn].vcashDebt > 0 && pools[tokenIn].status == PoolStatus.OFFICIAL){
_internalRebalance(tokenIn);
}
emit Swap(to, tokenIn, tokenOut, amountIn, amountOut, tradeVcashValue);
}
By swapping the Mono token for itself in a loop, an attacker can repeatedly overwrite its own price, inflating it arbitrarily. After pumping the price, the attacker swaps the overvalued Mono token for other assets in the pool, draining the funds.
V12 reported this vulnerability with the following description:
swapOut does not enforce that tokenIn and tokenOut differ. Allowing tokenIn == tokenOut triggers two sequential updates (_updateTokenInfo for tokenIn and tokenOut) on the same pool in one execution, leading to conflicting state changes, price distortion, and bypass of fee or minimum-pool-size checks.
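The underlying fix is conceptually a one-line guard. Below is a minimal, hypothetical sketch of that guard, not MonoX’s actual patch.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical sketch, not MonoX's actual patch: reject swaps where both sides
// are the same asset before any price or balance bookkeeping happens.
contract SameTokenGuardSketch {
    function swapOut(address tokenIn, address tokenOut, uint256 amountOut) external pure {
        require(tokenIn != tokenOut, "IDENTICAL_TOKENS");
        // ... getAmountIn, transfers, and _updateTokenInfo calls would follow here ...
    }
}
```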
Bug 4: Grim (December 2021)
The bug is a reentrancy vulnerability in the depositFor function of the GrimBoostVault, where a malicious token can reenter the function when transferFrom is called, resulting in a stale pool balance. In total, over $30 million USD was stolen. The original audit firm↗ acknowledged missing the bug. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The depositFor function transfers tokens from the user before updating their share balance:
function depositFor(address token, uint _amount,address user ) public {
strategy.beforeDeposit();
uint256 _pool = balance();
IERC20(token).safeTransferFrom(msg.sender, address(this), _amount);
earn();
uint256 _after = balance();
_amount = _after.sub(_pool); // Additional check for deflationary tokens
uint256 shares = 0;
if (totalSupply() == 0) {
shares = _amount;
} else {
shares = (_amount.mul(totalSupply())).div(_pool);
}
_mint(user, shares);
}
Since the transfer call is external and occurs before state updates, a malicious token contract can reenter depositFor in its transferFrom function multiple times before finally performing a legitimate deposit. All the reentrant calls will then finish and calculate shares based on the stale pool balance, allowing the attacker to mint shares multiple times for the same deposit amount.
V12 reported this vulnerability with the following description:
The function makes multiple external calls (strategy.beforeDeposit(), IERC20(token).safeTransferFrom, and earn(), which calls strategy.deposit()) before performing the internal state update (_mint). Without a reentrancy guard, a malicious token or strategy contract can reenter the vault and manipulate state during execution.
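A standard mitigation, sketched below under our own assumptions (this is not Grim’s actual fix), is a reentrancy guard, so the nested calls revert instead of minting against a stale balance.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// OpenZeppelin v5 path; v4 uses "@openzeppelin/contracts/security/ReentrancyGuard.sol".
import {ReentrancyGuard} from "@openzeppelin/contracts/utils/ReentrancyGuard.sol";

// Hypothetical sketch, not Grim's actual fix: with nonReentrant, reentrant
// depositFor calls made from a malicious token's transferFrom revert instead
// of computing shares from a stale pool balance.
abstract contract GuardedVaultSketch is ReentrancyGuard {
    function depositFor(address token, uint256 amount, address user) external nonReentrant {
        // ... beforeDeposit, safeTransferFrom, earn(), and share minting as before ...
    }
}
```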
V12 finds bugs in the wild
V12 finds live, in-the-wild issues. With no special prompting or assistance, V12 independently discovered an issue in Pendle Finance. The affected contract is deployed on a few chains, including Arbitrum↗, though it is not currently in use.
The affected contract, ExpiredLpPtRedeemer↗, was added to the repo in June 2025. The relevant code is reproduced below:
// [...]
if (netPtIn > 0) {
_transferFrom(PT, msg.sender, address(this), netLpIn); // BUG
totalPtRedeem += netPtIn;
}
// [...]
The contract uses the wrong variable when transferring PT from the user: it mistakenly transfers netLpIn tokens instead of netPtIn. If these values are unequal, either the transaction will revert, or too many PT tokens will be transferred. In the latter case, the extra tokens could be skimmed by another user supplying a crafted netLpIn or, in the best case, rescued by the contract owner using withdraw.
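The fix is presumably a one-variable change; this is our assumption, not Pendle’s published patch.

```solidity
// [...]
if (netPtIn > 0) {
    _transferFrom(PT, msg.sender, address(this), netPtIn); // pull the PT amount, not the LP amount
    totalPtRedeem += netPtIn;
}
// [...]
```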
We reported this to Pendle. The team responded that the finding was a duplicate of a known internal issue that had not yet been published. Nevertheless, this feedback confirms the issue is valid, and the discovery demonstrates V12’s ability to find real bugs that warrant fixes, even in the wild.
V12 finds bugs in audit competitions
We tested V12 by deploying it in live code competitions on Cantina, Sherlock, and HackenProof. These competitions allow us to directly compare V12’s performance against human researchers on real codebases, while submitted vulnerabilities are validated by independent third parties. We exclude Code4rena competitions for two reasons: (1) a conflict of interest, as we own Code4rena↗; and (2) many competitions were for projects that V12 had already been run on during prior Zellic audits.
We entered 6 competitions in total and found 25 vulnerabilities. After judging, 2 vulnerabilities were rated high severity, 2 medium, 4 low, and 9 informational. Three vulnerabilities (otherwise high severity) were rejected due to formatting issues during the submission process, and 2 vulnerabilities were marked as out of scope. All bugs were discovered independently by V12 without any special prompting or human assistance, apart from formatting changes to comply with submission requirements. Unlike typical human participants, we did not escalate findings or contest severity downgrades during the judging process.
Below, we highlight a representative sample of bugs found and accepted in live competitions.
Bug 1: High in Cantina competition
The bug is a lack of checks on two functions, allowing an attacker to drain other users’ rewards. The vulnerability was rated High severity, and was 1 of only 2 high severity vulnerabilities reported during the competition. In total, 69 out of 323 researchers also submitted this issue. V12 discovered and reported this vulnerability on its own, with no special prompting.
Vulnerability details
The vulnerable code is reproduced below:
function overrideReceiver(address overrideAddress, bool migrateExistingRewards) external whenNotPaused nonReentrant {
if (migrateExistingRewards) { _migrateRewards(msg.sender, overrideAddress); }
require(overrideAddress != address(0) && overrideAddress != msg.sender, InvalidAddress());
overrideAddresses[msg.sender] = overrideAddress;
emit OverrideAddressSet(msg.sender, overrideAddress);
}
// [...]
function removeOverrideAddress(bool migrateExistingRewards) external whenNotPaused nonReentrant {
address toBeRemoved = overrideAddresses[msg.sender];
require(toBeRemoved != address(0), NoOverriddenAddressToRemove());
if (migrateExistingRewards) { _migrateRewards(toBeRemoved, msg.sender); }
overrideAddresses[msg.sender] = address(0);
emit OverrideAddressRemoved(msg.sender);
}
An attacker can set their overrideAddress to any other user, such as one with unclaimed rewards. Then, when they remove their override address, they can migrate the existing rewards, draining the victim’s rewards.
V12 produces the following correct vulnerability report:
Since the overrideAddresses mapping does not enforce uniqueness, multiple validator owners can set the same override address via overrideReceiver. All unclaimed rewards for those validators accumulate under that single override address. When one owner calls removeOverrideAddress(true), its underlying _migrateRewards(from=override, to=msg.sender) call moves the entire unclaimedRewards balance, including funds originally destined for other validators, back to the caller, effectively stealing those other validators’ rewards. A validator can drain unclaimed rewards belonging to other validators who shared the same override address, resulting in total loss of funds.
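One possible mitigation, sketched below purely as our own illustration (the names and flow are hypothetical, not the project’s fix), is to require the override target to explicitly accept before it can be assigned, so an attacker cannot point their override at an arbitrary reward-holding address.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical sketch: a two-step override assignment. Rewards can only ever
// flow through an override address that consented to the relationship.
contract OverrideConsentSketch {
    mapping(address => address) public pendingOverride;
    mapping(address => address) public overrideAddresses;

    function proposeOverride(address overrideAddress) external {
        require(overrideAddress != address(0) && overrideAddress != msg.sender, "invalid address");
        pendingOverride[msg.sender] = overrideAddress;
    }

    // Only the proposed override address can complete the assignment.
    function acceptOverride(address owner) external {
        require(pendingOverride[owner] == msg.sender, "not the proposed override");
        overrideAddresses[owner] = msg.sender;
        delete pendingOverride[owner];
    }
}
```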
Bug 2: High in Cantina Competition
The bug is an incorrect check of the isBelowPrice flag, causing user orders to execute when the current price is on the wrong side of the specified threshold. The vulnerability was rated High severity, and was found by 15 other researchers during the competition. V12 discovered and reported this vulnerability on its own, with no special prompting.
Vulnerability details
The vulnerable code is reproduced below:
function executePriceDependent(PriceDependentRequest calldata req, bytes calldata signature)
public
payable
returns (bytes memory)
{
require(verifyPriceDependent(req, signature), KuruForwarderErrors.SignatureMismatch());
require(msg.value >= req.value, KuruForwarderErrors.InsufficientValue());
require(allowedInterface[req.selector], KuruForwarderErrors.InterfaceNotAllowed());
executedPriceDependentRequest[keccak256(abi.encodePacked(req.from, req.nonce))] = true;
(uint256 _currentBidPrice,) = IOrderBook(req.market).bestBidAsk();
require(
(req.isBelowPrice && req.price < _currentBidPrice) || (!req.isBelowPrice && req.price > _currentBidPrice),
PriceDependentRequestFailed(_currentBidPrice, req.price)
);
(bool success, bytes memory returndata) =
req.market.call{value: req.value}(abi.encodePacked(req.selector, req.data, req.from));
if (!success) {
revert ExecutionFailed(returndata);
}
return returndata;
}
The isBelowPrice flag is incorrectly checked. When isBelowPrice is true, the code checks that req.price < _currentBidPrice, meaning the current price is above the threshold. Conversely, when isBelowPrice is false, it checks that req.price > _currentBidPrice, meaning the current price is below the threshold. This inverts the intended logic, causing orders to execute when the price is on the wrong side of the threshold.
V12 produces the following correct vulnerability report:
The function’s guard that enforces user-specified price thresholds uses the wrong comparison operators. When isBelowPrice=true it requires req.price < currentPrice (i.e., price above the threshold), and when isBelowPrice=false it requires req.price > currentPrice (i.e., price below the threshold). This inverts the intended logic, so orders only execute when the on-chain price is on the opposite side of the user’s limit.
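For illustration, the corrected guard would compare in the other direction. This is our reading of the intended behavior, not the project’s published fix.

```solidity
// Hypothetical corrected check: execute only when the current bid price is on
// the side of the threshold that the user requested.
require(
    (req.isBelowPrice && _currentBidPrice < req.price) ||
        (!req.isBelowPrice && _currentBidPrice > req.price),
    PriceDependentRequestFailed(_currentBidPrice, req.price)
);
```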
Bug 3: Medium in HackenProof competition
The bug is a denial-of-service vulnerability in the processWithdrawalQueue function, where an invalid KYC request can block the entire withdrawal queue. The vulnerability was rated Medium severity and earned enough reputation to reach number 5 out of 50 hackers on the leaderboard. V12 discovered and reported this vulnerability on its own, with no special prompting.
Vulnerability details
The vulnerable code is reproduced below:
function processWithdrawalQueue(
uint _len
) external onlyValidPrice onlyOperator {
uint256 length = withdrawalQueue.length();
require(length > 0, "empty queue!");
require(_len <= length, "invalid len!");
if (_len == 0) _len = length;
uint256 totalWithdrawAssets;
uint256 totalBurnShares;
uint256 totalFees;
for (uint count = 0; count < _len; ) {
bytes memory data = withdrawalQueue.front();
(
address sender,
address receiver,
uint256 shares,
bytes32 prevId
) = _decodeData(data);
_validateKyc(sender, receiver);
// [...]
_withdraw(
address(this),
receiver,
address(this),
trimmedAssets,
shares
);
// [...]
}
If an entry in the withdrawalQueue has an invalid or unapproved KYC status, the _validateKyc call reverts, aborting the function. This leaves the invalid entry at the front of the queue, permanently blocking processing of all subsequent withdrawals.
V12 produces the following correct vulnerability report:
The contract loops through a withdrawal queue and validates each entry’s KYC status before removing it. If an entry fails KYC, the internal _validateKyc call reverts, aborting the function without popping the invalid entry. That entry remains at the front of the queue, permanently blocking processing of all subsequent withdrawals.
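A defensive pattern that avoids this failure mode, sketched below under our own assumptions (the helper names are hypothetical and this is not the project’s fix), is to skip and dequeue entries that fail validation instead of reverting the whole batch.

```solidity
// Hypothetical sketch: a non-reverting KYC check lets the operator drop or
// defer a bad entry and keep processing the rest of the queue.
for (uint256 count = 0; count < _len; ++count) {
    bytes memory data = withdrawalQueue.front();
    (address sender, address receiver, uint256 shares, ) = _decodeData(data);

    if (!_isKycValid(sender, receiver)) { // hypothetical boolean variant of _validateKyc
        withdrawalQueue.popFront();       // hypothetical dequeue; could also re-queue at the back
        emit WithdrawalSkipped(sender, receiver, shares);
        continue;
    }

    // ... existing accounting and _withdraw call ...
}
```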
Bug 4: Medium in Cantina competition
The bug is a denial-of-service vulnerability in the applySlashes function, where a malicious validator can prevent themselves from being slashed. The vulnerability was rated Medium severity and was found by 23 other researchers. V12 discovered and reported this vulnerability on its own, with no special prompting.
Vulnerability details
The vulnerable code is reproduced below:
function applySlashes(Slash[] calldata slashes) external override onlySystemCall {
for (uint256 i; i < slashes.length; ++i) {
Slash calldata slash = slashes[i];
// signed consensus header means validator is whitelisted, staked, & active
// unless validator was forcibly retired & ejected via burn: skip
if (isRetired(slash.validatorAddress)) continue;
if (balances[slash.validatorAddress] > slash.amount) {
balances[slash.validatorAddress] -= slash.amount;
} else {
// eject validators whose balance would reach 0
_consensusBurn(slash.validatorAddress);
}
emit ValidatorSlashed(slash);
}
}
function _consensusBurn(address validatorAddress) internal {
// [...]
// exit, retire, and unstake + burn validator immediately
// [...]
_unstake(validatorAddress, recipient, validator.stakeVersion);
}
function _unstake(
address validatorAddress,
address recipient,
uint8 validatorVersion
)
internal
virtual
returns (uint256)
{
// [...]
// send `bal` if `rewards == 0`, or `stakeAmt` with nonzero `rewards` added from Issuance's balance
Issuance(issuance).distributeStakeReward{ value: unstakeAmt }(recipient, rewards);
return unstakeAmt + rewards;
}
function distributeStakeReward(address recipient, uint256 rewardAmount) external payable virtual onlyStakeManager {
uint256 bal = address(this).balance;
uint256 totalAmount = rewardAmount + msg.value;
if (bal < totalAmount) {
revert InsufficientBalance(bal, totalAmount);
}
(bool res,) = recipient.call{ value: totalAmount }("");
if (!res) revert RewardDistributionFailure(recipient);
}
If applying slashes causes a validator’s balance to reach zero, _consensusBurn is called to eject them. This calls Issuance::distributeStakeReward, which transfers the unstaked amount to the validator. However, if the validator is a malicious contract that reverts on receiving the ETH, the transfer will fail and revert the entire applySlashes transaction. This prevents the validator from being slashed and also blocks all other slashes in the batch.
V12 produces the following correct vulnerability report:
The applySlashes function iterates over multiple validators to slash their stake by calling _consensusBurn, which in turn invokes StakeManager._unstake to transfer tokens. Because there is no try/catch or fault isolation around each _consensusBurn call, if any external token transfer reverts (due to a malicious or faulty token/recipient), the entire batch operation reverts. This cancels all prior slashes in that transaction, effectively allowing a single failure to block slashes for all validators in the batch.
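A common mitigation for this pattern, sketched below as our own illustration (not the project’s fix), is to escrow payouts that fail instead of letting a reverting recipient abort the batch.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical pull-payment sketch: if the recipient reverts on receiving ETH,
// the amount is recorded for later withdrawal rather than reverting the whole
// slashing batch.
contract PullPaymentSketch {
    mapping(address => uint256) public pendingPayouts;

    function _payOrEscrow(address recipient, uint256 amount) internal {
        (bool ok, ) = recipient.call{ value: amount, gas: 100_000 }("");
        if (!ok) {
            pendingPayouts[recipient] += amount;
        }
    }

    function withdrawPayout() external {
        uint256 amount = pendingPayouts[msg.sender];
        pendingPayouts[msg.sender] = 0;
        (bool ok, ) = msg.sender.call{ value: amount }("");
        require(ok, "withdraw failed");
    }
}
```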
V12 performance in historical audit competitions
Testing V12 on past audit competitions, we see that human researchers consistently submit duplicates of issues that V12 finds, including issues that were missed by prior audits.
Phi (Code4rena, October 2024)↗
V12 finds 6 of the 7 high severity issues reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Signature replay in signatureClaim results in unauthorized claiming of rewards | PhiFactory |
V12 | Missing EIP-712 Domain Separator Allows Cross-Contract/Chain Signature Replay | PhiFactory: |
Source | Finding Description |
---|---|
Ground Truth | PhiFactory::signatureClaim decodes (expiresIn, minter, ref, verifier, artId, chainId, data) but ignores the encoded chainId (note the skipped field after artId ). Because the function verifies keccak256(encodeData_) against phiSignerAddress without binding to block.chainid , a signature valid on one chain can be reused on others. Since the PhiNFT1155 -> Claimable::signatureClaim path properly substitutes block.chainid but PhiFactory::signatureClaim is callable directly, attackers can replay a signature cross-chain to claim valuable rewards tied to the same artId on a different chain, causing loss of funds. |
V12 | _validateArtCreationSignature verifies _recoverSigner(keccak256(signedData_), signature_) (ETH signed-message prefix only) where signedData_ encodes just (timestamp, string, bytes) and lacks any domain separator or contextual binding (e.g., contract address or chain ID). Identical payloads yield the same hash across contracts/chains, enabling cross-context signature replay to bypass checks and perform unauthorized actions across deployments. |
H-02:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Signature replay in createArt allows impersonation and royalty theft | PhiFactory::createArt |
V12 | CreateArt Signature Omits CreateConfig Fields, Allowing Parameter Tampering | PhiFactory::createArt |
Source | Finding Description |
---|---|
Ground Truth | PhiFactory::createArt verifies a signature that is not bound to the specific submitter nor to the CreateConfig fields. An attacker can frontrun a legitimate createArt tx, reuse the same signature, and supply their own config (e.g., set themselves as artist or royalties receiver, change royalty BPS, maxSupply, endTime, mintFee). Both the attacker’s and victim’s txs can succeed, but royalties flow to the attacker, enabling theft and other parameter abuses. |
V12 | The signature validated by _validateArtCreationSignature covers only signedData_ (expiresIn, URI, credData) and omits the CreateConfig struct (artist , receiver , maxSupply , mintFee , startTime , endTime , soulBounded ). Since createArt trusts createConfig_ without including it in the signed payload, any holder of a valid signature can replay it with arbitrary CreateConfig values to hijack art creation (e.g., assign themselves as artist/receiver, alter supply/fees/timing), causing unauthorized asset creation and economic loss. |
H-03:
Source | Finding Title | Location |
---|---|---|
Ground Truth | shareBalance bloating eventually blocks curator rewards distribution | Cred::_updateCuratorShareBalance |
V12 | Stale Zero Balances Remain in shareBalance Map | Cred::_updateCuratorShareBalance |
Source | Finding Description |
---|---|
Ground Truth | Cred tracks curator shares in mapping(uint256 => EnumerableMap.AddressToUintMap) shareBalance . When a curator sells all shares, _updateCuratorShareBalance sets the balance to 0 instead of deleting the key from the EnumerableMap . Over time, zero-balance entries accumulate, growing shareBalance[credId].length() . During rewards distribution, enumeration over all entries (via _getCuratorData and shareBalance[credId].at(i) ) performs many SLOAD s and can hit block gas limits, causing distribution to revert and potentially DoS curator rewards for active creds. |
V12 | In the sell branch (isBuy == false ), when currentNum - amount == 0 , the code removes the credId from per-address tracking but calls shareBalance.set(sender, 0) instead of shareBalance.remove(sender) . OpenZeppelin EnumerableMap.set does not prune keys at zero, so zero-value entries persist indefinitely. This bloats enumeration, misrepresents active curators, and inflates gas costs for distributions or governance that iterate over map keys. |
H-05:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Exposed _removeCredIdPerAddress & _addCredIdPerAddress lets anyone disrupt current and future holders | Cred::_addCredIdPerAddress, Cred::_removeCredIdPerAddress |
V12 | Missing Access Control on Internal Credential Management Functions | Cred::_addCredIdPerAddress and Cred::_removeCredIdPerAddress |
Source | Finding Description |
---|---|
Ground Truth | _addCredIdPerAddress and _removeCredIdPerAddress are exposed so that anyone can call them, enabling griefing of users trying to buy/hold/sell cred shares by arbitrarily adding/removing cred IDs from user mappings and disrupting holder state. |
V12 | The functions are declared public with no access-control modifiers, allowing any external account to add or remove credential IDs for arbitrary addresses. This breaks integrity of per-address credential tracking, enabling unauthorized assignments or revocations that can corrupt higher-level logic dependent on these mappings. |
H-06:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Reentrancy during cred creation enables theft of contract Ether via share/refund loop | Cred::createCred (via _createCredInternal / buyShareCred) |
V12 | Reentrancy in createCred Leading to Duplicate Credential IDs | Cred::createCred |
Source | Finding Description |
---|---|
Ground Truth | During createCred , the sender auto-buys 1 share; excess ETH is refunded. External calls during this flow allow reentrancy into both cred creation and trading before credIdCounter is incremented, letting an attacker overwrite the pending cred and pivot between whitelisted bonding curves (buy cheaply, then overwrite/sell expensively), effectively draining the Cred contract’s Ether. |
V12 | createCred → _createCredInternal writes to creds[credIdCounter] and then calls buyShareCred , which performs external ETH transfers (refund via safeTransferETH ) and IPhiRewards.deposit before incrementing credIdCounter . With no reentrancy guard, an attacker can reenter during these external calls and reuse the same credIdCounter to create duplicate credentials, corrupt state, misallocate shares/royalties, and siphon ETH via refunds/deposits. |
H-07:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Unrestricted changes to token settings allow artists to alter critical features | PhiFactory::updateArtSettings |
V12 | External Mutable Soul-Bound Flag Allows Revocation of Soul-Bound Status | PhiNFT1155::soulBounded |
Source | Finding Description |
---|---|
Ground Truth | PhiFactory::updateArtSettings lets the artist modify critical parameters (e.g., URI, royalty fee, and the soulBounded flag) at any time, meaning previously assumed-immutable features of the art can be changed post-deployment. |
V12 | PhiNFT1155.soulBounded(tokenId) defers entirely to PhiFactory.artData(...).soulBounded , which the artist can toggle later via updateArtSettings . Because the NFT contract doesn’t store or enforce an immutable snapshot, an artist can revoke or alter soul-bound status post-mint, undermining trust and downstream assumptions. |
Munchables (Code4rena, July 2024)↗
V12 finds 4 out of 5 high severity issues reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Single plot can be occupied by multiple renters | LandManager::transferToUnoccupiedPlot |
V12 | Missing plotId update on plot transfer | LandManager::transferToUnoccupiedPlot |
Source | Finding Description |
---|---|
Ground Truth | transferToUnoccupiedPlot lets a user move a staked token to a new plot but fails to update toilerState.plotId . The contract continues to think the token occupies the original plot, enabling inconsistent state where one plot appears occupied while the token has been moved, allowing multiple renters per plot and breaking occupancy logic. |
V12 | The function updates plotOccupied and latestTaxRate when transferring but never assigns toilerState[tokenId].plotId = newPlotId . This leaves a stale plot reference and emits events with the wrong plotId , causing inconsistent state and misleading data for consumers relying on toilerState.plotId and FarmPlotTaken . |
H-02:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Invalid validation in _farmPlots function allowing a malicious user repeated farming without locked funds | LandManager::_farmPlots |
V12 | Off-by-One Error in _farmPlots Boundary Check | LandManager::_farmPlots |
Source | Finding Description |
---|---|
Ground Truth | _farmPlots uses if (_getNumPlots(landlord) < _toiler.plotId) to invalidate stale toilers. When a landlord unlocks all funds, _getNumPlots should be 0 , but plot 0 remains farmable because the condition doesn’t catch it. As a result, a user can keep farming and collecting rewards from a plot after funds are unlocked, creating unfair, repeatable rewards. |
V12 | The boundary check compares numPlots < plotId instead of verifying plotId >= numPlots . Since valid plot IDs are 0..numPlots-1 , when plotId == numPlots the toiler isn’t marked dirty and continues accruing rewards on a non-existent plot—enabling repeated farming without locked funds. |
H-03:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Miscalculation in _farmPlots could prevent unstaking all NFTs | LandManager::_farmPlots |
V12 | Unchecked Negative Bonus Leads to uint256 Underflow in schnibblesTotal Calculation | LandManager::_farmPlots |
Source | Finding Description |
---|---|
Ground Truth | _farmPlots computes finalBonus from realm and rarity bonuses; when finalBonus < 0 , the mixed signed/unsigned math makes the intermediate negative, which is then cast to uint256 , producing a huge schnibblesTotal . The subsequent schnibblesLandlord multiplication overflows and reverts. Because unstakeMunchable() uses forceFarmPlots() , the revert blocks users from unstaking their NFTs. |
V12 | finalBonus (sum of REALM_BONUSES and RARITY_BONUSES ) is not clamped; values below -100 make (int256(schnibblesTotal) + int256(schnibblesTotal) * finalBonus) negative. Casting to uint256 underflows to a massive value, enabling arbitrarily large rewards and draining funds since no validation bounds or clamps the bonus before applying it. |
H-04:
Source | Finding Title | Location |
---|---|---|
Ground Truth | in farmPlots() an underflow in edge case leading to freeze of funds (NFT) | LandManager::_farmPlots |
V12 | Unsigned Integer Underflow in _farmPlots Causing Tenant DoS | LandManager::_farmPlots |
Source | Finding Description |
---|---|
Ground Truth | When PRICE_PER_PLOT increases, _getNumPlots(landlord) decreases. If a renter’s plotId is now out of range, _farmPlots sets timestamp = plotMetadata[landlord].lastUpdated (which can be older than toilerState.lastToilDate set at stake time). The subsequent (timestamp - lastToilDate) underflows, causing perpetual revert during farming and unstakeMunchable , effectively freezing the NFT/funds. |
V12 | If _getNumPlots(landlord) < toiler.plotId , _farmPlots resets timestamp to plotMetadata[landlord].lastUpdated . When that value is less than toilerState[tokenId].lastToilDate , computing (timestamp - lastToilDate) underflows and reverts. Because lastUpdated isn’t bumped when plot counts shrink, tenants can be DoSed from farming/unstaking until state is corrected. |
Virtuals (Code4rena, April 2025)↗
V12 finds 4 out of 6 high severity issues reported by human researchers. Two of these issues, H-03 and H-04, were missed by the prior audits.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Lack of access control in AgentNftV2::addValidator() enables unauthorized validator injection and causes reward accounting inconsistencies | AgentNftV2::addValidator |
V12 | Missing Access Control on addValidator | AgentNftV2::addValidator |
Source | Finding Description |
---|---|
Ground Truth | AgentNftV2::addValidator() has no access control. An attacker can predict the next virtualId via AgentFactoryV2::executeApplication() using IAgentNft(nft).nextVirtualId() and preemptively call addValidator() to append a validator to _validators[virtualId] . Later, AgentNftV2::mint() invokes _addValidator() again, duplicating the validator for the same virtualId and causing reward/score accounting inconsistencies. |
V12 | addValidator is public with no role/owner checks, so any account can add any address as a validator for any virtualId . The call also initializes that validator’s score, allowing arbitrary injection and manipulation of the validator set, undermining governance and reward distributions. |
H-03:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Public ServiceNft::updateImpact call leads to cascading issue | ServiceNft::updateImpact |
V12 | Missing Access Control on updateImpact allows unauthorized state modification | ServiceNft::updateImpact |
Source | Finding Description |
---|---|
Ground Truth | ServiceNft::updateImpact is public , so anyone can call it with arbitrary virtualId /proposalId . When admins change datasetImpactWeight , attackers can front-run or repeatedly call updateImpact to skew _impacts and _maturities in their favor. Because both AgentRewardV2::_distributeContributorRewards and Minter::mint read ServiceNft::getImpact , adversaries can inflate their rewards or reduce others’, causing cascading economic distortions. |
V12 | updateImpact lacks any access control, allowing external callers to modify _impacts and _maturities and emit SetServiceScore for arbitrary IDs. This enables unauthorized manipulation of service scores and maturities, corrupting governance, reward distributions, and any downstream logic that relies on accurate impact values. |
H-04:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Public ContributionNft::mint leads to cascading issues / loss of funds | ContributionNft::mint |
V12 | Missing Existence Check Allows Invalid ParentId in mint | ContributionNft::mint |
V12 | Unvalidated datasetId in mint allows arbitrary dataset IDs | ContributionNft::mint |
Source | Finding Description |
---|---|
Ground Truth | ContributionNft::mint is unguarded, letting proposers mint their own NFTs with arbitrary fields (coreId , newTokenURI , parentId , isModel_ , datasetId ). These bad inputs cascade into ServiceNft::mint /updateImpact , AgentRewardV2::_distributeContributorRewards , Minter::mint , and AgentDAO::_calcMaturity , corrupting cores/models, impacts, maturities, and reward flows—leading to loss of funds and protocol mis-accounting. |
V12 | The function accepts any parentId and writes parent/child links without verifying the parent token exists, allowing bogus genealogies that can break downstream logic. |
V12 | When isModel_ == true , any datasetId is recorded without validation/whitelisting, enabling arbitrary or nonexistent dataset links that distort impact calculations and downstream consumers. |
H-06:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Missing prevAgentId update in promptMulti causes transfer to address(0) | AgentInference::promptMulti |
V12 | Improper Caching of agentTba in promptMulti Allows Token Loss | AgentInference::promptMulti |
Source | Finding Description |
---|---|
Ground Truth | promptMulti() caches agentTba based on prevAgentId but never updates prevAgentId inside the loop. If the first agentId equals the default prevAgentId (0), agentTba is never set before the first transfer, leading to safeTransferFrom(sender, address(0), amount) and loss of tokens. Repeated or reordered agentId s can also reuse a stale agentTba , causing transfers to the wrong address. |
V12 | The loop fails to update prevAgentId after fetching agentTba , so a stale (possibly zero) agentTba can be reused across iterations. If an agentId == 0 is included and virtualInfo(0).tba returns address(0) , subsequent transfers in the batch are sent to the zero address (burned), resulting in irreversible token loss. |
Basin (Code4rena, July 2024)↗
V12 finds both of the 2 high severity issues reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | WellUpgradeable can be upgraded by anyone | WellUpgradeable::_authorizeUpgrade / upgradeToAndCall |
V12 | Public upgradeToAndCall Allows Unauthorized Upgrades | WellUpgradeable::upgradeToAndCall |
Source | Finding Description |
---|---|
Ground Truth | In the UUPS pattern, _authorizeUpgrade must restrict access (e.g., onlyOwner ). Here it’s overridden without such access control, so upgradeTo / upgradeToAndCall can be invoked by any address to point the proxy at an arbitrary (potentially malicious) implementation, compromising the contract. |
V12 | upgradeToAndCall is publicly callable and _authorizeUpgrade only checks delegatecall context and registry membership, not msg.sender . With no owner/RBAC guard, any external account can trigger upgrades to approved implementations, enabling malicious upgrades, fund theft, or DoS. |
H-02:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrectly assigned decimal1 parameter upon decoding | Stable2::decodeWellData |
V12 | Decimal Default Bug in decodeWellData | Stable2::decodeWellData |
V12 | Decimal Defaulting Typos Lead to Mis‐Scaling in decodeWellData | Stable2::decodeWellData |
Source | Finding Description |
---|---|
Ground Truth | decodeWellData(bytes) decodes (decimal0, decimal1) and intends to default zeros to 18. It correctly fixes decimal0 == 0 → 18, but erroneously checks decimal0 == 0 again instead of decimal1 == 0 , so decimal1 is never defaulted. This makes decimal1 incorrectly stay at its decoded value (including 0), causing downstream scaling/mispricing in functions that rely on these decimals. |
V12 | Due to a copy-paste error, both guards check decimal0 == 0 . When callers provide decimal1 == 0 , it remains 0 instead of defaulting to 18; reserve scaling then uses 10^(18 − 0) for token1, massively inflating reserves and corrupting rates. |
V12 | The typo leaves decimal1 uninitialized to 18 when zero, so all normalization/denormalization (LP supply, prices, reserve/rate calcs) is mis-scaled if inputs are (x, 0) , leading to over/under-valuation and potential loss of funds. |
Blackhole (May 2025)↗
V12 finds both of the 2 high severity issues reported by human researchers. H-01 was missed by the prior audit.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Router address validation logic error prevents valid router assignment | GenesisPoolManager::setRouter |
V12 | Inverted Zero-Address Check in setRouter | GenesisPoolManager::setRouter |
Source | Finding Description |
---|---|
Ground Truth | setRouter(address _router) uses require(_router == address(0), "ZA") , which only allows setting the router to the zero address. This prevents the owner from assigning a valid non-zero router and effectively inverts the intended check (should be != address(0) ), breaking DEX interactions that rely on a proper router. |
V12 | The zero-address guard is inverted: it enforces _router to be address(0) rather than rejecting it. As a result, the router cannot be updated to a functional non-zero address, disabling features that depend on a valid router and potentially causing launches to fail. |
H-02:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Reward token in GaugeFactoryCL can be drained by anyone | GaugeFactoryCL::createGauge (via createEternalFarming) |
V12 | Missing Access Control on createGauge in GaugeFactory | GaugeFactory::createGauge |
Source | Finding Description |
---|---|
Ground Truth | createGauge is publicly callable. Each call seeds a new Algebra eternal farming incentive via createEternalFarming , which safeApprove s the eternal farming contract to pull a hardcoded 1e10 of _rewardToken from GaugeFactoryCL . If the factory is pre-funded, an attacker can repeatedly call createGauge to drain _rewardToken into attacker-chosen farms, then stake/claim from those farms. |
V12 | createGauge lacks onlyOwner/onlyRole checks, allowing anyone to deploy/register gauges. Unrestricted gauge creation enables arbitrary/spam gauges that can divert or misuse incentives and disrupt distributions. |
Lambo.win (Code4rena, December 2024)↗
V12 finds both of the 2 high severity issues reported by human researchers. Both were missed by the prior audit.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Loss of User Funds in VirtualToken’s cashIn Due to Incorrect Amount Minting | VirtualToken::cashIn |
V12 | Incorrect minting of virtual tokens in cashIn for ERC-20 deposits | VirtualToken::cashIn |
Source | Finding Description |
---|---|
Ground Truth | In cashIn(uint256 amount) , ERC-20 deposits transfer amount from the user, but the contract mints using msg.value instead of amount . For ERC-20 calls, msg.value is 0, so users receive 0 virtual tokens while their ERC-20 tokens are taken, causing loss of funds/credit. |
V12 | cashIn always calls _mint(msg.sender, msg.value) . In the ERC-20 branch (underlyingToken != NATIVE_TOKEN ), msg.value == 0 , so zero virtual tokens are minted despite _transferAssetFromUser(amount) moving funds, leading to locked assets and DoS for ERC-20 depositors. |
H-03:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Calculation for directionMask is incorrect | LamboRebalanceOnUniwap::_getQuoteAndDirection |
V12 | Misclassification of WETH Trades in _getQuoteAndDirection | LamboRebalanceOnUniswap::_getQuoteAndDirection |
Source | Finding Description |
---|---|
Ground Truth | _getQuoteAndDirection assumes WETH is always token1 and sets the directionMask accordingly. Because Uniswap orders token0/token1 lexicographically by address, WETH can be token0 , so the function can choose the wrong zero-for-one vs one-for-zero direction, yielding incorrect rebalancing behavior and potential losses. |
V12 | The function sets directionMask = (tokenIn == weth) ? _BUY_MASK : _SELL_MASK , ignoring cases where tokenOut == weth . As a result, WETH-buy trades are misclassified as SELL and routed to the wrong execution path, causing failed swaps or adverse outcomes. |
TraitForge (Code4rena, July 2024)↗
V12 finds 1 out of 6 high severity issues reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Wrong minting logic based on total token count across generations | TraitForgeNft::mintWithBudget |
V12 | mintWithBudget enforces mint cap on global counter instead of per-generation counter | TraitForgeNft::mintWithBudget |
Source | Finding Description |
---|---|
Ground Truth | mintWithBudget uses the global _tokenIds (total minted ever) in while (budgetLeft >= mintPrice && _tokenIds < maxTokensPerGen) . Because _tokenIds doesn’t reset per generation, once total mints reach maxTokensPerGen , further mints are blocked even if the current generation hasn’t reached its own cap, breaking per-generation supply logic. |
V12 | The loop guards against _tokenIds < maxTokensPerGen instead of the per-generation counter (e.g., generationMintCounts[currentGeneration] ). After the global count hits the cap, new generations can’t mint via mintWithBudget , causing a denial of minting and potential revenue loss despite unused capacity in the current generation. |
DODO Cross-Chain DEX (Sherlock, June 2025)↗
V12 finds 2 out of 5 high severity issues reported by human researchers.
Original and V12 reported findings
H-02:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Any attacker can steal accumulated ZRC20 tokens from GatewayTransferNative | GatewayTransferNative::withdrawToNativeChain |
V12 | Potential Missing msg.value Check in withdrawToNativeChain | GatewayTransferNative::withdrawToNativeChain |
Source | Finding Description |
---|---|
Ground Truth | withdrawToNativeChain skips validating msg.value when zrc20 == _ETH_ADDRESS_ . Attackers can craft calls that bypass transferFrom (native path) and claim arbitrary native-token amounts without depositing ETH, enabling theft of accumulated ZRC20 through malicious message crafting. |
V12 | When bridging native ETH (zrc20 == _ETH_ADDRESS_ ), the function does not enforce msg.value == amount . An attacker can pass a nonzero amount but send zero ETH, yet downstream logic uses amount , effectively minting/misallocating bridged value and causing loss of funds. |
H-05:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Unauthorized Claim of Non-EVM Chain Refunds in claimRefund | GatewayTransferNative::claimRefund, GatewayCrossChain::claimRefund |
V12 | ClaimRefund Authorization Bypass | GatewayTransferNative::claimRefund |
Source | Finding Description |
---|---|
Ground Truth | In claimRefund(bytes32 externalId) , if refundInfo.walletAddress.length != 20 (i.e., non-EVM addresses like BTC), receiver defaults to msg.sender . The auth check `require(bots[msg.sender] |
V12 | The function branches on walletAddress.length == 20 to derive a recipient; otherwise it leaves receiver = msg.sender , so the require(msg.sender == receiver) trivially holds. Since callback handlers store unvalidated walletAddress bytes, an attacker can ensure a non-20-byte value and then call claimRefund to siphon the refund. |
Lend (Sherlock, June 2025)↗
V12 finds 10 out of 28 high severity issues reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Drainage of the LEND token reserves through repeated claims of the same rewards | CoreRouter::claimLend |
V12 | Double-claim of LEND rewards due to uncleared lendAccrued | CoreRouter::claimLend |
Source | Finding Description |
---|---|
Ground Truth | claimLend() transfers accrued LEND via grantLendInternal(holder, accrued) but never resets lendStorage.lendAccrued[holder] . Since the return value from grantLendInternal (remaining amount) is ignored, the same accrued balance can be claimed again on subsequent calls, allowing perpetual draining of LEND reserves. |
V12 | claimLend accrues rewards into lendAccrued (via distributeBorrowerLend / distributeSupplierLend ) and then pays them with grantLendInternal , but lendAccrued is never cleared or reduced. On later calls, the old value remains and is paid again (plus new accruals), enabling repeated claims that steal excess LEND from the protocol. |
H-03:
Source | Finding Title | Location |
---|---|---|
Ground Truth | User can evade liquidation and bridge funds by exploiting cross-chain borrow/collateral invariant | LendStorage::borrowWithInterest |
V12 | Denial of Service via Cross-Chain Borrow/Collateral Invariant Violation | LendStorage::getMaxLiquidationRepayAmount |
Source | Finding Description |
---|---|
Ground Truth | A user supplies collateral on Chain A and borrows on Chain B, then supplies dust on B and borrows dust on A. This populates both crossChainBorrows and crossChainCollaterals for the same user–asset on both chains. The invariant `borrows.length == 0 |
V12 | addCrossChainBorrow and addCrossChainCollateral can both leave non-empty arrays for the same user–token pair. Because no mutual-clearing occurs, the invariant checked in borrowWithInterest fails, making downstream calls like getMaxLiquidationRepayAmount revert and DoSing liquidations for that position. |
H-07:
Source | Finding Title | Location |
---|---|---|
Ground Truth | CoreRouter Prone to Fund Depletion or Trapping Due to Miscalculated Redemption Payouts | CoreRouter::redeem |
V12 | Mismatched Exchange Rate Usage in CoreRouter.redeem Leads to Incorrect Underlying Transfer | CoreRouter::redeem |
Source | Finding Description |
---|---|
Ground Truth | redeem pre-computes expectedUnderlying using _lToken.exchangeRateStored() and then calls _lToken.redeem(_amount) . It blindly transfers expectedUnderlying to the user without checking how much underlying the LToken actually sent. If the real redeemed amount is less, CoreRouter overpays (reserve drain); if more, excess tokens get stuck in CoreRouter. |
V12 | redeem uses a potentially stale exchangeRateStored() before redeem() (which accrues interest and uses an updated internal rate). The precomputed expectedUnderlying can differ from the actual redeemed amount, causing users to be underpaid or CoreRouter to leak/strand funds due to the mismatch. |
H-08:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Cross-chain liquidations blocked by incorrect maxLiquidationAmount calculation | LendStorage::getMaxLiquidationRepayAmount (uses ::borrowWithInterest) |
V12 | Overly Strict Collateral Filter Causes Underestimation in borrowWithInterest | LendStorage::borrowWithInterest |
Source | Finding Description |
---|---|
Ground Truth | getMaxLiquidationRepayAmount computes the cap using borrowWithInterest , which only sums borrows originating from the current chain. It omits borrows whose destination is the current chain. This underestimates the user’s outstanding borrow and makes _validateAndPrepareLiquidation reject valid cross-chain liquidations because repayAmount appears to exceed maxLiquidationAmount . |
V12 | In the collateral path of borrowWithInterest , the filter requires both destEid == currentEid and srcEid == currentEid , so collateral entries originating from other chains (srcEid != currentEid ) are skipped. This undercounts debt backed by local (destination) collateral, leading to inaccurate borrow totals and causing liquidation logic that depends on this value to fail. |
H-09:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Outdated Exchange Rate Utilization | CoreRouter::supply |
V12 | Over-crediting lTokens due to stale exchangeRateStored | CoreRouter::supply |
Source | Finding Description |
---|---|
Ground Truth | supply() reads a stale exchange rate with _lToken.exchangeRateStored() before calling mint() . Since mint() accrues interest and updates the rate, using the pre-mint rate to compute mintTokens over-credits users with more lTokens than actually minted, creating value leakage. |
V12 | supply() computes mintTokens = (_amount * 1e18) / exchangeRateStored() taken before LErc20.mint() (which accrues interest and updates the rate). The mismatch credits excess lTokens to users, inflating balances and draining protocol value over time. |
H-18:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrect srcEid check in borrowWithInterest() | LendStorage::borrowWithInterest |
V12 | borrowWithInterest ignores cross-chain collateral due to incorrect EID filter | LendStorage::borrowWithInterest |
Source | Finding Description |
---|---|
Ground Truth | On the source chain (A), borrowWithInterest filters borrows with borrows[i].srcEid == currentEid . But borrow records on chain A store destEid = currentEid (with srcEid set to the remote chain B). This mismatch causes the function to ignore existing cross-chain borrows on chain A and return zero, breaking accounting and downstream logic. |
V12 | In the collateral branch on the destination chain, the function requires srcEid == currentEid && destEid == currentEid . Valid cross-chain collateral destined for the local chain has destEid == currentEid but srcEid != currentEid , so it’s erroneously skipped, undercounting borrowed amounts and risking incorrect liquidations. |
H-21:
Source | Finding Title | Location |
---|---|---|
Ground Truth | User can redeem collateral immediately after initiating the borrow, leading undercollateralization | CrossChainRouter::borrowCrossChain; CoreRouter::redeem |
V12 | Under-collateralized Cross-Chain Borrow due to Lack of Collateral Escrow | CrossChainRouter::borrowCrossChain |
Source | Finding Description |
---|---|
Ground Truth | After borrowCrossChain adds collateral tracking and sends the cross-chain message, it does not lock the user’s collateral on the source chain. Before the remote borrow is finalized, the user can call CoreRouter::redeem and withdraw collateral because the liquidity check only accounts for recorded borrows (the pending cross-chain borrow isn’t yet reflected). This lets users end up undercollateralized once the destination-chain borrow executes. |
V12 | borrowCrossChain relies on a view-only collateral calculation (getHypotheticalAccountLiquidityCollateral ) and sends the message without staking/locking assets. In the async window, users can remove or transfer collateral, so the destination chain mints debt against collateral that no longer exists, creating undercollateralized positions and solvency risk. |
H-22:
Source | Finding Title | Location |
---|---|---|
Ground Truth | The liquidation validation logic is wrong | CrossChainRouter::_checkLiquidationValid |
V12 | Liquidation Simulation Sign Error in _checkLiquidationValid | CrossChainRouter::_checkLiquidationValid |
Source | Finding Description |
---|---|
Ground Truth | On cross-chain liquidation, Chain B computes seizeTokens (collateral to take) and sends it as payload.amount to Chain A. Chain A’s _checkLiquidationValid then calls getHypotheticalAccountLiquidityCollateral(sender, destlToken, 0, payload.amount) , treating payload.amount as an additional borrow instead of seized collateral. This misuses the parameter semantics and can falsely mark healthy positions as liquidatable, since it asks “what if the user borrowed this much more?” rather than modeling collateral seizure. |
V12 | _checkLiquidationValid passes the repay/seize amount as borrowAmount to getHypotheticalAccountLiquidityCollateral , which increases hypothetical debt rather than reducing it. This sign inversion inflates sumBorrowPlusEffects , potentially flagging solvent accounts as under-collateralized and enabling unwarranted liquidations. |
H-25:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrect destEid Value in _handleLiquidationSuccess Prevents Liquidation Completion | CrossChainRouter::_handleLiquidationSuccess |
V12 | Incorrect EID Parameters Causing Collateral Lookup Mismatch | CrossChainRouter::_handleLiquidationSuccess |
Source | Finding Description |
---|---|
Ground Truth | _handleLiquidationSuccess calls lendStorage.findCrossChainCollateral(...) with destEid hardcoded to 0 , so the lookup never matches records stored with real source/destination EIDs. The collateral record can’t be found, preventing liquidation finalization and leaving debt/collateral in limbo. |
V12 | The function supplies incorrect chain identifiers (uses local EID and destEid=0 ) when searching for the cross-chain collateral entry. Since records are saved with actual srcEid /destEid , the lookup fails, causing liquidation completion to revert/abort and resulting in a denial of service and potential financial loss. |
H-27:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrect Collateral Check Logic in CoreRouter.sol#borrow() | CoreRouter::borrow |
V12 | Unrestricted Initial Borrow Without Collateral | CoreRouter::borrow |
Source | Finding Description |
---|---|
Ground Truth | borrow() correctly gets (borrowed, collateral) from getHypotheticalAccountLiquidityCollateral(msg.sender, LToken(_lToken), 0, _amount) but then replaces the solvency check with a recalculated borrowAmount using interest indices. If currentBorrow.borrowIndex == 0 (first borrow in that market), it sets borrowAmount = 0 , turning the check into require(collateral >= 0) , which always passes—allowing undercollateralized borrows. |
V12 | For new borrowers, currentBorrow.borrowIndex is zero, so borrowAmount is forced to 0 and require(collateral >= borrowAmount) trivially succeeds. This bypass enables an initial borrow without sufficient collateral, risking protocol fund drain and bad debt. |
Burve (Sherlock, April 2025)↗
V12 finds 2 out of 9 high severity issues reported by human researchers.
Original and V12 reported findings
H-03:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrect Netting Logic Leads to Excessive Withdrawal Amounts | VaultE4626Impl::commit |
V12 | Incorrect subtraction order leads to gross withdrawals | VaultE4626Impl::commit |
Source | Finding Description |
---|---|
Ground Truth | During trimBalance , both a deposit and withdrawal can be queued, then commit attempts to net them. In the branch where assetsToWithdraw > assetsToDeposit , the code zeroes assetsToDeposit before subtracting it from assetsToWithdraw , leaving the withdrawal unnetted. The vault then withdraws the gross amount and decrements totalVaultShares excessively, causing users to receive too much and breaking accounting. |
V12 | When both deposit and withdrawal are pending, the commit netting logic sets assetsToDeposit = 0 and then subtracts it from assetsToWithdraw , so no netting occurs. The vault processes the full withdrawal amount, violating the netting invariant and enabling excess withdrawals that can drain funds and corrupt totalVaultShares . |
H-06:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Fee Bypass in ValueFacet.removeValueSingle | ValueFacet::removeValueSingle |
V12 | Incorrect Fee Calculation Leading to Zero Tax on Withdrawal | ValueFacet::removeValueSingle |
Source | Finding Description |
---|---|
Ground Truth | In removeValueSingle , the fee realTax is computed using removedBalance before it’s assigned (defaults to 0 ). The correct numerator should be realRemoved . Because removedBalance is zero at that point, realTax becomes 0 , letting users withdraw without paying the intended fee and causing protocol revenue loss. |
V12 | The function calculates realTax = mulDiv(removedBalance, nominalTax, removedNominal) while removedBalance is still zero instead of using realRemoved . This makes the fee always zero, so withdrawals are never charged, enabling fee bypass and economic leakage. |
Crestial (Sherlock, March 2025)↗
V12 finds the only high severity issue reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Anyone approving BlueprintV5 can be drained via public payWithERC20 | Payment::payWithERC20 |
V12 | Unrestricted payWithERC20 allows arbitrary safeTransferFrom | Payment::payWithERC20 |
Source | Finding Description |
---|---|
Ground Truth | payWithERC20 is public and performs token.safeTransferFrom(fromAddress, toAddress, amount) with no authorization tying msg.sender to fromAddress . Any attacker can call it and move tokens from any address that has approved the BlueprintV5/Payment contract, draining approved users. |
V12 | The function imposes no restriction on msg.sender , allowing anyone to specify arbitrary fromAddress and toAddress . If fromAddress has granted allowance to the contract, the call will transfer funds via safeTransferFrom , enabling unauthorized balance drains due to missing access control. |
Mellow (Sherlock, July 2025)↗
V12 finds 2 out of 6 high severity issues reported by human researchers.
Original and V12 reported findings
H-01:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Consensus.checkSignatures doesn’t check duplication of signers | Consensus::checkSignatures |
V12 | Duplicate signature exploitation bypasses multisig threshold | Consensus::checkSignatures |
Source | Finding Description |
---|---|
Ground Truth | checkSignatures only checks signatures.length >= threshold and verifies each signature against the registered signer set, but never enforces that signer addresses are unique. An attacker can submit the same signer’s valid signature multiple times to meet the threshold and bypass the intended multisig requirement for deposits/redeems. |
V12 | The function validates each signature independently without tracking seen signers, so repeated signatures from a single authorized key satisfy the count-based threshold. This omission lets a single signer execute actions meant to require multiple distinct signers, breaking the multisig security model. |
H-05:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrect performance fee calculation in FeeManager | FeeManager::calculateFee |
V12 | Performance Fee Calculation Overflow Allows Fee Shares Exceed Total Shares | FeeManager::calculateFee |
Source | Finding Description |
---|---|
Ground Truth | calculateFee computes fee shares as shares = mulDiv((minPriceD18_ - priceD18), performanceFeeD6 * totalShares, 1e24) , incorrectly treating the price difference as directly convertible to shares. This misapplies units (price vs. shares), producing nonsensical results (e.g., fee shares equaling total shares) depending on price scales. |
V12 | The formula multiplies (minPriceD18 - priceD18) (scaled 1e18+) by performanceFeeD6 * totalShares and divides by 1e24 . With large price deltas, the fixed divisor is insufficient, so computed fee shares can exceed totalShares , enabling over-collection and breaking share accounting. |
Notional Exponent (Sherlock, July 2025)↗
V12 finds 2 out of 11 high severity issues reported by human researchers.
Original and V12 reported findings
H-06:
Source | Finding Title | Location |
---|---|---|
Ground Truth | DoS in _initiateWithdrawImpl due to overflow on ++s_batchNonce | DineroWithdrawRequestManager::_initiateWithdrawImpl |
V12 | Nonce Overflow Causes RequestId Collision | DineroWithdrawRequestManager::_initiateWithdrawImpl |
Source | Finding Description |
---|---|
Ground Truth | _initiateWithdrawImpl builds a requestId using a nonce = ++s_batchNonce , where s_batchNonce is a uint16 . Once it reaches 65535 , the pre-increment overflows and reverts, causing initiateWithdraw to fail for everyone—locking withdrawals (DoS) for WETH routed through this manager. |
V12 | The function packs a nonce into the top 16 bits of requestId but doesn’t bound s_batchNonce to 16 bits. When s_batchNonce exceeds 2^16-1 , high bits are truncated during packing, causing different nonces to map to the same requestId , breaking uniqueness and request tracking. |
H-11:
Source | Finding Title | Location |
---|---|---|
Ground Truth | Missing Slippage Protection in Expired PT Redemption Causes User Fund Loss | PendlePTLib::redeemExpiredPT |
V12 | Hard-coded zero slippage bound in PendlePTLib.redeemExpiredPT | PendlePTLib::redeemExpiredPT |
Source | Finding Description |
---|---|
Ground Truth | When PTs are expired, _redeemPT calls PendlePTLib.redeemExpiredPT , which invokes sy.redeem(..., minTokenOut = 0, ...) . With no slippage guard, SY contracts that perform external swaps can return far fewer tokens due to price impact, MEV, or unfavorable rates—affecting both instant redemption and withdraw initiation flows and causing direct user losses. |
V12 | redeemExpiredPT hard-codes minAmountOut to 0 in IStandardizedYield.redeem , disabling slippage protection. This lets price manipulation or adverse market movement force redemptions at highly unfavorable rates, delivering arbitrarily low output and exposing users to value loss. |
Superfluid (Sherlock, June 2025)↗
V12 finds 1 out of 2 high severity issues reported by human researchers.
Original and V12 reported findings
Source | Finding Title | Location |
---|---|---|
Ground Truth | Staked tokens inside FluidLocker can be withdrawn without calling Unstake | FluidLocker::provideLiquidity |
V12 | Unchecked FLUID withdrawal leads to underflow in getAvailableBalance | FluidLocker::getAvailableBalance (impacted by ::provideLiquidity) |
Source | Finding Description |
---|---|
Ground Truth | provideLiquidity doesn’t validate against getAvailableBalance() , letting staked tokens be moved out to a DEX position without calling unstake . After the 6-month tax-free period, the owner withdraws that liquidity, yet the locker still counts the tokens as staked—so rewards keep accruing on tokens no longer held, corrupting reward distribution integrity. |
V12 | getAvailableBalance() returns FLUID.balanceOf(this) - _stakedBalance , but provideLiquidity can withdraw FLUID without reducing _stakedBalance or checking supAmount ≤ getAvailableBalance() . If FLUID balance drops below _stakedBalance , the subtraction underflows (Solidity ≥0.8), causing reverts and potential DoS for functions relying on this view. |
Zero Lend (ImmuneFi, February 2024)↗
V12 finds 2 out of 3 critical severity issues, and 5 out of 9 high severity issues reported by human researchers.
Original and V12 reported findings
29031 - [SC - Critical]
Source | Finding Title | Location |
---|---|---|
Ground Truth | VestedZeroNFT tokens can be directly stolen via unchecked split() | VestedZeroNFT::split |
V12 | Unauthorized NFT Split Without Ownership Verification | VestedZeroNFT::split |
Source | Finding Description |
---|---|
Ground Truth | split() lets any caller split an existing tokenId and mints a new token to msg.sender with an arbitrary fraction, without verifying ownership of tokenId . An attacker can pass someone else’s tokenId (e.g., fraction = 1 ) to siphon nearly all value from the victim’s vesting NFT. |
V12 | The function calls _requireOwned(tokenId) only to check existence and ignores its returned owner, never comparing it to msg.sender . This missing ownership check allows anyone to split another user’s NFT and receive the newly minted fractional token, enabling direct theft of vested value. |
29062 - [SC - Critical]
Source | Finding Title | Location |
---|---|---|
Ground Truth | Attacker can steal locked balance of staked NFT via unauthorized unstake | OmnichainStaking::unstakeToken |
V12 | Missing Ownership Check Allows Unauthorized Unstaking of NFTs | OmnichainStaking::unstakeToken |
Source | Finding Description |
---|---|
Ground Truth | unstakeToken lets anyone burn voting power equal to an NFT’s tokenPower and then withdraw that NFT from the staking contract, without verifying that the caller is the original staker/owner. Because different lock configs can yield equal power, an attacker can lock fewer tokens for longer to match power, burn that amount, and unstake a victim’s higher‐balance, shorter‐duration NFT—stealing the larger locked balance and forcing the victim into the attacker’s inferior position. |
V12 | The function burns ERC20 staking tokens equal to tokenPower[tokenId] and transfers the NFT to msg.sender but never checks that msg.sender staked or owns tokenId . No tokenId → staker mapping exists, so any address holding the requisite staking tokens can burn them and withdraw arbitrary NFTs they never deposited, enabling theft of staked NFTs. |
28910 - [SC - High]
Source | Finding Title | Location |
---|---|---|
Ground Truth | Bool check wrong in registerGauge prevents pool registration | PoolVoter::registerGauge |
V12 | Reversed isPool Conditional Prevents New Pool Registration | PoolVoter::registerGauge |
Source | Finding Description |
---|---|
Ground Truth | The registerGauge function uses an incorrect boolean check around the code that pushes a new _asset into _pools and sets isPool[_asset] = true . Because the condition is wrong, the initialization block never runs for fresh pools, so they are never registered. |
V12 | registerGauge guards _pools.push(_asset) and isPool[_asset] = true with if (isPool[_asset]) instead of if (!isPool[_asset]) . Since new assets default to false , the block never executes, leaving new pools unrecorded and breaking downstream logic that relies on the pool list. |
29101 - [SC - High]
Source | Finding Title | Location |
---|---|---|
Ground Truth | Staking in BaseLocker is broken | BaseLocker::_createLock |
V12 | Unqualified safeTransferFrom call preserves msg.sender leading to revert | BaseLocker::_createLock |
Source | Finding Description |
---|---|
Ground Truth | When _stakeNFT is true, the locker mints the NFT to itself and then calls safeTransferFrom(address(this), address(staking), tokenId, data) . Because this is an internal call, msg.sender remains the user, so ERC721’s authorization check fails (the locker contract, not the user, owns the NFT). As a result, staking during lock creation reverts for normal users, breaking staking flows and potentially freezing funds in integrators. |
V12 | _createLock invokes the public safeTransferFrom without qualifying it (internal jump), so msg.sender stays as the EOA instead of the contract. ERC721’s _checkAuthorized then reverts since the EOA isn’t approved for the contract-owned token. This makes “create-and-stake” lock operations fail consistently. |
29012 - [SC - High]
Source | Finding Title | Location |
---|---|---|
Ground Truth | Votes manipulation in PoolVoter via duplicate pool entries | PoolVoter::vote / PoolVoter::_vote / PoolVoter::reset |
V12 | Missing uniqueness validation allows duplicate pool votes to leak weight | PoolVoter::vote |
Source | Finding Description |
---|---|
Ground Truth | vote() lets a voter include the same pool multiple times. Each duplicate increases totalWeight and weights[_pool] , but votes[_who][_pool] is overwritten with only the last weight. When reset() is called, it subtracts only that last stored weight, leaving prior increments stuck—allowing an attacker to loop vote/reset and artificially inflate a pool’s weight and skew rewards/governance. |
V12 | The array of pools isn’t deduplicated. _vote() adds weight for every occurrence while votes[user][pool] keeps only the last value; reset() then subtracts once, so extra weight remains. Attackers can repeatedly vote with duplicate pool entries to permanently ratchet up weights[pool] and totalWeight . |
29189 - [SC - High]
Source | Finding Title | Location |
---|---|---|
Ground Truth | ZeroLendToken doesn’t allow whitelisted users to transfer | ZeroLendToken::_update |
V12 | Logic Inversion in Pause Mechanism Freezes Whitelisted Senders | ZeroLend::_update |
Source | Finding Description |
---|---|
Ground Truth | _update(from, to, value) uses require(!paused && !whitelisted[from], "paused"); . This condition reverts whenever whitelisted[from] is true, preventing whitelisted users from transferring at all (even when not paused). Intended behavior—allowing whitelisted senders to transfer while paused—should use logic that doesn’t block them, e.g., if (paused) require(whitelisted[from], "paused"); . |
V12 | The pause gate is require(!paused && !whitelisted[from]) , which inverts the whitelist exemption. Because the condition requires both “not paused” and “not whitelisted,” any whitelisted sender always fails the check. The intended logic is `require(!paused |
29095 - [SC - High]
Source | Finding Title | Location |
---|---|---|
Ground Truth | The lockers supply can be arbitrarily inflated via merge() | BaseLocker::merge (via _depositFor) |
V12 | Supply Inflation via Merge in BaseLocker | BaseLocker::merge |
Source | Finding Description |
---|---|
Ground Truth | BaseLocker tracks global supply and increments it in _depositFor . When merging two lockers (merge() → DepositType.MERGE_TYPE ), the code still does supply += _value even though it’s just combining existing locks (no new tokens are deposited). Repeated merges let an attacker drift supply upward indefinitely, desync accounting (e.g., rewards based on supply ), and potentially cause overflows or economic manipulation. |
V12 | merge() calls _depositFor with MERGE_TYPE , which unconditionally increases supply while the MERGE path skips any token transfer. By repeatedly merging balances into a target NFT, an attacker can inflate the reported total supply without adding tokens, corrupting invariants and reward/governance calculations that rely on accurate supply . |
V12 finds bugs during Zellic audits
V12 identifies some, though not most, bugs that Zellic finds during our audits. These include critical and high severity issues that are independently discovered by both V12 and Zellic’s security researchers.
To evaluate V12’s bug-finding ability, we run V12 independently of human researchers during Zellic’s EVM audits and compare the two sets of results at the end of each audit. We treat Zellic researchers’ performance as a ground-truth oracle; i.e., we assume they do not miss bugs. (For our team, this is a pretty good assumption!) For each bug found in the audit, we then assess whether V12 also discovered it.
V12 identified 24 validated vulnerabilities: 2 critical, 6 high, 8 medium, and the rest low or informational. All of these bugs were found independently by V12, with no special prompting or human assistance.
Below, we highlight three representative vulnerabilities found in recent audits where V12’s analysis matched findings from human auditors. Not all bugs can be discussed due to client confidentiality restrictions; the examples below are all from publicly disclosed reports.
Bug 1: (Critical) Broken ownable functionality in token contract
The vulnerable contract is a token which implements Ownable and AccessControl. Ownable’s internal ownership check is overridden to use AccessControl’s admin role as the source of truth, creating a denial-of-service risk.
/**
* @dev Implementation of the _checkOwner function required by Ownable.
* Uses the existing access control mechanism to check if the caller has the DEFAULT_ADMIN_ROLE.
*/
function _checkOwner() internal view virtual override {
_checkRole(DEFAULT_ADMIN_ROLE);
}
V12 independently discovers a vulnerability that breaks admin-only functionality if the contract’s ownership is transferred. This issue was concurrently found by human auditors. After the audit, the client ultimately implemented code changes to address this vulnerability, which suggests the issue is valid regardless of whether it was found by humans or V12. Below, we show the two descriptions (one human-written, the other written by V12) for comparison.
Original and V12 reported findings
Source | Finding Title | Location |
---|---|---|
Ground Truth | Overridden ownable functionality can lead to admin lockout | NameToken |
V12 | Desynchronization Between Ownable and AccessControl Leading to Admin Lockout | NameToken |
Source | Finding Description |
---|---|
Ground Truth | The NameToken contract overrides the Ownable contract’s internal ownership check (_requireCallerIsContractOwner() ) to use the AccessControl contract’s DEFAULT_ADMIN_ROLE check instead. However, when ownership is transferred via Ownable::transferOwnership , the new owner does not automatically receive the DEFAULT_ADMIN_ROLE . |
V12 | The contract overrides Ownable’s internal ownership check (_requireCallerIsContractOwner ) to use AccessControl’s DEFAULT_ADMIN_ROLE check. However, when ownership is transferred via Ownable.transferOwnership , the new owner does not automatically receive the DEFAULT_ADMIN_ROLE . As a result, the new owner cannot satisfy the role check and owner-only functions become inaccessible. Similarly, if the original admin renounces the DEFAULT_ADMIN_ROLE or ownership, no account holds DEFAULT_ADMIN_ROLE and all protected functionality is permanently locked. |
Source | Finding Impact |
---|---|
Ground Truth | This will result in the new owner not being able to satisfy the role check, making owner-only functions inaccessible. In the worst-case scenario, if the original admin renounces the DEFAULT_ADMIN_ROLE immediately on transferring ownership, no account will hold the DEFAULT_ADMIN_ROLE , and thus all protected functionality will be permanently locked forever. We think the likelihood of this issue occurring is high, and thus the impact is Critical. |
V12 | Critical: owner-only functions can become permanently inaccessible after ownership transfer or role renouncement, effectively locking out all admin functionality and potentially stalling the contract. |
V12 correctly identifies the bug’s location and impact and aptly describes the vulnerability.
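For illustration, here is a minimal sketch of one way to keep the two mechanisms in sync, assuming OpenZeppelin Contracts v5.x; this is our own sketch, not the client’s actual remediation. The idea is to move DEFAULT_ADMIN_ROLE alongside ownership whenever it is transferred, so the overridden owner check can never be orphaned.
pragma solidity ^0.8.20;

import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";
import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";

// Illustrative sketch only (assumes OpenZeppelin Contracts v5.x).
contract SyncedOwnableToken is Ownable, AccessControl {
    constructor(address initialOwner) Ownable(initialOwner) {
        // DEFAULT_ADMIN_ROLE is granted via the _transferOwnership override below.
    }

    // Keep using the admin role as the single source of truth for owner checks.
    function _checkOwner() internal view override {
        _checkRole(DEFAULT_ADMIN_ROLE);
    }

    // Move DEFAULT_ADMIN_ROLE along with ownership so the two never diverge.
    function _transferOwnership(address newOwner) internal override {
        address previousOwner = owner();
        super._transferOwnership(newOwner);
        if (previousOwner != address(0)) {
            _revokeRole(DEFAULT_ADMIN_ROLE, previousOwner);
        }
        if (newOwner != address(0)) {
            _grantRole(DEFAULT_ADMIN_ROLE, newOwner);
        }
    }
}
An equally valid approach is to drop the override entirely and gate admin functions on a single mechanism; the important property is that one source of truth governs both paths.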
Bug 2: (Critical) Reentrancy vulnerability in order filling mechanism
The vulnerable contract is in a DeFi protocol that allows users to fill orders for synthetic positions. The vulnerable code has reentrancy risks due to external calls before state updates.
function _fillOrder(IPools.Order memory order, bytes memory signature, uint256 fraction, bytes32 orderHash) internal {
//...
_validateOrderFillConditions(order, orderHash, fraction);
//...
uint256 sharesToFill = (order.shares * fraction / FULL_FILL);
_mint(longRecipient, longTokenId, sharesToFill, ""); // External call
_mint(shortRecipient, shortTokenId, sharesToFill, ""); // External call
//...
orderFillPercentages[orderHash] += fraction; // State update after external calls
}
V12 independently discovers a reentrancy vulnerability that could allow attackers to over-fill orders beyond 100%. This issue was concurrently found by human auditors, and the client implemented code changes to address it. We show the two descriptions (one human-written, the other written by V12) for comparison below.
Original and V12 reported findings
Source | Finding Title | Location |
---|---|---|
Ground Truth | Reentrancy via ERC-1155 | Pools |
V12 | Reentrancy Risk Due to External Calls Before State Update | _fillOrder |
Source | Finding Description |
---|---|
Ground Truth | The functions fillOrderFOK and fillOrdersFOK could be used by takers to fill orders signed by makers such that the percentage of an order filled is always less than or equal to 100%. These functions internally call _fillOrder , which validates the percentage before calling _mint (to mint an ERC-1155 that represents the position) and finally updates the order fill percentage. The _mint function internally calls IERC1155Receiver(to).onERC1155Received on the receiver’s address. This could be abused to reenter the fillOrderFOK and fillOrdersFOK functions. |
V12 | The function performs multiple external calls (such as minting ERC1155 tokens via _mint , which triggers acceptance checks on recipient contracts, and transferring collateral and fee tokens using SafeERC20.safeTransferFrom ) before updating critical state (orderFillPercentages ). This ordering may allow a malicious contract acting as a recipient to reenter the _fillOrder function (or another related function) during one of those external calls, potentially enabling double fills or manipulation of collateral transfers. |
Source | Finding Impact |
---|---|
Ground Truth | As the value of orderFillPercentages is updated after the mint function and verified before the mint, reentrancy could be used to create an order with fill percentage more than 100%, and therefore the maker would pay more than what they expected for an order if they have approved enough funds to the contract. |
V12 | An attacker could exploit this vulnerability to repeatedly fill the same order, draining collateral or fees multiple times, thereby breaking the intended invariants of the order’s fill percentage and asset accounting. |
V12 correctly identifies the reentrancy vulnerability and provides comprehensive reasoning about how the attack would work. Both analyses recognize that the core issue is the violation of the checks-effects-interactions pattern, where state updates occur after external calls that could trigger reentrancy.
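As a sketch of the standard remedy (checks-effects-interactions), the fill percentage can be committed before any external call. The names below mirror the snippet above and this is not necessarily the client’s actual fix; a reentrancy guard on the external entry points would achieve the same goal.
// Illustrative reordering only; mirrors the snippet above.
function _fillOrder(IPools.Order memory order, bytes memory signature, uint256 fraction, bytes32 orderHash) internal {
    //...
    _validateOrderFillConditions(order, orderHash, fraction);

    // Effects first: commit the fill before any external call, so a
    // reentrant call fails the validation above.
    orderFillPercentages[orderHash] += fraction;

    uint256 sharesToFill = (order.shares * fraction / FULL_FILL);

    // Interactions last: the ERC-1155 receiver callback can no longer over-fill.
    _mint(longRecipient, longTokenId, sharesToFill, ""); // External call
    _mint(shortRecipient, shortTokenId, sharesToFill, ""); // External call
    //...
}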
Bug 3: (High) Partial state update causing token loss in order settlement
The vulnerable code is part of a cross-chain intents network. The vulnerable code is reproduced below:
function _settleOrder(bytes32 orderId, address filler) internal {
// [...]
bool successLock = balances[order.offerer][order.inputToken].decreaseLockedNoRevert(
uint128(order.inputAmount)
);
bool successUnlock = balances[filler][order.inputToken].increaseUnlockedNoRevert(
uint128(order.inputAmount)
);
if (!successLock || !successUnlock) {
return;
}
// [...]
}
V12 independently discovers a vulnerability where failed balance updates can lead to inconsistent state and/or token loss. This issue was concurrently found by human auditors, and the client implemented code changes to address it. We show the two descriptions (one human-written, the other written by V12) for comparison below.
Original and V12 reported findings
Source | Finding Title | Location |
---|---|---|
Ground Truth | Incorrect state update on settlement failure | _settleOrder |
V12 | Partial state update leads to permanent token loss when filler credit fails | _settleOrder |
Source | Finding Description |
---|---|
Ground Truth | The _settleOrder function settles a single order and updates balances. When balance updates fail, it skips settling the current order by returning early. However, an incorrect balance state could occur in this scenario — if successLock is false but successUnlock is true, the function returns after increasing the filler’s unlocked balance, creating an inconsistent state where the filler’s unlocked amount increases, the offerer’s locked amount remains unchanged, and the order status remains unchanged. |
V12 | The _settleOrder function calls two non-reverting BalanceUtils helpers: decreaseLockedNoRevert on the offerer and increaseUnlockedNoRevert on the filler. If the locked‐token decrease succeeds but the unlocked‐token increase fails, the function returns early without reverting the state. As a result, the offerer’s locked tokens are removed and never credited back or forwarded to the filler, causing permanent loss of tokens. |
Source | Finding Impact |
---|---|
Ground Truth | Incorrect state updates on settlement failure in the _settleOrder function could result in inconsistent balance accounting. |
V12 | An attacker or benign filler could trigger a failure in the token credit step (e.g. overflow check) and cause the offerer to lose locked tokens permanently. This results in direct financial loss and undermines trust in the contract. |
V12 correctly identifies the root cause and provides a more detailed analysis of the attack vector, though it describes the opposite failure case from the human auditors (decrease succeeds but increase fails vs. decrease fails but increase succeeds). Both identify the core issue of non-atomic state updates leading to inconsistent balances.
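For illustration, one way to make the settlement atomic is to debit first and credit only if the debit succeeded, rolling the debit back when the credit fails. The sketch below assumes an increaseLockedNoRevert helper symmetric to decreaseLockedNoRevert, which may not exist in the real codebase.
// Illustrative sketch only; helper names beyond those in the snippet are assumptions.
function _settleOrder(bytes32 orderId, address filler) internal {
    // [...]
    bool successLock = balances[order.offerer][order.inputToken].decreaseLockedNoRevert(
        uint128(order.inputAmount)
    );
    if (!successLock) {
        return; // nothing has changed yet, so skipping this order is safe
    }

    bool successUnlock = balances[filler][order.inputToken].increaseUnlockedNoRevert(
        uint128(order.inputAmount)
    );
    if (!successUnlock) {
        // Roll back the debit so the offerer's locked balance is restored.
        balances[order.offerer][order.inputToken].increaseLockedNoRevert(
            uint128(order.inputAmount)
        );
        return;
    }
    // [...]
}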
V12 finds bugs in unaudited code that led to hacks
V12 is able to find vulnerabilities that were responsible for major hacks in the past. Although some hacks depend on extremely complex vulnerabilities, a significant portion stem from relatively straightforward coding mistakes. Below, we present a representative sample of real-world exploit vectors that V12 is able to detect.
Uranium: $57,200,000
The bug is a coding mistake where a constant was not updated, resulting in an incorrect constant product check. This resulted in $57.2 million USD being stolen 12. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The UraniumPair contract was forked from Uniswap V2, and the swap function was modified to change the fee structure. However, the constant product check was not updated to reflect the new fee structure and was left as 1000**2 instead of 10000**2. This incorrect exponent makes the invariant check 100 times too small, allowing 98% of the liquidity to be drained in a single transaction. The vulnerable code is reproduced below:
function swap(uint amount0Out, uint amount1Out, address to, bytes calldata data) external lock {
require(amount0Out > 0 || amount1Out > 0, 'UraniumSwap: INSUFFICIENT_OUTPUT_AMOUNT');
// [...]
{ // scope for reserve{0,1}Adjusted, avoids stack too deep errors
uint balance0Adjusted = balance0.mul(10000).sub(amount0In.mul(16));
uint balance1Adjusted = balance1.mul(10000).sub(amount1In.mul(16));
require(balance0Adjusted.mul(balance1Adjusted) >= uint(_reserve0).mul(_reserve1).mul(1000**2), 'UraniumSwap: K');
    }
    // [...]
}
V12 reported this vulnerability with the following description:
The swap function applies fee adjustments using a factor of 10000 (subtracting amountIn·16 after multiplying balances by 10000) but then verifies the constant-product invariant against reserve0·reserve1·(1000²) instead of (10000²). This incorrect exponent makes the invariant check 100× too lax, allowing an attacker to flash-swap nearly all liquidity and return only ~1% of the required amount to satisfy the mis-scaled K check.
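For reference, the invariant check that matches the 10000-based fee adjustment squares the same scaling factor; this mirrors the standard Uniswap V2 pattern and is shown for illustration:
// Corrected K check: scale reserves by the same factor (10000) used for the
// fee-adjusted balances, i.e. 10000**2 rather than 1000**2.
require(balance0Adjusted.mul(balance1Adjusted) >= uint(_reserve0).mul(_reserve1).mul(10000**2), 'UraniumSwap: K');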
Eleven: $4,500,000
The bug is a logic error in the emergencyBurn function, where the contract allows users to withdraw their tokens without burning their shares. This resulted in $4.5 million USD being stolen 13. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
When the emergencyBurn function is called, it transfers the full token balance corresponding to the caller’s shares back to the user but does not burn those shares or update the user’s debt. This allows users to call emergencyBurn and then proceed with a regular withdraw to withdraw the same tokens again, effectively draining the vault. The vulnerable code is reproduced below:
/**
* @dev Function to exit the system. The vault will withdraw the required tokens
* from the strategy and pay up the token holder. A proportional number of IOU
* tokens are burned in the process.
*/
function withdraw(uint256 _shares) public {
claim(msg.sender);//TODO double check inhereted correctly
_burn(msg.sender, _shares);
uint avai = available();
if(avai<_shares) IMasterMind(mastermind).withdraw(nrvPid, (_shares.sub(avai)));
token.safeTransfer(msg.sender, _shares);
emit Withdrawn(msg.sender, _shares, block.number);
updateDebt(msg.sender);
}
function emergencyBurn() public {
uint balan = balanceOf(msg.sender);
uint avai = available();
if(avai<balan) IMasterMind(mastermind).withdraw(nrvPid, (balan.sub(avai)));
token.safeTransfer(msg.sender, balan);
emit Withdrawn(msg.sender, balan, block.number);
}
V12 reported this vulnerability with the following description:
The emergencyBurn function transfers the full token balance corresponding to the caller’s shares back to the user but does not burn those shares. Users retain their vault shares after withdrawal, allowing them to repeatedly call emergencyBurn and drain tokens indefinitely.
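For illustration, the missing steps mirror the regular withdraw function shown above: burn the caller’s shares and update their debt as part of the emergency exit. This is a sketch, not the project’s actual fix.
// Illustrative sketch only; mirrors withdraw() above.
function emergencyBurn() public {
    uint balan = balanceOf(msg.sender);
    _burn(msg.sender, balan); // burn the exiting shares so they cannot be withdrawn again
    uint avai = available();
    if(avai<balan) IMasterMind(mastermind).withdraw(nrvPid, (balan.sub(avai)));
    token.safeTransfer(msg.sender, balan);
    emit Withdrawn(msg.sender, balan, block.number);
    updateDebt(msg.sender); // keep debt accounting consistent with the exit
}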
SuperRare: $710,000
The bug is an incorrect access control check in the updateMerkleRoot function, allowing anyone to call it and set a new Merkle root. This resulted in $710,000 USD being stolen. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The updateMerkleRoot function checks that the caller is not the owner or a specific address, instead of checking that the caller is the owner or that specific address. This logic error allows anyone to call updateMerkleRoot and set a new Merkle root, allowing fraudulent proofs to be accepted. The vulnerable code is reproduced below:
function updateMerkleRoot(bytes32 newRoot) external override {
require((msg.sender != owner() || msg.sender != address(0xc2F394a45e994bc81EfF678bDE9172e10f7c8ddc)), "Not authorized to update merkle root");
if (newRoot == bytes32(0)) revert EmptyMerkleRoot();
currentClaimRoot = newRoot;
currentRound++;
emit NewClaimRootAdded(newRoot, currentRound, block.timestamp);
}
V12 reported this vulnerability with the following description:
The updateMerkleRoot function uses an incorrect access-control check, so anyone can call it to bump currentRound and set a new currentClaimRoot. An attacker can repeatedly create a malicious Merkle root containing only their own address with a large token allocation, call updateMerkleRoot to advance the round, then call claim with a valid proof and drain tokens. Since lastClaimedRound is only compared against currentRound, each new round lets the attacker claim again, enabling them to drain the contract’s entire token balance.
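For comparison, the intended check requires the caller to be the owner or the allow-listed address, which flips both the comparison operators and the boolean connective (shown for illustration):
// Intended access control (illustrative): only the owner or the hard-coded
// operator address may update the Merkle root.
require(
    msg.sender == owner() || msg.sender == address(0xc2F394a45e994bc81EfF678bDE9172e10f7c8ddc),
    "Not authorized to update merkle root"
);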
Dexodus Finance: $300,000
The bug is a validation error in the performUpkeep function, where the contract does not check the freshness of price data before using it. This resulted in $300,000 USD being stolen 14. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The performUpkeep function decodes and verifies signed Chainlink reports but never checks the report’s expiresAt (or any timestamp) before using the price data. This allows attackers to replay old valid signatures to manipulate the price. The vulnerable code is reproduced below:
function performUpkeep(bytes calldata performData) external {
// Decode incoming performData
(bytes[] memory signedReports, bytes memory extraData) = abi.decode(
performData,
(bytes[], bytes)
);
// [...]
// Verify the report
bytes memory verifiedReportData = verifier.verify(report, abi.encode(feeTokenAddress));
uint256 _currentPrice;
if (dxMarkets[key].specificTimeFrame) {
// Decode verified report data into PremiumReportRWA struct
PremiumReportRWA memory verifiedReport = abi.decode(
verifiedReportData,
(PremiumReportRWA)
);
require(verifiedReport.status == MarketStatus.Open);
_currentPrice = _scalePriceToTokenDecimals(
USDC,
verifiedReport.price
);
// [...]
    }
    // [...]
}
V12 reported this vulnerability with the following description:
The performUpkeep function decodes and verifies signed Chainlink reports, but never checks the report’s expiresAt (or any timestamp) before using the price data. It fully relies on an external IVerifierProxy implementation whose freshness enforcement is unknown. If that verifier does not strictly reject stale reports, attackers can replay old valid signatures to manipulate the price passed into futuresCore.modifyPosition.
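A sketch of the missing freshness check is below. The field names (validFromTimestamp, expiresAt) follow Chainlink Data Streams report schemas and are assumptions about the PremiumReportRWA struct used here.
// Illustrative freshness check on the decoded, verified report.
require(
    block.timestamp >= verifiedReport.validFromTimestamp &&
        block.timestamp <= verifiedReport.expiresAt,
    "stale or not-yet-valid report"
);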
Nitron: $950,000
The bug is an oracle manipulation vulnerability, where donated tokens could be used to inflate the redemption rate. This resulted in $950,000 USD being stolen 15. V12 detects and reports this bug with no special prompting or assistance.
Vulnerability details
The totalAssets function returns the balance of fsGLP held by the vault, including any tokens that were donated directly to the contract. As the vault was being used as an oracle for pricing dGLP, an attacker could donate fsGLP to inflate the redemption rate and manipulate the oracle price. The vulnerable code is reproduced below:
/** @dev See {IERC4626-totalAssets}. */
function totalAssets() public view virtual returns (uint256) {
return fsGLP.balanceOf(address(this));
}
V12 reported this vulnerability with the following description:
CarbonWrappedGLP calculates the per-share asset value using totalAssets(), which returns fsGLP.balanceOf(address(this)). Because there’s no internal accounting to distinguish official deposits from arbitrary token transfers, anyone can send fsGLP directly to the contract. This inflates totalAssets and thus the per-share value used by conversion functions.
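A common mitigation, sketched below for an ERC-4626-style vault like the one above, is to track deposited assets internally so that direct token transfers cannot move the share price; _trackedAssets is an assumed storage variable updated only on the vault’s own deposit and withdraw paths.
// Illustrative sketch only; _trackedAssets is an assumption.
uint256 private _trackedAssets;

/** @dev See {IERC4626-totalAssets}. */
function totalAssets() public view virtual returns (uint256) {
    // Donated fsGLP sent directly to the contract is ignored here, so it
    // cannot inflate the per-share redemption rate used as an oracle.
    return _trackedAssets;
}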
Discussion
How does V12 compare with security professionals?
V12 is excellent at finding straightforward, surface-level coding mistakes. V12 is not intended for finding deep, complex vulnerabilities that we predict will become the primary focus for human researchers.
V12 excels at simple bug classes like broken access control, input validation, reentrancy, unchecked arithmetic, and denial of service. On the other hand, V12 struggles to find issues like logic bugs, economic problems, design flaws, or bugs that require specific insights like cryptographic mistakes, L1 implementation details, or cross-chain or cross-protocol interactions.
Our belief is that qualified security professionals should always outperform V12. Nonetheless, our examples show that the kinds of bugs V12 finds are often missed by other auditors, though rarely by top ones. V12 also finds many of the bugs reported in auditing contests. In short, V12 does not replace human researchers, though we believe it will raise the bar.
In general, given the correspondence between computer programs and mathematical proofs↗, we believe the problem of vulnerability research is likely “AGI-complete”, meaning that there will always be bugs that remain out of reach for AI systems, unless AGI is achieved. More practically, even today there are entire classes of vulnerabilities that we find to be out of reach for even the best AI systems, including V12.
How should security professionals use V12?
Used by the best researchers, V12 enhances their existing expertise and skill. We recommend manually reviewing the code in its entirety first and running V12 only at the end. This prevents researchers’ thought processes from being influenced by V12; for example, failing to come up with ideas (like an attack vector) that would have otherwise surfaced. V12 then acts as (1) an additional layer of assurance and (2) a source of additional inspiration.
Used by mediocre or inexperienced researchers, V12 is mostly a crutch. We recommend against the use of V12 by junior researchers. It can lead to overreliance and, in the long term, hindered development of innate security research skills.
At Zellic, we’ve developed a protocol for using V12 in our EVM audits based on these principles. The audit proceeds normally without assistance from V12 until around the halfway point, when we run V12 on the codebase. This timing is deliberate. It gives human auditors time to familiarize themselves with the codebase (so they can quickly assess which V12 findings merit investigation) while avoiding “pollution”↗ of their creative process, preserving their ability to find bugs guided by their innate intuition.
Used in this way, V12 enhances rather than replaces human auditors. V12 may suggest issues that aren’t themselves vulnerabilities but inspire lines of thinking that lead to the discovery of a genuine vulnerability. This provides an additional layer of protection for our clients while ensuring that audit quality when using V12 is no worse than without it.
Roadmap
We plan to continue adding features to V12 in the future. These include POC generation, which will further reduce false positives; CI/CD integration; support for Rust and Solana; and fuzzing.
Conclusion
We hope you enjoyed this post. If you haven’t already, try V12 here!↗ V12 is currently in closed beta while we conduct additional testing and refine the experience based on feedback from our design partners. We’re prioritizing access for Zellic’s existing clients, but if you’d like to try V12, feel free to reach out↗ for early access. We plan to fully release V12 in Q4 2025. Finally, if you’re excited about our mission—raising the bar for security—and want to work with the best hackers in the world: we’re hiring!↗