Prisma’s $11.6M Exploit Was a Trust Trap and Olympix Would Have Triggered It
On March 28, 2024, Prisma Finance was exploited for over $11.6M through a vulnerability in its MigrateTroveZap contract, code designed to help users migrate troves during a system upgrade. The attack wasn’t complex. It was a straight-line path built on assumptions that should never exist in permissionless systems.
The exploiters bypassed the intended migration flow and directly invoked the flashloan entry point. Because the callback function onFlashLoan() blindly trusted the calldata and lacked origin validation, attackers were able to spoof migration logic, hijack troves, and extract collateral.
In total, three attacker addresses executed variations of the exploit, draining over 3,400 ETH worth of assets. The incident forced Prisma to pause protocol operations, cut total TVL nearly in half, and trigger an emergency response from the team and security partners.
Later, the lead exploiter sent an on-chain message claiming to be a whitehat and expressing intent to return the funds. But the architectural mistake remains.
This wasn’t a coding error. It was a systems-level failure to treat trust boundaries seriously, and a prime case study in how “helper” contracts often become critical attack surfaces.
Prisma’s MigrateTroveZap contract was built to automate trove migration across manager contracts. It offered a one-click path for users to close their existing trove, take out a flashloan to cover debt, and reopen an equivalent position in a new manager — all without touching collateral manually.
That convenience came with a critical flaw. The system assumed onFlashLoan() would only ever be called by its own internal flow. But Ethereum doesn’t enforce call context. Anyone could call flashLoan() directly and inject data that the Zap contract would trust blindly.
Here’s how the attack unfolded.
Step 1: Attacker Initiates a Flashloan
The attacker calls mkUSD.flashLoan() with a large borrow amount corresponding to an existing user’s trove debt. They set the receiver address to MigrateTroveZap and attach spoofed calldata mimicking a legitimate migration.
Flashloan amount: 1,442,100 mkUSD
Target trove: 0x56A201…291ED
Collateral held: 1,745.08 wstETH
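To make the entry point concrete, here is a minimal sketch of what that direct call can look like, assuming an ERC-3156-style flashLoan(receiver, token, amount, data) interface. The contract wiring and the calldata layout are illustrative, not Prisma’s exact encoding.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Illustrative sketch only. The lender interface and calldata layout are
/// assumptions modeled on an ERC-3156-style flash loan, not Prisma's exact code.
interface IFlashLender {
    function flashLoan(address receiver, address token, uint256 amount, bytes calldata data)
        external
        returns (bool);
}

contract AttackSketch {
    IFlashLender public immutable mkUSD; // Prisma's flash-mintable debt token
    address public immutable zap;        // MigrateTroveZap, named as the loan receiver

    constructor(IFlashLender _mkUSD, address _zap) {
        mkUSD = _mkUSD;
        zap = _zap;
    }

    /// Invoke the public flashLoan entry point directly with forged migration data.
    function attack(
        address victim,
        address tmFrom,
        address tmTo,
        uint256 debt,
        uint256 collToKeep
    ) external {
        // Forged "migration" parameters targeting someone else's trove.
        bytes memory forged = abi.encode(victim, tmFrom, tmTo, collToKeep);

        // The Zap's onFlashLoan() will decode `forged` and close/reopen the
        // victim's trove without checking who initiated this loan.
        mkUSD.flashLoan(zap, address(mkUSD), debt, forged);
    }
}
```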
Step 2: Callback Executes with Spoofed Data
The flashLoan() function sends the loaned amount and triggers the onFlashLoan() function on the receiver. Prisma’s Zap contract receives the call and executes it without checking whether this came from its own migrateTrove() logic.
The attacker crafts calldata that instructs MigrateTroveZap to:
Close the victim’s trove
Reopen a new one in the same trove manager
Use only a fraction of the original collateral
Victim’s new trove is reopened with:
Collateral: 463.18 wstETH
Debt: 1,443,398 mkUSD
Leftover: 1,281.9 wstETH now trapped in the Zap contract
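On the receiving side, the vulnerable pattern looks roughly like the sketch below. The field names and trove-manager interface are hypothetical and simplified; Prisma’s actual parameter list differs, but the unconditional trust in the decoded bytes is the core of the bug.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Minimal sketch of the vulnerable pattern. Field names and the trove-manager
/// interface are hypothetical; the flaw is the unconditional trust in `data`.
interface ITroveOps {
    function closeTrove(address account) external;
    function openTrove(address account, uint256 coll, uint256 debt) external;
}

contract VulnerableZapSketch {
    function onFlashLoan(
        address,          // initiator: never checked
        address,          // token: never checked
        uint256 amount,
        uint256 fee,
        bytes calldata data
    ) external returns (bytes32) {
        // 1. No check that msg.sender is the debt token.
        // 2. No check that this contract initiated the loan.
        // 3. Attacker-supplied bytes decoded straight into privileged actions.
        (address account, address tmFrom, address tmTo, uint256 coll) =
            abi.decode(data, (address, address, address, uint256));

        ITroveOps(tmFrom).closeTrove(account);                  // close the victim's trove
        ITroveOps(tmTo).openTrove(account, coll, amount + fee); // reopen with attacker-chosen collateral

        return keccak256("ERC3156FlashBorrower.onFlashLoan");
    }
}
```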
Step 3: Attacker Extracts the Remaining Collateral
With the excess collateral sitting unclaimed in the Zap contract, the attacker creates a dummy trove under their own address. They then initiate a second fake migration, this time assigning themselves the full leftover wstETH as collateral for the new trove.
Collateral assigned: 1,282.79 wstETH
Debt: 2,001.8 mkUSD
After creating the new position, the attacker immediately closes the trove and walks away with the 1,282.79 wstETH.
Step 4: Clean Exit
The attacker repays the flashloan and routes stolen funds to secondary addresses. Some of the funds were funneled through Tornado Cash. Others were redistributed to wallets that later claimed whitehat intent.
The Vulnerability: What Went Wrong
The Prisma exploit was not the result of a novel bug. It was the result of classic architectural negligence: trusting calldata in a high-privilege context without enforcing call integrity.
The MigrateTroveZap contract’s onFlashLoan() function decoded calldata and executed privileged operations — closeTrove() and openTrove() — with no guardrails. It assumed the flashloan could only be initiated by the contract’s internal migrateTrove() function. That assumption was false.
Here’s where Prisma went wrong:
1. Unvalidated Entry Point
flashLoan() is a public function. Anyone could call it and set the receiver to MigrateTroveZap. The contract accepted this call without confirming whether it came from a trusted internal flow.
2. Blind Calldata Decoding
The onFlashLoan() function directly decoded attacker-supplied bytes into actionable parameters: the user address, trove managers, fee slippage, collateral amount, and hint addresses. There were no checks on whether the account owned the trove, whether the migration was internally initiated, or whether the collateral amount made sense.
3. No Caller or Context Checks
There were no require statements to assert that:
The msg.sender was a known contract
The calldata matched an internally approved format
The user was migrating their own position
As a result, the function was effectively a backdoor. Any attacker who could call flashLoan() and spoof calldata could perform arbitrary trove operations on any user that had approved the Zap contract.
Design Failure, Not Code Flaw
The Prisma hack was not a missed require() or a typo in arithmetic. It was a failure in system design. Every line executed as written. That’s the problem.
The architecture trusted that onFlashLoan() would only be called via a specific flow — initiated by migrateTrove(), triggered through flashLoan(), and executed in a controlled path. But Ethereum doesn’t enforce internal vs external origin. If a function can be called, it will be.
This is where most “audited” contracts still fail: they assume control flow instead of enforcing it.
No Enforcement of Trusted Paths
Prisma’s developers assumed the only path to onFlashLoan() would be via the intended helper call. But the flashLoan() function was callable by anyone. That meant any attacker could push arbitrary bytes into the Zap contract and trigger collateral-moving logic.
The bug wasn’t that validation was simply missing. It was that the entire design hinged on implicit trust in how the system would be used.
Auditors Missed It Because It Wasn’t Hidden
This wasn’t a buried logic bug. It was cleanly written, clearly documented functionality. Which is exactly why it passed audits. The auditors reviewed code paths under assumed intent, not under malicious conditions. This is a failure of threat modeling, not of code review.
The Dangerous Assumption
If your contract:
Performs privileged state changes,
Accepts untrusted calldata, and
Lacks explicit checks on context, permissions, or invariants,
Then you have built an attacker interface.
That is exactly what happened here.
Security Model Breakdown
At the heart of this exploit was a security model that delegated control without constraining execution. Prisma’s contracts combined delegated approvals, flashloan logic, and calldata execution into a system that implicitly trusted users and ignored enforcement.
This created multiple privilege escalation paths.
1. Delegate Approval Without Scoped Permission
Users gave MigrateTroveZap blanket approval via setDelegateApproval() to operate on their Troves. That approval was:
Global (not scoped to migration)
Permanent (unless manually revoked)
Unrestricted (no check on call origin or execution context)
This meant the Zap contract could perform closeTrove() and openTrove() on behalf of any user who opted into the migration flow — at any time, for any reason.
2. Calldata-Controlled State Transitions
The Zap contract did not read trove state from storage. Instead, it relied on externally passed calldata to dictate:
Which user to act on
How much collateral to use
What trove manager to route to
This gave attackers full control over both who was targeted and what value was stolen.
3. No Boundaries Between Initiator and Victim
The flashloan flow allowed the attacker to:
Borrow debt tokens under their own account
Target troves owned by others
Create new troves under their control
Capture residual collateral from unrelated accounts
At no point did the system enforce that the initiator of the migration was also the owner of the trove being acted on. That’s a complete breakdown of access control.
4. Residual Asset Exposure
The Zap contract temporarily held unclaimed collateral during migration. But there was no mechanism to reconcile or return this value if the migration was spoofed. This made the Zap contract a honeypot for leftover collateral — available to whoever claimed it next.
That is not an implementation bug. It is a design failure to define custody, authorization, and cleanup in a multi-step, high-value flow.
Prevention: What Would Have Stopped This
The Prisma exploit did not require a novel fix. The safeguards needed were standard practice in any system handling delegated authority and external callbacks. Here’s what would have stopped it cold.
1. Validate Call Origin
The Zap contract should have enforced that onFlashLoan() could only be executed as part of an internally initiated migrateTrove() flow. That means checking:
msg.sender is the debt token contract
tx.origin matches the user migrating
The trove being acted on belongs to the caller
No trusted call path should exist without explicit checks. Never assume intent from a function call alone.
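A minimal sketch of those guards, assuming an ERC-3156-style callback that forwards the loan initiator. The state variable names and error strings are illustrative, not Prisma’s.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Sketch of the missing guards, assuming an ERC-3156-style callback that
/// forwards the loan initiator. Names are illustrative, not Prisma's.
abstract contract GuardedZapSketch {
    address public immutable debtToken;  // the only lender allowed to call back
    address internal migrationInitiator; // set by migrateTrove(), cleared afterwards

    constructor(address _debtToken) {
        debtToken = _debtToken;
    }

    function onFlashLoan(
        address initiator,
        address,
        uint256,
        uint256,
        bytes calldata data
    ) external returns (bytes32) {
        // Only the debt token may deliver the callback.
        require(msg.sender == debtToken, "untrusted lender");
        // Only loans this contract itself initiated are valid.
        require(initiator == address(this), "loan not initiated by zap");
        // Only the user recorded when migrateTrove() started may be acted on.
        (address account, , , ) = abi.decode(data, (address, address, address, uint256));
        require(account == migrationInitiator, "account mismatch");

        _executeMigration(account, data); // privileged trove operations happen here
        return keccak256("ERC3156FlashBorrower.onFlashLoan");
    }

    function _executeMigration(address account, bytes calldata data) internal virtual;
}
```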
2. Reject Malformed Calldata
The Zap contract should have validated:
collateral and debt match the actual trove state
troveManagerFrom and troveManagerTo are not identical
The account executing onFlashLoan() is the owner of the trove being migrated
Without this, the attacker was able to pass arbitrarily forged values into a function that directly moved funds.
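A sketch of that reconciliation, using hypothetical trove-manager getters. The point is to check attacker-controllable bytes against on-chain state before acting on them.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Sketch of calldata sanity checks. The trove-manager getters are hypothetical;
/// the idea is to reconcile decoded values against actual trove state.
interface ITroveManagerView {
    function troveCollateral(address account) external view returns (uint256);
    function troveDebt(address account) external view returns (uint256);
}

library MigrationChecks {
    function validate(
        address account,
        address tmFrom,
        address tmTo,
        uint256 claimedColl,
        uint256 claimedDebt
    ) internal view {
        require(tmFrom != tmTo, "same trove manager");
        // The values the calldata claims must match the trove actually being closed.
        require(ITroveManagerView(tmFrom).troveCollateral(account) == claimedColl, "collateral mismatch");
        require(ITroveManagerView(tmFrom).troveDebt(account) == claimedDebt, "debt mismatch");
    }
}
```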
3. Enforce Reentrancy Guards
The contract should have used a nonReentrant modifier on any function that handles flashloan callbacks. Even if not directly reentered, this would have added a layer of defense against unexpected execution paths.
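A short sketch using OpenZeppelin’s ReentrancyGuard. On its own it does not fix the trust flaw, but it narrows what a callback can do mid-flight.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// OpenZeppelin v4 path; in v5 the file moved to utils/ReentrancyGuard.sol.
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

/// Sketch only: the guard constrains execution paths during the callback,
/// complementing (not replacing) origin and calldata checks.
abstract contract GuardedCallback is ReentrancyGuard {
    function onFlashLoan(
        address initiator,
        address token,
        uint256 amount,
        uint256 fee,
        bytes calldata data
    ) external nonReentrant returns (bytes32) {
        return _handle(initiator, token, amount, fee, data);
    }

    function _handle(address, address, uint256, uint256, bytes calldata)
        internal
        virtual
        returns (bytes32);
}
```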
4. Track and Assert Call Context
Stateful execution context should have been set during the migrateTrove() call, then checked during the callback. For example:
Store the expected user and trove addresses in contract storage
Clear them after callback
Abort if onFlashLoan() is called without that context
This pattern enforces that internal callbacks only happen during valid flows.
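One way to express that set-check-clear pattern, sketched with illustrative names:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Sketch of the set-check-clear context pattern. Function and variable names
/// are illustrative, not Prisma's.
abstract contract ContextTrackedZap {
    address private expectedAccount; // non-zero only during a legitimate migration

    function migrateTrove(address troveManagerFrom, address troveManagerTo) external {
        expectedAccount = msg.sender;                 // record who started the flow
        _startFlashLoan(msg.sender, troveManagerFrom, troveManagerTo);
        expectedAccount = address(0);                 // clear once the flow completes
    }

    function _onFlashLoanCallback(address account) internal view {
        // Abort any callback that arrives outside an active migrateTrove() flow,
        // or that targets a different account than the one that started it.
        require(expectedAccount != address(0), "no active migration");
        require(account == expectedAccount, "unexpected account");
    }

    function _startFlashLoan(address account, address tmFrom, address tmTo) internal virtual;
}
```

Because the flashloan and its callback complete synchronously inside migrateTrove(), clearing the context at the end of the function closes the window.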
5. Secure Residual Value
The contract should never have left unclaimed collateral sitting in its own balance. At minimum:
Return leftover tokens to the original user
Lock tokens until explicitly claimed
Require additional authorization to use them in a new trove
The residual wstETH was not protected, which made the second half of the exploit trivial.
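The simplest of those options, returning leftover collateral to the initiating user at the end of the flow, can be sketched like this (token handling via OpenZeppelin’s SafeERC20):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

/// Sketch of a residual-value sweep: whatever collateral remains after a
/// migration goes back to the user who initiated it, so nothing sits in the
/// zap waiting to be claimed by the next caller.
abstract contract ResidualSweep {
    using SafeERC20 for IERC20;

    function _sweepLeftover(IERC20 collateralToken, address user) internal {
        uint256 leftover = collateralToken.balanceOf(address(this));
        if (leftover > 0) {
            collateralToken.safeTransfer(user, leftover);
        }
    }
}
```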
These are not expensive solutions. They are table stakes for any system that handles user assets through intermediated logic.
Olympix Would Have Caught It
This exploit type sits in a blind spot between logic correctness and access control. Most audit tooling wouldn’t catch it because the code paths were valid. The logic was consistent. The failure was contextual. But it is exactly the kind of issue dynamic mutation testing is built to expose.
1. Dynamic Mutation Testing
A mutation test that directly calls flashLoan() with the Zap contract as receiver would reveal:
onFlashLoan() executes outside the intended internal flow
Calldata can be spoofed
Residual assets are left unclaimed
These are not unit test assertions. These are behavioral mutations that mimic real-world adversarial input.
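A Foundry-style sketch of the adversarial case such a mutation exercises. The interfaces and addresses are illustrative and would be wired up against a fork or mock in setUp(); a hardened Zap makes this test pass by reverting, and an exploitable one fails it.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "forge-std/Test.sol";

// Illustrative interface; names do not match Prisma's deployed contracts.
interface IFlashLender {
    function flashLoan(address receiver, address token, uint256 amount, bytes calldata data)
        external
        returns (bool);
}

/// Sketch of the adversarial path a mutation run would exercise: call the
/// public flashLoan entry point directly, naming the zap as receiver.
contract DirectFlashLoanTest is Test {
    IFlashLender internal mkUSD;   // would be set against a fork or mock in setUp()
    address internal zap;          // MigrateTroveZap address on the fork
    address internal attacker = address(0xBAD);

    function testDirectFlashLoanIsRejected() public {
        // Forged migration data targeting an arbitrary victim and trove managers.
        bytes memory forged =
            abi.encode(address(0x1234), address(0x1111), address(0x2222), uint256(1 ether));

        vm.prank(attacker);
        // A hardened zap should revert when the callback arrives outside an
        // internally initiated migration. If this call succeeds, that is the finding.
        vm.expectRevert();
        mkUSD.flashLoan(zap, address(mkUSD), 1_442_100 * 1e18, forged);
    }
}
```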
2. Assertion Coverage Analysis
Our Insufficient Assertion Detector (in development) flags test functions that:
Don’t assert on custom variable types
Lack validation of actor identities or message senders
Assume “happy path” execution with no malformed inputs
In Prisma’s case, test coverage likely validated successful migrations but never tested invalid initiators or spoofed calldata. That’s an assertion gap.
3. Context-Invariant Path Execution
Static analyzers often trace call graphs. But they don’t enforce context constraints. We are working on context-sensitive fuzzing that mutates:
msg.sender across calls
calldata payloads independently of expected user flows
state mutations across flashloan lifecycle
These patterns are how real exploits emerge. They need to be baked into automated test frameworks.
Broader Implications
This exploit wasn’t just about a single contract mistake. It exposed systemic design patterns in DeFi that quietly expand attack surfaces. Prisma wasn’t alone in making this mistake — it was just next in line.
1. Delegate Approvals Are Dangerous Defaults
When users approve a contract to act on their behalf without granular control or expiration, they inherit the contract’s full risk profile. If that contract has a logic flaw, every user is exposed until they manually revoke.
DeFi protocols still overuse delegate approvals without providing dashboards, revocation tools, or scoped permissions. That’s not just a UX issue — it’s a security failure.
2. Flashloans Break Context Assumptions
Contracts that rely on internal call flows or implicit trust boundaries break under flashloan pressure. Flashloans let attackers simulate any internal execution flow with precision timing and custom data.
If your contract assumes any function will only be called “the right way,” a flashloan will prove you wrong.
3. Zap Contracts Are Critical Attack Surfaces
Migration helpers, vault zaps, and position routers handle privileged flows but rarely receive the same scrutiny as core contracts. That’s a blind spot. These contracts often:
Aggregate user assets
Execute on behalf of multiple actors
Encode multi-step state transitions
Which makes them ideal for both users and attackers.
4. Protocol Pauses Can’t Save You After the Fact
Prisma’s pause mechanism kicked in quickly — but the funds were already gone. Relying on emergency brakes is not a replacement for enforcing trusted execution upfront.
Security can’t be reactive. It has to be compositional and preventative.
Takeaways for Builders
Security isn’t just about code correctness. It’s about execution integrity. Prisma’s exploit delivered a clear set of lessons for teams building composable, delegated, or flashloan-exposed systems.
1. Never Trust Calldata in High-Privilege Callbacks
If your function:
Executes state transitions
Decodes calldata
Relies on context from another contract
Then it must validate the initiator, the data, and the call path. If it doesn’t, you’ve built a permissionless exploit interface.
2. Don’t Assume Internal Execution Flow
The EVM doesn’t care how you meant your contract to be used. Every public or external function can be called by anyone, in any order, with any data. Design for abuse, not just use.
3. Delegate Approvals Must Be Scoped
If your system requires delegation, enforce:
Function-level access
Expiry or revocation paths
Audit visibility for users
Global approvals with no controls are attacker gold.
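A sketch of what a narrower grant could look like, with a per-function selector and an expiry. Prisma’s setDelegateApproval() is a simple global boolean, so this interface is hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// Sketch of scoped, expiring delegation. This is an illustrative interface,
/// not Prisma's; it shows function-level access plus a built-in expiry.
contract ScopedDelegation {
    struct Grant {
        bytes4 selector;   // the single function the delegate may call
        uint64 expiry;     // grant is void after this timestamp
    }

    mapping(address => mapping(address => Grant)) public grants; // user => delegate => grant

    function approveDelegate(address delegate, bytes4 selector, uint64 expiry) external {
        grants[msg.sender][delegate] = Grant(selector, expiry);
    }

    function revokeDelegate(address delegate) external {
        delete grants[msg.sender][delegate];
    }

    function isAuthorized(address user, address delegate, bytes4 selector) public view returns (bool) {
        Grant memory g = grants[user][delegate];
        return g.selector == selector && block.timestamp <= g.expiry;
    }
}
```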
4. Simulate the Adversary
Your test suite should:
Call functions in isolation
Forge calldata outside expected formats
Recreate critical flows under incorrect assumptions
Mutation testing is not optional anymore. If your test suite doesn’t mutate actor, data, and order, it is incomplete.
5. Secure Residual Value Paths
Every contract that handles user assets should:
Validate state cleanup
Reject execution with stale or excess value
Lock, refund, or time out trapped tokens
Residual value is one of the most overlooked risk surfaces in DeFi.
Closing Insight
The Prisma exploit wasn’t sophisticated. It was clean, obvious, and devastating — because the system trusted the happy path.
In DeFi, every public function is an attack surface. Every migration helper is a vault. Every approval is a blank check. If your architecture doesn’t account for adversarial usage from the start, then you haven’t built a secure system — you’ve built a honeypot with good intentions.
Smart contract security is no longer about catching bugs. It’s about killing assumptions.
Olympix: Your Partner in Secure Smart Contracts
Olympix provides advanced Solidity analysis tools to help developers identify and fix vulnerabilities before they become critical exploits.
Get started today to fortify your smart contracts and proactively shield them from exploits in the evolving Web3 security landscape.