PIP-15: Adding support for EIP-4337 Bundled Transactions

Author: Pratik Patil

Type: Interface

Table of Contents:

  • Abstract
  • Motivation
  • Specification
    • Sample Requests
    • Sample Responses
  • Backward Compatibility
  • Security Considerations
  • References
  • Reference Implementation
  • Copyright


Abstract

Ethereum researchers have proposed a new RPC endpoint, eth_sendRawTransactionConditional (implemented as bor_sendRawTransactionConditional), designed to facilitate the bundled transactions introduced in EIP-4337. The endpoint has been implemented in PR#945 in the Bor client under the bor namespace. The plan is to incorporate it into Bor’s Mumbai and Mainnet releases to provide support for bundled transactions.


Motivation

EIP-4337 aims to enable account abstraction while maintaining decentralization and censorship resistance. It strives to uphold a level of decentralization comparable to the underlying chain’s block production process. The proposal achieves this by dividing validation and execution into separate steps when bundling transactions for submission to an alternative mempool. However, this separation introduces a potential challenge for bundlers working on their L2 transactions.

The problem arises from the time delay between the bundlers’ simulated transaction validation and the final inclusion of those transactions by sequencers. This delay poses a risk of reverting bundle submissions. The reason for this risk is the lag between the initial submission of a transaction and its ultimate inclusion, during which a smart contract account’s storage can change and invalidate the transaction. When bundled transactions are reverted, it results in a loss of value for the bundler. Consequently, this discourages the operation of these services until the issue is effectively addressed.

To tackle this problem, Ethereum researchers introduced the bor_sendRawTransactionConditional endpoint. This enhanced endpoint extends the functionality of the standard eth_sendRawTransaction endpoint by incorporating additional options that enable users to define specific valid ranges for block height, timestamps, and knownAccounts.

In this context, knownAccounts refers to a map of accounts with expected storage that must be verified before executing the transaction. By utilizing this feature, sequencers can now reject transactions that fail to meet the specified conditions for inclusion during the initial validation stage. Consequently, this effectively mitigates the risk associated with potential changes in in-account storage between the validation and execution phases of the transaction.

The bor_sendRawTransactionConditional API is privileged and can only be used if the bundler is connected to the validator (block producer). Conditional transactions will not be broadcast to peers, so if the bundler sends a conditional transaction to a sentry node, it will not be executed. There is bidirectional trust between the validator and the bundler: the validator trusts that the bundler will not spam them, and the bundler trusts that the validator will not front-run the transaction.


Specification

The reference implementation in Bor adds support for the bor_sendRawTransactionConditional API. It also adds checks to validate the ranges for block height, timestamps, and knownAccounts. These checks are performed at two locations: one at the API level (when the transaction is sent to the Bor client), and the other in the worker module (when the transaction is picked up from the transaction pool for inclusion in a block). The knownAccounts structure is expected to contain fewer than 1000 slots/accounts; otherwise the transaction is rejected.
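The two kinds of checks described above can be sketched as follows. This is an illustrative Python sketch, not the actual Bor (Go) implementation; all function names and the exact error strings are assumptions, and only the option names and the 1000-entry limit come from the spec.

```python
# Hypothetical sketch of the conditional-option checks; names are illustrative.

MAX_KNOWN_ACCOUNTS = 1000  # limit stated in the spec


def count_entries(known_accounts):
    """Count accounts plus individual storage slots in knownAccounts."""
    total = 0
    for expected in known_accounts.values():
        if isinstance(expected, dict):  # per-slot expected values
            total += len(expected)
        else:                           # a single storage-root hash
            total += 1
    return total


def validate_options(opts, current_block, current_time):
    """Return None if the conditional options hold, else an error message."""
    if "blockNumberMin" in opts and current_block < opts["blockNumberMin"]:
        return "out of block range"
    if "blockNumberMax" in opts and current_block > opts["blockNumberMax"]:
        return "out of block range"
    if "timestampMin" in opts and current_time < opts["timestampMin"]:
        return "out of time range"
    if "timestampMax" in opts and current_time > opts["timestampMax"]:
        return "out of time range"
    if count_entries(opts.get("knownAccounts", {})) > MAX_KNOWN_ACCOUNTS:
        return "limit exceeded"
    return None
```

In the reference implementation these checks run twice, so a transaction that passes at submission time can still be dropped later if the chain has moved outside the requested ranges by the time the worker assembles the block.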

Sample Requests

    "jsonrpc": "2.0",
    "id": 1,
    "method": "bor_sendRawTransactionConditional",
    "params": [
            "blockNumberMax": 12345,
            "knownAccounts": {
                "0xadd1": "0xfedc....",
                "0xadd2": { 
                    "0x1111": "0x1234...",
                    "0x2222": "0x4567..."

// A curl request will look something like
curl localhost:8545 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id": 1, "method":"bor_sendRawTransactionConditional", "params": ["0x2815c17b00...", {"knownAccounts": {"0xadd1": "0xfedc...", "0xadd2": {"0x1111": "0x1234...", "0x2222": "0x4567..."}}, "blockNumberMax": 12345, "blockNumberMin": 12300, "timestampMax": 1700000000, "timestampMin": 1600000000}]}'
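The same request body can be assembled programmatically. This is a minimal Python sketch: the helper name and the POST step are illustrative assumptions; only the method name and parameter shape come from the samples above.

```python
# Build a bor_sendRawTransactionConditional JSON-RPC request body
# equivalent to the curl example. Helper name is hypothetical.
import json


def build_request(raw_tx, options, request_id=1):
    """Assemble the JSON-RPC body: params are [rawTx, conditionalOptions]."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "bor_sendRawTransactionConditional",
        "params": [raw_tx, options],
    }


payload = build_request(
    "0x2815c17b00...",
    {
        "knownAccounts": {
            "0xadd1": "0xfedc...",
            "0xadd2": {"0x1111": "0x1234...", "0x2222": "0x4567..."},
        },
        "blockNumberMin": 12300,
        "blockNumberMax": 12345,
        "timestampMin": 1600000000,
        "timestampMax": 1700000000,
    },
)
# POST this body (Content-Type: application/json) to the node's RPC
# port, e.g. localhost:8545 as in the curl example above.
body = json.dumps(payload)
```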

Sample Responses

// A successful response will contain the transaction hash
// For example

{"jsonrpc":"2.0","id":1,"result":"0x..."}

// An unsuccessful response will contain an error with code -32003 or -32005 (or some other code) with an explanation
// For example

{"jsonrpc":"2.0","id":1,"error":{"code":-32003,"message":"out of block range. err: current block number 201 is greater than maximum block number: 55"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32003,"message":"out of time range. err: current block time 1690287327 is less than minimum timestamp: 0xc019eee948"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32003,"message":"storage error. err: Storage Trie is nil for: 0xAdd1Add1aDD1aDD1aDd1add1add1AdD"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32005,"message":"limit exceeded. err: an incorrect list of knownAccounts"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32005,"message":"limit exceeded. err: number of slots/accounts in KnownAccounts 1100 exceeds the limit of 1000"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"transaction type not supported"}}

Backward Compatibility

This PIP does not affect the consensus rules of the network; it only adds one additional API, which is backward compatible.

Security Considerations

  • Requiring validators to check this knownAccounts list before execution could exacerbate the resource usage issue caused by reading large amounts of state. Effectively, a submitter to this endpoint could cause high CPU usage for a validator without ever paying gas, also resulting in denial of service.
    • This is mitigated by adding a condition (which is also mentioned in the spec) that the number of slots/addresses in known accounts should be less than 1000.
  • A validator isn’t strictly required to follow this protocol, so a validator could insert a transaction before the bundle that changes the gas characteristics of the bundle, thus causing issues for the relayer that submitted it. In effect, they could create a scenario where the relayer cannot be fully reimbursed for the gas they pay.
    • This technical issue already exists without this endpoint, and, in any event, the relayer could just stop using that validator.


Reference Implementation


Copyright

All copyrights and related rights in this work are waived under CC0 1.0 Universal.


I have serious misgivings about this. It provides spammers / bots with a “free look” mechanism that is paid for with validator compute… which is the one resource we should be precious with.

I would be against any account abstraction proposal that doesn’t impute some form of cost on the entity using up the validator’s processing power.

This particular mechanism would be very easy to abuse. Hundreds of reverted spam transactions on chain would become hundreds of thousands of unexecuted (but simulated) transactions off chain. This would be 10x worse than the sunflower farms load.


As an example of how this could be abused, please consider how “stat arb bots” (i.e., statistical arbitrage algorithmic traders / market makers) currently operate on Polygon.

Here is an example of a “Top of Block” auction in which the competing stat arb bots incremented their GasPrice to try and have their transaction processed first in the block. Note that on block explorers like polygonscan, the transaction order is listed in reverse and the first transaction is therefore at the bottom of the list.

This is called a “Priority Gas Auction” and is a unique type of MEV in that unlike backruns or liquidations, it is rare for stat arb bots to “spam” multiple, distinct copies of the same transaction.

The reason there isn’t any spam is simple: as you can see from the image, each transaction is very expensive.

Unlike backrun MEV and liquidation MEV, which on Polygon require the bot to have a GasPrice that matches the GasPrice of a user or oracle, with these Priority Gas Auctions the winner is whoever has the highest GasPrice that gets seen by the Validator.

There’s an auction battle that takes place in the Memory Pool - a bot will send out a transaction with a GasPrice of X and a competing bot will see it and send out their own transaction with a GasPrice of X+1. The first bot doesn’t want to lose, so they send out a new transaction with the same Nonce and From but with a GasPrice of X*1.1 (the 1.1 modifier is because the nodes require a 10% increase in GasPrice to overwrite a previous transaction from the same address / nonce).
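The replacement rule described above is simple arithmetic: nodes require roughly a 10% GasPrice increase before they will overwrite a pending transaction with the same nonce. A quick sketch (the helper name is an assumption for illustration):

```python
# Minimum GasPrice needed to overwrite a same-nonce pending transaction,
# assuming the ~10% price-bump rule described in the text.
def min_replacement_gas_price(pending_gas_price, bump_percent=10):
    """Smallest GasPrice accepted as a replacement bid (integer math)."""
    return pending_gas_price * (100 + bump_percent) // 100
```

So a bot outbidding a pending GasPrice of 100 gwei must submit at least 110 gwei, and each round of the auction ratchets the price up by at least that factor.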

Due to the transaction overwriting, this tends to not place too much strain on the Validators as most transactions get overwritten before they make it past the Validator’s Sentry nodes.

It is crucial for the bots to not have duplicate transactions, as their GasPrice is a function of how profitable the trade is. The higher the trade’s profit, the higher the stat arb bots will bid with GasPrice, with each bid overwriting the previous one.

How do they calculate the profit? To oversimplify, stat arb bots look at the price on a centralized exchange (Such as Binance) and then look at the price on a decentralized exchange (Such as quickswap). The greater the difference in the two prices, the more profitable the trade is. But remember that every trade on a DEX will change its price - therefore the first stat arb bot to trade on the DEX will change the DEX’s price so that there’s no profit left for the next stat arb bot to capture.

Now let’s imagine PIP-15 goes through as-is. How would the bots act differently?

The first thing they’d do is start bundling their own transactions. They could set the KnownAccounts parameter of their bundled transaction to hold the pre-arbitrage DEX price. This would make the entire auction free for all parties.

All nodes would use the previous storage value to check the transaction, and they would all pass. It wouldn’t be until the validator starts actually building the block and processing the transactions that the fails would take place. And these fails would be free.

Only the winner - the bot that lands the first transaction - would have to pay their gas fees, and they would only have to pay those gas fees knowing that they’d won and that it was a profitable transaction. Validators would start making significantly less money - no reverts means no more cost of reverts.

The second thing that would happen is that stat arb bots would start using the tactics of the backrun spammers. For the priority gas auctions, speed does have an advantage… and due to the default-random propagation of Bor (that’s also found in Erigon btw), the more transactions one makes, the greater the likelihood of it arriving quickly at the validator node.

So the stat arb bots will start generating hundreds of transactions and flooding the network with them, knowing that only one of them is valid and that the rest will be discarded without cost.

Meanwhile the backrunners / liquidation bots will start doing the same thing, only 10x worse.

Currently, the only variable that puts a ceiling on the number of transactions they spam is the ratio of the cost of each transaction to the profit it might capture. This PIP takes the denominator of that ratio - the cost of a failed tx - and sets it to 0. The “formula for the optimal number of transactions to spam” for these bots stops depending on the cost of a transaction and instead relies solely on the cost of their own compute to create / sign the transactions.

Given that some of these bot teams are running over 100 nodes, I do not think this would be good for the health of the chain.

*Note that FastLane helps to mitigate the randomness for backrunners / liquidations, but it is not currently able to run Top of Block auctions / priority gas auctions. This is because:

  1. We don’t see the need, as there’s already a built in mechanism (GasPrice)
  2. There’s no intuitive, non-invasive solution that allows “top of block” transactions but that won’t facilitate sandwich attacks or other attack vectors on Polygon’s users.

And even though the randomness for backrunners / liquidations is mitigated for validators who are connected to FastLane, if there’s no cost of failure for the spammers then there’s no strong disincentive for spammers to try, even if their chance of success is virtually 0.


Hello Everyone,

We (Proton Gaming) are in an interesting situation with this PIP, as it is likely a big boost for web3 game developers and players - which is great - but it may not be in our financial best interest as a validator.

If I’m off base on this, please show me the light, but the biggest advantage for game devs and gamers would be lowering/removing fees for unexecuted/reverted transactions. This is obviously a great step toward what we all talk about in web3 gaming - “frictionless” interaction with the blockchain.

To this end I’m inclined to recommend adoption.

As a validator however, that removal of fees essentially removes those potential rewards from the pool. While I may not give this too much consideration in a solid market, I am obligated to raise the concern currently.

I would love to see an amendment or recommendation to offset this potential loss of revenue for Validators.

Of course if there is another variable in the equation I’m not seeing or aware of, please let me know.

From a security standpoint we see some potential opportunities for abuse but have no doubt the Polygon Team would address those as attempts are made.

Overall this is a great step, particularly for web3 gaming.


Scott Lilliston
Co-Founder & Managing Director
Proton Gaming


I agree with @Thogard, that this would with high probability create a big resource consumption for validators.
I wonder if it could possibly be mitigated by limiting the number of consecutive txs/bundles containing the same data.
Alternatively, a change could be made in such way, that Polygon validators would require blockbuilders to register their address (a separate whitelist), which then can be monitored for spam and banned if needed.
This creates a set of problems of its own, mainly that it will create a very limited and permissioned environment for block builders using Polygon. On the other hand, this only relates to bundles using eth_sendRawTransactionConditional. Arbitragers can still run arbitrage bots as usual, as they did before.


Hi @Thogard, @ScottLilliston, @luboremo thank you for the feedback and the detailed explanation.

The following changes are made in the PIP (will update this forum post once the PR is merged).

The eth_sendRawTransactionConditional API is privileged and can only be used if the bundler is connected to the validator (block producer). The conditional transactions will not be broadcasted to the peers, so if the bundler sends the conditional transaction to the sentry node, it will not be executed. There is a bidirectional trust involved between the validator and the bundler. The validator trusts that the bundler is not going to spam him, and the bundler trusts that the validator will not front-run the transaction.

Looking forward to your feedback and comments.


This sounds interesting! I think it’s a great fix.

I’m not sure if there’s a right or a wrong answer to this question, but what type of party do you envision acting as the bundler in a scenario like this?


From what we know, Alchemy, Biconomy, and Stackup are already running bundlers. But to use this API, each of them would need to be connected with at least one validator.


Hello! I’ve reviewed your proposal for adding support for EIP-4337 Bundled Transactions, and I’m intrigued by the concept of the eth_sendRawTransactionConditional endpoint.

However, I have a question regarding the ‘knownAccounts’ feature. While this mechanism helps ensure the validity of transactions during validation, could you provide more insight into how this process interacts with potential changes in smart contract storage post-validation? How does it prevent cases where valid transactions become invalid due to post-validation changes in storage?


Confirming that Alchemy and others are already running bundlers on Polygon and they do occasionally get frontrun. I also completely understand the attack vector described above.

A few more thoughts:

  • If only some validators participate, this can significantly increase time to mine for these operations.
  • Each bundler team has to independently start defining relationships with each validator which adds a lot of overhead.
  • Perhaps a Flashbots-style relay with reputation that proxies these transactions directly to validators could be useful? The reputation system protects the validators from DoS attacks while creating a single, simple endpoint that avoids relying on the public mempool and allows validators to receive these transactions directly. I know bloXroute Labs has something similar, but there are likely issues with validator opt-in.

If you’d like, we could create a system that could forward the bundles directly on to validators. As far as MEV protocols, FastLane has the greatest coverage of validators (roughly 1/3 now and rapidly growing) and is also optimally positioned to verify that the bundles wouldn’t disrupt any of the MEV auctions or lead to ‘sandwich’ attacks on users. I’d need to do more research on the latter though, and it would be an ‘opt in’ system for validators.

If you’re interested please send me a DM. My biggest hesitation with respect to building something like this would be that by the time we finish the whole stack that it’s built on would be deprecated due to Polygon 2.0 :sweat_smile:


The validators validate the transaction directly before inserting it into the block, so there wouldn’t be a way for invalid inclusion unless the validator had modified their node to bypass that check.


Thank you for the reply and the initiative, @Thogard.

This will not be the case, as even after Polygon 2.0 the PoS validators will still exist, and the functionality of the APIs will remain the same.


Interesting! That’s good to know. We currently use bor nodes for our MEV infrastructure. I was under the impression that 2.0 would involve a pivot away from geth (bor) and towards an erigon-based execution client. But I’m sure we could set up the relay so that we could move the module from geth to erigon with minimal refactoring.

I’ll discuss with my team - this type of infra layer would be a fun project.


Hi folks, a small update has been made to the PIP. The namespace of the API has been changed from eth to bor, meaning the new API endpoint will now be bor_sendRawTransactionConditional.


Hello everyone, we have included PIP-15 (that is, added support for the bor_sendRawTransactionConditional API) in the latest bor v1.0.4 release.

Try it out and please feel free to share your feedback. Thank you.


Will begin testing out a relay shortly.

Currently, we plan to delay all bundled operations by two blocks to ensure that they aren’t being used to sandwich attack or otherwise take advantage of users of the regular mempool.


Why was the namespace changed from eth to bor?

Wallets and other chains support EIP-4337 under the eth namespace, and wallets have not added support for any bor namespace methods.

Any universal dApps which utilize this method would also expect to be able to use the eth namespace for it.

Historically, the bor namespace has been used for methods that are uniquely and specifically Polygon-related. EIP-4337 is not unique to Polygon, and putting the method under the bor namespace adds headache and complexity for projects - or they will simply not support EIP-4337 on Polygon.