PIP-15: Adding support for EIP-4337 Bundled Transactions

Authors:

Pratik Patil

Type: Interface

Table of Contents:

  • Abstract
  • Motivation
  • Specification
    • Sample Requests
    • Sample Responses
  • Backward Compatibility
  • Security Considerations
  • References
  • Reference Implementation
  • Copyright

Abstract

Ethereum researchers have put forth a proposal for a new RPC endpoint called eth_sendRawTransactionConditional (implemented as bor_sendRawTransactionConditional), designed to facilitate the bundled transactions introduced in EIP-4337. This proposal has been implemented in PR#945 in the Bor client, under the bor namespace. The plan is to incorporate this endpoint into Bor's Mumbai and Mainnet versions to provide support for bundled transactions.

Motivation

EIP-4337 aims to enable account abstraction while maintaining decentralization and censorship resistance. It strives to uphold a level of decentralization comparable to the underlying chain's block production process. The proposal achieves this by dividing validation and execution into separate steps when bundling transactions for submission to an alternative mempool. However, this separation introduces a potential challenge for bundlers working on their L2 transactions.

The problem arises from the time delay between the bundlers' simulated transaction validation and the final inclusion of those transactions by sequencers. This delay poses a risk of reverting bundle submissions. The reason for this risk is the lag between the initial submission of a transaction and its ultimate inclusion, during which a smart contract account's storage can change and invalidate the transaction. When bundled transactions are reverted, it results in a loss of value for the bundler. Consequently, this discourages the operation of these services until the issue is effectively addressed.

To tackle this problem, Ethereum researchers introduced the bor_sendRawTransactionConditional endpoint. This enhanced endpoint extends the functionality of the standard eth_sendRawTransaction endpoint by incorporating additional options that enable users to define specific valid ranges for block height, timestamps, and knownAccounts.

In this context, knownAccounts refers to a map of accounts with expected storage that must be verified before executing the transaction. By utilizing this feature, sequencers can now reject transactions that fail to meet the specified conditions for inclusion during the initial validation stage. Consequently, this effectively mitigates the risk associated with potential changes in account storage between the validation and execution phases of the transaction.

The bor_sendRawTransactionConditional API is privileged and can only be used if the bundler is connected to the validator (block producer). Conditional transactions are not broadcast to peers, so if the bundler sends a conditional transaction to a sentry node, it will not be executed. There is bidirectional trust between the validator and the bundler: the validator trusts that the bundler will not spam it, and the bundler trusts that the validator will not front-run the transaction.

Specification

The reference implementation in Bor adds support for the bor_sendRawTransactionConditional API. It also adds checks to validate the ranges for block height, timestamps, and knownAccounts. These checks are performed at two locations: one at the API level (when the transaction is sent to the Bor client), and the other in the worker module (when the transaction gets picked up from the transaction pool for inclusion in a block). The number of slots/accounts in the knownAccounts structure must be below 1000 entries; otherwise, the transaction will be rejected.
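
To make these checks concrete, below is a minimal Go sketch of the conditional options and the validation logic described above. All type names, field names, and error strings are illustrative assumptions, not the actual Bor code; the real implementation lives in PR#945.

package conditionaltx

import (
	"errors"
	"fmt"
)

// KnownAccount mirrors the two value shapes in the knownAccounts JSON
// parameter: either an expected storage root hash for the whole account,
// or a map of individual storage slots to their expected values.
type KnownAccount struct {
	StorageRoot  *string           // expected root of the account's storage trie, if set
	StorageSlots map[string]string // expected values of individual slots, if set
}

// TransactionOptions mirrors the optional second parameter of
// bor_sendRawTransactionConditional.
type TransactionOptions struct {
	KnownAccounts  map[string]KnownAccount
	BlockNumberMin *uint64
	BlockNumberMax *uint64
	TimestampMin   *uint64
	TimestampMax   *uint64
}

const maxKnownAccounts = 1000

// validateOptions is the shape of the check run once at the API level and
// again in the worker module before block inclusion.
func validateOptions(opts TransactionOptions, blockNumber, blockTime uint64) error {
	if opts.BlockNumberMin != nil && blockNumber < *opts.BlockNumberMin {
		return fmt.Errorf("out of block range: current block number %d is less than minimum block number: %d", blockNumber, *opts.BlockNumberMin)
	}
	if opts.BlockNumberMax != nil && blockNumber > *opts.BlockNumberMax {
		return fmt.Errorf("out of block range: current block number %d is greater than maximum block number: %d", blockNumber, *opts.BlockNumberMax)
	}
	if opts.TimestampMin != nil && blockTime < *opts.TimestampMin {
		return fmt.Errorf("out of time range: current block time %d is less than minimum timestamp: %d", blockTime, *opts.TimestampMin)
	}
	if opts.TimestampMax != nil && blockTime > *opts.TimestampMax {
		return fmt.Errorf("out of time range: current block time %d is greater than maximum timestamp: %d", blockTime, *opts.TimestampMax)
	}

	// Count one entry per account given as a storage root, and one entry per
	// individual storage slot; reject the transaction above 1000 entries.
	count := 0
	for _, acct := range opts.KnownAccounts {
		if len(acct.StorageSlots) > 0 {
			count += len(acct.StorageSlots)
		} else {
			count++
		}
	}
	if count > maxKnownAccounts {
		return errors.New("limit exceeded: number of slots/accounts in knownAccounts exceeds the limit of 1000")
	}
	return nil
}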

Sample Requests

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bor_sendRawTransactionConditional",
    "params": [
        "0x2815c17b00...",
        {
            "blockNumberMax": 12345,
            "knownAccounts": {
                "0xadd1": "0xfedc....",
                "0xadd2": { 
                    "0x1111": "0x1234...",
                    "0x2222": "0x4567..."
                }
            }     
        } 
    ]
}

// A curl request will look something like
curl localhost:8545 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0", "id": 1, "method":"bor_sendRawTransactionConditional", "params": ["0x2815c17b00...", {"knownAccounts": {"0xadd1": "0xfedc...", "0xadd2": {"0x1111": "0x1234...", "0x2222": "0x4567..."}}, "blockNumberMax": 12345, "blockNumberMin": 12300, "timestampMax": 1700000000, "timestampMin": 1600000000}]}'

Sample Responses

// A successful response will contain the transaction hash
// For example

{"jsonrpc":"2.0","id":1,"result":"0xd772ef64b9b56bb1b6773f08dbad8f6285f241e2f723a13e6b380c6d3749637b"}


// An unsuccessful response will contain an error with code -32003 or -32005 (or some other code) with an explanation
// For example

{"jsonrpc":"2.0","id":1,"error":{"code":-32003,"message":"out of block range. err: current block number 201 is greater than maximum block number: 55"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32003,"message":"out of time range. err: current block time 1690287327 is less than minimum timestamp: 0xc019eee948"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32003,"message":"storage error. err: Storage Trie is nil for: 0xAdd1Add1aDD1aDD1aDd1add1add1AdD"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32005,"message":"limit exceeded. err: an incorrect list of knownAccounts"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32005,"message":"limit exceeded. err: number of slots/accounts in KnownAccounts 1100 exceeds the limit of 1000"}}

{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"transaction type not supported"}}

Backward Compatibility

This PIP has no effect on the consensus rules of the network; it only adds one additional API, which is backward compatible.

Security Considerations

  • Requiring validators to check this knownAccounts list before execution could exacerbate the resource usage issue caused by reading large amounts of state. Effectively, a submitter to this endpoint could cause high CPU usage for a validator without ever paying gas, also resulting in denial of service.
    • This is mitigated by adding a condition (which is also mentioned in the spec) that the number of slots/addresses in knownAccounts must be less than 1000.
  • A validator isn't strictly required to follow this protocol, so a validator could insert a transaction before the bundle that changes the gas characteristics of the bundle, causing issues for the relayer that submitted it. In effect, they could create a scenario where the relayer cannot be fully reimbursed for the gas they pay.
    • This technical issue already exists without this endpoint, and, in any event, the relayer could just stop using that validator.

References

Reference Implementation

Copyright

All copyrights and related rights in this work are waived under CC0 1.0 Universal.

8 Likes

I have serious misgivings about this. It provides spammers / bots with a "free look" mechanism that is paid for with validator compute... which is the one resource we should be precious with.

I would be against any account abstraction proposal that doesn't impose some form of cost on the entity using up the validator's processing power.

This particular mechanism would be very easy to abuse. Hundreds of reverted spam transactions on chain would become hundreds of thousands of unexecuted (but simulated) transactions off chain. This would be 10x worse than the sunflower farms load.

3 Likes

As an example of how this could be abused, please consider how "stat arb bots" (i.e., statistical arbitrage algorithmic traders / market makers) currently operate on Polygon.

Here is an example of a "Top of Block" auction in which the competing stat arb bots incremented their GasPrice to try and have their transaction processed first in the block. Note that on block explorers like polygonscan, the transaction order is listed in reverse, and the first transaction is therefore at the bottom of the list.

This is called a "Priority Gas Auction" and is a unique type of MEV in that, unlike backruns or liquidations, it is rare for stat arb bots to "spam" multiple, distinct copies of the same transaction.

The reason there isn't any spam is simple: as you can see from the image, each transaction is very expensive.

Unlike backrun MEV and liquidation MEV, which on Polygon require the bot to have a GasPrice that matches the GasPrice of a user or oracle, with these Priority Gas Auctions the winner is whoever has the highest GasPrice seen by the Validator.

There's an auction battle that takes place in the Memory Pool - a bot will send out a transaction with a GasPrice of X and a competing bot will see it and send out their own transaction with a GasPrice of X+1. The first bot doesn't want to lose, so they send out a new transaction with the same Nonce and From but with a GasPrice of X*1.1 (the 1.1 modifier is because the nodes require a 10% increase in GasPrice to overwrite a previous transaction from the same address / nonce).
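
For illustration, here is a toy Go helper for that replacement rule. The 10% bump reflects the default price-bump requirement in geth-derived clients such as Bor, and the function name is hypothetical:

package main

import (
	"fmt"
	"math/big"
)

// minReplacementGasPrice returns the smallest GasPrice that clears the
// default 10% price-bump threshold for overwriting a pending transaction
// with the same From address and Nonce.
func minReplacementGasPrice(current *big.Int) *big.Int {
	bumped := new(big.Int).Mul(current, big.NewInt(110))
	return bumped.Div(bumped, big.NewInt(100))
}

func main() {
	current := big.NewInt(500_000_000_000)       // 500 gwei, in wei
	fmt.Println(minReplacementGasPrice(current)) // 550000000000 (550 gwei)
}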

Due to the transaction overwriting, this tends to not place too much strain on the Validators, as most transactions get overwritten before they make it past the Validator's Sentry nodes.

It is crucial for the bots to not have duplicate transactions, as their GasPrice is a function of how profitable the trade is. The higher the trade's profit, the higher the stat arb bots will bid with GasPrice, with each bid overwriting the previous one.

How do they calculate the profit? To oversimplify, stat arb bots look at the price on a centralized exchange (such as Binance) and then look at the price on a decentralized exchange (such as Quickswap). The greater the difference in the two prices, the more profitable the trade is. But remember that every trade on a DEX will change its price - therefore the first stat arb bot to trade on the DEX will change the DEX's price so that there's no profit left for the next stat arb bot to capture.
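
As a toy illustration of that signal (the numbers and the helper name are made up, not from any real bot):

package main

import "fmt"

// spreadBps returns the CEX/DEX price gap in basis points: the wider the
// gap, the more profit a stat arb bot expects, and the first DEX trade
// collapses it for everyone behind them.
func spreadBps(cexPrice, dexPrice float64) float64 {
	return (cexPrice - dexPrice) / dexPrice * 10000
}

func main() {
	fmt.Printf("spread: %.1f bps\n", spreadBps(1.002, 1.000)) // ~20 bps of edge
}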

Now let's imagine PIP-15 goes through as-is. How would the bots act differently?

The first thing they'd do is start bundling their own transactions. They could set the KnownAccounts parameter of their bundled transaction to hold the pre-arbitrage DEX price. This would make the entire auction free for all parties.

All nodes would use the previous storage value to check the transaction, and they would all pass. It wouldn't be until the validator starts actually building the block and processing the transactions that the fails would take place. And these fails would be free.

Only the winner - the bot that lands the first transaction - would have to pay their gas fees, and they would only have to pay those gas fees knowing that they'd won and that it was a profitable transaction. Validators would start making significantly less money - no reverts means no more cost of reverts.

The second thing that would happen is that stat arb bots would start using the tactics of the backrun spammers. For the priority gas auctions, speed does have an advantage... and due to the default-random propagation of Bor (that's also found in Erigon, btw), the more transactions one makes, the greater the likelihood of one arriving quickly at the validator node.

So the stat arb bots will start generating hundreds of transactions and flooding the network with them, knowing that only one of them is valid and that the rest will be discarded without cost.

Meanwhile the backrunners / liquidation bots will start doing the same thing, only 10x worse.

Currently, the only variable that puts a ceiling on the number of transactions they spam is the ratio of the cost of each transaction to the profit it might capture. This PIP takes the cost of a failed tx - the numerator of that ratio - and sets it to 0. The "formula for the optimal number of transactions to spam" for these bots stops depending on the cost of a transaction and instead relies solely on the cost of their own compute to create / sign the transactions.

Given that some of these bot teams are running over 100 nodes, I do not think this would be good for the health of the chain.


*Note that FastLane helps to mitigate the randomness for backrunners / liquidations, but it is not currently able to run Top of Block auctions / priority gas auctions. This is because:

  1. We don't see the need, as there's already a built-in mechanism (GasPrice)
  2. There's no intuitive, non-invasive solution that allows "top of block" transactions but that won't facilitate sandwich attacks or other attack vectors on Polygon's users.

And even though the randomness for backrunners / liquidations is mitigated for validators who are connected to FastLane, if there's no cost of failure for the spammers then there's no strong disincentive for spammers to try, even if their chance of success is virtually 0.

4 Likes

Hello Everyone,

We (Proton Gaming) are in an interesting situation with this PIP: it is likely a big boost for web3 game developers and players, which is great, but it may not be in our financial best interest as a validator.

If I'm off base on this, please show me the light, but the biggest advantage for game devs and gamers would be lowering/removing fees for unexecuted/reverted transactions. This is obviously a great step toward what we all talk about in web3 gaming - "frictionless" interaction with the blockchain.

To this end, I'm inclined to recommend adoption.

As a validator, however, that removal of fees essentially removes those potential rewards from the pool. While I may not give this too much consideration in a solid market, I am obligated to raise the concern currently.

I would love to see an amendment or recommendation to offset this potential loss of revenue for Validators.

Of course if there is another variable in the equation Iā€™m not seeing or aware of, please let me know.

From a security standpoint we see some potential opportunities for abuse but have no doubt the Polygon Team would address those as attempts are made.

Overall this is a great step, particularly for web3 gaming.

Regards,

Scott Lilliston
Co-Founder & Managing Director
Proton Gaming

4 Likes

I agree with @Thogard that this would very likely create significant resource consumption for validators.
I wonder if it could be mitigated by limiting the number of consecutive txs/bundles containing the same data.
Alternatively, a change could be made such that Polygon validators would require block builders to register their address (a separate whitelist), which could then be monitored for spam and banned if needed.
This creates a set of problems of its own, mainly that it would create a very limited and permissioned environment for block builders using Polygon. On the other hand, this only relates to bundles using eth_sendRawTransactionConditional. Arbitragers can still run arbitrage bots as they did before.

4 Likes

Hi @Thogard, @ScottLilliston, @luboremo thank you for the feedback and the detailed explanation.

The following changes have been made in the PIP (this forum post will be updated once the PR is merged).

The eth_sendRawTransactionConditional API is privileged and can only be used if the bundler is connected to the validator (block producer). Conditional transactions are not broadcast to peers, so if the bundler sends a conditional transaction to a sentry node, it will not be executed. There is bidirectional trust between the validator and the bundler: the validator trusts that the bundler will not spam it, and the bundler trusts that the validator will not front-run the transaction.

Looking forward to your feedback and comments.

9 Likes

This sounds interesting! I think it's a great fix.

I'm not sure if there's a right or a wrong answer to this question, but what type of party do you envision acting as the bundler in a scenario like this?

4 Likes

From what we know, Alchemy, Biconomy, and Stackup are already running bundlers. But to use this API, each of them would need to be connected with at least one validator.

3 Likes

Hello! I've reviewed your proposal for adding support for EIP-4337 Bundled Transactions, and I'm intrigued by the concept of the eth_sendRawTransactionConditional endpoint.

However, I have a question regarding the 'knownAccounts' feature. While this mechanism helps ensure the validity of transactions during validation, could you provide more insight into how this process interacts with potential changes in smart contract storage post-validation? How does it prevent cases where valid transactions become invalid due to post-validation changes in storage?

2 Likes

Confirming that Alchemy and others are already running bundlers on Polygon and they do occasionally get frontrun. I also completely understand the attack vector described above.

A few more thoughts:

  • If only some validators participate, this can significantly increase time to mine for these operations.
  • Each bundler team has to independently start defining relationships with each validator which adds a lot of overhead.
  • Perhaps a Flashbots-style relay with reputation that proxies these transactions directly to validators could be useful? The reputation system protects the validators from DoS attacks while creating a single, simple endpoint that avoids relying on the public mempool and allows validators to receive these transactions directly. I know BloxRoute Labs has something similar, but there are likely issues with validator opt-in.
3 Likes

If you'd like, we could create a system that could forward the bundles directly on to validators. As far as MEV protocols go, FastLane has the greatest coverage of validators (roughly 1/3 now and rapidly growing) and is also optimally positioned to verify that the bundles wouldn't disrupt any of the MEV auctions or lead to 'sandwich' attacks on users. I'd need to do more research on the latter though, and it would be an 'opt in' system for validators.

If you're interested, please send me a DM. My biggest hesitation with respect to building something like this would be that by the time we finish, the whole stack it's built on would be deprecated due to Polygon 2.0 :sweat_smile:

3 Likes

The validators validate the transaction directly before inserting it into the block, so there wouldn't be a way for invalid inclusion unless the validator had modified their node to bypass that check.

3 Likes

Thank you for the reply and the initiative, @Thogard.

This will not be the case, as even after Polygon 2.0 the PoS validators will still exist, and the functionality of the APIs will remain the same.

3 Likes

Interesting! That's good to know. We currently use bor nodes for our MEV infrastructure. I was under the impression that 2.0 would involve a pivot away from geth (bor) and towards an erigon-based execution client. But I'm sure we could set up the relay so that we could move the module from geth to erigon with minimal refactoring.

I'll discuss with my team - this type of infra layer would be a fun project.

3 Likes

Hi folks, a small update has been made to the PIP. The namespace of the API has been changed from eth to bor, meaning the new API endpoint will now be bor_sendRawTransactionConditional.

2 Likes

Hello everyone, we have included PIP-15 (that is, added support for the bor_sendRawTransactionConditional API) in the latest bor v1.0.4 release.

Try it out and please feel free to share your feedback. Thank you.

1 Like

Will begin testing out a relay shortly.

Currently, we plan to delay all bundled operations by two blocks to ensure that they arenā€™t being used to sandwich attack or otherwise take advantage of users of the regular mempool.

1 Like

Why was the namespace changed from eth to bor?

Wallets and other chains support EIP-4337 under the eth namespace, and wallets have not added support for any bor namespace methods.

Any universal dApps which utilize this method would also expect to be able to use the eth namespace for it.

Historically, bor has been used for methods which are uniquely and specifically Polygon-related. EIP-4337 is not unique to Polygon, and putting it under the bor namespace just adds headache and complexity to projects - or they will simply not support EIP-4337 on Polygon.