Pre-PIP Discussion: Addressing Reorgs and Gas Spikes

Changes in BaseFeeChangeDenominator and SprintLength

Note: The poll below serves only to gauge general sentiment toward the changes and is by no means a binding vote on the hardfork's contents or implementation.

The poll is one of several feedback tools employed, alongside this discussion thread and the Polygon Builders Session, each addressing a different audience: validators, the general community, and core infrastructure developers. A hardfork on Polygon is adopted in a decentralized manner via rough consensus of two-thirds of the $MATIC stake validating the network.

As of January 18th, all active validators have executed the Delhi hardfork, with 3.5 billion staked $MATIC (including delegations) now validating the chain’s upgraded version.

For more information on how upgrades are performed, please refer to the Polygon documentation.

Which of the following changes would you implement?
  • Decreasing SprintLength to 16 blocks from 64 and increasing BaseFeeChangeDenominator from 8 to 16 (as proposed in this document)
  • Only decreasing SprintLength to 16 blocks from 64
  • Only increasing BaseFeeChangeDenominator from 8 to 16
  • Other solution (please specify in the comments)
  • None of the above

Overview

In this thread, we invite the community to discuss proposed changes to the Polygon PoS chain that aim to provide better UX by addressing two major concerns: a) reorgs and b) gas spikes during high demand.

Reorganizations can occur on the chain under the consensus mechanism Bor uses. Although the frequency of reorgs has been reduced since validators began using the BDN, they remain prevalent and a cause for concern among dApp developers. One of the ways identified to mitigate the issue is to reduce the sprint length from the current 64 blocks to 16 blocks.

The Polygon PoS chain saw a major update at the beginning of 2022 when EIP-1559 was rolled out on Mainnet as part of the London hardfork, changing the on-chain gas price dynamics. Although it works well the majority of the time, we have seen huge gas spikes during high demand due to rapid increases in the base fee. We propose to smooth changes in the base fee by increasing the BaseFeeChangeDenominator from 8 to 16.

Chain Reorganisations

A chain reorganisation (or “reorg”) takes place when a validator node receives blocks that are part of a new “longer” or “higher” version of the chain (we refer to this as the chain with the highest difficulty, or the “canonical” chain). The validator node will then ignore/deactivate blocks in its old highest chain in favour of the blocks that build the new highest chain.

The impact on applications (and ultimately users) relates to transaction finality: reorgs disrupt an application's ability to be confident that its transactions are part of the canonical version of the chain. To get around this, applications need to wait for additional block confirmations.

Why are there reorgs in the first place? Because the consensus mechanism used in PoS Mainnet is probabilistic - meaning that finality is eventual and typically based on the number of confirmations layered on top of the block holding your transaction.

As a typical industry example, a transaction on the Bitcoin network is considered "final" after about 6-12 block confirmations. On PoS we suggest that applications wait approximately 50 or more blocks to feel safe about transaction finality - this can lead to an undesirable user experience unless application teams build in some fancy workarounds.
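
For teams that need a stopgap today, here is a minimal sketch of waiting out confirmations before treating a transaction as final, using go-ethereum's ethclient (the 50-block threshold is the suggestion from above; everything else is illustrative):

```go
package finality

import (
	"context"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

const confirmations = 50 // suggested wait on PoS, per the discussion above

// WaitForFinality polls until the transaction's block has `confirmations`
// blocks built on top of it. If a reorg evicts the transaction, the receipt
// lookup starts failing again and the loop simply keeps polling.
func WaitForFinality(ctx context.Context, c *ethclient.Client, txHash common.Hash) error {
	for {
		receipt, err := c.TransactionReceipt(ctx, txHash)
		if err == nil && receipt != nil {
			head, err := c.BlockNumber(ctx)
			if err != nil {
				return err
			}
			if head >= receipt.BlockNumber.Uint64()+confirmations {
				return nil // buried deep enough to call final
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // ~1 PoS block
		}
	}
}
```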

Proposed Solution

Based on the above, we propose decreasing the sprint length from 64 blocks to 16. A block producer would then produce blocks continuously for roughly 32 seconds (16 blocks x 2 s), compared with the current 128 seconds. This should help a great deal in reducing the frequency and depth of reorgs. The change doesn't affect the total time or number of blocks over which a validator produces blocks in a span, so overall rewards are unchanged.

Background Work

We observed that most reorgs have very low depth (as is clear from the graph below, created from a sample set of blocks/reorgs).

We conducted experiments on multiple devnets of 7 nodes each. On each devnet we set a different sprint length (8, 16, 32, or 64) and programmatically induced reorgs in one-node and two-node partitions.

In the one-node partition, we ran the experiment twice, disconnecting the primary node in one run and the secondary node in the other. We attempted to induce reorgs of length 4x the sprint size and found that the maximum reorg length was around 1x the sprint length.

In the two-node partition, we ran the experiment twice, disconnecting (primary, secondary) in one run and (secondary, tertiary) in the other. We attempted to induce reorgs of length 4x the sprint size and found that the maximum reorg length was around 2x the sprint length.

As the sprint length decreased, the reorg depth also decreased. This relationship is clear in the charts below.

Gas Spikes

To know more about the implementation of EIP-1559 and its effects on Polygon, you can refer to this forum thread.

To summarize, the main reasons for gas spikes happening during high demand are:

  1. Exponential gas pricing due to EIP-1559 and the current baseFee: on Ethereum, a series of constantly full blocks will increase the gas price by a factor of 10 roughly every ~20 blocks (~4.3 min on average). On the PoS chain, with a 2-second block time, the baseFee goes 10x in just ~40 seconds (see the sketch after this list). A user's transaction can become ineligible right after the block it was fired in because the network's gas price has climbed past it: as utilization rises, the baseFee keeps climbing, and the transaction is stuck pending until the baseFee comes back down and the transaction becomes eligible again. Gas also spikes faster than applications (wallets, etc.) can keep up with or predict. This results in poor user experience during peak times.
  2. Bad contracts: occasionally, poor design choices by dapp developers unintentionally create a DDoS-like effect on the network, littering the transaction mempool with tons of transactions, driving up demand for blockspace, and resulting in higher gas prices for everyone.
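
To sanity-check the numbers in point 1, and to preview the effect of the proposed change, here is a standalone sketch (it assumes every block is completely full, so the baseFee compounds at its maximum per-block rate):

```go
package main

import (
	"fmt"
	"math"
)

// blocksTo10x solves (1 + 1/d)^n = 10 for n: the number of consecutive
// full blocks needed to multiply the baseFee by 10.
func blocksTo10x(denominator float64) float64 {
	return math.Log(10) / math.Log(1+1/denominator)
}

func main() {
	for _, d := range []float64{8, 16} {
		n := blocksTo10x(d)
		fmt.Printf("denominator %2.0f: ~%.0f full blocks (~%.0f s at 2 s/block)\n", d, n, n*2)
	}
	// Output:
	// denominator  8: ~20 full blocks (~39 s at 2 s/block)
	// denominator 16: ~38 full blocks (~76 s at 2 s/block)
}
```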

Proposed Solution

Hardfork

We would like to propose increasing the BaseFeeChangeDenominator from the current value of 8 to 16. This will smooth the increase (or decrease) in the baseFee when the gas used in a block is higher (or lower) than the target gas limit. After this change, the maximum per-block rate of change drops to 6.25% (100/16) from the current 12.5% (100/8).
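
For reference, a simplified sketch of where this parameter sits in the EIP-1559 update rule (the real implementation in bor/geth, CalcBaseFee, uses big-integer arithmetic and enforces a minimum delta; the 15M gas target here is illustrative):

```go
package main

import "fmt"

// nextBaseFee applies the per-block EIP-1559 adjustment: the baseFee moves
// in proportion to how far gasUsed is from gasTarget, scaled down by the
// denominator. A full block (gasUsed = 2x target) gives the maximum change.
func nextBaseFee(baseFee, gasUsed, gasTarget, denominator float64) float64 {
	return baseFee + baseFee*(gasUsed-gasTarget)/gasTarget/denominator
}

func main() {
	const gasTarget, fullBlock = 15e6, 30e6
	fmt.Println(nextBaseFee(100, fullBlock, gasTarget, 8))  // 112.5 gwei (+12.5%)
	fmt.Println(nextBaseFee(100, fullBlock, gasTarget, 16)) // 106.25 gwei (+6.25%)
}
```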

Background Work - Simulations

We ran several simulations and found the following:

  • Blue line: recent PoS gas price spike
  • Red line: ‘back-fit’ calculation with a 2x decrease in the rate of change
  • Key point: the rate compounds exponentially over time (blocks), so a small change in the rate is greatly magnified

Experiments using Load Bot

We ran the load bot on devnets created with different BaseFeeChangeDenominator values - 8 (current), 16 (proposed), 32, and 64. Two sets of results are important to note:

  1. Blocks/time taken for the baseFee to become 10x (higher is better)
  2. Number of blocks for 100k txs to get mined into blocks (lower is better)

Based on these results, we are proposing changing the BaseFeeChangeDenominator from 8 to 16.

Next Steps

We invite the community to participate in the discussion and in the soft-consensus vote attached to this post. To signal preference, each validator team present on Discourse can cast one vote using its designated account.

We will be conducting a Town Hall on Monday the 12th dedicated to the proposed changes in this post. The meeting will serve as a place where the bright minds of the community come together to evaluate decisions, share suggestions, and clarify any queries related to the proposed changes.

We look forward to seeing and hearing from you there; the meeting invite will be sent out via email. The session will be recorded for those who can't attend.

Note: The scheduling of a potential hardfork will be communicated using regular channels, i.e., Discord, TG, Email, Forum.

3 Likes

@Mateusz

Regarding the reduction in sprint length, could you elaborate on how the change in sprint length would be implemented into the consensus engine?

  1. Would checkpoint intervals stay the same?
  2. Would this require any changes to Heimdall?
  3. How would this impact finality?

Regarding the gas spikes - the new PFL protocol (Polygon FastLane) will eventually mitigate a significant percentage of the “spam” transactions that lead to many of the gas price spikes. Currently about 20% of validators (based on stake weight) are participating, but we expect that to grow significantly as awareness spreads and as MEV revenue for participating validators increases. Similarly to how Flashbots adoption on Ethereum Mainnet led to a massive decrease in spam, we expect PFL to have the same effect here on Polygon.

If we assume that PFL adoption does reach threshold levels, I’m concerned that increasing the BaseFeeChangeDenominator might render the base fee meaningless. Currently, during “high spam periods”, the vast majority of the transactions are redundant “spam” transactions that exist to game the randomness of the p2p layer’s transaction propagation mechanism. The spam transactions stay in the mempool until processed and, based on the quantity of spam transactions, provide time for the baseFee to “catch up” to the market rate (maxPriorityFeePerGas). With PFL removing the economic incentive for the existence of these duplicate “spam” transactions, I foresee the baseFee having much less time to “catch up” to the market rate before it (and the market rate / priority fee) begins dropping back down.

Regardless, I believe that slowing down the rate of change of the baseFee won't make things easier for wallets' gas price estimators. This is because if the baseFee is slower to increase, the minimum maxPriorityFeePerGas for block inclusion will be slower to decrease. This may actually hurt users, as wallets are better equipped to handle changes in baseFee than changes in maxPriorityFeePerGas, the latter of which has greater variance between nodes. Note that I'm starting with the assumption that the user may be 'priced out' of the block and am not addressing the potential savings to the user from EIP-1559 if the baseFee drops between transaction submission and block inclusion.

I can't help but think that going back to the legacy gas price mechanism might be a simpler and more effective solution.

1 Like

Before casting a vote, we'd love to see more comments. Because of @Mateusz's commentary, we might initially lean toward reducing the sprint length only, but we are certainly open to all options.

1 Like

@Thogard thanks for your thoughtful analysis. I recall reading some of your ideas about re-orgs previously and they were equally helpful.

Regarding your questions about the sprint length change: there is no immediate impact on checkpoints or explicit finality. Obviously, if the rate and depth of re-orgs decreases as intended, there is a significant benefit to probabilistic finality, and this is part of the motivation for the change.
Having said that, we are also planning a subsequent set of changes - which will require a hardfork as well - that will directly address this and formally anchor finality to sprints. The intent is to have much faster, more explicit finality. More on that soon.

With the change to the base fee calculation, I would say that spam transactions are part of the issue, but only part of the story. While spam is often a problem, some of the spikes we have seen in the base fee recently have been driven by sustained surges in usage from legitimate applications.
I think we can all agree that the robustness and success of applications is the primary driver of value to any blockchain - all the more with Polygon given the range and diversity of the applications we host. I’d wager if you polled any number of the prominent application teams on Polygon they would say the wild variability of gas fees is their number one current challenge, even above re-orgs. We’re aware of some teams that have had to implement extraordinary - and unhealthy - mitigations to manage the spikes. From our perspective, some of these spikes are tantamount to outages - the system functionally becomes unusable.

Moreover, our modeling and testing has shown us that the core EIP-1559 parameter - which arbitrates the maximum rate of change - is just too high for us given the much greater throughput we have compared to other systems like Ethereum. If anything, I’m concerned that the 2X change we’re proposing here might not be enough - but we’re intentionally being cautious about this.

Finally, I take your comment about legacy gas pricing as being somewhat informal. :slight_smile: Any gas auction mechanism is tricky and needs to be balanced correctly; basically, it's a hard problem. But EIP-1559 has proven successful at managing capacity on Ethereum while improving gas price stability, and I think it can be successful on Polygon. We also consider it critical to maintain compatibility with much of the tooling and infrastructure in the ecosystem. We just need to get the tuning right.

Thanks again.

3 Likes

@pgo You nailed it. You’re correct about my comment on legacy gas pricing being somewhat informal. It’s actually a topic that I don’t have a high-conviction opinion on one way or the other, but I do enjoy discussing it and I do believe that the matter isn’t as “settled” as many others believe. This probably isn’t the place for it, but if you’re curious and have some free time then maybe we could connect in telegram (@thogard) or discord (thogard#3282) and go over some of the more nuanced elements of that EIP and some issues I’ve found in the studies conducted on it.

Regarding the proposed changes:
My own mental model is still stuck on the mechanism by which decreasing the baseFee’s rate of change (delta) will lead to more reliable estimates from wallets. I’m going to lay out my thoughts, along with some formulas, below. I assume you’re familiar with most of this but am going to be thorough for the sake of other readers.

The criterion we're evaluating is a wallet's capacity to estimate gas parameters that will place the transaction in a block within the next ten seconds or so.

In EIP-1559, there are two market forces at play:

  1. The base fee
  2. The priority fee (EffectiveGasTip)

By decreasing the max delta of the base fee, it feels like a wallet should be able to provide a more reliable estimate… but once we look at how these numbers are calculated then I’m not sure that’s actually true.

Under the hood, a Type 2 transaction has two “gas price” parameters: maxFeePerGas and maxPriorityFeePerGas. The two parameters are exactly what their names imply - maxFeePerGas is the maximum (capped) total gas price, and maxPriorityFeePerGas is the maximum (capped) miner tip. While the wallet is responsible for providing an estimate for these two parameters, it’s important to note that the wallet does not set a “base fee” parameter. In fact, transactions don’t even have a baseFee parameter – the baseFee exists solely in Bor/Geth’s miner/worker process (it’s not even accessible inside the EVM).

But the baseFee is still factored into how much the user actually pays in gas fees: it is the mining parameter through which the difference between maxFeePerGas and maxPriorityFeePerGas manifests. The formula is:

GasPrice = min(maxFeePerGas, baseFee + maxPriorityFeePerGas)
(Note that maxPriorityFeePerGas cannot exceed maxFeePerGas)

Also note its sister formula:
EffectiveGasTip = min(maxFeePerGas - baseFee, maxPriorityFeePerGas)
and therefore
EffectiveGasTip = GasPrice - baseFee
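
To make the relationship concrete, here is a minimal sketch of the two formulas in Go (values are arbitrary gwei figures; bor/geth's real code uses big-integer arithmetic):

```go
package main

import (
	"fmt"
	"math"
)

// gasPrice is what the user actually pays per unit of gas once the block's
// baseFee is known: baseFee plus the tip, capped at maxFeePerGas.
func gasPrice(maxFeePerGas, maxPriorityFeePerGas, baseFee float64) float64 {
	return math.Min(maxFeePerGas, baseFee+maxPriorityFeePerGas)
}

// effectiveGasTip is what the block producer receives per unit of gas: the
// tip, capped by the headroom left between the baseFee and maxFeePerGas.
func effectiveGasTip(maxFeePerGas, maxPriorityFeePerGas, baseFee float64) float64 {
	return math.Min(maxFeePerGas-baseFee, maxPriorityFeePerGas)
}

func main() {
	// Calm conditions: maxFee=200, tip cap=2, baseFee=50.
	// The user pays 52 and the producer receives the full tip of 2.
	fmt.Println(gasPrice(200, 2, 50), effectiveGasTip(200, 2, 50))

	// baseFee spikes to 199: the user pays the 200 cap and the producer
	// receives only the 1 of remaining headroom (GasPrice - baseFee).
	fmt.Println(gasPrice(200, 2, 199), effectiveGasTip(200, 2, 199))
}
```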

So with that in mind, here’s why I’m struggling to understand why decreasing the base fee’s max delta will help the wallet estimators:

  1. If a user’s transaction fails to be included in a block, it’s because their GasPrice was too low.
  2. GasPrice is the lesser of two separate parameters (maxFeePerGas and baseFee + maxPriorityFeePerGas).
  3. The only scenario in which lowering the max baseFee delta would allow an otherwise excluded transaction into a block is when the baseFee would have otherwise increased above the user’s transaction’s maxFeePerGas.

But this is exactly the scenario that EIP-1559 was designed to handle. Users can safely set an exorbitantly high maxFeePerGas (and a low maxPriorityFeePerGas) and rest assured that not only is the scenario above prevented, but if the baseFee actually drops, their effective GasPrice drops too (because it is de facto pegged to the baseFee in this scenario) and the user saves money. The only downside to setting a high maxFeePerGas is that the user may actually have to pay it if the baseFee increases enough to warrant it - but then they can rest assured that there was no other way to be included in the block. In fact, this potential for confidence in savings is one of the strongest arguments in support of EIP-1559.

The other side…

My concern with decreasing the max baseFee delta is the potential side effect it will have on the EffectiveGasTip needed for block inclusion. If a spike in GasPrice does happen, any users who are priced out of the network because their wallet used a low maxPriorityFeePerGas estimate could see their tx delay increase (see the sketch after this list). This is because:
(For reference: EffectiveGasTip = min(maxFeePerGas - baseFee, maxPriorityFeePerGas))

  1. It will take longer for the baseFee to rise
  2. It will therefore take longer for maxFeePerGas - baseFee to become less than maxPriorityFeePerGas (and therefore become the EffectiveGasTip)
  3. It will therefore take longer for an erroneously low maxPriorityFeePerGas parameter to become irrelevant to the miner.
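
A rough illustration of points 1-2, under assumed numbers (the 500 / 30 / 100 gwei figures and the every-block-is-full assumption are illustrative, not measured):

```go
package main

import "fmt"

func main() {
	const (
		maxFeePerGas         = 500.0 // gwei: representative fee cap
		maxPriorityFeePerGas = 30.0  // gwei: representative tip cap
		startBaseFee         = 100.0 // gwei
	)
	for _, denom := range []float64{8, 16} {
		baseFee, blocks := startBaseFee, 0
		// Count full blocks until maxFeePerGas - baseFee drops below
		// maxPriorityFeePerGas, i.e. until the fee-cap headroom (not the
		// tip parameter) becomes the EffectiveGasTip.
		for maxFeePerGas-baseFee > maxPriorityFeePerGas {
			baseFee *= 1 + 1/denom // every block assumed full
			blocks++
		}
		fmt.Printf("denominator %2.0f: %d blocks (~%d s)\n", denom, blocks, blocks*2)
	}
	// denominator  8: 14 blocks (~28 s)
	// denominator 16: 26 blocks (~52 s)
}
```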

In general, though, I'm just very skeptical that during high-demand periods we won't see an artificially reduced baseFee manifest itself, via supply and demand, as an artificially increased EffectiveGasTip... and I believe the latter is harder to control and plan for than the former, specifically when accounting for the changes introduced by EIP-1559 that were added precisely to help control these scenarios.

Some thinking out loud (low conviction thoughts):
The more I think about it, the more I want to gather some data on “stuck” transactions. It would be relatively straightforward to ascertain which of the two transaction parameters (maxFeePerGas or maxPriorityFeePerGas) is responsible, or whether both being too low is actually the most common cause of delay. Furthermore, we could fairly easily simulate exactly what would happen if we decreased the max baseFee delta.
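
For what it's worth, a sketch of the classification I have in mind (all names and thresholds here are hypothetical; a real version would read pending transactions and the observed minimum included tip from a node):

```go
package main

import "fmt"

type pendingTx struct {
	maxFeePerGas         float64 // gwei
	maxPriorityFeePerGas float64 // gwei
}

// classify decides which parameter priced a pending tx out of blocks,
// given the current baseFee and the minimum tip actually being included.
func classify(tx pendingTx, baseFee, minIncludedTip float64) string {
	feeCapTooLow := tx.maxFeePerGas < baseFee+minIncludedTip
	tipTooLow := tx.maxPriorityFeePerGas < minIncludedTip
	switch {
	case feeCapTooLow && tipTooLow:
		return "both too low"
	case feeCapTooLow:
		return "maxFeePerGas too low"
	case tipTooLow:
		return "maxPriorityFeePerGas too low"
	default:
		return "not priced out"
	}
}

func main() {
	fmt.Println(classify(pendingTx{120, 1}, 100, 30))  // both too low
	fmt.Println(classify(pendingTx{500, 1}, 100, 30))  // maxPriorityFeePerGas too low
	fmt.Println(classify(pendingTx{120, 50}, 100, 30)) // maxFeePerGas too low
}
```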

I'd also propose that if the majority of “stuck” transactions are stuck due to a low maxPriorityFeePerGas estimate, then it might be worth considering moving in the opposite direction and actually increasing the max baseFee delta.

Would love to hear your thoughts on this or any clarifications/corrections if I’m missing something.

Side note: I can't help but think that changing the blockchain's block construction process to accommodate the inaccuracies of third-party estimators is kind of backwards. Perhaps approaching the wallets / gas price oracles directly would be a simpler solve? I'm also curious to see whether there are any strange interactions between gas price estimators and the 30 gwei price floor.

3 Likes

Thanks for opening this for discussion.

It is a shame that the Polygon network continues to be so unaware of the network conditions it is effectively creating due to the randomness of tx propagation and how this affects the behavior of MEV bots on the network.

I created this pull request nearly one year ago:

It was closed because apparently the Polygon team was already planning to address this, but there was never any update.

If you look at periods of high network congestion, you are guaranteed to find that much of it is due to competitive spam.

This doesn't need to be handed off to a third-party, for-profit company like Thogard's PFL in order to be fixed. In fact, doing so goes completely against the open-source ethos that Polygon has worked so hard to uphold.

Tl;dr there are some legacy design decisions and parameter values (like the tx_fetcher constants) that need to be addressed. They are not difficult to address, but so far nobody at Polygon has followed through with looking into this.

Please consider investigating these measures before proposing a hard fork.

2 Likes

@adamb thanks for this, but I'm not sure why you would say that Polygon is “unaware” of this issue when the PR you're referencing was recently re-opened and acknowledged.

While this does seem to be a significant spam scenario, it is surely not the only one; we have recently seen other scenarios create periods of high gas load.

Given this, it seems reasonable to prioritize changes that mitigate the overall negative effects of all spam, regardless of the cause. And while periods of spam do sometimes lead to full blocks, which can crowd out legitimate transactions, the most common negative effect is artificially high gas prices and occasional massive price spikes.

This is why it makes sense to prioritize changing the base gas price calculation above other mitigations.

Polygon will of course continue to research and address other issues that impact the performance and throughput of the system.

Thanks again.

2 Likes

@Thogard thanks again for your analysis on this issue.

The proposed change should have a positive effect on the predictability of the gas price, if for no other reason than keeping the price more stable. But I would not assert that this is the primary goal of the change.

Informally, I think of the rate of change in the calculation as governing the allowed capacity for price change over time/blocks. When EIP-1559 was initially implemented on Polygon, the rate was set to be the same as Ethereum's, since the bias is generally toward maximum compatibility with the rest of the ecosystem. But since Polygon produces blocks at 6x the rate of Ethereum and the calculation compounds exponentially across blocks, the capacity for price change over time is greatly magnified. This is what allows the massive spikes that have a huge negative impact on the usability of the system.
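
To put rough numbers on that: with the same 12.5% per-block cap, Ethereum's ~12-second blocks allow at most 1.125^5 ≈ 1.8x of base fee movement per minute, while Polygon's ~2-second blocks allow up to 1.125^30 ≈ 34x per minute.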

Hope this helps.

2 Likes

Just one thing to add about reducing the sprint length: this is the first step toward introducing real, rather than probabilistic, finality on the V1 chain. @sandeep

Well, it was closed because @ferranbt essentially promised that it was being worked on internally; then nothing happened, and I had to pester folks to reopen it and take it seriously.

Go ask some users whether they have trouble getting their transactions included during peak network load. I hear users complaining about this pretty much every time we have an “event”, like price volatility or an NFT project hogging blockspace.

We have started work on it. I believe I am the person you could ask about it - a staff engineer on V1+Edge.
Thank you for your analysis. We decided to start by introducing a set of txpool-related metrics: at the moment it's not obvious which change will affect the system, and how, so we need a few key metrics to measure and optimize against. As next steps we're going to work on, I believe, two of them: txpool efficiency (the ratio of normal to normal + spam transactions) and the median/95th/99th/max time for a transaction to be included in the chain.
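
For illustration, the two metrics might look something like this, using the Prometheus Go client purely as an example (bor has its own metrics stack; every name below is hypothetical):

```go
package txpoolmetrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	normalTxs = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "txpool_normal_txs_total",
		Help: "Transactions judged non-spam on entry to the pool.",
	})
	spamTxs = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "txpool_spam_txs_total",
		Help: "Transactions judged spam (efficiency = normal / (normal + spam)).",
	})
	inclusionTime = prometheus.NewSummary(prometheus.SummaryOpts{
		Name:       "txpool_inclusion_seconds",
		Help:       "Time from pool entry to on-chain inclusion.",
		Objectives: map[float64]float64{0.5: 0.05, 0.95: 0.01, 0.99: 0.001},
	})
)

func init() {
	prometheus.MustRegister(normalTxs, spamTxs, inclusionTime)
}

// observeInclusion would be called when a pooled tx lands in a block.
func observeInclusion(enteredPool time.Time) {
	inclusionTime.Observe(time.Since(enteredPool).Seconds())
}
```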

Thanks for your reply.

Can you come up with any reason why the sentry should NOT send a full transaction packet to the validator? Sending only transaction hashes (announcements) adds an unnecessary round trip.

There are things that need measurement and there are things that (IMO) are just common sense.

FastLane's mechanism of action isn't the removal of random propagation alone - it's also the 250 ms head start it gives to the MEV auction winner. That 250 ms advantage greatly exceeds the variance left over in the p2p layer even when the direct-vs-announced mechanism is removed at the sentry level.

The direct-only PR alone would not solve the problem. It would help, but it wouldn't solve it.

FastLane solves it and gives validators MEV revenue. MEV-Bor would also solve it.

Outside of MEV, there is no way to fully remove the randomness in the p2p layer. It can be reduced, but not removed entirely. Furthermore, transaction announcements exist for a reason - to prevent redundant data transmission. Many validators run 4+ sentries. Your PR would increase the load on the validators by a non-trivial amount as the validator node would be blasted with a redundant copy of every full transaction from each of its sentries. (The FastLane sentry patch does the opposite - it announces all txs, meaning the validator only receives the full tx once. Gotta keep those data costs down :slight_smile: )

Regardless, it's a fact that giving validators access to MEV revenue is inevitable. Whether it's via validators responding to your DMs to set up a back-room MEV revenue split with your bot or via validators joining the FastLane protocol, most validators will start (or have already started) to receive MEV revenue. The FastLane path just has the extra benefits of solving the searcher spam issue and being non-harmful to users (no sandwich attacks, no front-running, etc.).

Thank you for the response. Funny story - ironically enough, I think I was actually the first external (non-Polygon) person to raise awareness of this issue and call out the 12.5% as inappropriate for Polygon given the block times. (You can search the Polygon Discord for the text “12.5%” to see my very informal complaining from before EIP-1559 went live.)

I understand the proposal and its mechanism of action - it slows down the baseFee's ability to grow past a user's transaction's maxFeePerGas parameter. But my concern is still that pricing users out via a rising baseFee is exactly what EIP-1559 is supposed to do, and allowing users to set a high maxFeePerGas to handle a growing baseFee was a specific design feature of EIP-1559. This proposed solution is a good one that will help users... but I can't help but feel like we're putting duct tape on something that needs to be structurally changed.

If we take for granted that we want the chain to be fully compatible with Ethereum’s EIPs and that we can’t change EIP-1559 other than modifying some of the variables, then it seems to me that the real fix should be done at the wallet / gas price estimator level. A higher maxFeePerGas estimate from the user’s wallet will also help users avoid the scenario where baseFee exceeds the maxFeePerGas and their transaction gets priced out of block inclusion.

What program does MetaMask use when making a gasPrice estimate?

If they use bor's (geth's) built-in estimator, I can submit a PR to the official bor repo by this weekend to fix it (smooth/boost the maxFeePerGas based on the estimated time to tx inclusion, accounting for the size of the mempool and the maxFeePerGas of txs at set thresholds to predict the steady baseFee rise from regular usage). But I'm not sure that's what they use.
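
To sketch the kind of fix I mean (hypothetical estimator logic, not an existing bor API; the mempool-threshold part is omitted for brevity):

```go
package main

import (
	"fmt"
	"math"
)

// suggestMaxFee boosts maxFeePerGas by assuming the worst case: every block
// until expected inclusion is full, so the baseFee compounds at its maximum
// per-block rate of 1 + 1/denominator.
func suggestMaxFee(baseFee, tip float64, blocksToInclusion int, denominator float64) float64 {
	worstCase := baseFee * math.Pow(1+1/denominator, float64(blocksToInclusion))
	return worstCase + tip
}

func main() {
	// Expecting inclusion within ~10 blocks (~20 s) from a 100 gwei baseFee:
	fmt.Printf("denominator 8:  ~%.0f gwei\n", suggestMaxFee(100, 2, 10, 8))  // ~327
	fmt.Printf("denominator 16: ~%.0f gwei\n", suggestMaxFee(100, 2, 10, 16)) // ~185
}
```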

Is there someone at MetaMask we can reach out to for more information on their gas price estimation methodology?
(Edit: this would be in addition to your proposals, both of which I support)

You used to use “CPU load” as a reason to FUD against this change, now it’s about bandwidth. You realize that they’re all in the same data center, right? Lmfao.

This is just plain and simple FUD from someone who wants to build a monopoly.

The Lens Protocol Core Team is supportive of both changes.

We especially encourage validators to support changing the BaseFeeChangeDenominator from 8 to 16.

For services operating relays, the current 1559 configuration can cause the gas price to change much faster than most current systems can react, leading to increased uncertainty about whether a given transaction will need its fee settings updated in order to be included within an acceptable timeframe.

This reduced growth rate will bring the rate of change within the capabilities of current systems and will provide a better UX for users of gasless systems such as the Lens API, Biconomy, Gelato Relay, and OpenZeppelin Defender.

Polygon PoS is already the blockchain of choice for applications that use these systems to make applications with the best end user experience, and we hope validators support these proposed changes to help Polygon PoS continue to grow its lead in this key area.

4 Likes

Just checking on this - if the reason is bandwidth usage, I can collect metrics and report back. I suspect bandwidth will increase, but only slightly, and only between the sentries and the validator, which are almost certainly within the same intranet.

If the issue is CPU load, I can collect metrics too, and I suspect it will decrease.

To me this just seems like a very logical change, so I'm unsure why it's being opposed, with the exception of thogard's opposition, which is very obvious to me.

The last reviewer who worked at Polygon seemed to agree, and simply did not want to merge because he had other plans for the sentry/validator setup that seem like they have not come to fruition.

Fwiw, I also support the changes in this hardfork, for all the reasons that others have listed.

Fun fact - literally none of the seven validators currently active or pending w/ PFL are in the same data center.

CPU load is still a concern. As is bandwidth.

Re: monopoly - every single PFL node could crash and every validator using PFL would still continue to function just fine. That’s the whole point of our design.

Everyone here understands that higher MEV payments to validators mean less money left over for you. Your motive behind your PR is transparent. My motive is also transparent, but my solution actually fixes the problem and pays validators for doing so. Yours does neither - it simply continues to reward and incentivize spam (albeit to a lesser degree than before), places higher load on validators, and cuts them off from decentralized MEV revenue. But it leaves more money for you.

I am saying that validators are in the same d/c as their sentries. Is this not the case? I could care less whether they are in the same data center as each other, or as the PFL sentries.

Flashbots has done a great job with their vision of making MEV extraction open, transparent, and decentralized. Why do you insist on reinventing the wheel with such ineptitude?

In most scenarios, that is not the case.

Our system is significantly more open than pre-merge Flashbots was. It's also ecosystem-friendly and blocks sandwich attacks. Furthermore, a fork of Flashbots was already tried on Polygon; we learned from their challenges.

You know this. But again - you are the quintessential spammer who has the most to lose from our success.

  1. You benefit from the problem that we are setting out to solve (incentivization of spam).
  2. You benefit from validators not receiving MEV revenue
  3. You haven’t researched or tested any of your proposals

So forgive me for not taking your feedback on PFL seriously.

1 Like