Gathering block information (and potentially other lower-level metrics) from the validators is a fantastic goal, but the proposed technical solution has some aspects that need further consideration:
- The client-server structure makes the system centralized, adds a single point of failure (the server), and creates a trust relationship between each validator and the operator of that server. Such an approach is incompatible with a network that strives for full decentralization, like Polygon.
- The proposed system is opaque and places the data in a silo. The community cannot obtain the raw information provided by validators, and no one can build additional or alternative tooling, creating vendor lock-in.
A better approach would be for the validators to publish the information over a decentralized pubsub protocol, making it freely accessible to everyone and allowing anyone to build analytics, monitoring, and alerting tools on top of the data stream. With this approach, any alerting backend, including the proposed one, can receive the information via the decentralized messaging protocol instead of directly from the validators. Other backends and frontends can subscribe to the information equally well, creating an open and fair environment with no lock-in and no trust required.
To provide an example, the Streamr Network uses a similar approach to share node metrics across the community, allowing anyone to build tooling such as this explorer, where real-time information about the network nodes is available without any centralized backend collecting the data.
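To make the publish/subscribe flow concrete, here is a minimal sketch in Python. The `PubSub` class is an in-process stand-in for the decentralized transport (such as the Streamr Network), and the topic name, validator id, and message fields are illustrative assumptions, not part of any existing protocol. The point it demonstrates is that the publisher does not need to know who consumes the data: any number of independent backends subscribe to the same stream.

```python
import json
from collections import defaultdict

class PubSub:
    """In-process stand-in for a decentralized pubsub transport."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        payload = json.dumps(message)  # messages travel as JSON on the wire
        for handler in self._subscribers[topic]:
            handler(json.loads(payload))

network = PubSub()

# Two independent backends (e.g. an alerting service and an archiver)
# subscribe to the same stream without coordinating with each other.
alerts, archive = [], []
network.subscribe("polygon/validator-metrics", alerts.append)
network.subscribe("polygon/validator-metrics", archive.append)

# A validator publishes block information once; every subscriber receives it.
network.publish("polygon/validator-metrics", {
    "validator": "0xabc...",   # hypothetical validator id
    "blockNumber": 1234567,
    "blockHash": "0xdef...",   # hypothetical hash
})
```

In a real deployment the `PubSub` object would be replaced by a client for the decentralized network, but the subscriber-side code would look the same: consumers attach handlers to a stream rather than querying a central server.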
I would be happy to work on, and put forward over the next few weeks, an improved proposal around the same idea with two important improvements:
- The data published by validators is distributed over a decentralized protocol in an open and accessible way,
- The data content is extensible and flexible, allowing it to include a set of metrics - for example CPU & memory usage, or whatever the community finds useful for detecting problems in the validator set.
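As a sketch of what an extensible message format could look like, the snippet below pairs a fixed envelope (validator id, block number) with an open-ended `metrics` map that the community can grow over time without breaking existing consumers. All field names here are illustrative assumptions, not a finalized schema.

```python
import json

def make_metrics_message(validator_id, block_number, **metrics):
    """Build a validator metrics message: a fixed envelope plus an
    open-ended metrics map, so new metrics can be added later without
    changing the message structure that subscribers rely on."""
    return {
        "validator": validator_id,
        "blockNumber": block_number,
        "metrics": metrics,  # e.g. cpuPercent, memoryMb, peerCount, ...
    }

# A validator attaches whatever metrics it supports; consumers ignore
# keys they do not understand.
msg = make_metrics_message("0xabc...", 1234567,
                           cpuPercent=41.5, memoryMb=2048)

wire = json.dumps(msg)       # serialize for the pubsub transport
decoded = json.loads(wire)   # any subscriber can decode it back
```

Because the envelope is stable and the metrics map is free-form, a validator running newer software can publish additional metrics while older tooling keeps working unchanged.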