# 13. The Eclipse Score

    Invent an easy-to-use "Eclipse Score", based on at least 3 metrics, to display validators who are not effectively representing their delegators via voting.

    Let's Define A New Scoring Metric for Terra Validators

    For example: if you participate in only 5% of votes but are in the top 5% of validators by LUNA delegated, you should have a terrible Eclipse Score. Alternatively, if you participate in 5% of votes but are in the bottom 10% of validators by amount delegated, you're not letting down the community quite as badly, and your score should reflect that (though it still shouldn't be very good; 5% is awful!).
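The idea above can be sketched as a toy scoring function in which the same low participation rate is penalized more heavily the higher a validator sits in the delegation table. The formula and all numbers here are illustrative assumptions, not the method used later in this analysis:

```python
def eclipse_score(participation_rate, delegation_percentile):
    """Toy Eclipse Score in [0, 100]; lower is worse.

    participation_rate: share of governance votes attended, in [0, 1].
    delegation_percentile: rank by LUNA delegated, in [0, 1]
        (0.95 means top 5% of validators by delegation).
    """
    # Weight the participation shortfall by delegated stake: skipping
    # votes hurts more when more of the community's LUNA is behind you.
    penalty = (1 - participation_rate) * (0.5 + 0.5 * delegation_percentile)
    return round(100 * (1 - penalty), 1)

# A top-5% validator voting on 5% of proposals scores far worse than a
# bottom-10% validator with the same participation, as the text suggests.
print(eclipse_score(0.05, 0.95))  # heavily delegated, barely votes
print(eclipse_score(0.05, 0.10))  # lightly delegated, barely votes
```

Both scores stay low, matching the point that 5% participation is poor regardless of stake; the delegation weight only changes how poor.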

    Provide a table and at least 1 visual that displays validators according to your Eclipse Score, e.g. a bubble chart with LUNA Delegated vs. Votes Attended.

    Top Score Worthy: Additionally, analyze the top 5 best and top 5 worst validators according to your Eclipse Score. For insight, provide your analysis or commentary on one, several, or all of these validators.

    3 Metrics to Rank the Validators

    To determine a good validator, given that not much data is available to us, we define the following three criteria:

    • Total Voting Power
    • Voting Participation
    • User (Staking) Growth

    The following three diagrams show the output of these three metrics and the top 20 validators based on each criterion.

    [Three charts: the top 20 validators by Total Voting Power, Voting Participation, and User (Staking) Growth.]
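Before the three criteria can be compared or combined, they need to be put on a common scale. A minimal sketch of that step, using min-max normalization and mock validator figures (the real analysis pulls these values from on-chain data):

```python
validators = {
    # name: (LUNA delegated, votes attended out of 20 proposals, new stakers)
    # All figures below are mock data for illustration.
    "A": (9_000_000, 3, 40),
    "B": (2_500_000, 18, 220),
    "C": (600_000, 20, 90),
    "D": (150_000, 12, 310),
}

def min_max(values):
    """Scale a list of numbers to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(validators)
power = min_max([v[0] for v in validators.values()])   # Total Voting Power
voting = min_max([v[1] / 20 for v in validators.values()])  # Voting Participation
growth = min_max([v[2] for v in validators.values()])  # User (Staking) Growth

# Rank validators by each criterion (top 20 in the real data, all 4 here).
for label, col in [("voting power", power), ("participation", voting), ("growth", growth)]:
    ranked = sorted(zip(names, col), key=lambda x: -x[1])
    print(label, "->", [n for n, _ in ranked])
```

Each criterion produces its own ranking, which is what the three per-metric charts display.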

    A Single Weighted Metric

    We then combine the above three criteria into a single weighted score, with the following weights:

    • Voting power: 60%
    • Active participation in voting: 20%
    • Growth and popularity rate among users: 20%

    Validators are then ranked according to this combined score.
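The 60/20/20 weighting can be sketched as a simple weighted sum over the normalized metrics. The validator values below are assumed, illustrative inputs, already scaled to [0, 1]:

```python
# Weights from the text: voting power 60%, participation 20%, growth 20%.
WEIGHTS = {"voting_power": 0.6, "participation": 0.2, "user_growth": 0.2}

def combined_score(voting_power, participation, user_growth):
    """Weighted sum of the three normalized criteria, in [0, 1]."""
    return (WEIGHTS["voting_power"] * voting_power
            + WEIGHTS["participation"] * participation
            + WEIGHTS["user_growth"] * user_growth)

# Mock normalized metrics for four validators (assumed values).
validators = {
    "A": (1.00, 0.05, 0.10),  # huge stake, barely votes
    "B": (0.27, 0.90, 0.67),
    "C": (0.05, 1.00, 0.26),  # votes on everything, tiny stake
    "D": (0.00, 0.60, 1.00),
}

ranking = sorted(validators, key=lambda n: combined_score(*validators[n]), reverse=True)
print(ranking)
```

Note a consequence of the 60% weight on voting power: a heavily delegated validator can outrank better-participating ones on the combined score even with very low participation, which is worth keeping in mind when reading the final ranking.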