Terra - The Eclipse Score


What is a validator? → Source
Terra Core is powered by Tendermint consensus. Validators run full nodes, participate in consensus by broadcasting votes, commit new blocks to the blockchain, and take part in the governance of the blockchain. Validators can cast votes on behalf of their delegators. A validator's voting power is weighted according to its total stake.
What is a delegator?
Delegators are Luna holders who want to receive staking rewards without the responsibility of running a validator. Through Station, a user can delegate Luna to a validator and in exchange receive a part of a validator's revenue.
How will delegators choose their validators?
Delegators are free to choose validators according to their own criteria. This may include:
- Amount of self-bonded Luna: The amount of Luna a validator self-bonds to its staking pool. A validator with a higher amount of self-bonded Luna has more skin in the game, making it more liable for its actions.
- Amount of delegated Luna: The total amount of Luna delegated to a validator. A high stake shows that the community trusts this validator; however, it also makes the validator a bigger target for hackers. Large stakes also concentrate voting power, which weakens the network: if, at any given time, 33% or more of staked Luna becomes inaccessible, the network will halt. Through incentives and education, this weakness can be mitigated by delegating away from validators that hold too much voting power, so validators may become less attractive as their amount of delegated Luna grows.
- Commission rate: The commission applied to rewards by a validator before being distributed to its delegators.
- Track record: Delegators can look at the track record of a validator they plan to delegate to. This includes seniority, past votes on proposals, historical average uptime, and how often the node was compromised.
Validators can also provide a website address for advertisement and to increase transparency. However, building a good reputation in the community will always be most important when attempting to attract delegators. It's also good practice for validators to have their setup audited by a third party. Please note that the Terra team will not approve or conduct any audits.
✍️ Description of Work
In this dashboard, we want to come up with an easy-to-use "Eclipse Score", based on at least 3 metrics, to identify validators who don't effectively represent their delegators through voting. To do this, we perform the following steps and finally rank the validators by the resulting "Eclipse Score":
- Show the list of all validators on Terra along with their details (until Feb 7, 2023)
- Show the list of active validators on Terra along with their details (until Feb 7, 2023)
- Rank validators based on voting power and participation in voting and proposals
- Rank validators based on voting power and the impact of votes on proposal results
- Rank validators based on voting power and commission rate
- Rank all active validators based on all of the above metrics (final result)
- Identify the 5 best and 5 worst validators based on their "Eclipse Score"
🧠 Methodology
To build this dashboard, we use the terra.core schema and the tables ez_staking, fact_governance_votes, and fact_governance_submit_proposal.
1️⃣ → How to get a list of validators:
First, it is necessary to explain how to get the list of validators in the Terra ecosystem. Each validator has an operator address and an account address. The operator address of each validator is available in the ez_staking table, while the account address is what a validator uses to perform activities such as voting. We can recover the account address of each validator from transactions of types such as MsgWithdrawValidatorCommission, MsgCreateValidator, and MsgEditValidator, and this query shows how to get these addresses.
However, there are two major issues with building the validator list from Flipside data alone. First, the actual amount of delegated Luna is not obtained correctly, because the transactions in the ez_staking table are incomplete; since this dashboard ranks validators by "Eclipse Score" and the amount of delegated Luna carries great weight in it, an incorrect value would prevent the "Eclipse Score" from classifying validators correctly. Second, we lack information about the validators themselves, such as their active, jailed, or inactive status, and the commission rate of each validator is also not obtained correctly. We therefore use a second method to get the list of validators and their details.
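As a rough illustration of that address-recovery query, the sketch below pairs operator and account addresses from MsgCreateValidator messages, which carry both. The table fact_msgs and its msg_type and msg_value columns are assumptions for illustration, not confirmed Flipside schema:
-- Assumed message-level table (hypothetical): terra.core.fact_msgs(msg_type, msg_value)
-- MsgCreateValidator carries both the operator (terravaloper1...) and the
-- creator's account (terra1...) address
select distinct
    msg_value:validator_address::string as operator_address,
    msg_value:delegator_address::string as account_address
from terra.core.fact_msgs
where msg_type like '%MsgCreateValidator%';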
So, here is how we get the full list of validators and their details:
- In this link, all Terra validators are listed along with their details.
- First, we send an HTTP GET request using Postman to the following address:
- Request URL
- Request Method: GET
- Then, after receiving the list of validators in JSON format, we convert it to a CSV file, and then convert the CSV file into a select command using a tool (a sketch of the resulting statement follows this list).
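For reference, here is a minimal sketch of the kind of statement such a tool can generate; the addresses, monikers, and figures are illustrative placeholders, not real data:
-- Each CSV row becomes one tuple; column names mirror the fields used later
select *
from values
    ('terravaloper1...', 'Example Moniker A', 0.05, 'BOND_STATUS_BONDED'),
    ('terravaloper1...', 'Example Moniker B', 0.10, 'BOND_STATUS_BONDED')
    as v (operator_address, moniker, commission_rate, status);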
Now, with the full list of validators in hand, we select the validators that are active, check the various criteria for them, and finally obtain the "Eclipse Score" for each validator.
2️⃣ → Definition of metrics and how to calculate the "Eclipse Score":
We calculate an "Eclipse Score" for all validators in several steps, each based on different criteria, then compute the final "Eclipse Score" as a weighted average of the per-step scores for each validator. Finally, we show the best and worst validators.
1 → First step: Ranking of validators based on voting power and participation in voting and proposals
In the first step, we calculate "Eclipse Score" based on voting power and participation in voting for proposals by each validator. The metrics reviewed in this section include:
- Voting Power: A validator's voting power is weighted according to its total stake; we normalize it to a value between 0 and 1.
- Votes Score: Each validator has recorded a number of votes on different proposals. We count the total number of recorded votes for each validator, then normalize this count to a value between 0 and 1 and use it as a metric for calculating the Eclipse Score.
- Proposals Score: Each validator votes on some number of proposals, so we count the proposals in which each validator participated, normalize the count to a value between 0 and 1, and use it as a metric for calculating the Eclipse Score.
- Proposals Submit Score: Each validator submits some number of proposals for voting, so we count the proposals submitted by each validator, normalize the count to a value between 0 and 1, and use it as a metric for calculating the Eclipse Score.
Finally, we calculate the Eclipse Score from these four metrics, weighting each metric by its importance, as follows:
round((("Votes Score" * 3) + ("Proposals Score" * 3) + ("Voting Power Score" * 2) + "Proposals Submit Score") * 100, 2) as "Eclipse Score"
2 → Second step: Ranking of validators based on voting power and the impact of votes on proposal results
In the second step, we calculate the "Eclipse Score" based on voting power and the impact of each validator's votes on proposal results. The metrics reviewed in this section include:
- Voting Power: A validator's voting power is weighted according to its total stake.
- Vote Option Score: Each voter casts a vote_option on a proposal: Yes, No, NoWithVeto, or Abstain, and the outcome of the proposal is determined by these votes. Each proposal passes or fails: if the percentage of Yes votes is greater than 50% and the percentage of NoWithVeto votes is less than 33.4%, the proposal is approved; otherwise it is rejected. With this metric, we check what type of vote each validator cast on each proposal and whether that vote was consistent with the proposal's result. First, we get the overall result of each proposal and each validator's vote type on it; then, using the command below, we score each validator and proposal by whether the validator's vote type is consistent with the proposal result. Finally, taking sum(vote_option_score) for each validator measures how much each validator influenced proposal results; we normalize this value to a number between 0 and 1 and use it as a criterion for calculating the Eclipse Score:
case
    when ("Proposal Result" = 'Pass ✅' and vote_option_text = 'Yes') or ("Proposal Result" = 'Failed ❌' and vote_option_text = 'No') then "Number of Vote Option"
    when ("Proposal Result" = 'Pass ✅' and vote_option_text = 'No') or ("Proposal Result" = 'Failed ❌' and vote_option_text = 'Yes') then 0
    else 1
end as vote_option_score
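For reference, the "Proposal Result" used above can be derived from vote tallies following the pass rule stated earlier (Yes > 50% and NoWithVeto < 33.4%). The CTE proposal_tallies is hypothetical:
-- proposal_tallies (assumed): proposal_id, yes_votes, veto_votes, total_votes
select
    proposal_id,
    case
        when yes_votes / total_votes > 0.5 and veto_votes / total_votes < 0.334 then 'Pass ✅'
        else 'Failed ❌'
    end as "Proposal Result"
from proposal_tallies;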
Finally, we calculate the Eclipse Score from these two metrics, weighting each metric by its importance, as follows:
round((("Vote Option Score" * 7) + ("Voting Power Score" * 2)) * 100, 2) as "Eclipse Score"
3 → Third step: Ranking of validators based on voting power and commission rate
In the third step, we calculate the "Eclipse Score" based on voting power and commission rate. The metrics reviewed in this section include:
- Voting Power: A validator's voting power is weighted according to its total stake.
- Commission Rate Score: The commission rate varies across validators, since each validator or validator organization sets its own. In general, the commission rate is the percentage of the rewards earned by a validator that is kept as a fee for its services. For example, a validator might charge a commission rate of 20%, meaning that 20% of the rewards earned by its stakers is taken as a fee, while the remaining 80% is distributed among the stakers who delegated their tokens to the validator. Commission rates can vary widely, and stakers should weigh them carefully when choosing a validator: a lower rate may be appealing, but it can also indicate a less established or less experienced validator, which could affect its ability to secure the network and earn rewards for its stakers. Up to this point we have examined a validator's effect on voting through several different metrics; now, given the importance of the commission rate, we treat a validator with a lower commission rate as a better validator, but we still take Voting Power into account when calculating the Eclipse Score, because a low commission rate does not by itself indicate a better validator.
Finally, we calculate the Eclipse Score from these two metrics, weighting each metric by its importance, as follows:
round((("Commission Rate Score" * 7) + ("Voting Power Score" * 2)) * 100, 2) as "Eclipse Score"
Finally, by combining these three steps and taking the weighted average of the obtained Eclipse Scores, we rank the validators.
- When calculating the "Eclipse Score" we did not neglect the effect of Voting Power at any step, and we included it with its own weight in the calculation formula, because the value of each validator is also measured by its Voting Power: a validator with high Voting Power can represent its delegators well, and vice versa.
- All data related to validators, including voting power, commission rate, etc., covers the period before Feb 7, 2023; all results were obtained with data up to this date.
✅ Observations
In the tables and charts above, you can see the ranking of validators based on voting power and participation in voting and proposals. As shown:
- The 5 best validators based on the obtained Eclipse Scores:
- #1: Synergy Nodes → Terra Finder, Atomscan
- #2: Pro-Nodes75 → Terra Finder, Atomscan
- #3: Gidorah → Terra Finder, Atomscan
- #4: Smart Stake → Terra Finder, Atomscan
- #5: Orbital Command → Terra Finder, Atomscan
- The 5 worst validators based on the obtained Eclipse Scores:
- #1: Forbole → Terra Finder, Atomscan
- #2: MANTRA DAO → Terra Finder, Atomscan
- #3: Luna Whale → Terra Finder, Atomscan
- #4: TheNFTProject → Terra Finder, Atomscan
- #5: Jesselstake → Terra Finder, Atomscan
- As you can see, the main metrics for calculating the Eclipse Score are participation in voting and proposals, which shows that the validators who participated in the most proposals and cast the most votes rank among the best validators.
- Also, in the scatter chart of delegated LUNA (Voting Power) vs. Eclipse Score, there are validators who, despite having low Voting Power, are among the best in terms of voting participation, and vice versa.
✅ Observations
In the tables and charts above, you can see the ranking of validators based on voting power and the impact of votes on proposal results. As shown:
- The 5 best validators based on the obtained Eclipse Scores:
- #1: Synergy Nodes → Terra Finder, Atomscan
- #2: StakeWithUs → Terra Finder, Atomscan
- #3: Orbital Command → Terra Finder, Atomscan
- #4: Smart Stake → Terra Finder, Atomscan
- #5: Gidorah → Terra Finder, Atomscan
- The 5 worst validators based on the obtained Eclipse Scores:
- #1: Forbole → Terra Finder, Atomscan
- #2: MANTRA DAO → Terra Finder, Atomscan
- #3: Luna Whale → Terra Finder, Atomscan
- #4: Lavender.Five Nodes → Terra Finder, Atomscan
- #5: RockX → Terra Finder, Atomscan
- As you can see, the validators whose vote types most often match the proposal results are among the top validators. The 5 best and 5 worst validators here are almost the same as in step one.
- Also, in the scatter chart of delegated LUNA (Voting Power) vs. Eclipse Score, there are validators who, despite having low Voting Power, are among the best in terms of the impact of their votes on proposal results, and vice versa.
✅ Observations
In the tables and charts above, you can see the ranking of validators based on voting power and commission rate. As shown:
- The 5 best validators based on the obtained Eclipse Scores:
- #1: polkachu.com → Terra Finder, Atomscan
- #2: Terrascope → Terra Finder, Atomscan
- #3: Orion - Auto-Compound & Zero Fees → Terra Finder, Atomscan
- #4: WildSage → Terra Finder, Atomscan
- #5: MANTRA DAO → Terra Finder, Atomscan
- The 5 worst validators based on the obtained Eclipse Scores:
- #1: The Charity Block → Terra Finder, Atomscan
- #2: TheNFTProject → Terra Finder, Atomscan
- #3: Legend.X → Terra Finder, Atomscan
- #4: WhisperNode🤐 → Terra Finder, Atomscan
- #5: Lunatic Validator → Terra Finder, Atomscan
- As you can see, validators with a lower commission rate rank among the best validators, but voting power also shows its effect.

✅ Observations → Final Result
After calculating the Eclipse Score based on various criteria such as vote and proposal participation, Validator vote type alignment with proposal results, and commission rate, we finally calculated the final Eclipse Score based on the following formula:
("Step 1 Eclipse Score" * 4) + ("Step 2 Eclipse Score" * 3) + ("Step 3 Eclipse Score" * 2) as "Final Eclipse Score"
As shown:
- The 5 best validators based on the obtained Final Eclipse Scores:
- #1: Synergy Nodes → Terra Finder, Atomscan
- #2: Gidorah → Terra Finder, Atomscan
- #3: Smart Stake → Terra Finder, Atomscan
- #4: Pro-Nodes75 → Terra Finder, Atomscan
- #5: StakeWithUs → Terra Finder, Atomscan
- The 5 worst validators based on the obtained Final Eclipse Scores:
- #1: Luna Whale → Terra Finder, Atomscan
- #2: Forbole → Terra Finder, Atomscan
- #3: RockX → Terra Finder, Atomscan
- #4: MANTRA DAO → Terra Finder, Atomscan
- #5: Mosaic → Terra Finder, Atomscan
As you can see, according to our metrics, the 5 best validators ranked near the top in most of the steps and were ultimately selected as the best validators. These validators were more active in voting and in participating in proposals, which gave them high scores under our criteria.




