
    Question:
    - Add any items to the above list that you think are important to help maintain your best submissions as an ongoing resource. Anything goes here: it could be “get more feedback on my answers”, “bug bounties”, “regular community calls to review”, “create an Observer squad to monitor”, “improved ticketing system”... Tell us what’s important to ensure that your best work is accurate, fresh, and up-to-date.
    - Propose a solution that involves leveraging community resources (that’s all of us!) to help monitor and maintain high-value dashboards, like the Acme of Skill submissions for Question 159. Be clear about the 5 W’s (who, what, where, when, why).

    Maintaining motivation for effort

    • Bug Bounties: Bounties, and Bug Bounties specifically, need to be maintained at a good rate per submission at all levels, to ensure that talented contributors actually want to spend time outside of work/study writing the queries and reports. Not all questions will be interesting to everyone.
    • Feedback on submissions: I've been rejected once on a submission with just the "Not eligible" tag as feedback. I can understand that I did not put nearly enough effort into that submission. Even so, it would be greatly appreciated if the person grading the submission also left a 1-2 paragraph comment on it.
    • Community calls: Community calls help bring in ideas for data imports and surface up-and-coming ecosystems that the community may like to investigate.

    Maintaining HQ submissions

    • Small bounties every 2 weeks/monthly for updating/improving those HQ dashboards.
      • This ensures a continuation of life for the dashboard and also brings motivated, incentivised community members with new ideas in touch with it.
      • The reason for the incentive is self-explanatory: by continuously rewarding someone's effort, you ensure they will want to go back and refine the dashboard. Where they do not, other community members will be interested in taking the bounty and helping with the maintenance. In terms of output this is a win-win: Flipside gets continuously maintained, QAed dashboards, and the users doing the work get rewarded for it.
      • I am proposing a timeframe of every 2 weeks to a month because that is long enough for the data on the dashboard to still be relevant, while soon being due for a re-run or enrichment (new protocols, new projects, re-running queries so they are up to date, etc.).

    Fixing Errors

    • Wrong logical analysis (SQL): Bring in data scientists to help build models (e.g. NLP-based) that only accept submissions whose SQL actually looks at the specific contract in question. When doing this, be very careful not to discriminate, as not all submissions are built on on-chain (Flipside-specific) data.
    • Outdated content: This can be fixed with a simple query re-run to ensure the data are up to date. Better still, on some occasions a re-run could be triggered by a spike in some metric, e.g. increased TVL in Anchor triggers re-runs of all Anchor-related queries displayed on dashboards.
    • Broken functionality: This should be treated with rewards similar to the bug-bounty system: users who find broken functionality on the site are rewarded accordingly.
    • Paying maintenance costs
      • Time aspect: There is no way around this one. If you want to keep distributing a high number of bounties to the community while also growing it, the number of submissions will need to increase significantly, which does not help if the number of people validating those submissions stays fixed. A way around that is to hire more reviewers, either on a contract basis, where they come in twice a week and validate up to x submissions, or on a more permanent basis, where they join the team. For either option, you could ask the top-performing users whether they would like to take part. Assuming there is a good marking scheme for submissions, that should not affect users negatively by introducing additional bias.
      • Monetary Costs: Whether it is AWS, GCP, Azure, or local infrastructure, it can only handle so much load from users spamming it with queries. Depending on the budget, a more flexible scaling system could be deployed here, letting the containers/EC2 instances scale up and down dynamically according to hourly/daily user activity.
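    To make the "wrong logical analysis" idea above concrete, here is a minimal pre-screening sketch. Everything in it is an assumption for illustration: the function names, the contract address, and the simple substring match (a real system would parse the SQL or use the NLP models mentioned above).

```python
# Hypothetical pre-screening check: before a submission reaches a human
# grader, verify that its SQL actually references the contract the
# bounty question is about. All names/addresses here are illustrative.

def references_contract(sql: str, contract_address: str) -> bool:
    """Return True if the query text mentions the given contract address."""
    # Normalise case so 0xAbC... and 0xabc... compare equal.
    return contract_address.lower() in sql.lower()

def screen_submission(sql: str, required_contracts: list) -> bool:
    """Accept only if every required contract appears in the query.

    Off-chain submissions should bypass this check entirely, so it
    would only run on queries flagged as using on-chain data."""
    return all(references_contract(sql, c) for c in required_contracts)

sql = """
SELECT block_timestamp, amount
FROM some_chain.transfers
WHERE contract = '0xDEADBEEF00000000000000000000000000000001'
"""
print(screen_submission(sql, ["0xdeadbeef00000000000000000000000000000001"]))  # True
```

    A check this crude is only a first gate; its value is filtering out clearly off-topic queries cheaply so that graders spend their time on plausible submissions.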
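    The spike-triggered re-run idea for outdated content could be sketched as below. The scheduler, the tag structure, and the 10% threshold are assumptions, not a real Flipside API; the point is only that a large move in a tracked metric (e.g. Anchor TVL) queues all queries tagged with that protocol for a refresh.

```python
# Illustrative sketch of spike-triggered query re-runs: if a tracked
# metric moves more than a threshold since the last observation, every
# query tagged with that protocol is returned for re-queuing.

from dataclasses import dataclass, field

@dataclass
class RefreshScheduler:
    threshold: float = 0.10  # re-run when the metric moves by more than 10%
    last_seen: dict = field(default_factory=dict)       # protocol -> last value
    tagged_queries: dict = field(default_factory=dict)  # protocol -> [query ids]

    def observe(self, protocol: str, value: float) -> list:
        """Record a new metric value; return query ids to re-run on a spike."""
        prev = self.last_seen.get(protocol)
        self.last_seen[protocol] = value
        if prev is None or prev == 0:
            return []  # first observation establishes the baseline
        change = abs(value - prev) / prev
        if change > self.threshold:
            return self.tagged_queries.get(protocol, [])
        return []

sched = RefreshScheduler(tagged_queries={"anchor": ["q_tvl", "q_deposits"]})
sched.observe("anchor", 1_000_000.0)         # baseline, nothing to re-run
print(sched.observe("anchor", 1_250_000.0))  # +25% spike -> ['q_tvl', 'q_deposits']
```

    In practice this would sit behind whatever metrics feed is already available, so quiet periods cost nothing and refreshes concentrate exactly where activity is happening.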
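    For the monetary-costs point, the dynamic scaling could look roughly like the sketch below. The per-replica capacity figure and the floor/cap values are assumptions for illustration; real deployments would use the cloud provider's own autoscaling policies, with the cap driven by budget.

```python
# Rough sketch of demand-based scaling: pick a replica (container/EC2)
# count from the current query load, clamped between a floor and a
# budget-driven cap. The capacity figure of 50 queries per replica is
# an assumed number, not a measured one.

def desired_replicas(active_queries: int,
                     queries_per_replica: int = 50,
                     min_replicas: int = 2,
                     max_replicas: int = 20) -> int:
    """Scale infrastructure with hourly/daily query volume."""
    needed = -(-active_queries // queries_per_replica)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(10))    # quiet hour -> floor of 2
print(desired_replicas(400))   # busy hour  -> 8
print(desired_replicas(5000))  # spike      -> capped at 20
```

    The cap is what keeps a burst of query spam from becoming an unbounded bill, while the floor keeps dashboards responsive during quiet hours.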