The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.
Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I’m sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.
Maybe also take the votes' provenance into account. If 99% of the votes come from a single instance, count them at maybe 5% of their face value. As more instances get to vote, increase the "reliability" of the post.
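A minimal sketch of that dampening idea, in Python. The exact weighting curve here is made up for illustration; the point is just that an instance's votes count for less as its share of the total grows:

```python
from collections import Counter

def weighted_score(votes):
    """votes: list of (user, instance) pairs.
    Dampen the count when one instance dominates the distribution."""
    by_instance = Counter(instance for _, instance in votes)
    total = len(votes)
    score = 0.0
    for instance, count in by_instance.items():
        share = count / total
        # weight shrinks toward 0 as one instance's share approaches 100%;
        # the quadratic curve is an arbitrary illustrative choice
        weight = 1.0 - share ** 2
        score += count * weight
    return score
```

With this curve, 100 votes all from one instance score 0, while 10 votes spread across 10 instances score nearly their full face value.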
Interesting idea.
Obviously, on the server the posts originate from, you display the full vote count. There, the admins know the accounts, can vet them, etc.
This would rather be a way to detect bad actors (instances) and alert the admin, who can then kick them out of the federation, same as for other types of offences.
Small instances are cheap, so we also need a way to prevent 100 bot instances running on the same server from gaming this.
Well, as I said, the statistical distribution is a good indicator. This can be generalized further: we could weight instances by age, peering, etc. The Web-of-Trust idea could also have instances vouching for other instances, thereby increasing their standing. And so on.
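To make the vouching idea concrete, here is a toy sketch. The constants (the one-year age ramp, the 0.1 vouch factor, the cap) and the function itself are purely illustrative assumptions, not any real federation protocol:

```python
def instance_standing(age_days, vouchers, standings):
    """Toy standing score for an instance.

    age_days:  how long the instance has existed
    vouchers:  hostnames of instances vouching for it
    standings: known standing scores of other instances
    """
    base = min(age_days / 365.0, 1.0)   # base trust ramps up over a year
    # each voucher contributes a fraction of its own standing,
    # so vouches from established instances matter more
    vouch_bonus = sum(standings.get(v, 0.0) for v in vouchers) * 0.1
    return min(base + vouch_bonus, 2.0)  # cap the total standing
```

A brand-new instance vouched for by two established ones would start with a small but nonzero standing, instead of the zero it would get from age alone.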
How would you prevent someone using wildcard domains from spamming servers the same way they can spam clients? The Fediverse has no way to distinguish between subdomains and normal domains. Anyone running an instance through classic DDNS would be affected by this.
The approach could work, but it would invalidate some major assumptions in the Fediverse itself. The algorithm would also need to make sure a few single user instances don’t get to sway entire servers.
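One common way to blunt the wildcard-subdomain trick is to collapse hostnames to their registrable domain before counting them as distinct instances, so `a.spam.example.com` and `b.spam.example.com` count as one source. A real implementation would consult the full Public Suffix List; the tiny hardcoded suffix set below is just a stand-in for illustration:

```python
# Stand-in for the Public Suffix List; a real implementation would load
# the actual list, which has thousands of entries.
PUBLIC_SUFFIXES = {"com", "net", "org", "co.uk"}

def registrable_domain(host):
    """Collapse a hostname to its registrable domain (suffix + one label)."""
    labels = host.lower().split(".")
    # find the longest public suffix match, then keep one extra label
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host
```

This does illustrate the DDNS problem raised above: legitimate single-user instances on a shared dynamic-DNS domain would get lumped together too, which is exactly the tradeoff the approach has to weigh.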