Don’t tell people “it’s easy”, and six more things KBin, Lemmy, and the fediverse can learn from Mastodon
https://privacy.thenexus.today/kbin-lemmy-fediverse-learnings-from-mastodon/
Reddit’s strategy of antagonizing app writers, moderators, and millions of redditors is good news for Reddit alternatives like KBin and Lemmy. And not just them! The fediverse has always grown in waves, and we’re at the start of another one.
Previous waves have led to innovation but also to major challenges and limited growth. It’s worth looking at which tactics worked well in the past, so they can be reused, adapted, and built on. It’s also valuable to look at what went wrong or didn’t work out, to see if there are ways to do better.
Here’s the current table of contents:
* I’m flashing!!!
* But first, some background
- Don’t tell people “it’s easy”
- Improve the “getting-started experience”
- Keep scalability and sustainability in mind
- Prioritize accessibility
- Get ready for trolls, hate speech, harassment, spam, porn, and disinformation
- Invest in moderation tools
- Values matter
* This is a great opportunity – and it won’t be the last great opportunity
https://privacy.thenexus.today/kbin-lemmy-fediverse-learnings-from-mastodon/
Thanks to everybody for the great feedback on the draft version of the post!
#kbin #lemmy #fediverse @fediversenews @fediverse@kbin.social @fediverse@lemmy.ml
@SemioticStandard There are good subreddits with over a million users. At least up to some threshold, it’s just not true that the more popular a community becomes, the shittier it gets.
I disagree with that. The larger subreddits have significant moderation problems. Only through extraordinary efforts by the mod teams, such as at /r/askhistorians, are things kept in line. It’s simple math: the more users you have, the more likely you are to have people posting in bad faith. If a subreddit of 1 million users has only 5% of its users posting low-quality content, that’s still 50,000 people who need to be moderated.
@SemioticStandard I agree that the larger a community gets, the harder it is to moderate well (and the tools here are still much less advanced than Reddit’s, which is a big problem). But trying to deter bad actors by making it hard to sign up doesn’t work. Spammers and other bad actors are typically more willing to make the effort than people who might well add a lot of value.
Why do you think this?
@SemioticStandard Experience moderating forums and discussion groups on multiple platforms, helping to start two social networks, and what I’ve learned as part of the Disinfo Defense League over the last few years.
[And I have no idea why fediversenews is boosting this post!]
I really do not understand this expectation people have that an online forum of 1,000,000,000 people would be full of deep, nuanced conversations. Even if you got the 1,000,000,000 smartest people who ever lived and put them in one group, how could their interactions consistently be anything other than superficial? Communications would be flying around at blazing speed all the time.
Any group that size is going to have only tenuous connections and shared context among its members, so it will suit certain kinds of topics, vibes, and goals and not others. The lingua franca of funny cat videos will work, but some things require a more intimate approach where participants can create and become acculturated to group norms.

Luckily, all modern forum software and platforms let you form subgroups and choose which groups you pay attention to. Nobody was forced to spend time in /r/all. All this talk about how put-upon the smarty-pants geniuses are because easy-to-use technology compelled them to pay attention to dumb people does not impress me. It really just seems like a lack of agency when it comes to deciding how to spend one’s own time.
I think large communities can be perfectly fine as long as you have your expectations calibrated properly.