It is widely reported that Lemmy, the federated alternative to Reddit, has been the target of a massive bot sign-up wave, with possibly more than a million fake accounts created.
As of now the consensus seems to be [more below] that these bots are dormant, possibly put in place to wreak havoc at a later date.
A strength of the fediverse, of which Lemmy is a part, is its decentralized nature, spread over at least forty thousand servers (Lemmy: one thousand). But this is also a weakness: co-ordinating action against a bot wave like this is a far more complex task than it would be on a centralized platform.
A further complication is the recent widespread availability of AI/LLM tools, with which plausibly human-sounding content can be generated in vast quantities by anyone with relatively cheap hardware and the inclination. Should the recent wave of fraudulent accounts be fed content from an 'AI farm', Lemmy could easily gain a reputation as a notoriously unreliable platform.
As such, it is imperative that a concerted effort is made across the lemmyverse to purge fraudulent accounts.
[from above] There is a possibility that such fake AI posting has already begun; after all, vaguely relevant content slipped into Lemmy threads would be very difficult to spot in small quantities.
Is Lemmy doing enough to protect itself from the bots?
My first contact with GPT was the subredditsimulatorGPT. At least that was contained. I remember thinking even that was freaky; ChatGPT is crazier still.