You shouldn’t be trusting random online books anyway; you shouldn’t even be trusting random physical books.
Only trust books with a reputation, or that you can verify are authored by someone trustworthy.
And even with a good guide: always be very careful. In my country we are seeing a rise in cases where migrants end up dead or with severe organ damage because they confuse our indigenous deadly mushrooms with edible mushrooms from their home country. And it’s hard to blame them; it turns out that some species look so much alike that even experts have trouble telling the difference.
So be careful out there and make sure your guide is suited for the area you’re in.
This has happened before, about 30 years ago. In Winter v. G.P. Putnam’s Sons, 938 F.2d 1033 (9th Cir. 1991), it was ruled that the publisher couldn’t be sued for selling a guide book that misled a reader into eating an extremely poisonous mushroom.
I can’t find anything about the authors in that case (I think Colin Dickinson and John Lucas?) ever getting sued, probably because they were in Britain, so the US courts couldn’t get jurisdiction over them the way they could over the publisher, which did business in the US.
Might be time for a change in the law.
Be cool if it came with a few examples
I think this is simply a crime.
If not, then I’m certain they would lose a civil case if someone got hurt because of this.
“ChatGPT, how can I avoid psychedelic mushrooms?”
At least those won’t kill you
Sure they will, if they’re actually something else.
This is actually really scary and problematic
Off topic, but wow, a community for foraging
This and the many more issues heading our way are why we will need our own locally run AI agent to vet and warn us.
FFS it’s not AI, it’s large language modelling. There is nothing “intelligent” about that.
I’ve seen this same comment 100 times, and it’s less intelligent than an LLM.
If you talk to actual computer scientists studying these things, they call it intelligence. It doesn’t matter how stupid it is sometimes. All intelligences make mistakes, even confident mistakes. It doesn’t matter that it’s “just a language model”. Intelligence, when reasonably defined, includes algorithms that produce intelligent output. LLMs absolutely produce intelligent output. What LLMs are not is general artificial intelligence, and they’re not sapient, but they are still intelligent.
You are the second person I’ve met that understands the difference between AI and AGI. It might not mean much to you, but it means a lot to me. I found my people!
The problem lies in how laymen interpret the word “intelligence”. I fully understand and agree with your argument with respect to the technical definition and usage, but laymen will not care enough to understand that, and they’re creating dangerous situations because of it.
And hover boards don’t even hover!
That ship has sailed unfortunately
Found the Linux user
I think you mean GNU/Linux