• Swedneck@discuss.tchncs.de · 1 year ago

    You shouldn’t trust random online books anyway; you shouldn’t even trust random physical books.

    Only trust books with a reputation, or whose authors you can verify are trustworthy.

    • Aganim@lemmy.world · 1 year ago

      And even with a good guide: always be very careful. In my country we are seeing a rise in cases where migrants end up dead or with severe organ damage because they confuse our indigenous deadly mushrooms with edible mushrooms from their home country. And it’s hard to blame them: it turns out that some species look so much alike that even experts have trouble telling the difference.

      So be careful out there and make sure your guide is suited for the area you’re in.

  • catreadingabook@kbin.social · 1 year ago (edited)

    This has happened before, about 30 years ago. In Winter v. G.P. Putnam’s Sons, 938 F.2d 1033 (9th Cir. 1991), the court ruled that the publisher couldn’t be sued for selling a guide book that misled a reader into eating an extremely poisonous mushroom.

    I can’t find anything about the authors in that case (I think Colin Dickinson and John Lucas?) ever getting sued, probably because they were in Britain, so the US courts couldn’t get jurisdiction over them the way they could over the publisher, which did business in the US.

    Might be time for a change in the law.

    • average650@lemmy.world · 1 year ago

      I think this is simply a crime.

      If not, then I’m certain they would lose a civil case if someone got hurt because of this.

  • Nougat@kbin.social · 1 year ago

    FFS it’s not AI, it’s large language modelling. There is nothing “intelligent” about that.

    • CrayonRosary@lemmy.world · 1 year ago

      I’ve seen this same comment 100 times, and it’s less intelligent than an LLM.

      If you talk to actual computer scientists studying these things, they call it intelligence. It doesn’t matter how stupid it is sometimes. All intelligences make mistakes, even confident mistakes. It doesn’t matter that it’s “just a language model”. Intelligence, when reasonably defined, includes algorithms that produce intelligent output, and LLMs absolutely produce intelligent output. What LLMs are not is artificial general intelligence, and they’re not sapient, but they are still intelligent.

      • LoafyLemon@kbin.social · 1 year ago (edited)

        You are the second person I’ve met that understands the difference between AI and AGI. It might not mean much to you, but it means a lot to me. I found my people!

      • Dashmaybe@lemmygrad.ml · 1 year ago

        The problem lies in how laypeople interpret the word “intelligence”. I fully understand and agree with your argument with respect to the technical definition and usage, but laypeople won’t care enough to understand that distinction, and dangerous situations are being created because of it.