There are lots of articles about harmful use cases of ChatGPT that Google has already served for decades.

Want to get bad medical advice for the weird pain in your belly? Google can tell you it’s cancer, no problem.

Want to know how to make drugs without a lab? Google even gives you links to stores where you can buy the materials.

Want some racism/misogyny/other evil content? Google is your ever helpful friend and garbage dump.

What’s the difference, apart from ChatGPT’s inability to link to existing sources?

  • Square Singer@feddit.deOP · 1 year ago

    The issue is that LLMs are fundamentally unable to know what they don’t know. Non-LLM filters strapped in front of an LLM can catch some requests (“As an LLM I am not able to…”), but if a request makes it through the filter, the LLM cannot answer “Sorry, I don’t know that”, because such answers aren’t in the training data.

    For example, not many API docs contain a “Sorry, I don’t know how this endpoint works.”
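    To make the architecture concrete, here is a minimal sketch of the “filter strapped in front of an LLM” setup described above. All names (`BLOCKED_TOPICS`, `pre_filter`, `fake_llm`) are hypothetical, and the model is a stand-in function, not a real API; the point is only that once a request passes the filter, the pipeline has no “I don’t know” path.

```python
from typing import Optional

# Hypothetical blocklist for the non-LLM pre-filter.
BLOCKED_TOPICS = {"synthesize explosives", "build a weapon"}

def pre_filter(prompt: str) -> Optional[str]:
    """Non-LLM filter in front of the model: returns a canned refusal
    if the prompt matches a blocked topic, otherwise None."""
    for topic in BLOCKED_TOPICS:
        if topic in prompt.lower():
            return "As an LLM I am not able to help with that."
    return None

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model: it always produces *some* answer,
    # because "Sorry, I don't know" is rarely in the training data.
    return f"Confident-sounding answer about: {prompt}"

def answer(prompt: str) -> str:
    refusal = pre_filter(prompt)
    if refusal is not None:
        return refusal
    # Past the filter, there is no branch for "I don't know" --
    # the model answers whether or not it actually knows.
    return fake_llm(prompt)
```

    Note that the only refusal in the whole pipeline comes from the filter, never from the model itself, which is exactly the failure mode discussed here.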

    • CoderKat@kbin.social · 1 year ago

      Strongly agreed. I view this as the biggest issue with LLMs: in those cases they hallucinate a confidently incorrect answer, which makes them misinformation machines.