Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. listing the positives of slavery.

    • ninjakitty7@kbin.social · 1 year ago

      Honestly, AI doesn’t think much at all. These models are scarily clever in some ways, but they literally don’t know what anything is or means.

    • NumbersCanBeFun@kbin.social · 1 year ago

      Incorrect. If we rely on AI as our ONLY source of information, then we are doomed. We should always fact-check what we believe we know and seek additional information on the topics we are researching, especially when sources offer opposing factual positions.

      Ironically though you’ve just proven that you think at only a surface level.

      • somethingsnappy@lemmy.world · 1 year ago

        Nobody said we were relying on that. We’ll all keep searching. We’ll all keep hoping it will bring abundance, as opposed to every other tech revolution since farming. I can only think at the surface level though. I definitely have not been in the science field for 25 years.

      • oo1@kbin.social · 1 year ago

        AI ain’t going to be much “worse” or “better” than humans.

        But re: the earlier points, I don’t think things should be judged on a timescale of a few years. The relevant timescales, to me, are more like a generation or more.

    • Bluskale@kbin.social · 1 year ago

      LLMs aren’t AI… they’re essentially a glorified autocorrect system that is stuck at the surface level.
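
The “glorified autocorrect” characterization above can be made concrete with a toy next-word predictor. This is a minimal, hypothetical sketch (real LLMs predict tokens with large neural networks, not bigram counts, and every name below is invented for illustration): it suggests the most frequent next word purely from co-occurrence statistics, with no notion of meaning.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words most often follow it."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def autocomplete(follows: dict, word: str):
    """Return the most frequent next word, or None if the word is unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Tiny made-up corpus: the "model" only knows word-adjacency frequencies.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(autocomplete(model, "the"))   # "cat" — chosen by raw frequency alone
print(autocomplete(model, "fish"))  # None — never seen a word after "fish"
```

The point of the sketch is the failure mode, not the mechanism: the predictor produces fluent-looking continuations while understanding nothing, which is the commenter’s “surface level” complaint scaled down to a few lines.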