Then I asked her to tell me if she knows about the Books2 dataset (they trained this AI using all the pirated books on Z-Library and more, completely ignoring any copyright) and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

  • Devjavu@lemmy.dbzer0.com · 1 year ago

    That argument is no argument, since we humans, no matter how advanced our language is, still follow rules. Without rules in language, we would not understand what the other person was saying. Granted, we learn these rules through listening, repeating, and using what sounds right. But the exact same thing is happening with LLMs. They learn from the data we feed them. It’s not like we give them the rules of English and they can only understand English from then on. The first time they come into contact with the concept of grammar is when they get data, most often in English, that tells them about grammar. We all follow rules. That’s exactly how we work.

    We’re still a lot smarter than LLMs, though, so it might seem as if they are vastly inferior. And while I do believe that most complex organisms do have “deeper thought”, in that our thought has more layers and is generally fitter for the real world, there is no way I’m not gonna call a neural network that can answer complex questions, which may never have been asked in the history of mankind, an AI. Because it is very much intelligent. It’s just not alive. We humans tend to think of ourselves too favorably. “We” are just a neural network. Just a different kind. Just like a computer is similar to the human brain, but a wire is not. Where do you draw the line?