OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series
A new research paper laid out ways in which AI developers should try to avoid showing LLMs have been trained on copyrighted material.

  • player2@lemmy.dbzer0.com · 1 year ago

    The analogy talks about mixing samples of music together to make new music, but that’s not what is actually happening with LLMs.

    The computers learn human language from the source material, but they do not reference that material when generating responses. They produce new, original responses that do not appear anywhere in the source material.

    • Cethin@lemmy.zip · 1 year ago

      “Learn” is debatable in this usage. The model is trained on data, which produces a set of numerical values; applying those values yields output that resembles human speech. It’s just doing math, though. It’s not like how a human learns. It doesn’t care about context or meaning or anything else.
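
      To make the “just doing math” point concrete, here’s a toy sketch in Python (my own illustration, nothing like OpenAI’s actual code; the vocabulary, dimensions, and random weights are all made up): training produces a pile of numbers, and generating text is nothing more than multiplying by those numbers and sampling the next word.

      ```python
      # Toy sketch: the "set of values" a model learns is just arrays of numbers,
      # and generating text is repeated arithmetic over them.
      import numpy as np

      rng = np.random.default_rng(0)

      vocab = ["the", "cat", "sat", "on", "mat", "."]
      vocab_size = len(vocab)
      embed_dim = 8

      # Stand-ins for the values produced by training. In a real LLM these are
      # billions of numbers fit by gradient descent; here they are random so the
      # script runs.
      embeddings = rng.normal(size=(vocab_size, embed_dim))
      output_weights = rng.normal(size=(embed_dim, vocab_size))

      def next_token_distribution(token_id: int) -> np.ndarray:
          """Pure arithmetic: embed the current token, apply the weights, softmax."""
          hidden = embeddings[token_id]        # look up the token's vector
          logits = hidden @ output_weights     # matrix multiply
          exp = np.exp(logits - logits.max())  # softmax, shifted for stability
          return exp / exp.sum()

      # Generate a few tokens: no lookup into any source text happens here,
      # only repeated multiplication and sampling.
      token = vocab.index("the")
      for _ in range(5):
          probs = next_token_distribution(token)
          token = rng.choice(vocab_size, p=probs)
          print(vocab[token], end=" ")
      print()
      ```

      The output is gibberish because the weights are random, but the mechanism is the same shape: no meaning, no context, just numbers in and numbers out.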