• 1 Post
  • 47 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • Not OP, but I also don’t think it’s the same thing. But even if it were, the consequences are nowhere near the same.

    A person might be able to learn to replicate an artist’s style, given enough practice and patience, but it would take them a long time, and the most “damage” they could do with that is create new content at roughly the same rate as the original creator.

    It would take an AI far less time to acquire that same skill, and far less time to then create that content. So given those factors, I think there’s an enormous difference between one person learning to copy your skill and a company doing it as a business model.

    Btw, if you didn’t know it yet - search engines don’t need to create a large language model in order to find web content. They’ve been working fine (one might even say better) without doing that.


  • Exposed credentials means that somebody got sloppy with a password. So yeah, “stolen creds”. Given the fact that a) the NYT seems to know which credentials were exposed, and b) we haven’t seen hundreds of other high(er) profile companies have their private repos breached, it is far more likely that the NYT fucked up, and not Microsoft (which is what you implied, with nothing to back it up other than a very narrow-minded definition of the word hack).

  • Nah.

    By default an AI will draw from its entire memory, and so will have lots of different influences. But by tuning your prompt (or restricting your input dataset) you can make it so specific that it’s basically creating near-perfect clones. And unlike a human, it can then produce similar works hundreds of times per minute.

    But even that is beside the point. Those works were sold under the presumption that people would read them, not that they would be ingested into an LLM or text-to-image model. And now companies like OpenAI and others profit from models they trained without permission from the original authors. That’s just wrong.

  • I don’t think an AI or indeed most writers could mimic anything like the frantic ad-libbing he was known and loved for.

    I’m not convinced. If you trained a model on all of his performances and scripts, I think it could generate something that would fool most people. Not everything it generates would be terrific, but even if only 1% is good, you can just cut out all the rest.

    And that’s at the current state of tech.