cross-posted from: https://lemmy.ca/post/19946388
An anticapitalist tech blog. Embrace the technology that liberates us. Smash that which does not.
Yeah, in fact you're giving the LLM additional data to train on what poisoned data looks like, so it can avoid it better, since the trainers can clearly see the before and after versions.
It is necessary to employ a method that enables the training procedure to distinguish copyrighted material. In the "dumbest" case, some humans will have to label it.
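A minimal sketch of that "dumbest" case, assuming a hypothetical dataset where each sample carries a human-applied `copyrighted` flag (the field name and data layout are illustrative, not from any real pipeline):

```python
# Hypothetical pre-training filter: drop samples a human labeler
# has flagged as copyrighted before they reach the training loop.
samples = [
    {"text": "public domain passage", "copyrighted": False},
    {"text": "edited comment claimed as copyrighted", "copyrighted": True},
]

def filter_labeled(samples):
    """Keep only samples not flagged as copyrighted."""
    return [s for s in samples if not s["copyrighted"]]

clean = filter_labeled(samples)
# Only the unflagged sample survives the filter.
```

The expensive part is not this filter, it's producing the labels at scale.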
Just because you've edited a comment doesn't mean it can be treated as "oh, this is under copyright now".
I'm not saying it's technically impossible. On the contrary, it very much is possible; it's just more work. That drives development costs up and can give some form of satisfaction to angered ex-Reddit users like me. However, those costs will be peanuts for giants like Google / Alphabet.