“The school shooter played Doom on his IBM Personal Computer” vibe all over again.

    • Diurnambule@lemmy.fmhy.ml
      1 year ago

Initial training pictures may come from abused children, and we can theorize that they will need more to keep training the AI.

            • CyanParsnips@burggit.moe
              1 year ago

              You’re right, it does work that way - it’s why ‘a photo of an astronaut riding a horse’ is the standard demo for SD, to show that it can create things it wasn’t trained on by remixing and extrapolating elements. Even without that, though, it can do things like turn a cartoon image into a realistic one (or vice versa) with img2img, without necessarily needing to know what the content is at all.

              Also, it’s possible to recursively train models - create a rough model, use its output as training data for a more refined model, rinse and repeat. I’ve found it works well for getting a strong and consistent face LoRA, but I imagine the same method could be used to create any sort of model without using real photos.
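              The recursive idea above can be illustrated with a toy, self-contained sketch: fit a first model on "real" data, then train a second model purely on the first model's outputs. A least-squares line fit stands in for an image model here; none of these names correspond to any real SD or LoRA API.

              ```python
              import random

              def fit_line(points):
                  """Ordinary least-squares fit of y = a*x + b."""
                  n = len(points)
                  sx = sum(x for x, _ in points)
                  sy = sum(y for _, y in points)
                  sxx = sum(x * x for x, _ in points)
                  sxy = sum(x * y for x, y in points)
                  a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
                  b = (sy - a * sx) / n
                  return a, b

              random.seed(0)
              # "Real" training data: y = 2x + 1 plus noise.
              real = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(20)]
              a1, b1 = fit_line(real)        # generation 1: trained on real data

              # Generation 1's *outputs* become generation 2's training set.
              synthetic = [(x, a1 * x + b1) for x in range(20)]
              a2, b2 = fit_line(synthetic)   # generation 2: never saw the real data
              ```

              The second model ends up essentially identical to the first without ever touching the original data, which is the rinse-and-repeat step described above (real refinement loops add new curation or filtering at each generation, which this sketch omits).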

    • Haui@discuss.tchncs.de
      1 year ago

      As described in the article, there is a zero-tolerance policy because of the danger of escalating from generated to real material. Children are not sexual objects. Anyone who feels differently needs help.