Alt. Profile @Th4tGuyII

  • 0 Posts
  • 47 Comments
Joined 3 months ago
Cake day: June 11th, 2024




  • Votes should absolutely be public. They were on KBin, and it made people more civil because you could be shamed if you were dislike-trolling or upvoting all of your own posts/comments to make them look better (which is something you have to actively do on here, unlike Reddit).

    Given this place is pseudo-anonymous anyway, and people already post far more personal and identifiable info here (which tbf you should be careful about), I think public votes would do much more good than harm.








  • Th4tGuyII@fedia.io to 196@lemmy.blahaj.zone · Pony Rule
    21 points · edited · 2 months ago

    Particularly for physical injuries, I sometimes prefer to suffer as it reminds me that I need to be careful, which I know I’d forget if I subdued the pain.

    Having said that, for things like the flu where it’s just pure suffering for nothing, I’ll suck down as many painkillers as I can have in a day.




  • Th4tGuyII@fedia.io to Memes@lemmy.ml · priorities
    1 point · 2 months ago

    At the time I got my current system, I went with a 1TB SSD for the main drive and a 4TB HDD for the data drive.

    For my next system, I think I’ll split that a bit more evenly, as most of my games end up on the HDD, which means they take a bit to load.


  • So providing a fine-tuned model shouldn’t either.

    I didn’t mean in terms of providing. I meant that if someone provided a base model, and someone else took that, built on top of it, then used it for a harmful purpose - of course the person who modified it should be liable, not the base provider.

    It’s like if someone took a version of Linux, modified it, then used that modified version for a similar purpose - you wouldn’t go after the person who made the unmodified version.


  • SB 1047 is a California state bill that would make large AI model providers – such as Meta, OpenAI, Anthropic, and Mistral – liable for the potentially catastrophic dangers of their AI systems.

    Now this sounds like a complicated debate - but it seems to me like everyone against this bill is someone who would benefit monetarily from not having to deal with the safety aspect of AI, and that does sound suspicious to me.

    Another technical piece of this bill relates to open-source AI models. […] There’s a caveat that if a developer spends more than 25% of the cost to train Llama 3 on fine-tuning, that developer is now responsible. That said, opponents of the bill still find this unfair and not the right approach.

    In regard to the open-source models, it makes sense that if a developer takes the model and does a significant portion of the fine-tuning, they should be liable for the result of that…

    But should the main developer still be liable if a bad actor does less than 25% fine-tuning and uses exploits in the base model?

    One could argue that developers should be trying to examine their black boxes for vulnerabilities, rather than shrugging, saying it can’t be done, and then demanding they not be held liable.