Y’know, now that you mention it, the sealioning behaviour I’d been conditioned to expect is a big reason why I spend so much time writing my comments and adding qualifying statements.
Removing the homepage entirely and replacing the UI with the shorts-style format of “view video right now, tap a button to see the next/previous video”. If you want a specific video, you must search for it.
People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.
Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping-off point for someone who doesn’t know where to start. My concern is with the cultural issues and the expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as can possibly be shoehorned through it.
Addendum: local models can help with this issue, as they run on one’s own hardware, but they still need to be deployed and used with reasonable expectations: an LLM is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.
How about:

- popularizing the idea of the wall in the first place
- going mask-off and calling illegal immigrants “murderers and rapists”
- the “Muslim Ban” on air travel
- moving the US embassy to Jerusalem
- employing white nationalists as staffers
- packing the Supreme Court with extreme conservative justices
- giving permanent tax cuts to the rich
- expanding the presence of immigrant concentration camps
- cozying up to foreign dictators
- stating behind closed doors that he wanted generals like Adolf Hitler’s when his own generals refused to nuke North Korea and blame it on someone else
- egging on a far-right insurrection attempt
- directly pursuing strikes and assassination attempts against Middle Eastern military generals and diplomats
- ending the Iran nuclear deal
- calling climate change a Chinese hoax
- calling Covid the “China virus”
- spreading vaccine disinformation until one was developed before the end of his term
- trying to start a trade war with China
- discrediting his chief medical advisor over factual statements about Covid
- saying Black Lives Matter protesters were “burning down cities”
- wanting to designate Antifa as a terrorist organization
- declaring “far left radical lunatics” part of his “enemy from within”
- being an avowed friend of Epstein
- sexually assaulting over a dozen women and underage girls
- being a generally abusive sleazebag
- also funding a genocide (Israel has always been ethnically displacing Palestinians)
- also building the wall
- also not implementing healthcare reform (and being against what we have)
- also not protecting abortion rights (plus setting up the conditions that led to their erosion; see the Supreme Court point above)
- and also denigrating anti-genocide protestors (though not as harshly, since he wasn’t the one in charge when it happened)
I guess he’s not a cop though, so there’s that.
(minor edits made for grammar/spelling)
On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased, and trustworthy to people.
The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can deliberately generate correct-sounding misinformation to sway others.
ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
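To illustrate the “statistical prediction” point, here’s a toy sketch (made-up bigram counts, nothing like a production model): it picks each next word purely by how often that continuation appeared in its data, with no concept of whether the resulting claim, or citation, is true.

```python
import random

# Made-up co-occurrence counts; a real model has billions of
# parameters, but the underlying principle is the same.
bigram_counts = {
    "the": {"study": 5, "authors": 3},
    "study": {"found": 6, "cited": 2},
    "authors": {"found": 4},
    "found": {"that": 8},
}

def next_word(word):
    options = bigram_counts.get(word)
    if not options:
        return None  # no known continuation; stop generating
    words = list(options)
    weights = [options[w] for w in words]
    # Sample proportionally to frequency: "plausible" here means
    # "statistically common", not "correct" or "verified".
    return random.choices(words, weights=weights)[0]

sentence = ["the"]
while (word := next_word(sentence[-1])) is not None:
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the study found that" - fluent, sourceless
```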
Wow, it’s almost as if someone being bad can be for multiple reasons!
90 days to cycle private tokens/keys?
Days of the week in Spanish follow almost the exact same pattern as they do in English, with the exception of the weekend: lunes/Monday are both the Moon’s day, martes/Tuesday are both named for a god of war, and so on, but sábado and domingo come from “Sabbath” and “Lord’s day” rather than Saturn and the Sun.
Specifically, Friday in Japanese translates to “Gold Day”, which also carries a lot of cultural connotations, with it being considered the lucky day of the week.
I’ve heard there are hyper-reflective stickers you can put on or near the plate that basically blind a traffic camera when it tries to read it.
I believe the wording is “drinks which burn the throat”, which naturally means:
The main interpretation is “contains caffeine and/or alcohol”, but the wording is vague enough that it leaves a lot of weird wiggle room that people try to argue over (usually based on convenience). It’s quite silly.
Advertising is like the kudzu vine: neat and potentially useful if maintained responsibly, but more than capable of growing out of control and strangling the landscape if you don’t constantly keep it in check. I think, for instance, that a podcast or over-the-air show running an ad read with an affiliate link is fine for the most part, as long as it’s relatively unobtrusive and doesn’t limit what the content would otherwise cover.
The problem is that there needs to be a reset of advertiser expectations. Right now, they expect the return on investment that comes from hyper-specific and invasive data, and I don’t think you can get that same level of effectiveness without it. The current advertising model is entrenched, and the parasitic roots have eroded the foundation. Those roots will always be parasitic, because that’s the nature of advertising, and of the unchecked profit motive in general.
~Babe, wake up! New feminization technique just dropped~
Years back, I had that happen on PayPal of all websites. Their account creation and reset pages silently truncated my password to 16 characters or so before hashing, but the actual login page didn’t, so the password didn’t work at all unless I backspaced it down to the character limit. I forget how I even found that out, but it was a very frustrating few hours.
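In other words, something like this (a minimal sketch; the handler names, the hash, and the exact limit are all hypothetical, just to show the mismatch between the two paths):

```python
import hashlib

MAX_LEN = 16  # hypothetical silent limit on the signup/reset path

def hash_pw(pw: str) -> str:
    # Stand-in hash for the sketch; a real site should be using a
    # salted, slow hash like bcrypt or argon2.
    return hashlib.sha256(pw.encode()).hexdigest()

def create_account(pw: str) -> str:
    # Bug half #1: signup/reset silently truncates before hashing.
    return hash_pw(pw[:MAX_LEN])

def login_ok(stored_hash: str, pw: str) -> bool:
    # Bug half #2: login never applies the same truncation.
    return hash_pw(pw) == stored_hash

pw = "correct horse battery staple"    # longer than the limit
stored = create_account(pw)
print(login_ok(stored, pw))            # False - the full password never matches
print(login_ok(stored, pw[:MAX_LEN]))  # True - only the truncated form works
```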
Radical unschooling?
We could assign it to any point within a recognizable region of the Cosmic Microwave Background, which would probably be the most universally applicable reference available. One just needs to be able to filter out the noise from surrounding celestial bodies. The CMB does slowly change over time, but so too does the position of stars within galaxies and of galaxies relative to one another.
Complete side note, I saw your pfp and checked your profile to confirm my suspicions. Thank you for your work on OpenRGB! It’s been a great tool for managing the LEDs on my computer.
Life is suffering. Once you accept that fact wholly, you may ascend.
Just like the shopping cart theory itself, this is mostly just a thought experiment at this point in time.
The point of a protest is to make it hard for others to ignore and to make clear what the end condition is. I don’t plan on just starting to do this as an individual, because it would have no impact; I still personally make sure my own carts get returned.
The point stands that our goodwill is frequently exploited for profit, often under the pretense that it’s just basic human decency.
That’s how it’s been for basically decades.