There’s another trolley coming, and she’s trying to stop it from running over people again by crashing the first trolley into it.
Sounds like it’d be nice if you had real control over the car’s software, and you could roll it back.
This… also makes me a little more wary driving around Teslas in traffic.
Honestly, they should fight fire with fire. Another vision model (like Qwen VL) would catch this.
You can ask it “does this image seem fake?” and it would look at it, reason something out and conclude it’s fake, instead of… I dunno, looking for smaller patterns or whatever their internal model does?
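Something like this, as a rough sketch: point whatever OpenAI-compatible endpoint is hosting a vision model locally (vLLM, llama.cpp’s server, etc.) at the suspect image and just ask. The URL, API key, and model id below are placeholders for whatever your setup actually runs:

```python
# Sketch of the idea: ask a vision-language model whether an image looks fake.
# Assumes a local OpenAI-compatible server (e.g. vLLM serving a Qwen2-VL
# checkpoint) at localhost:8000; URL and model name are placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

with open("suspect.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="Qwen/Qwen2-VL-7B-Instruct",  # placeholder model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            {"type": "text",
             "text": "Does this image seem fake or AI-generated? "
                     "Reason it out, then answer yes or no."},
        ],
    }],
)
print(response.choices[0].message.content)
```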
I agree.
But also, downvoting is very useful as a community moderation tool. I participate in some communities with regular, scummy spam posts that are technically off topic and not quite bannable. They’d get naive, hype-driven upvotes if left alone. And I like that there’s a community mechanism to bury them.
The localllama people are feeling quite mixed about this, as Apple is still charging through the nose for more RAM. Like, orders of magnitude more than the higher-capacity ICs actually cost.
It’s kinda poetic. Apple wants to go all in on self-hosted AI now, yet their incredible RAM stinginess over the years is derailing that.
Sure, here you go: https://www.congress.gov/bill/118th-congress/house-bill/7329
One of many such bills.
Take a look at the sponsors. I’ll give you 1 guess at which party would vote it down, because it would hurt them in elections.
It’s the same reason Puerto Rico will never be a state.
Presumably you will advance along with humanity though, or failing that, just figure out the transcendence thing yourself with so much time?
I don’t think anyone would choose to stay ‘meatbag human’ for trillions of years.
There is a breaking point, eventually. YouTube’s trajectory is gonna make next quarter’s revenue great, but eventually something else will pick up users’ attention instead.
I don’t even look at the algo anymore, I just go out and search for content externally.
Maybe I am just out of touch, but I smell another bubble bursting when I look at how enshittified all major web services are simultaneously becoming.
It feels like something has to give, right?
We have YouTube, Reddit, Twitter, and more racing to enshittify faster than I can believe, and Google Search racing to destroy the internet, yet they’ve also hit the ‘critical mass’ of ‘too big to fail’ and have already shoved out all their major competitors (other than Discord, I guess).
There are already open source/self hosted alternatives, like Perplexica.
CEO Tony Stubblebine says it “doesn’t matter” as long as nobody reads it.

They keep generating sign-ups and selling ads… till next quarter, at least.
Soldered is better! It’s sometimes faster, definitely faster if it happens to be LPDDR.
But TBH the only thing that really matters is “how much VRAM do you have,” and Qwen 32B slots in at 24GB, or maybe 16GB if the GPU is otherwise totally empty and you tune your quantization carefully. And the cheapest way to get there (until 2025) is a used MI60, P40 or 3090.
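Rough napkin math behind those numbers (the quant widths and overhead are my assumptions, not measurements):

```python
# Back-of-the-envelope VRAM math for a 32B model: why 24GB is comfortable
# and 16GB is a squeeze. Quantization widths and overhead are rough guesses.
def weight_gb(params_b: float, bits: float) -> float:
    """Approximate size of the quantized weights in GB."""
    return params_b * 1e9 * bits / 8 / 1e9

for bits in (3.0, 4.0, 5.0):
    w = weight_gb(32, bits)
    # Leave a couple of GB for KV cache, activations, and runtime overhead.
    print(f"{bits:.0f}-bit: ~{w:.1f} GB weights, ~{w + 2:.1f} GB with overhead")
```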
TSMC doesn’t really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.
Altman’s scheme is just a whole other level of crazy though.
It’s useful.
I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, do quick chores or write scripts, bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).
It does “feel” different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals or data leakage and such.
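For reference, the “hammering” mostly just looks like this: a local OpenAI-compatible endpoint (llama.cpp’s server, Ollama, etc.), a couple of prompt variants, and you re-run until the output is usable. The URL, file name, and model name are assumptions about a typical local setup:

```python
# Minimal sketch of poking a local model with prompt variants until one sticks.
# Assumes a local OpenAI-compatible server at localhost:8080 with a Qwen-class
# model loaded; all names here are placeholders for your own setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

text = open("big_document.txt").read()
variants = [
    "Summarize the key points as a bullet list.",
    "Summarize the key points as a bullet list. Be terse; no preamble.",
]

for prompt in variants:
    reply = client.chat.completions.create(
        model="qwen2.5-32b-instruct",  # whatever the server has loaded
        messages=[
            {"role": "system", "content": "You are a blunt, concise assistant."},
            {"role": "user", "content": f"{prompt}\n\n{text}"},
        ],
        temperature=0.3,
    )
    print(reply.choices[0].message.content, "\n---")
```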
the model seems ok for tasks like summarisation though
That, and retrieval, and the business use cases so far; and even then only where it’s acceptable for the results to be wrong fairly often.
the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.
Well you say that, but somehow crypto is still around despite most schemes being (IMO) a much more explicit scam. We have politicians supporting it.
Current LLMs cannot be AGI, no matter how big they are. The fundamental architecture just isn’t right.
It’s easy to forget how bad it can get.
One day I wandered into /r/kotakuinaction over some linked comment on the Tomb Raider animation, and the Fallout TV series, and… yeah. I remembered.
And that’s a pretty mild example.