They still can’t game it for engagement optimization to that extreme, not like the closed loops of monolithic sites.
^
Futurama had it right: spammers are the ultimate destroyers.
Hmm, what if the shadowbanning is ‘soft’? Like if bot comments are locked at a low negative score and hidden by default, that would take away most of their exposure but let them keep rambling away.
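Roughly what I mean, as a toy sketch in Python (the names and the score floor are made up, not actual Lemmy internals):

```python
# Toy sketch of “soft” shadowbanning: comments from flagged bot accounts get
# their displayed score clamped to a negative floor and are collapsed by
# default, so they keep posting but almost nobody sees it. All names here
# are hypothetical.
from dataclasses import dataclass

BOT_SCORE_FLOOR = -10  # displayed score never rises above this for flagged bots

@dataclass
class Comment:
    author_flagged_as_bot: bool
    raw_score: int

def display_state(comment: Comment) -> dict:
    if comment.author_flagged_as_bot:
        return {
            "score": min(comment.raw_score, BOT_SCORE_FLOOR),
            "collapsed_by_default": True,  # hidden unless a reader expands it
        }
    return {"score": comment.raw_score, "collapsed_by_default": False}

# Example: a bot comment with 50 upvotes still shows as -10 and collapsed.
print(display_state(Comment(author_flagged_as_bot=True, raw_score=50)))
```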
Top 50% of the population still.
After all, they wrote a review.
Trap them?
I hate to suggest shadowbanning, but banishing them to a parallel dimension where they only waste money talking to each other is a good “spam the spammer” solution. Bonus points if another bot tries to engage with them, lol.
Do these bots check themselves for shadowbanning? I wonder if there’s a way around that…
This. I’m surprised Lemmy hasn’t already done this, as it’s such a huge glaring issue on Reddit (that they don’t care about, because bots are engagement…)
GPT-4o
It’s kind of hilarious that they’re using American APIs to do this. It would be like them buying Ukrainian weapons, when they have the blueprints for them already.
A+ feature, ready to monetize. 👍
Jokes aside (and this whole AI search results thing is a joke), this seems like an artifact of sampling and tokenization.
I wouldn’t be surprised if the Gemini tokens for XTX are “XT” and “X” or something like that, so it’s got quite a chance of mixing them up after it writes out XT. Add in sampling (literally randomizing the token outputs a little), and I’m surprised it gets any of it right.
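To illustrate the sampling half of that (this is a toy decoder with made-up logits, not Gemini’s tokenizer or sampler):

```python
# Toy illustration of temperature sampling: even if the model slightly
# prefers the token that completes “XTX” after it has written “XT”, a
# nonzero temperature leaves a real chance of picking something else and
# dropping the final X.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax with temperature, then draw one token from the distribution.
    weights = [math.exp(l / temperature) for l in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

# Made-up logits for the token that follows “...RX 7900 XT”.
logits = {"X": 2.0, " ": 1.4, ",": 1.0}  # “X” finishes the name, the others cut it short
draws = [sample_next_token(logits, temperature=0.8) for _ in range(1000)]
print({tok: draws.count(tok) for tok in logits})  # “X” wins most, but not all, of the time
```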
The plan is to monetize the AI results with ads.
I’m not even sure how that works, but I don’t like it.
I dunno. My experience on Reddit is that even bringing up the word “AI” in discussions outside of AI-focused communities will almost get me doxxed. I asked a TV fandom if cleaning up a bad release with diffusion models and some “non AI” filters sounded interesting, and I felt like I had triggered Godwin’s law.
I did bring this up in AskLemmy, and got a mostly positive response, but I also felt like it was a tiny subset of the community.
On my G14, I just use the ROG utility to disable turbo and make some kernel tweaks. I’ve used ryzenadj before, but it’s been a while. And yes, I measured battery drain in the terminal (but again, it’s been a while).
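For the “measured in the terminal” part, I just read the battery’s sysfs power file; a sketch of the idea (the BAT0 path and power_now file are common on Linux but not universal):

```python
# Minimal battery-drain readout on Linux via sysfs. BAT0 and power_now are
# common but not guaranteed; some batteries only expose current_now and
# voltage_now, so treat this as a sketch, not a portable tool.
from pathlib import Path

POWER_NOW = Path("/sys/class/power_supply/BAT0/power_now")  # reports microwatts

def battery_draw_watts() -> float:
    return int(POWER_NOW.read_text()) / 1_000_000

if __name__ == "__main__":
    print(f"Battery draw: {battery_draw_watts():.1f} W")
```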
Also, throttling often produces the opposite of extended battery life: the CPU spends more time in higher power states to do the same amount of work, whereas at a faster clock speed the work completes sooner, so the CPU drops back to a lower, less energy-hungry state quicker and resides there more of the time.
“Race to sleep” is true to some extent, but after a certain point the extra voltage one needs for higher clocks dramatically outweighs the benefit of the CPU sleeping longer. Modern CPUs turbo to ridiculously inefficient frequencies by default before they thermally throttle themselves.
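Back-of-the-envelope version of why, with made-up numbers (dynamic power scales roughly with C·V²·f, and time per task with 1/f, so energy per task goes roughly as V²):

```python
# “Race to sleep” math with hypothetical operating points: power is roughly
# C * V^2 * f, but energy per task is power * time, and time scales with 1/f,
# so energy per task goes roughly as V^2. Turbo clocks need disproportionately
# more voltage, so past the efficiency knee the faster finish stops paying off.
def energy_per_task(freq_ghz: float, volts: float, work_cycles: float = 1e9,
                    cap: float = 1.0) -> float:
    power = cap * volts**2 * freq_ghz          # arbitrary units
    seconds = work_cycles / (freq_ghz * 1e9)   # time to finish the fixed workload
    return power * seconds

# Same task at two made-up frequency/voltage points:
print(energy_per_task(3.0, 0.85))  # efficient sustained clock
print(energy_per_task(5.0, 1.30))  # aggressive turbo clock -> finishes faster, burns more energy
```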
The later seasons are a ride!
It’s so underrated! And it’s gorgeous, being Studio Mir.
It’s kinda over the top, but I guess that’s relative.
Is the series ending anytime?
Maybe I missed the joke, but it’s ended :(
Heh, does western anime count?
Legend of Korra, Vox Machina, DOTA: Dragon’s Blood
And Pantheon.
For heaven’s sake, if you have not watched Pantheon, correct that immediately. Technically the cast is young, at first, but it doesn’t feel like that at all.
It can be if you run Linux and throttle the chips. Even my older G14 lasts a long time, as the AMD SoCs are great, it can run fanless when throttled down, and it just has a straight-up bigger battery than razor-thin Macs.
But again, it’s just not configured this way in most laptops, which sacrifice battery for everything else because, well, OEMs are idiots.
People overblow the importance of ISA.
Honestly a lot of the differences are business decisions. There is a balance between price, raw performance, and power efficiency. Apple tends to focus exclusively on the latter two at the expense of price, while Intel (and AMD) have a bad habit of chasing cheap raw performance.
Honestly, a lot of that is budget.
Apple makes low clocked, very wide SoCs, and are always the first customer of the most cutting edge silicon node. This is very expensive. And Apple can eat it with their outrageous prices.
Intel (and AMD) go more for “balance,” with smaller cheaper dies and higher peak clocks. Their OEMs also “cheap out” by bundling a bunch of bloatware that also drains the battery to pad margins. You can find PCs with big batteries and better stock configs, but these are more expensive.
AMD is only just now getting into the “premium” game with the upcoming Strix Halo chip (M2 Pro-ish, spec-wise). Intel isn’t there yet, but there are rumors they will be as well.
Others on the social media network shared similar criticism about a perceived lack of balance in the guest list, including Dr. Margaret Mitchell of Hugging Face. “It could be beneficial to have an AI Oprah follow-up discussion that responds to what happens in [the show] and unpacks generative AI in a more grounded way,” she said.
A lot of people don’t realize there are two wars going on.
AI vs anti-AI.
Corporate API AI vs Open source, self hosted AI.
Given this is Lemmy, I would think a lot of us would care about the latter, but everyone (here and elsewhere) only seems to care about the former, and is content to give Sam Altman a monopoly and let him destroy the planet with his crazy “outscale everyone else” ideas (instead of, you know, making AI more efficient and training it legally and transparently).
TBH that would DDoS Lemmy with new users, lol.