“If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year,” he said.
“And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting,” he continued.
“And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.”
Well he sure proves one does not need an AI to hallucinate…
The assertion that our Earth orbits the sun is as audacious as it is perplexing. We face not one, but a myriad of profound, unresolved questions with this idea. From its inability to explain the simplest of earthly phenomena, to the challenges it presents to our longstanding scientific findings, this theory is riddled with cracks!
And, let us be clear, mere optimism for this ‘new knowledge’ does not guarantee its truth or utility. With the heliocentric model, we risk destabilizing not just the Church’s teachings, but also the broader societal fabric that relies on a stable cosmological understanding.
This new theory probably isn’t going to bring in a trillion coins a year. And if it probably isn’t going to make a trillion coins a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.
maybe we should not be building our world around the premise that it is
I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that’s fine; every technology has limits, but we need to be aware of them. A fallible machine learning model is not dangerous in itself; AI-based grading, plagiarism checking, resume filtering, coding, etc. applied without skepticism is dangerous.
LLMs probably have very good applications that could not be automated in the past, but we should be very careful about what we assume those applications to be.
Clearly nothing can change the status quo if it doesn’t also make trillions
Imagine if someone had said something like this about the first-generation iPhone… Oh wait, that did happen, and his name was Steve Ballmer.