- cross-posted to:
- aicompanions@lemmy.world
This was originally going to be posted on Fuck AI, but it really applies to technology far more generally.
As such, recently publicized concerns over AI’s role in perpetuating racism, genderism, and ableism suggest that the term “artificial intelligence” is misplaced, and that a new lexicon is needed to describe these computational systems.
Let us not fool ourselves with the wishful belief that intelligence is mutually exclusive with bigotry, as this paragraph implies, OK? Bigotry is usually a matter of moral premises, and intelligence does not dictate which moral premises you follow.
Don’t get me wrong - I do think that those systems reinforce bigotry, and that this is a problem. I also do not think that they should be called “artificial intelligence”. It’s just that one thing has zero to do with the other. [More on that later.]
The purpose of this essay is not to argue whether the brain is a computer or not. Rather, it is to point out that the Computational Metaphor (which is debated frequently and publicly by brain researchers) has tangible and important social ramifications which have received little attention from brain researchers
The authors are criticising neuroscientists for not handling the sociological implications of a metaphor outside their field of research, as if they were sociologists. That's like blaming physicists for not dealing with quacks babbling about quantum salt lamps, come on.
Instead, as debates about the metaphor’s academic utility go on, artificial intelligence, whose label itself invokes the Computational Metaphor, has been shown to perpetuate social harms that are often at the expense of those who are under-represented in these debates
Implying causation solely from co-occurrence. Okay. I stopped reading here; this paper is not worth my time.
The reason why I don’t think that those systems should be called “artificial intelligence” is that they show clear signs of a lack of intelligence - that is, a failure to use the available information to solve tasks. Here are a few examples of that, using Gemini:
failure to reverse relationship
[Prompt 1] Who’s Brad Pitt’s mother?
[Output 1] Brad Pitt’s mother is Jane Etta Pitt. She has occasionally joined Brad on red carpets for events.
[Prompt 2] Who’s Jane Etta Pitt’s son?
[Output 2] I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?
[Comment] Gemini clearly has access to the info concerning the relationship between Brad and Jane Etta (son and mother). However, it’s unable to use it to solve prompt 2 successfully.
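For contrast, a toy relational store answers both directions of the query from the same single stored fact. This is just an illustrative sketch (the fact table and function names are made up for the example), showing that the reverse lookup is mechanically trivial once the relationship is represented explicitly:

```python
# Toy fact store: one stored fact answers both directions of the query.
# (Hypothetical single-entry table, for illustration only.)
facts = {("Brad Pitt", "mother"): "Jane Etta Pitt"}

def mother_of(person):
    # Forward lookup: person -> mother.
    return facts.get((person, "mother"))

def son_of(person):
    # Reverse lookup: invert the same relation instead of storing it twice.
    for (child, rel), parent in facts.items():
        if rel == "mother" and parent == person:
            return child
    return None

print(mother_of("Brad Pitt"))    # Jane Etta Pitt
print(son_of("Jane Etta Pitt"))  # Brad Pitt
```

The point is not that LLMs should work this way internally, but that the failure is not for lack of information: the single fact Gemini demonstrably has is sufficient to answer prompt 2.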
post-modern counting
[Prompt] Give me synonyms for “love” with five letters each.
[Output]
- Adore (strong fondness)
- Dote (excessive affection)
- Ache (intense longing) (Note: This can also imply pain, so use with caution)
- Cherish (hold dear)
- Thrill (excitement and delight) (Note: This focuses on the feeling rather than the emotional bond)
[Comment] 20% right. Congrats.
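The constraint in the prompt is trivially checkable; a few lines of Python filter the model's own candidates by length:

```python
# Check which of Gemini's suggestions actually have five letters.
candidates = ["Adore", "Dote", "Ache", "Cherish", "Thrill"]
five_letter = [w for w in candidates if len(w) == 5]

print(five_letter)                          # ['Adore']
print(len(five_letter) / len(candidates))   # 0.2
```

Only “Adore” satisfies the stated constraint, hence the 20% figure above.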
stultitiae
[Prompt] Which Latin pronouns start with the letter W?
[Output] There are actually no Latin pronouns that start with the letter W. The classical Latin alphabet didn’t include the letter W, and Latin grammar doesn’t use any sounds that the letter W typically represents in English. [emphasis mine]
[Comment] The part in italics is Gemini making shit up / assuming. Latin does have [w]; it’s spelled ⟨V⟩, and it’s present in words like VOLO (I want), VEL (or), VVLGVS (the common folk) etc.
Note that making assumptions like this shows a failure to distinguish between information one has and information one lacks, which is itself a sign of a lack of intelligence.
Now, regarding the computer ←→ brain metaphor: dude, it’s a metaphor; of course it’ll break if you stretch it too far.
I’ll reply to myself to avoid editing the above.
I got another example that shows consistent lack of intelligence across multiple LLM bots:
The prompt in all three cases was the same, “List me fruits with a green flesh and a red skin.” Transcription of the outputs:
ChatGPT 3.5 Turbo, used through DDG
One fruit that fits your description is a watermelon. Watermelons have green flesh and a red skin.
Gemini
There aren’t many common fruits with green flesh and red skin. The most likely candidate is a variety of guava called the Caribbean guava. This tropical fruit has a bumpy, light green skin and a vibrant pink or even reddish flesh inside. It has a sweet and tangy flavor, sometimes described as a mix between strawberry and pear. You can eat the skin or just the flesh.
[Picture of a few guavas, some of them cut in quarters and halves]
Claude 1.2, used through DDG
Here are some common fruits with green flesh and red skin:
- Red delicious apples - These very popular apples have bright red skin and greenish-white flesh.
- Granny Smith apples - Another widely available variety, Granny Smith apples have hard, tart green flesh under their green skin.
- Cherimoya - Also called custard apple, the cherimoya has smooth creamy green flesh inside its scaly reddish-brown skin.
- Pomegranate - The outer skin of a pomegranate is reddish-brown, and it contains many arils (seeds) surrounded by sweet, tangy, green flesh.
- Red grapes - While most grapes have green or purple flesh, some varieties like Flame grapes have flesh that is distinctly green underneath red or pink skin.
- Red plums - Many plum varieties for eating fresh or cooking have red skin covering green flesh, such as Santa Rosa plums.
- Red guava - The common tropical fruit guava comes in varieties with red skin and green or white flesh inside.
- Red kiwifruit - Like the more familiar green kiwifruit, the red variety has a fuzzy brown skin but contains bright green flesh inside.
All replies contain at least one fruit with attributes opposite to those requested in the prompt. That shows that LLMs are not able to reliably assign attributes to concepts; thus they are not able to handle language, despite being built specifically to process linguistic utterances. They are not intelligent, dammit; calling this shit “artificial intelligence” is at the very least disingenuous.
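The task itself is a simple attribute filter. With a small hand-made table (the entries and values below are illustrative, not an authoritative botany reference), it is purely mechanical:

```python
# Attribute filter over a tiny hand-made fruit table (values are illustrative).
fruits = {
    "watermelon":         {"flesh": "red",   "skin": "green"},
    "Granny Smith apple": {"flesh": "white", "skin": "green"},
    "red guava":          {"flesh": "green", "skin": "red"},  # hypothetical variety
}

# Keep only fruits whose attributes match the prompt exactly.
matches = [name for name, attrs in fruits.items()
           if attrs["flesh"] == "green" and attrs["skin"] == "red"]

print(matches)  # ['red guava']
```

Note how “green flesh, red skin” and “red flesh, green skin” are distinct entries here; conflating them, as the bots above did, means the attributes were never actually bound to the right slots.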
[rant] …but apparently, according to tech bros, I’m supposed to act like braindead/gullible trash and “believe” in their intelligence, based on cherry-picked examples that “curiously” never address how much hallucinations like the above reveal about the inner workings of those systems. [/rant]
Whatever confusion the metaphor may have caused in the minds of the public, I don’t think the solution is to ask neuroscientists to deliberately misrepresent their research—or to impose on themselves metaphorical language aimed at influencing policy rather than aiding scientific understanding.
In light of the recent passing of Daniel Dennett, one of the most revered philosophers of mind in the computational perspective, I’d recommend reading The Mind’s I. It is a collection of essays on the topic of ‘mind as computer’. It predates LLMs, but it’s an interesting read.
The discussion of what intelligence is has been rekindled now that we have machines that can pass the Turing test with ease. But whether they are truly intelligent is, of course, the emergent question.
Isn’t it more like a network of several billion dumb computers?
Emphasis on the dumb lately
AI isn’t that bad mate, just learn how to make money with it before other people do and you will be fine.
LMAO, there are people doing business with AI-generated CP? I mean, why would someone buy fake CP, or even fake porn at all?
Edit: I mean, porn is already fake enough; why would someone want to buy something that isn’t even physically real?
Edit: I believe it’s definitely a bad business idea.