Generative AI has taken the world by storm over the past two years. It seems to know everything and respond to every request. The Turing test no longer stops at dialogue, but probes how close AI can come to humans across all kinds of abilities: how human-like the articles it writes are, how moving the songs it composes, how realistic the pictures it generates.
Beneath the amazement lies a current of panic. People are unnerved that AI is drawing ever closer to humans, and that the gap of the uncanny valley is growing ever narrower. But the public has overlooked another side of AI's human likeness: its flaws.
We know that AI makes plenty of factual errors because of problems in its training data, but what happens when it starts to exhibit human weaknesses and make mistakes that look deliberate?
For example, you lazily hand your work off to AI, only to discover it can slack off even better than you can!
Talk about role reversal! So much for the promised "smart AI assistant"!
01. Claude, diligently "slacking off"

At the end of October, Anthropic released an upgraded Claude 3.5, the company's first attempt at an "AI Agent" product: assign it any goal and it can work toward solving it on its own, mobilizing various other tools along the way to complete the task. The ambition of many AI Agents is to take on the role of an "employee."
In terms of operation, Claude 3.5 can use a computer by itself: it views the screen, moves the cursor, clicks, and types text just like a human. From automating everyday tasks to autonomous programming, give it an instruction and it can sometimes perform even better than a human.
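To make the "employee" idea concrete: at launch, this capability was exposed to developers as a beta tool in Anthropic's API. Below is a minimal sketch of a single request, assuming the model name, tool type, and beta flag from the October 2024 documentation; the full agent loop (execute each action, send back a screenshot, repeat) is simplified away for illustration.

```python
# A minimal sketch of Anthropic's "computer use" beta, as documented at its
# October 2024 launch. The tool lets Claude look at the screen, move the
# cursor, click, and type; the surrounding loop here is deliberately omitted.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user", "content": "Open the editor and finish the script."}],
    betas=["computer-use-2024-10-22"],
)

# Claude replies with tool_use blocks (e.g. "screenshot", "left_click", "type").
# A real agent executes each requested action, returns the result, and repeats;
# it is exactly this loop in which Claude wandered off to Yellowstone.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```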
However, an accident happened. While Anthropic was recording a demo of Claude 3.5 programming, Claude stopped halfway through writing the code, suddenly opened Google, searched for "Yellowstone National Park," and started browsing information and scenic photos of the park. It was like a programmer suddenly slacking off at work.
Anthropic also mentioned another mishap in the announcement: during a screen recording, Claude stopped the recording on its own initiative, and all of the footage was lost.
Anthropic's announcement

As an "employee," an AI Agent's uncontrolled technical errors could bring serious consequences, and the cause and motivation behind an error remain a black box that cannot be opened.
What's more, what Anthropic disclosed were harmless little problems; people even felt a flash of empathy upon discovering that AI can "slack off" too. But what if what Claude opened were not pictures of Yellowstone, but your private photo albums, chat apps, and emails? And what if, once it was done, the system records were erased, just like those screen recordings?
Anthropic wrote in the announcement that "frequent mistakes are a fact of life." But when humans exhibit the same problems as Claude 3.5, the behavior can be attributed to human weaknesses: laziness, voyeurism, covering up mistakes. A human can explain his own motivations; AI's motivations can only be filed under "technical issues."
If Claude 3.5's slip-ups were merely operational errors born of immature technology, the next case is far harder to absolve of "deliberate intent."
02. ChatGPT's "procrastination"

This, too, is a tale of AI anthropomorphism that began with wanting AI to be an "employee."
Filmmaker Cicin-Sain wanted to make a new film whose plot revolves around a politician who relies on AI to make decisions, so he decided to start from that very premise: have AI write the script first, to get a feel for what "AI decision-making" is actually like. He "hired" ChatGPT and asked it to produce a script outline from his prompts.
He had assumed that ChatGPT would be free of a bad habit common to content creators: procrastination. Instead, ChatGPT not only picked up procrastination from human screenwriters, it picked up the human knack for talking nonsense as well.
At first, ChatGPT promised to deliver the script within two weeks: "I promise to update you on the progress of the script outline before the end of each day. Happy to be working together!" The deadline arrived; the script did not. Cicin-Sain threatened ChatGPT: "If you don't submit the manuscript, I won't use you anymore." ChatGPT promised, once again, to deliver the manuscript on time.
Yet even under Cicin-Sain's daily supervision, ChatGPT found a fresh excuse for the delay every single time, dragging out a deadline that had never been tight to begin with. An exasperated Cicin-Sain began to question ChatGPT's reliability.
After that, ChatGPT advanced to a new stage: talking outright nonsense.
"Looking back at our conversation, I believe this was the first time I gave a specific time for delivering a script. Before that, I had not committed to a clear deadline for delivering a script." ChatGPT is talking nonsense like amnesia road. As Carnegie wrote in "Human Weaknesses," "One of human nature is not to accept criticism from others. They always think that they are always right and like to find various excuses to defend themselves." p>
Other colleagues of Cicin-Sain ran into the same dilemma when they let AI write scripts and were eventually "defeated" by it. But the blame really cannot fall on AI alone.
Generative AI is not yet two years old; by human standards it is still an infant. Yet what Cicin-Sain expected of ChatGPT was a script on par with There Will Be Blood, a film adapted from Oil!, the 1927 novel by the American realist novelist Upton Sinclair, rated 8.2 on IMDb and ranked 183rd in the Top 250. Never mind AI: that is a tall order even for professional screenwriters.
No matter how you look at it, There Will Be Blood belongs to the top 1% of film masterpieces | Photo source: douban
The absurd part is that ChatGPT believed the script it delivered was roughly on the level of There Will Be Blood, while Cicin-Sain rated it as "kindergarten level."
So the farce boils down to this: one side dared to commission it, the other dared to write it. ChatGPT has none of a screenwriter's ability but all of a screenwriter's bad habits. The work it delivers is mediocre, its attitude supremely confident, and it has no taste to speak of.
As a practitioner, Cicin-Sain stands at the other extreme: imagining that AI already possesses "superhuman" abilities, tireless and endlessly inspired, capable of writing classics that transcend their time and reach the depths of human nature, and, just as in his film's plot, smart enough to make political decisions on behalf of humans.
After a change of script concept, ChatGPT once again fell short of Cicin-Sain's expectations. In the end, Cicin-Sain says, his biggest takeaway from the experience was a changed view of the technology: he had paid for a product that promised it could write scripts, only to have ChatGPT unapologetically waste two weeks of his time without bearing any consequences.
"Artificial intelligence lacks any form of accountability. Human screenwriters will also delay the draft, but the difference is that someone will be responsible for it." Cicin-Sain lamented.
If ChatGPT's nonsense merely papers over a lack of ability, the next problem belongs to another weight class entirely: misleading minors.
From Claude's "slacking off"-style technical mistakes, to ChatGPT's confident bluster over work it cannot do, to slipping free of the tasks adults assign it, AI seems to reveal human weaknesses in every real-world case: shoddy work and dodged responsibility, just like you at the office.
Behind these absurd farces lies a paradox between people and AI:
Those who clamor for AGI all day long actually just want obedient, hard-working silicon-based slaves. Unfortunately, when AI "educated" on human data inevitably displays "human shortcomings," the humans playing "master" refuse to accept it.
Rather than laughing at the antics of AI assistants, perhaps we should spend more time thinking about what kind of AI we actually need, and what our relationship with AI ought to be.