British mathematician Alan Turing wrote in 1950: “I propose to consider the question, ‘Can machines think?’”
For generations of scientists working on AI, the question of whether “true” or “human” intelligence could be achieved has been an essential part of the job.
AI may now be at a turning point where these questions matter less and less to most people.
The emergence of so-called industrial AI in recent years could signal the end of these lofty concerns. AI has more capabilities today than at any time in the 66 years since computer scientist John McCarthy coined the term. As a result, the industrialization of AI is shifting the focus from intelligence to achievement.
Also: OpenAI’s DALL·E 2 could mean we’ll never need stock photos again
These achievements are remarkable. They include AlphaFold, a protein-structure prediction system from Google’s DeepMind unit, and GPT-3, the text-generation program from startup OpenAI. Both hold tremendous promise for industry, whether or not anyone calls them intelligent.
Among other things, AlphaFold delivers on the promise of engineering new forms of proteins, a prospect that has electrified the biology community. GPT-3 is quickly finding its footing as a system capable of automating business tasks, such as responding in writing to employee or customer requests without human intervention.
This practical success, driven by a thriving semiconductor industry led by chipmaker Nvidia, looks as though it could leave behind the old preoccupation with intelligence.
In no corner of industrial AI does anyone seem to care whether such programs will achieve intelligence. It is as if, faced with practical achievements whose value is obvious, the old question: “But is it intelligent?” ceases to matter.
Also: AI Critic Gary Marcus: Meta’s LeCun Finally Goes Back To Things I Said Years Ago
As computer scientist Hector Levesque wrote of the science of AI versus the technology of AI: “Unfortunately, it’s the AI technology that gets all the attention.”
Of course, the question of true intelligence is still important for a handful of thinkers. Over the past month, ZDNET interviewed two prominent academics who are very concerned about this issue.
Yann LeCun, chief AI scientist at Facebook owner Meta Platforms, spoke at length with ZDNET about a paper he published this summer as a kind of reflection on where AI needs to go. LeCun expressed concern that the dominant deep learning work of today, if it simply continues on its current course, will not achieve what he calls “true” intelligence, which includes capacities such as a computer system’s ability to plan a course of action using common sense.
LeCun expresses an engineer’s concern that without true intelligence, such programs will ultimately prove brittle, meaning they could break before they ever do what we want them to do.
Also: Meta AI Guru LeCun: Most AI Approaches Today Will Never Lead to True Intelligence
“You know, I think it’s entirely possible that we’ll have Level 5 self-driving cars without common sense,” LeCun told ZDNET, referring to efforts by Waymo and others to build ADAS (advanced driver-assistance systems) for self-driving, “but you’re going to have to engineer the hell out of it.”
And NYU professor emeritus Gary Marcus, a frequent critic of deep learning, told ZDNET this month that AI as a field is stuck when it comes to achieving anything like human intelligence.
“I don’t want to quibble about whether it’s intelligence or not,” Marcus told ZDNET. “But the form of intelligence that we might call general intelligence or adaptive intelligence, I care about adaptive intelligence […] We don’t have machines like that.”
Increasingly, LeCun’s and Marcus’s concerns seem out of step. Industrial AI practitioners don’t want to ask hard questions; they just want things to run smoothly. As more and more people get their hands on AI, people such as data scientists and autonomous-vehicle engineers who are far removed from the basic science questions of research, the question “Can machines think?” becomes less relevant.
Even scientists who realize the shortcomings of AI are tempted to put that aside to savor the practical usefulness of the technology.
Also: The best travel agent is an AI algorithm
A scholar younger than Marcus or LeCun, but aware of the dichotomy between the practical and the profound, is Demis Hassabis, co-founder of DeepMind.
In a 2019 lecture at the Institute for Advanced Study in Princeton, New Jersey, Hassabis noted the limitations of many AI programs that could only do one thing well, like an idiot savant. DeepMind, Hassabis said, is trying to build a broader and richer capability. “We’re trying to come up with a meta-solution to solve other problems,” he said.
And yet, Hassabis is equally enamored with the particular tasks at which DeepMind’s latest invention excels.
When DeepMind recently unveiled an improved way to perform linear algebra, the math at the heart of deep learning, Hassabis touted the achievement regardless of any claims of intelligence.
“It turns out that everything is matrix multiplication, from computer graphics to training neural networks,” Hassabis wrote on Twitter. That may be true, but it hints at the prospect of setting aside the quest for intelligence in favor of merely refining a tool, as if to say: if it works, why ask why?
The field of AI is changing its attitude. Previously, every achievement of an AI program, no matter how good, was greeted with the skeptical remark, “Well, but that doesn’t mean it’s smart.” It’s a pattern that AI historian Pamela McCorduck has called “moving the goalposts.”
These days, things seem to be moving in the other direction: people are prone to casually ascribing intelligence to anything and everything labeled AI. If a chatbot like Google’s LaMDA produces enough natural-language sentences, someone will say it’s sentient.
Turing himself anticipated this change in attitude. He predicted that the way people talk about computers and intelligence would shift toward accepting computer behavior as intelligent.
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” wrote Turing.
As the sincere question of intelligence fades, the empty rhetoric of intelligence is allowed to float freely in society to serve other agendas.
Also: Jensen Huang, CEO of Nvidia: AI language models as a service “potentially one of the greatest software opportunities ever created”
In a brilliantly muddled op-ed in Fast Company recently, computer industry executive Michael Hochberg and retired Air Force general Robert Spalding make glib claims about intelligence as a way to add organ music to their dire warning of geopolitical risk:
The stakes could not be higher in the training of general artificial intelligence systems. AI is the first tool that convincingly replicates the unique capabilities of the human mind. It has the ability to create a unique and targeted user experience for every citizen. It can potentially be the ultimate propaganda tool, a weapon of deception and persuasion that has never existed in history.
Most researchers would agree that “artificial general intelligence,” if it even has any meaning as a term, is far from being achieved by today’s technology. Hochberg and Spalding’s claims about what programs can do are grossly exaggerated.
Such cavalier assertions about what AI accomplishes obscure the nuanced remarks of people such as LeCun and Marcus. A rhetorical regime is forming that is concerned with persuasion, not with intelligence.
This may be the direction of things for the foreseeable future. As AI does more and more, in biology, physics, business, logistics, marketing and warfare, and as society gets used to it, there may be fewer and fewer people who even care to ask, “But is it smart?”