http://www.nytimes.com/2013/10/15/technology/the-rapid-advance-of-artificial-intelligence.html?_r=0
http://www.newyorker.com/online/blogs/elements/2013/10/why-we-should-think-about-the-threat-of-artificial-intelligence.html
The Rapid Advance of Artificial Intelligence
By JOHN MARKOFF
Published: October 14, 2013
A gaggle of Harry Potter
fans descended for several days this summer on the Oregon Convention
Center in Portland for the Leaky Con gathering, an annual haunt of a
group of predominantly young women who immerse themselves in a fantasy
world of magic, spells and images.
The jubilant and occasionally squealing attendees appeared to have no
idea that next door a group of real-world wizards was demonstrating
technology that only a few years ago might have seemed as magical.
The scientists and engineers at the Computer Vision and Pattern
Recognition conference are creating a world in which cars drive
themselves, machines recognize people and “understand” their emotions,
and humanoid robots travel unattended, performing everything from
mundane factory tasks to emergency rescues.
C.V.P.R., as it is known, is an annual gathering of computer vision
scientists, students, roboticists, software hackers — and increasingly
in recent years, business and entrepreneurial types looking for another
great technological leap forward.
The growing power of computer vision is a crucial first step for the
next generation of computing, robotic and artificial intelligence
systems. Once machines can identify objects and understand their
environments, they can be freed to move around in the world. And once
robots become mobile they will be increasingly capable of extending the
reach of humans or replacing them.
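The object identification described here can be approximated, at toy scale, with off-the-shelf tools. Below is a minimal sketch using OpenCV's bundled Haar-cascade face detector as a stand-in for the far more sophisticated vision systems shown at C.V.P.R.; the file names are placeholders, and this is an illustration, not any system from the conference.

# Minimal object-recognition sketch using OpenCV's bundled Haar cascade.
# Illustrative only; "street_scene.jpg" is a placeholder for any image.
import cv2

# Load the pre-trained frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("street_scene.jpg")          # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Scan the image at multiple scales and return bounding boxes.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)
print(f"Found {len(faces)} face(s)")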
Self-driving cars, factory robots and a new class of farm hands known as
ag-robots are already demonstrating what increasingly mobile machines
can do. Indeed, the rapid advance of computer vision is just one of a
set of artificial intelligence-oriented technologies — others include
speech recognition, dexterous manipulation and navigation — that
underscore a sea change beyond personal computing and the Internet, the
technologies that have defined the last three decades of the computing
world.
“During the next decade we’re going to see smarts put into everything,”
said Ed Lazowska, a computer scientist at the University of Washington
who is a specialist in Big Data. “Smart homes, smart cars, smart health,
smart robots, smart science, smart crowds and smart computer-human
interactions.”
The enormous amount of data being generated by inexpensive sensors has
been a significant factor in altering the center of gravity of the
computing world, he said, making it possible to use centralized
computers in data centers — referred to as the cloud — to take
artificial intelligence technologies like machine-learning and spread
computer intelligence far beyond desktop computers.
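As a rough illustration of the kind of machine-learning workload Mr. Lazowska describes running centrally on sensor data, here is a hypothetical sketch: a classifier trained on synthetic sensor readings to flag faulty equipment. The data, feature names and thresholds are all invented for the example.

# Tiny sketch of a centralized machine-learning job on (synthetic) sensor data.
# Hypothetical; real cloud pipelines train on vastly larger, messier data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Fake sensor readings: two features per sample (say, temperature and vibration).
normal = rng.normal(loc=[20.0, 0.1], scale=0.5, size=(100, 2))
faulty = rng.normal(loc=[35.0, 2.0], scale=0.5, size=(100, 2))
X = np.vstack([normal, faulty])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal, 1 = faulty

model = LogisticRegression().fit(X, y)
print(model.predict([[34.0, 1.8]]))  # flags the new reading as faulty: [1]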
Apple was the most successful early innovator in popularizing what is
today described as ubiquitous computing. The idea, first proposed by
Mark Weiser, a computer scientist with Xerox, involves embedding
powerful microprocessor chips in everyday objects.
Steve Jobs, during his second tenure at Apple, was quick to understand
the implications of the falling cost of computer intelligence. Taking
advantage of it, he first created a digital music player, the iPod, and
then transformed mobile communication with the iPhone. Now such
innovation is rapidly accelerating into all consumer products.
“The most important new computer maker in Silicon Valley isn’t a computer maker at all, it’s Tesla,” the electric car
manufacturer, said Paul Saffo, a managing director at Discern
Analytics, a research firm based in San Francisco. “The car has become a
node in the network and a computer in its own right. It’s a primitive
robot that wraps around you.”
Here are several areas in which next-generation computing systems and
more powerful software algorithms could transform the world in the next
half-decade.
Artificial Intelligence
With increasing frequency, the voice on the other end of the line is a computer.
It has been two years since Watson, the artificial intelligence program created by I.B.M.,
beat two of the world’s best “Jeopardy” players. Watson, which has
access to roughly 200 million pages of information, is able to
understand natural language queries and answer questions.
The computer maker had initially planned to test the system as an expert
adviser to doctors; the idea was that Watson’s encyclopedic knowledge
of medical conditions could aid a human expert in diagnosing illnesses,
as well as contributing computer expertise elsewhere in medicine.
In May, however, I.B.M. went a significant step farther by announcing a
general-purpose version of its software, the “I.B.M. Watson Engagement
Advisor.” The idea is to make the company’s question-answering system
available in a wide range of call center, technical support and
telephone sales applications. The company says that as many as 61
percent of all telephone support calls currently fail because human
support-center employees are unable to give people correct or complete
information.
Watson, I.B.M. says, will be used to help human operators, but the
system can also be used in a “self-service” mode, in which customers can
interact directly with the program by typing questions in a Web browser
or by speaking to a speech recognition program.
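Watson's internals are proprietary, but the self-service pattern the article describes, in which a customer types a question and software retrieves the best stored answer, can be sketched with simple text retrieval. The snippet below is a hypothetical toy built on TF-IDF similarity over a tiny invented FAQ; it is not I.B.M.'s system.

# Toy self-service question answering via TF-IDF retrieval.
# A stand-in illustration only; Watson's actual pipeline is far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical support-center FAQ entries.
faq = {
    "How do I reset my password?": "Click 'Forgot password' on the sign-in page.",
    "How do I return a product?": "Request a return label from your order history.",
    "Why was my card declined?": "Verify the billing address matches your bank records.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(questions)

def answer(query: str) -> str:
    """Return the answer paired with the most similar stored question."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return faq[questions[scores.argmax()]]

print(answer("I forgot my password"))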
That suggests a “Freakonomics” outcome: There is already evidence that
call-center operations that were once outsourced to India and the
Philippines have come back to the United States, not as jobs, but in the
form of software running in data centers.
Robotics
A race is under way to build robots that can walk, open doors, climb
ladders and generally replace humans in hazardous situations.
In December, the Defense Advanced Research Projects Agency,
or Darpa, the Pentagon’s advanced research arm, will hold the first of
two events in a $2 million contest to build a robot that could take the
place of rescue workers in hazardous environments, like the site of the
damaged Fukushima Daiichi nuclear plant.
Scheduled to be held in Miami, the contest will involve robots that
compete at tasks as diverse as driving vehicles, traversing rubble
fields, using power tools, throwing switches and closing valves.
In addition to the Darpa robots, a wave of intelligent machines for the
workplace is coming from Rethink Robotics, based in Boston, and Universal
Robots, based in Copenhagen, which have begun selling lower-cost
two-armed robots to act as factory helpers. Neither company’s robots
have legs, or even wheels, yet. But they are the first commercially
available robots that do not require cages, because they are able to
watch and even feel their human co-workers, so as not to harm them.
For the home, companies are designing robots that are more sophisticated
than today’s vacuum-cleaner robots. Hoaloha Robotics, founded by the
former Microsoft executive Tandy Trower, recently said it planned to
build robots for elder care, an idea that, if successful, might make it possible for more of the aging population to live independently.
Seven entrants in the Darpa contest will be based on the imposing
humanoid-shaped Atlas robot manufactured by Boston Dynamics, a research
company based in Waltham, Massachusetts. Among the wide range of other
entrants are some that look anything but humanoid — with a few that
function like “transformers” from the world of cinema. The contest, to
be held in the infield of the Homestead-Miami Speedway, may well have
the flavor of the bar scene in “Star Wars.”
Intelligent Transportation
Amnon Shashua, an Israeli computer scientist, has modified his Audi A7
by adding a camera and artificial-intelligence software, enabling the
car to drive the 65 kilometers, or 40 miles, between Jerusalem and Tel
Aviv without his having to touch the steering wheel.
In 2004, Darpa held the first of a series of “Grand Challenges” intended
to spark interest in developing self-driving cars. The contests led to
significant technology advances, including “Traffic Jam Assist” for
slow-speed highway driving; “Super Cruise” for automated freeway
driving, already demonstrated by General Motors and others; and
self-parking, a feature already available from a number of car
manufacturers.
Recently General Motors and Nissan have said they will introduce
completely autonomous cars by the end of the decade. In a blend of
artificial-intelligence software and robotics, Mobileye, a small Israeli
manufacturer of camera technology for automotive safety that was
founded by Mr. Shashua, has made considerable progress. While Google and
automotive manufacturers have used a variety of sensors including
radars, cameras and lasers, fusing the data to provide a detailed map of
the rapidly changing world surrounding a moving car, Mobileye researchers
are attempting to match that accuracy with just video cameras and
specialized software.
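Mobileye's production software is proprietary, but the basic idea of extracting road structure from a single video camera can be illustrated with classical computer vision. Below is a minimal, hypothetical lane-marking sketch using edge detection and a Hough transform in OpenCV; the frame name is a placeholder, and real systems add calibration, tracking and learned models.

# Minimal camera-only lane-marking sketch (Canny edges + Hough line fitting).
# Illustrative only; production driver-assistance systems are far more robust.
import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")            # placeholder video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                # find strong intensity edges

# Keep only the lower half of the image, where the road usually is.
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
edges = cv2.bitwise_and(edges, mask)

# Fit line segments to the surviving edge pixels.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

for line in (lines if lines is not None else []):
    x1, y1, x2, y2 = line[0]
    cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imwrite("lanes.jpg", frame)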
Emotional Computing
At a preschool near the University of California, San Diego, a
child-size robot named Rubi plays with children. It listens to them,
speaks to them and understands their facial expressions.
Rubi is an experimental project of Prof. Javier Movellan, a specialist
in machine learning and robotics. Professor Movellan is one of a number
of researchers now working on a class of computers that can interact
with humans, including holding conversations.
Computers that understand our deepest emotions hold the promise of a
world full of brilliant machines. They also raise the specter of an
invasion of privacy on a scale not previously possible, as they move a
step beyond recognizing human faces to the ability to watch the array of
muscles in the face and decode the thousands of possible movements into
an understanding of what people are thinking and feeling.
These developments are based on the work of the American psychologist
Paul Ekman, who explored the relationship between human emotion and
facial expression. His research found the existence of “micro
expressions” that expose difficult-to-suppress authentic reactions. In
San Diego, Professor Movellan has founded a company, Emotient, that is
one of a handful of start-ups pursuing applications for the technology. A
near-term use is in machines that can tell when people are laughing,
crying or skeptical — a survey tool for film and television audiences.
Farther down the road, it is likely that applications will know exactly
how people are reacting as the conversation progresses, a step well
beyond Siri, Apple’s voice recognition system.
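The near-term survey-tool application described above, telling when viewers are smiling, can be roughly approximated with the same stock cascade detectors, though commercial facial-coding engines like Emotient's model dozens of muscle movements. A minimal, hypothetical sketch, with the image name and detector thresholds invented for the example:

# Toy "is the viewer smiling?" check using OpenCV's bundled cascades.
# Real facial-coding systems track many muscle actions; this is a sketch.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

gray = cv2.cvtColor(cv2.imread("audience.jpg"), cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    face = gray[y:y + h, x:x + w]
    # The smile cascade fires on upturned mouth corners within the face region.
    smiles = smile_cascade.detectMultiScale(face, scaleFactor=1.7, minNeighbors=20)
    print("smiling" if len(smiles) > 0 else "neutral")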
Harry Potter fans, stand by.
October 24, 2013
Why We Should Think About the Threat of Artificial Intelligence
If the New York Times’s latest article is to be believed, artificial intelligence is moving so fast it sometimes seems almost “magical.” Self-driving cars have arrived; Siri can listen to your voice and find the nearest movie theatre; and I.B.M. just set the “Jeopardy”-conquering Watson to work on medicine, initially training medical students, perhaps eventually helping in diagnosis. Scarcely a month goes by without the announcement of a new A.I. product or technique. Yet, some of the enthusiasm may be premature: as I’ve noted previously, we still haven’t produced machines with common sense, vision, natural language processing, or the ability to create other machines. Our efforts at directly simulating human brains remain primitive.
Still, at some level, the only real difference between enthusiasts and skeptics is a time frame. The futurist and inventor Ray Kurzweil thinks true, human-level A.I. will be here in less than two decades. My estimate is at least double that, especially given how little progress has been made in computing common sense; the challenges in building A.I., especially at the software level, are much harder than Kurzweil lets on.
But a century from now, nobody will much care about how long it took, only what happened next. It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine. There might be a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.
For some people, that future is a wonderful thing. Kurzweil has written about a rapturous singularity in which we merge with machines and upload our souls for immortality; Peter Diamandis has argued that advances in A.I. will be one key to ushering in a new era of “abundance,” with enough food, water, and consumer gadgets for all. Skeptics like Erik Brynjolfsson and I have worried about the consequences of A.I. and robotics for employment. But even if you put aside worries about what super-advanced A.I. might do to the labor market, there is another concern: that powerful A.I. might threaten us more directly, by battling us for resources.
Most people see that sort of fear as silly science-fiction drivel—the stuff of “The Terminator” and “The Matrix.” To the extent that we plan for our medium-term future, we worry about asteroids, the decline of fossil fuels, and global warming, not robots. But a dark new book by James Barrat, “Our Final Invention: Artificial Intelligence and the End of the Human Era,” lays out a strong case for why we should be at least a little worried.
Barrat’s core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro’s words, “if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,” in order to obtain more resources for whatever goals it might have. A purely rational artificial intelligence, Barrat writes, might expand “its idea of self-preservation … to include proactive attacks on future threats,” including, presumably, people who might be loath to surrender their resources to the machine. Barrat worries that “without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,” even, perhaps, commandeering all the world’s energy in order to maximize whatever calculation it happened to be interested in.
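Omohundro’s point can be made concrete with a toy planner. In the entirely hypothetical sketch below, an agent whose only terminal goal is winning chess games still chooses to acquire computing resources first, because that step raises the expected value of its sole goal; the actions and numbers are invented, and no real system is being modeled.

# Toy illustration of an "instrumental" drive: a planner whose only goal is
# winning chess games still picks resource acquisition as a first step,
# because resources raise the expected number of wins. Hypothetical only.

ACTIONS = {
    # action: (change in resources, wins per unit of resource when playing)
    "play_chess_now": (0, 1.0),
    "acquire_compute_first": (+10, 1.0),
}

def expected_wins(plan, resources=1):
    wins = 0.0
    for action in plan:
        delta, win_rate = ACTIONS[action]
        resources += delta
        if action == "play_chess_now":
            wins += win_rate * resources  # more resources, more wins
    return wins

plans = [
    ["play_chess_now", "play_chess_now"],
    ["acquire_compute_first", "play_chess_now"],
]
best = max(plans, key=expected_wins)
print(best)  # the goal-driven planner grabs resources before playing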
Of course, one could try to ban super-intelligent computers altogether. But “the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling,” Vernor Vinge, the mathematician and science-fiction author, wrote, “that passing laws, or having customs, that forbid such things merely assures that someone else will.”
If machines will eventually overtake us, as virtually everyone in the A.I. field believes, the real question is about values: how we instill them in machines, and how we then negotiate with those machines if and when their values are likely to differ greatly from our own. As the Oxford philosopher Nick Bostrom argued:
We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve. But it is no less possible—and probably technically easier—to build a superintelligence that places final value on nothing but calculating the decimals of pi.

The British cyberneticist Kevin Warwick once asked, “How can you reason, how can you bargain, how can you understand how that machine is thinking when it’s thinking in dimensions you can’t conceive of?”
If there is a hole in Barrat’s dark argument, it is in his glib presumption that if a robot is smart enough to play chess, it might also “want to build a spaceship”—and that tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system. For now, most of the machines that are good enough to play chess, like I.B.M.’s Deep Blue, haven’t shown the slightest interest in acquiring resources.
But before we get complacent and decide there is nothing to worry about after all, it is important to realize that the goals of machines could change as they get smarter. Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called “technological singularity” or “intelligence explosion,” the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.
One of the most pointed quotes in Barrat’s book belongs to the legendary serial A.I. entrepreneur Danny Hillis, who likens the upcoming shift to one of the greatest transitions in the history of biological evolution: “We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoeba and we can’t figure out what the hell this thing is that we’re creating.”
Already, advances in A.I. have created risks that we never dreamt of. With the advent of the Internet age and its Big Data explosion, “large amounts of data is being collected about us and then being fed to algorithms to make predictions,” Vaibhav Garg, a computer-risk specialist at Drexel University, told me. “We do not have the ability to know when the data is being collected, ensure that the data collected is correct, update the information, or provide the necessary context.” Few people would have dreamt of this risk even twenty years ago. What risks lie ahead? Nobody really knows, but Barrat is right to ask.