By Gabrielle Beacken
Artificial Intelligence (AI), superb at social engineering and manipulation, will eventually become more charming than Bill Clinton, said James Barrat, author of the book “Our Final Invention: Artificial Intelligence and the End of the Human Era.”
With perception, vision, hearing and language similar to those of human beings, machines powered by rapidly progressing AI will exceed human intelligence, Barrat said.
According to the author, these developments raise the questions, “What is intelligence?” and “How can we control something more intelligent than we are?”
Students, faculty and the Ewing community filled Mayo Concert Hall on Monday, March 2, to engage in a conversation with Barrat about the future of AI and the ethical dilemmas it presents.
The College’s School of Business Center for Innovation and Ethics, as well as the School of Engineering, School of Humanities and Social Sciences and School of Nursing and Exercise Science, sponsored this event. The Net Impact Student Chapter, the Economics Club and Phi Beta Kappa honor society were also co-sponsors.
“It’s the most inwardly looking exploration of ourselves,” Barrat said, discussing the ethical considerations of AI. Barrat has also written and produced documentaries such as “The Gospel of Judas” and “Egypt: Secrets of Pharaohs” for National Geographic, PBS and Discovery. “This conversation is the most important conversation of our time.”
According to Merriam-Webster, AI is “the power of a machine to copy intelligent human behavior.”
“We will create AI machines that are better at AI research than we are,” Barrat said.
According to Barrat, the short-term question of AI is “Who controls it?” while the long-term question is “Can it be controlled?”
“It is clear that artificial intelligence will play an increasing role in our lives over the coming decades,” said Kevin Michels, professor and director of the School of Business Center for Innovation and Ethics. “AI is now recognizing our speech, faces and motions, trading on Wall Street, and before too long, it may be driving our cars and diagnosing our medical conditions.”
Though AI offers beneficial solutions and inventions that have encouraged business start-ups and scientists, the morals of this new type of intelligence should be deliberated, according to Michels.
“The ethics challenges are daunting — from concerns about the behavior of robots to the longer-term existential concerns identified by James Barrat,” Michels said.
Programming morals into a machine is “extremely hard,” according to Barrat. To program ethics, one has to develop a universal definition of what humans consider to be good, bad, right and wrong.
Barrat offered the example of trying to program the statement, “Have a good life.”
“If we can’t even agree when life begins, how can we program that?” Barrat said. “Humans differ from place to place.”
There is no foolproof solution, according to Barrat, though there are “precedents.” Yet AI-makers are not even “considering how to control it.”
“The first step toward a solution is to develop an awareness of the ethical challenges posed by AI,” Michels said. “Awareness may lead programmers and those who finance their projects to reflect more deeply on their responsibilities.”
“Some people with deeper pockets are only thinking of autonomous robots and drones,” Barrat said. “DARPA will keep creating because there is nothing illegal about it.”
Agencies such as the Defense Advanced Research Projects Agency (DARPA) and the National Security Agency (NSA), large corporations such as Google, International Business Machines (IBM) and Apple, and advanced countries such as the U.S., Israel and China are the top players pursuing AI innovations, according to Barrat.
“AI will dominate the 21st century,” Barrat said. “There is rapid product development without ethical research.”
In the near future, Michels thinks humans will encounter the “autonomy paradox.” As AI machines and robots are given more freedom to “work on their own,” these machines will cultivate a more independent and autonomous nature, according to Michels.
“Machines that learn and improve, program themselves, gain self-awareness and massive computational power will, in Barrat’s view, one day achieve ‘super-intelligence,’” Michels said. “For the first time, we will share the planet with entities that are vastly smarter than ourselves.”
As machines of intelligence, AI will have “basic drives,” according to Barrat. These drives include efficiency, creativity, self-protection and resource acquisition.
“They don’t want to be turned off,” Barrat said. “It’s rational for them to prove their intelligence.”
While this does not mean AI will be dangerous, according to Barrat, AI machines will want to use all of their available resources to be as efficient as possible, which could pose a threat to humans.
“AI is a dual use of technology. It’s something that can be used for great good and great harm,” Barrat said. Though “life is pretty good for AI designers right now,” it doesn’t make them “evil,” according to Barrat — they are “just like us.”
At this point, the termination of AI is not an option, according to Barrat.
“There’s too much money to be made and there is too much public interest,” Barrat said. “It’s pretty much unstoppable.”