Call it artificial intelligence with a human touch. This week, two California universities separately announced new centers devoted to studying the ways in which AI can help humanity.
USC’s Viterbi School of Engineering and its School of Social Work said Wednesday that they had joined forces to launch the Center on Artificial Intelligence for Social Solutions. A day earlier, UC Berkeley unveiled its newly minted Center for Human-Compatible Artificial Intelligence.
Then on Thursday, a Stanford-led initiative to study the future of artificial intelligence in the next century released a report detailing the effect artificial intelligence could have on urban life by 2030.
The authors discussed its potential effects on several aspects of life, including transportation, healthcare, education and public safety. They emphasized the importance of developing AI expertise at all levels of government, as changing technologies trigger a need for new policies. Pointing to ambiguous federal laws, the experts also called for the removal of any real or perceived roadblocks to research on AI systems.
Even as Stephen Hawking, Bill Gates, Elon Musk and other science and technology pundits warn of the possible overthrow of humanity by advanced artificial intelligence — a prospect that experts say is nowhere on the horizon — scientists are increasingly looking ahead to the ways in which AI might actually aid people’s lives.
“If society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades,” the report’s authors wrote.
The UC Berkeley-led center, directed by artificial intelligence researcher Stuart Russell, will seek to understand how human values can be built into AI’s design, and create a mathematical framework that will help people build AI systems that are beneficial to individuals and society.
One of the many questions they’ll be wrestling with, for example, is how to get robots to understand what humans really want — because people are notoriously bad at communicating what their objectives are.
Russell called it the King Midas problem. In Greek mythology, Midas asked for everything he touched to be turned to gold. Because that included his food and drink, he died in misery and starvation. It didn’t occur to the king that he didn’t really mean “everything” until it was too late.
Scientists might get around this communication problem by designing artificial intelligence that watches humans and learns their values from their actions. Even that approach carries some uncertainty, Russell added, because humans don’t always act in ways aligned with their values.
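The idea Russell describes — inferring what people value from their behavior rather than from stated objectives — can be sketched in miniature. This toy example is illustrative only and is not drawn from the center’s actual methods; the function name and data are invented. It estimates preferences as the relative frequency of observed choices:

```python
from collections import Counter

def infer_preference(observed_choices):
    """Estimate which options a person values from their past choices.

    A crude stand-in for learning values from behavior: rather than
    asking the person to state an objective, the system infers one
    from what they actually do.
    """
    counts = Counter(observed_choices)
    total = sum(counts.values())
    # Relative frequency serves as a rough proxy for preference strength.
    return {option: n / total for option, n in counts.items()}

# A person repeatedly picks tea over coffee; the estimate reflects that,
# but imperfectly -- people do not always act on their values.
estimate = infer_preference(["tea", "tea", "coffee", "tea"])
```

Real systems for this problem are far more sophisticated, precisely because, as Russell notes, observed behavior is a noisy signal of underlying values.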
“My objective ... is primarily to look at these long-term questions of how you design AI systems so that you are guaranteed to be happy with the outcomes,” Russell said. (And if they design some useful software or devices as they do so, even better.)
The USC center, co-directed by artificial intelligence researcher Milind Tambe and social work scientist Eric Rice, seems to operate in a mind-set perpendicular to the one at UC Berkeley: It seeks to harness AI’s existing capabilities to solve problems in messy, complicated human contexts.
Tambe, who worked on the report released Thursday, has led a workshop sponsored by the White House Office of Science and Technology Policy on using AI for “social good.” He has used AI to help rangers combat the poaching of wildlife in Asia and Africa and to help LAX security officials catch more weapons, drugs and other contraband. He and Rice are currently working on a project that exemplifies the kind of work the center could do: using artificial intelligence to identify key people in social networks to help prevent the spread of HIV among Los Angeles’ homeless youth.
Artificial intelligence encompasses a wide range of tools, including machine learning, computer vision, natural language processing and game theory. Some of these areas have analogs to aspects of human intelligence. Tambe said he hopes that as more researchers get involved in the center, more of these computational tool sets will be put to good use.
“An agreed-upon definition of AI that is acceptable to everyone is very hard to come by,” Tambe said. “But, essentially, all of the kinds of human reasoning that may be applied to problems, AI wants to be able to do that and more.”
Rice said he saw potential for the technology to be applied to a host of thorny problems in different contexts, from the impact of global warming on impoverished communities to issues with the child welfare system, homelessness and healthcare access.
Though the center’s founding directors have very different backgrounds, the pair’s distinct skill sets complement and enhance each other, Rice explained.
“If you bring together people from social work, who have this understanding of the complexity of the real world, with people from computer science who can model incredibly complex systems, it creates a really great way of moving forward and getting traction on these complicated problems,” Rice said.
Sept. 1, 5 p.m.: This article was updated with details on the Stanford-led report on the future of artificial intelligence.
This article was originally published Aug. 31, 2016, at 1 p.m.