by Vincent Conitzer, Prospect Magazine
Progress in artificial intelligence has been rapid in recent years. Computer programs are dethroning humans in games ranging from Jeopardy to Go to poker. Self-driving cars are appearing on roads. AI is starting to outperform humans in image and speech recognition.
With all this progress, a host of concerns about AI’s impact on human societies have come to the forefront. How should we design and regulate self-driving cars and similar technologies? Will AI leave large segments of the population unemployed? Will AI have unintended sociological consequences? (Think about algorithms that accurately predict which news articles a person will like resulting in highly polarised societies, or algorithms that predict whether someone will default on a loan or commit another crime becoming racially biased due to the input data they are given.)
Will AI be abused by oppressive governments to sniff out and stifle any budding dissent? Should we develop weapons that can act autonomously? And should we perhaps even be concerned that AI will eventually become “superintelligent”—intellectually more capable than human beings in every important way—making us obsolete or even extinct? While this last concern was once purely in the realm of science fiction, notable figures including Elon Musk, Bill Gates, and Stephen Hawking, inspired by Oxford philosopher Nick Bostrom’s book Superintelligence, have recently argued that it needs to be taken seriously.
Concerns about the long-term societal impact of rapidly advancing artificial intelligence (AI) should be addressed through a balanced debate, writes Duke University professor Vincent Conitzer—one that includes AI researchers, who tend to avoid long-term speculation in favour of more immediate issues of concrete technical progress.
“Research communities work best when they include people with different views and different sub-interests,” he says. Conitzer notes that, in addition to traditional AI scientists, the debate is attracting economists concerned with AI-fuelled unemployment, legal scholars focused on the regulation of autonomous vehicles and other technologies, and philosophers studying AI’s moral and ethical ramifications.
“While I am quite skeptical of the idea that truly human-level AI will be developed anytime soon, overall I think that the people worried about this deserve a place at the table in these discussions,” he says.