It is, of course, wise and beneficial to peer ahead for potential dangers and problems -- one of the central tasks of high-end science fiction. Alas, detecting that a danger lurks is easier than prescribing solutions to prevent it. Take the plausibility of malignant AI, remarked upon recently by luminaries ranging from Stephen Hawking to Elon Musk. Indeed, my own novels contain some chilling warnings about failure modes with our new, cybernetic children.
There is a tendency to offer the same prescriptions, over and over again:
1) Renunciation: we must step back from innovation in AI (or oth...
TL;DR: scroll down to the end for a list of constraints.
I like Andrew Ng's analogy comparing AI's threat to humanity to the dangers of overpopulating Mars. Both threats are hopelessly far away, given foreseeable technologies. And yet the AI threat captures people's imagination far more than the Bogeyman: a monster hiding under children's beds at night who kidnaps them for disobeying their parents. Why is that?
As the story goes, super-intelligent AI machines learn quickly, gain physical omnipotence, move fast, and are markedly unfriendly to humans (Skynet-style) or are unconcerned about ...
I've written in detail on this issue here, so in this answer I'll summarize those points and add a few more. I'd also like to broaden the question beyond talking about "dystopian threats", which seems overly narrow and dramatic to me. My view is that AI will be a very big deal --- likely world-changing --- and that even short-term progress in machine learning could have profound effects. Given this, it is important to think not only about the upsides but also about the downsides, in order to prevent the latter.
Problems
I think it's worth separating out two issues: accidents and misus...
So far, the best answers to this question directly conflict with one another. David Brin advocates free-market competition among AIs to prevent any single one from attaining a monopoly, a position Elon Musk has also favoured [1]. In contrast, Igor Markov and Sam Sinai say we must constrain the evolution of AIs to prevent them becoming more life-like and escaping our control. Free-market competition, however, will inevitably result in evolution by natural selection, regardless of whether algorithms are self-modifying or human-modified.
So who is right? I think neither. Here's why.
Competitio...
Any threat to humanity that may exist from AI development will not be prevented by imposing constraints; it will instead be prevented by sponsoring research directions that are stable by design.
Take the Internet as one recent example.
The Internet was born out of a U.S. Defense Department (DARPA) research project in the 1960s and 1970s. The goal of the project was to create an information network architecture that could not be taken down by the enemy. The only way to guarantee protection from foreign attack was to make it theoretically incorruptible, with no single point of failure. The resulting ne...
Machine Learning is at an Important Inflection Point
Machine learning has made substantial progress in recent years. In particular, deep learning, implemented as deep neural networks (DNNs) running on general-purpose graphics processing units (GPGPUs), has repeatedly broken previous records on a wide variety of difficult machine learning tasks, including my own specialty, speech recognition. Furthermore, this recent progress represents a great acceleration of the slow but steady advance machine learning has made over the past thirty years.
On the other hand, this is...