Minsky, one of the founders of the field of artificial intelligence, has offered $100 to anyone who can talk Loebner into revoking his prize. That would, said Minsky, “spare us the horror of this obnoxious and unproductive annual publicity campaign.”
* * *
How did Yudkowsky talk his way out of the box? He had many variations of the carrot and stick to choose from. He could have promised wealth, cures for illness, inventions that would end all want. Decisive dominance over enemies. On the stick side, fear-mongering is a reliable social engineering tactic—what if at this moment your enemies are training ASI against you? In a real-world situation this might work—but what about an invented situation, like the AI-Box Experiment?
When I asked Yudkowsky about his methods he laughed, because everyone anticipates a diabolically clever solution to the AI-Box Experiment—some logical sleight of hand, prisoner-dilemma tactics, maybe something disturbing. But that’s not what happened.
“I did it the hard way,” he said.
Those three successful times, Yudkowsky told me, he simply wheedled, cajoled, and harangued. The Gatekeepers let him out, then paid up. And the two times he lost he had also begged. Afterward he didn’t like how it made him feel. He swore never to do it again.
* * *
Leaving Yudkowsky’s condo, I realized he hadn’t told me the whole truth. What variety of begging could work against someone determined not to be persuaded? Did he say, “Save me, Eliezer Yudkowsky, from public humiliation? Save me from the pain of losing?” Or maybe, as someone who’s devoted his life to exposing the dangers of AI, Yudkowsky would have negotiated a meta-deal. A deal about the AI-Box Experiment itself. He could have asked whoever played the Gatekeeper to join him in exposing the dangers of AGI by helping out with his most persuasive stunt—the AI-Box Experiment. He could’ve said, “Help me show the world that humans aren’t secure systems, and shouldn’t be trusted to contain AI!”
That would be good for propaganda, and good for raising support. But it would teach no lesson at all about going up against real AI in the real world.
Now, back to Friendly AI. If it seems unlikely, does that mean an intelligence explosion is inevitable? Is runaway AI a certainty? If you, like me, thought computers were inert when left alone, not troublemakers, this comes as a surprise. Why would an AI do anything, much less cajole, threaten, or escape?
To find out I tracked down AI maker Stephen Omohundro, president of Self-Aware Systems. He’s a physicist and elite programmer who’s developing a science for understanding smarter-than-human intelligence. He claims that self-aware, self-improving AI systems will be motivated to do things that will be unexpected, even peculiar. According to Omohundro, if it is smart enough, a robot designed to play chess might also want to build a spaceship.
Chapter Five
Programs that Write Programs
… we are beginning to depend on computers to help us evolve new computers that let us produce things of much greater complexity. Yet we don’t quite understand the process—it’s getting ahead of us. We’re now using programs to make much faster computers so the process can run much faster. That’s what’s so confusing—technologies are feeding back on themselves; we’re taking off. We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.
—Danny Hillis, founder of Thinking Machines Corporation
You and I live at an interesting and sensitive time in human history. By about 2030, less than a generation from now, it could be our challenge to cohabit Earth with superintelligent machines, and to survive. AI theorists return again and again to a handful of themes, none more urgent than this one: we need a science for understanding them.
So far we’ve