Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
uploaded into computers. That’s another route to AGI and beyond, sometimes confused with reverse engineering the brain. Reverse engineering seeks to first complete fine-grained learning about the human brain, then represent what the brain does in hardware and software. At the end of the process you have a computer with human-level intelligence. IBM’s Blue Brain project intends to accomplish this by the early 2020s.
    On the other hand, mind-uploading, also called whole brain emulation, is the theory of modeling a human mind, like yours, in a computer. At the end of the process you still have your brain (unless, as experts warn, the scanning and transfer process destroys it) but another thinking, feeling “you” exists in the machine.
    “If you had a superintelligence that started out as a human upload and began improving itself and became more and more alien over time, that might turn against humanity for reasons roughly analogous to the ones that you are thinking of,” Yudkowsky said. “But for a nonhuman-derived synthesized AI to turn on you, that can never happen because it is more alien than that. The vast majority of them would still kill you but not for that. Your whole visualization would apply only to a superintelligence that came from human stock.”
    *   *   *
    I’d find in my ongoing inquiry that lots of experts took issue with Friendly AI, for reasons different from mine. The day after meeting Yudkowsky I got on the phone with Dr. James Hughes, chairman of the Department of Philosophy at Trinity College, and the executive director of the Institute for Ethics and Emerging Technologies (IEET). Hughes probed a weakness in the idea that an AI’s utility function couldn’t change.
    “One of the dogmas of the Friendly AI people is that if you are careful you can design a superintelligent being with a goal set that will become unchanging. And they somehow have ignored the fact that we humans have fundamental goals of sex, food, shelter, security. These morph into things like the desire to be a suicide bomber and the desire to make as much money as possible, and things which are completely distant from those original goal sets but were built on through a series of steps which we can watch in our mind.
    “And so we are able then to examine our own goals and change them. For example, we can become intentionally celibate—that’s totally against our genetic programming. The idea that a superintelligent being with as malleable a mind as an AI would have wouldn’t drift and change is just absurd.”
    The Web site of Hughes’s think tank, IEET, shows they are equal-opportunity critics, suspicious not just of the dangers of AI, but of nanotech, biotech, and other risky endeavors. Hughes believes that superintelligence is dangerous, but that the chances of it emerging in the short term are remote. Even so, it is so dangerous that the risk must be ranked alongside imminent threats, such as sea level rise and giant asteroids plunging from the sky (both go in the first category in H. W. Lewis’s ranking of risks, from chapter 2). Hughes concurs with my other concern: the baby steps of AI development leading up to superintelligence (called “god in a box” by Hughes) are dangerous, too.
    “MIRI just dismisses all of that because they are focused on god jumping out of a box. And when god jumps out of a box there is nothing that human beings can do to stop or change the course of action. You either have to have a good god or a bad god and that’s the MIRI approach. Make sure it’s a good god!”
    *   *   *
    The idea of god jumping out of a box reminded me of other unfinished business—the AI-Box Experiment. To recap, Eliezer Yudkowsky played the role of an ASI contained in a computer that had no physical connection to the outside world—no cable or wires, no routers, no Bluetooth. Yudkowsky’s goal: escape the box. The Gatekeeper’s goal: keep him in. The game was held in a chat room by players
