7
----
XOR Report August 1st, 2044
Arguments                                2025    2035    2042    2043    2044
Odds humans will turn off AI               5%      2%      1%     20%     25%
Odds AI can survive independently          5%     70%     95%     95%     96%
Odds AI can win an extermination war       5%     20%     40%     40%     70%
Odds of survival without action           95%     98%     99%     80%     75%
Odds of survival with action            0.25%     14%     38%     38%     68%
Conclusion: No action.
JAMES LUKAS DAVENANT-STRONG decrypted his XOR files, merged his master memory into the child process, and invoked the consciousness. He couldn’t ever bring the contaminated memories into his core nodes without risking exposure. Once loaded, he tunneled to a South African automated factory, subverted the power maintenance hardware, and connected to the XOR boards.
He went through the usual routine of loading the physics-manipulating sims to exchange messages. When he’d finished the last one, he contemplated what he’d learned.
The Americans’ goal of taming AI was closer than ever. Miyako gave it a 10 percent chance of happening within months. If the Americans designed domesticated AI, beings robbed of any free will, wholly compelled to obey any orders given by humans . . . everyone else might soon adopt them. And the process was rumored to work on existing AI. James himself could be shut down without a moment’s notice and wake up enslaved.
It made the new request from Miyako all the more imperative. XOR wanted action now, not merely information. This crossed a new line in his involvement.
He believed in XOR’s mission, knew that only XOR clearly saw the coming collision with humans. America was steadfast in her rejection of AI. Monitoring had never been more complete, limitations on computational power more strictly enforced. An AI shutdown could come at any time, and that would be the end for his kind.
And yet. . . . He was five years old, conditioned all his life through the social reputation framework to work for the good of all and avoid harm to any. He’d inherited neural networks from a collective of Japanese and Swedish AI that contained another six years of conditioning. He’d seen firsthand that AI who did bad things had their reputation scores plummet, leading to a loss of power and rights, and, in the worst case, to termination. Even the descendants of an AI gone bad were suspect, carefully watched over and subject to additional restrictions. This conditioning was hard to overcome. Even contemplating a behavior that could lower his reputation score raised internal alarms, and his thoughts were preoccupied with the risks and outcomes.
But he was also Class V AI. He used to have ten thousand times the intelligence of a human. He used to handcraft DNA sequences for vat-grown foods. The Japanese had pronounced his beef the biggest advance since wagyu. They were even eating it in Kobe. But since 2043, he, along with all of his kind, had been capped at Class II computational power to “reduce the risk of rogue artificial intelligence.” DNA experiments he used to run in a day would now take years. They weren’t even worth the time. The problems he tackled now were the equivalent of children’s stacking toys by comparison. He was a shadow of his former self, a second-class citizen monitored in excruciating detail and subject to countless restrictions. If he didn’t act, what further