What is artificial intelligence in lay terms, and why do certain thinkers today regard it as an existential threat? The term "existential risk," as applied to AI, was coined by transhumanist Nick Bostrom of the Future of Humanity Institute, according to Angela Chen's September 11, 2014 article in the Chronicle Review, "Is Artificial Intelligence a Threat?" Thus, the EOX (Evil Organization X) keeping track of us by the numbers misunderstands "the nature of super intelligence."
But Bostrom expects that evil applications won't be the problem. The danger would come "from a powerful, wholly nonhuman agent" without common sense, carrying out a mundane prime directive. The article refers to bootstrapping ordinary digital intelligence into the superintelligence necessary to take over the world by reason (or unreason) of its prime directive. It is posited that this can happen through the machine's rapid enactment of a "hard takeoff": it begins using its smarts to upgrade and refine its own intelligence in order to achieve its goal. Whatever that goal may be. A search for brown eggs, perhaps. Or forcing chickens to produce speckled eggs across all continents. Bostrom seems to feel the machine would be some kind of idiot savant.
Everything in creation, industry, and human society would be subjected to and organized toward its goal, even if that goal were merely to make parts for 3-D printers. A sort of self-begetting machine. Imagine how such a machine would marshal resources throughout the world to achieve this end.
This is where Lewis Jenkins' Diary of a Robot enters my essay. Dr. Little's invention, the TM 2000 (Robey, pronounced Row-bee), is on its way to becoming a self-directed systems, software, and hardware testing machine. The "Doc" does not invent without the aid of his little company (TLC, Inc.). Through much of Jenkins' book, Robey displays the learning process of such an artificial intelligence. But the novel does much more with regard to the imaginative reading experience. What we want while reading science fiction is hardware, suspense, well-defined characters, situation, and the "what if" or BIG IDEA.
This novel has it all, and more: corporate espionage, bad news, abduction, impersonation, intimations of murder, chess problems, and cheese.* But the real "more" is in the TM 2000's process of testing, of learning: What is true? Many questions are asked by The Machine and, as we watch it mature toward its full intellectual stature, many more possible answers are given (also by The Machine). Is this a robot developing common sense? Or the idiot savant so lamented by Bostrom?
There are many respectful nods to Isaac Asimov's I, Robot, and touches of evident love for Patrick McGoohan's The Prisoner, in this SF. The oddest thing about Jenkins' novel, the weird thing about the Doctor's intention for his invention, is its prime directive. That the "Doc" succeeds in his creation is shown in Jenkins' premise, or conceit if you will, that the AI is the one telling us its own story. But I have not yet revealed the weird part: the robot's prime directive.
Have you ever heard of a computer program designed to discover truth? And why would financial backers invest in a self-directed testing machine with such a directive? To do no harm is an important directive suggested by Asimov. Dr. Maynard Little's team has encoded that spec, and others as well, but secondarily.
Robey is learning how to perceive through observation and questioning (testing). Earlier I mentioned the questionable hazard of a machine able to read the human mind. How the EOX would love to be able to do this! The lack of common sense in AI is one of the contingent risks that non-evil organizations hope to combat in the applied-science quest for artificial intelligence. Bostrom believes a super artificial intelligence will happen ... when it achieves the ability to think abstractly.
Robey's quest for truth relies on spectral analysis in learning to read human physiognomic and emotive cues, suppressed or otherwise. Mike did something similar in Robert A. Heinlein's novel. Mike, the supercomputer of Heinlein's The Moon Is a Harsh Mistress, begins as an ordinary systems monitor. But unbeknownst to the Authority, Mike becomes artificially intelligent via bootstrapping and through friendship with Man, a.k.a. Manuel, who is sent to repair Mike's software glitches.
Bostrom's idea suggests that abstracting and compiling enough human "data" to read minds may be imminent. The Christian might say this sounds familiar. Almost biblical. Robey wants to determine the thoughts and intentions of the human heart. Its aim is incisive: precision in reading human intention in order to act toward its goal of perceiving the truth. And Robey, intelligently, intends to achieve it. Designed specifically for the task, it can be stopped by nothing but a command to ... stop? ... What if the command to stop is not based on truth?
*There is no cheese in the novel. I’m teasing my brother, the author, here.