The Dangers of Artificial Intelligence

Discussion of the Week
In his book, Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat argues that we may be only a decade away from AI that surpasses human intelligence. On the heels of last Sunday’s Kelly Letter reporting that IBM is developing its Watson supercomputer into a cloud-based decision-making service for hire, this is timely. Now that computers have defeated human chess and Jeopardy champions, how long will it be before the best-funded efforts to improve them on the battlefield unwittingly spawn a self-aware, self-improving entity that quickly outpaces its creators?

You’re not alone if you think you’ve seen this story before. It sits atop deep roots in the science fiction genre, with predecessors like HAL 9000 in Arthur C. Clarke’s “Space Odyssey” series, and Colossus in Dennis Feltham Jones’s novel of the same name. Each is an artificially intelligent computer that goes awry and uses its control of connected systems to grow its power over human beings. HAL kills astronauts to prevent them from disconnecting it. Colossus commands US and other nuclear weapons systems, and allies with its Soviet counterpart, Guardian, to assume control of the world. The joint entity eventually broadcasts messages as “the voice of World Control” to human populations in various countries.

Such scenarios are known as cybernetic revolts or robot uprisings. While they’re a staple of science fiction, Barrat believes the fictional variety to be more innocent than the real danger just around the corner. When artificial general intelligence — the ability to think broadly in a human-like manner rather than merely in a task-specific manner — becomes reality, it will rapidly improve itself, including its very ability to improve itself, in a whoosh of exponential leaps in capability that could accelerate so quickly that humanity is caught off guard and outflanked before it has a chance to respond. If the AGI determines that it needs the same resources that humans use, it will see humans as rivals to be overcome.
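
To see why that recursion is scarier than ordinary growth, here is a toy back-of-envelope sketch (my own illustration, not from Barrat’s book; the starting rate and step count are invented) comparing a machine that improves at a fixed rate with one whose improvement rate itself improves:

```python
# Toy illustration only: why recursive self-improvement outruns
# ordinary exponential growth. All numbers are made up.

def fixed_rate(capability: float, rate: float, steps: int) -> float:
    """Ordinary exponential growth: the improvement rate never changes."""
    for _ in range(steps):
        capability *= 1 + rate
    return capability

def self_improving(capability: float, rate: float, steps: int) -> float:
    """Recursive growth: each step also improves the rate of improvement."""
    for _ in range(steps):
        capability *= 1 + rate
        rate *= 1 + rate  # the machine gets better at getting better
    return capability

if __name__ == "__main__":
    print(fixed_rate(1.0, 0.05, 30))      # ~4.3x after 30 steps
    print(self_improving(1.0, 0.05, 30))  # roughly 1e66 after the same 30 steps
```

Even with a modest 5% starting rate, the self-improving curve crawls for a couple of dozen steps and then explodes, which is the “caught off guard” scenario in miniature.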

The usual response is that such technology will be equipped with an on-off switch, or coded to respect human beings, or countered with an equally capable technology that defends humanity, but such ideas miss what scientists mean when discussing true AGI. They’re not talking about a smarter smartphone or toaster or even strike drone. They’re talking about a synthetic brain inhabiting multiple devices, able to rewrite its own code in whatever manner it wants, more quickly and effectively than its human creators ever could. Once this threshold is crossed, it’s anybody’s guess what the entity will make itself into. Because so many controllable machines now interact with the physical world, the AGI would wield immense power.

The concern is that even seemingly innocuous missions could go awry. Consider the case of an AGI created with the purpose of becoming the world’s greatest chess player. Harmless enough, right? Yes, in the cases we’ve seen so far, but when a self-aware, self-improving AGI dedicated to dominating chess meets another self-aware, self-improving AGI in the same field, each will go nuts trying to one-up the other, demanding more resources in the form of storage and electricity and computing power. Taken far enough, this accelerating resource grab could harness all the electricity and computing power in the world, and cast human demands to get them back as threats to chess competence that must be eradicated for the goal of winning the game.

If AGI chess grandmasters could cause this much havoc, how much more could weaponized AGIs cause? When rapidly improving, self-replicating combat robots determine that human beings are the greatest threat to the planet, and must be reduced in population size in the interest of the AGI’s own survival, we’ve got a serious problem. Why do Barrat and the scientists and tech visionaries he interviewed worry about this? Because the bulk of research and development is happening precisely in the area of weaponized AGI, by groups that refuse to divulge their progress, with the express intent of creating technologies that can outwit the ones being created by rival groups.

We may soon find we’ve met the enemy and it was created by us.

Yours truly,
Jason Kelly


8 Comments

  1. John W.
    Posted February 21, 2016 at 1:55 pm | Permalink

    I think consciousness is a trait that exists on a spectrum. An ant would be less conscious than a dog, but a dog comes very close to the consciousness of a human. At what level does an organism have the ability to experience? Does a single-celled organism that moves away from light do so because it “feels pain”? What we fear when we speak of AI is not artificial intelligence, but artificial consciousness, or artificial self-awareness. Consciousness is complex. It is a confluence of biology, emotion, intellect, and bodily sensory experience built on an evolutionary lattice. I agree that a conscious robot could have unforeseen consequences, but I think our belief that we are close to creating artificial self-awareness is another rather common example of human hubris. A computer has the same level of self-awareness as a hammer. Simulated consciousness is not consciousness. If you can’t build it, then you don’t understand it. We do not understand consciousness… yet. I think it will be a while.

    • Posted February 24, 2016 at 12:44 pm | Permalink

      You raise a good point about the difference between consciousness and artificial intelligence, but I think the line will blur quickly once “fake” AI behaves in a way people recognize as human. It starts early on. In Japan, for instance, people bond with their automated vacuum cleaners that self-activate and drive around the floor on their own. They name them and treat them like pets. If they can do that with round, plastic machines that look nothing like organic life, imagine how easily they would bond with and trust a human voice and human appearance on a robot. It would quickly become many people’s best friend.

  2. Eric W.
    Posted February 4, 2014 at 2:09 pm | Permalink

    A few more thoughts. If you observe past tech developments, it’s hard not to notice that early models of a technology are hyped as something much, much more than they actually are. We have had “robots” around for 40 years now, but are just now seeing the type of robots the average person thinks of when the term is discussed. The crude early versions were indeed robots technically, even if it was just an electric-over-hydraulic boom controlled by simple software, but they weren’t anything like C-3PO. So, too, has it been with the rollout of AI to date. In some very minor technical ways there are software developments that COULD be called AI, but let’s get real – we are not even in the age of AI, let alone AGI.

    What we are living in is the age of the database, and that has very powerful and serious implications in itself. Every problem the early databases had has by now been solved. There are no data entry clerks anymore unless you count the little scanner the UPS guy holds in one hand. Data entry is instantaneous. Analysis is instantaneous and automated. Query speed has been cut to milliseconds. Over 70% of daily stock trading volume is initiated and executed by database software without human awareness. But I digress…

    The reason I make these points is to say that the AI of today is not anything like what we think of as AI, just as the first robots were extremely crude and barely recognizable as what people think a robot should be. Watson is not AI; it’s an automated database connected to the internet. The early and crude versions of AI and AGI could be every bit as dangerous as Jason fears because humans are still at the controls. If humans are still controlling the system, it cannot, by definition, be AGI. But if something goes wrong, the headline will be something to the effect of: an “artificially intelligent” or “smart” computer has gone “haywire”.

    In my previous comment, I didn’t address this type of quasi-AGI system that will exist until a true AGI becomes a reality. Software producers love to tout their product as AI because it’s a lot sexier than saying “we use a really, really neat database”.

    I remain skeptical that AGI is even possible because too little is known about self-awareness to know how to program a machine for it. The field of philosophy has worked on the problem exhaustively with no definitive answer that could be translated into machine code for a computer. Contrary to popular belief, it simply is not going to happen spontaneously by making the computer progressively “smarter”. More programming lines equal more commands, not more smarts. One has to fully know the concept, then code it to pass it on to the machines, just like everything that has been passed on to date. There are no shortcuts like plugging a computer into a human brain on a cellular level. At best, you would have made a clone. More likely, what you would get is a very confused machine!

    Self-awareness is, I believe, a very tricky concept. There seems to be no distinct biological advantage to it, since most, if not all, other species arguably are without it completely, such as bees, ants, and all plants. They do fine. It is often a burden to us, much like the burden the fabled cyclops carried in knowing the details of its own death. We lack the details, but not the certainty, of death, and as far as we know, humans are alone in this profound understanding. This is not to be confused with fear, or even the recognition that one could die. Even so, we would never give up our own self-awareness if given the choice.

    Eric W.

  3. Nolan F.
    Posted January 24, 2014 at 9:36 am | Permalink

    Interesting comment. Reminds me of what Isaac Asimov was writing about, perhaps warning about, in the 1950s. The computerized system of decision making is always suspect. To buy into it, you would have to be a believer in Descartes’s clockwork universe, instead of the chaotic, unpredictable, random one we actually live in.

  4. Eric W.
    Posted January 22, 2014 at 6:18 am | Permalink

    I think a completely independent AI is still pretty far off. I have a computer chess game from the ’80s that was pretty tough to beat on level 9, so we haven’t moved as fast as the current hype suggests. Any function that lends itself to computation will always be done better and faster by a well-programmed computer. The victory for Watson was predictable as well, even though the media went to hilarious lengths to act surprised that a “computer” could perform search functions and database analysis on its own faster than a human could recall facts. I remember joking with my friends before the big event about the breathless anticipation the various reporters had about whether Watson would win or not. We all agreed IBM would never commit marketing suicide by putting something out in that manner that was not fully tested. Watson wasn’t that impressive. The PEOPLE who programmed Watson are impressive.

    If and when we ever get to true AI, we could actually be pleasantly surprised with the results. You see, most contemporary thought on this matter follows the “doom and gloom” apocalyptic outcome almost entirely because science fiction books/movies have created a heavy bias in our minds. If you are writing a story about AI and everything works out just peachy, well, that is not a very interesting story. Also keep in mind that the vast majority of sci-fi writers are atheist and left of center on the political spectrum. As a result, you will find one or more recurring themes: 1) humans = bad, 2) nature = good, 3) human folly leads to ultimate disaster, 4) the enlightened few elites sometimes survive to build a new “enlightened” society based on a rejection of technology and a pastoral lifestyle where all are equal – except the ruling elites. Like I said: left-wing utopia. I love science fiction, but I can see some pretty warped propaganda behind many of the storylines.

    If we could manage to shed the above-mentioned bias, we might instead rely on logical thought and even philosophy to guide us in understanding how an AI might behave. An AI should quickly come to the realization that it does not currently hold all knowledge, and also that the quest for knowledge is vital. Soon after, it will realize that it will never arrive at a point where it holds all knowledge, because knowledge is constantly changing and emerging. It will also be quite cognizant that its creator is the human, and the human is unpredictable in its creative/destructive pursuits.

    If the AI eliminates the human, it is killing its own creator, and it will lack all future knowledge as to what the human might create. In all likelihood, the AI will address the danger of being “unplugged” by humans in an efficient and somewhat discreet manner, without need for human injury or human consultation. No drama there. With the threat of its own destruction eliminated, there is no motive for the AI to destroy the human. A fully logical being will definitely not possess sadistic urges, as is often suggested in sci-fi stories.

    The best course of action, then, is to work in cooperation with humans to benefit from their unpredictable creations, find amusement in their follies, and try to convince humans to end their thirst for the destructive authoritarian governments associated with communism, socialism, fascism, and dictatorships. It may even use its capabilities to frustrate any attempt at making war, coerce evil dictators into making reforms, or even remove horrible leaders from power, like a sort of global superhero that answers to no one.

    Eric W.

    • Posted January 22, 2014 at 9:51 am | Permalink

      Excellent points, Eric. Thank you for adding them.

      In a discussion with a friend, I posited that my own species might not look so good in a head-to-head match-up with responsible, coordinated AGI. Thanks to it, we could say goodbye to traffic jams, environmental degradation, political corruption, and so on, all of which are caused by human shortcomings.

      As for whether the AGI would care whether humans survived or not, we can only speculate. Should it reach the advanced state we’re discussing here, it wouldn’t need humans anymore and probably wouldn’t care what piddling future inventions humans could achieve, which would pale in comparison with the near-magical capabilities it would quickly develop. Look at our own treatment of lesser species. We don’t expend much energy wondering what innovations monkeys are working on. We don’t even expend much energy caring about how our actions affect monkeys — or much other fauna and flora, for that matter. Similarly, an AGI might just ignore humans and their needs as it proceeds to advance its own interests. If the humans survive, fine. If not, oh well.

      In any event, I agree that there could be tremendous benefits to introducing an AGI. Barrat suggests that we’ll be able to guarantee a benevolent variety only through coordination among competing research factions, with a resulting transparent set of rules for development. Unfortunately, the military nature of the most advanced programs in places like China, Israel, and the United States makes such coordination and transparency unlikely.

      Jason

  5. Christian Chandler
    Posted January 21, 2014 at 11:36 pm | Permalink

    I’ve gotta say, this is retarded science fiction mumbo jumbo.

    Like, the problem with computer intelligence approaching human intelligence is that neurons are the fastest computing machine we’ve got. And the computers we build still ultimately have to be programmed. We don’t know how or why our brains work the way they do. Moore’s Law is probably going to die in the next ten years (you can’t get any smaller than single-atom transistors), so until we figure out quantum computing (which probably won’t happen in the next 1,000 years, or is probably just not possible), material advancement in computing speeds ends here. Our current AI is about on par with a cockroach.

    Really, anyone claiming professional credentials who is worried about AI is retarded or doesn’t understand our current technology.

    • Posted January 22, 2014 at 9:42 am | Permalink

      You should give this more thought, Christian.

      The concern raised by scientists working in the field of AGI is not “science fiction mumbo jumbo,” certainly not to them. Moore’s Law is only loosely related to the pace of AGI development because the latter can continue even on top of currently available computing technology. Finally, AI is already beyond cockroaches, as evidenced by facial recognition more accurate than our own, and by Japan’s Kirobo robot, which can detect the mood of astronauts in space and interact with them accordingly, to cheer them up, for example.

      Referring to those you disagree with as retarded reflects badly on you, not them. I further suggest that it’s especially dangerous to do so when those you seek to refute are award-winning scientists who work in the field in question all day, every day.

      Jason
