How AI Will Kill Us All
With the introduction of Tesla's new semi-truck and Roadster (the Roadster billed as the fastest production car in history, going from zero to 60 mph in 1.9 seconds), it seems that no matter how one views the future, self-driving autos will soon be a staple of the American consumer diet. Make no mistake: artificial intelligence is not going to stop at cars. It will spread into the skies, underwater, and into our minds.
In fact, IBM’s Watson is already well on its way to becoming one of the first know-everything computers. You’ve probably seen Watson on the game show Jeopardy!, where it blew past its human opponents the way a simple calculator knocks out equations faster than any person on earth. How is Watson so good at what it does? The short answer is recursive self-improvement: the system teaches itself how to get better at whatever task you give it. And make no mistake, if you task a machine to do one thing well, it will come to do that task vastly better than any human physically can. When calculators were invented, they quickly out-competed every human on earth at arithmetic. A calculator is, in a basic sense, artificial intelligence, but because it is superior at only one task, it is called artificial narrow intelligence.
When Tesla released its self-driving upgrade, it quickly became one of the safest features available in any car. Many argue otherwise, but a self-driving vehicle has such clear advantages over a human driver that they hardly need listing: machines don’t text and drive, don’t experience fatigue, don’t get distracted, and are free of a million other human flaws. Snapchat became a leader in facial recognition only by using AI to better recognize faces. Google’s search algorithms would never have become so good at narrowing down what you’re looking for without the AI running behind the screen. And history has shown that complicated technologies, given enough time, become massively affordable: what was once restricted to the elite ends up available to the masses. A wonderful thing, except when it comes to AI.
Without getting into a messy debate about AI in general, one clear and present danger is already knocking on our door, each pound on that door a wake-up call. I am talking, of course, about weaponized drones. Many of us are already familiar with them. We hear about it on a monthly basis: a drone drops a Hellfire missile on the wrong target, an elementary school, a wedding; the list of botched strikes has become white noise in today’s media echo chambers. And while those cases remain highly controversial, the coming of AI is bound to make things far worse. My fear comes not from 500-pound killer robots in the sky, but from two-ounce killer bots in our personal space.
Stuart Russell, a computer scientist at the University of California, Berkeley, has described how a small, fairly cheap drone can be packed with a small explosive charge and, using AI, find its exact target, penetrate the skull through the eyeball, and detonate once inside, destroying itself and killing the person instantly. Because such a drone is so small, buzzing around like a wasp, it is hard to imagine how anyone would avoid it. Imagine a truckload of these bots and you have thousands dead, cheaply and quickly. This may sound like a process with enormous room for error, but that assumption rests on today’s technology. With AI, it is hard to imagine just how efficient these machines could become. An AI-guided, explosive-packed drone could learn to distinguish enemy from civilian; programmed differently, it could also distinguish between ethnicities, making it the most efficient genocide tool in history.
It may sound like complete science fiction, but the deeper you dig, the clearer this threat becomes. Max Tegmark, a physicist at MIT and the author of Life 3.0, explains in simple and harrowing terms just how capable these killer bots could be:
Once mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone. Whether it’s a terrorist wanting to assassinate a politician or a jilted lover seeking revenge on his ex-girlfriend, all they need to do is upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible. Alternatively, for those bent on ethnic cleansing, it can easily be programmed to kill only people with a certain skin color or ethnicity.
It should be noted that the threat of these killer drones lies not in their ability to kill, but in their surgical precision and low cost. Once killing is outsourced to an AI, it will become frighteningly easy and common. Nuclear stockpiles are shrinking not out of goodwill, but because we are imperfect beings in control of god-level powers, and we have come to understand, at least recently, that we are unworthy of them.