Introducing AI: Master Criminal, Super Sleuth
By Danny Tobey · January 16, 2020
The detective will see you now. He can read minds, see through walls, and guess your desires from a facial scan. He knows when you’re lying before you do. He never sleeps and his brain’s in the cloud. You have nothing to worry about.
In the world of crime and punishment, never has the double-edged sword of technology been doubly sharper than in the case of Artificial Intelligence. Police are using AI to stop crime before it happens (think Philip K. Dick—what could go wrong?). And criminals are inventing new types of crime on the cutting edge of AI technology.
Cut to March 2019: The manager of an energy company takes an urgent phone call from his boss—they owe a vendor $243,000 or supply shuts off. Wire the funds, ASAP. How could he know that voice—the familiar accent, the same singsong melody he’d heard for years—was an AI fake, criminals channeling his boss with perfect fidelity? How can a crime fail when it sounds just like the people you know and trust?
But forget your lying ears—seeing is believing, right? Of course, AI now creates deepfake videos, making your face say whatever it wants, using as few as 300 images. The average American is photographed 75 times a day, according to Crimefeed. So the next time your friend FaceTimes you, ask them what they ate for lunch with you last Tuesday.
Well then. If you can’t trust sight or sound to avoid crimes of the future, touch is good, right? After all, the famous literary blind detective Max Carrados outsold Sherlock Holmes, solving crimes through touch. Then again, researchers at the University of Chicago used software to “reconstruct the response of more than 12,500 nerve fibers with millisecond precision, taking into account the mechanics of the skin as it presses up against and moves across objects.”
Translation: everything is code. Sight, sound, taste, touch, smell. Sherlock Holmes chided Watson, “You have seen but not observed.” AI chides back: You have observed what I made you see.
Why should we be surprised? After all, AI began with a fraud, one solved by no less than Edgar Allan Poe, after it fooled the likes of Catherine the Great, Napoleon, and Benjamin Franklin. A steampunk bot known as the Mechanical Turk, playing chess with its spindly left hand.
As it turned out, the Mechanical Turk was neither Mechanical nor Turk. Poe cracked it, at least in part, noticing that strange left-handedness (sinister!) and the odd disappearance of a man named Schlumberger, “who has no ostensible occupation other than that of assisting in the packing and unpacking of the automata”—and who conveniently disappeared every time the machine began to play.
Launching his literary detective career, Poe deduced: “This man is about the medium size, and has a remarkable stoop in the shoulders.” Perhaps from squatting inside fake robots? He concluded there was “a man within” the machine. Fast forward two hundred years: the problem is no longer the person within the fake machine. It’s the machine within the fake person.
I deceive, therefore I am.
Not only was AI born in fraud, but to this day, the greatest test of AI achievement is based on deception. For what is the Turing Test, if not a tournament of lies? Set up a ring of computers, some AI, some mere fronts for human performers, and let the judges see if they can tell which is which. If a chatbot can fool the judges into thinking it is human, then it has achieved intelligence. I deceive, therefore I am. The first winner pretended to be a 13-year-old boy named Eugene.
As it turns out, AIs don’t even need to be taught to lie. They come by it quite honestly. Facebook created negotiating bots that learned to haggle. A favorite technique? Pretending to want something of low value to trade it away later—a classic “gold brick” scam that would make natural hucksters like Roy Dillon and Huck Finn smile. Meanwhile in Switzerland, miniature bots were programmed to flash lights to signal food when their genetic cousins were near—on their own, they learned to flash their lights far away from food to fool genetic strangers into robotic starvation.
Not only do AIs lie—they can go criminally insane. Everyone knows about Tay, that charming Microsoft chatbot that was supposed to learn through “casual and playful conversation” with humans on the web. It only took the internet one day to turn it into a vitriol-spouting monster, like a living, breathing comments section.
Scarier still is the case of an AI literally born from madness: researchers at MIT trained an AI on a particularly obscure and troubling back alley of Reddit obsessed with death. Then they showed their creepy AI a series of Rorschach inkblots. Where a normal AI saw a “black and white photo of a small bird,” MIT’s psychotic AI saw “A man gets pulled into a dough machine.” Where the normal AI saw “A black and white photo of a baseball glove,” the psycho AI saw “Man is murdered by machine gun in broad daylight.” A fascinating experiment, but what happens when the psycho AI gets control of the air traffic control system?
So who will win the crime/criminal race: the algorithms that predict crime or the ones that predict the predictions? And when crime-fighting AI is trained on biased historical data, how will we protect the innocent from being wrongly profiled for “future crime” under the guise of science? This isn’t science fiction—ProPublica and others have already detailed racial bias in computerized risk scores for bond setting, criminal sentencing, and beyond. On the bright side, algorithms have combed through massive databases connecting cases and breaking up crime rings around the globe. Who decides whether the tools will be used for good or evil? When one coder came up with a new facial recognition program to catch robbers with partially covered faces, someone asked him if dictators could use the same tool to quash democratic protests. His answer: “I actually don’t have a good answer for how that can be stopped … it should only be used for people who want to use it for good stuff.”
Only time will tell if good stuff wins the day. And how fast will all this happen? Don’t believe anyone who says they know. In 2015, Oxford researchers asked the world’s experts when AI would beat humans at the ancient Chinese game Go. The experts said 2027. It happened a few months after the survey, in 2016. Buckle up, it’s going to be an interesting ride.
By the way, what did those clever AI researchers name their psychotically-trained AI?