We created technology to solve a problem: our lives were complicated. Someone decided there had to be a better way to take care of life’s tasks than what was available at the time. One genius gave us movable type, another developed the assembly line, and then Alan Turing entered the thunderdome. Turing, a young British wunderkind in his own right, posited that if humans can use their own knowledge to solve problems and make decisions, why couldn’t a machine do the same?
In his 1950 paper, “Computing Machinery and Intelligence,” Turing suggested that if a computer could convince a reasonable percentage of the people who interacted with it that it was human, and not a computer, it could be regarded as “intelligent.” As we’ve come to witness in recent years, computers are more than capable of passing the Turing test, but that doesn’t necessarily mean they can think and decide without being programmed to do so.
Artificial intelligence, or “A.I.” as it’s more commonly referred to, has made its way into the hearts and minds of many a company in the tech industry, largely through “machine learning,” a process by which an A.I. platform can be taught to understand, sort through, and analyze data at exponentially greater rates than a human can. In the field of medicine, healthcare providers have looked to artificial intelligence to help bogged-down hospital administrators sort through “piles” of patient data. As we speak, the automotive industry is investing an enormous amount of capital in improving autonomous vehicles that will transport humanity both figuratively and literally into the future.
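To make the idea of “teaching” a platform concrete, here is a minimal sketch of that repetition-driven learning: a tiny perceptron shown the same labeled examples over and over until its weights settle. The data and the rule it learns (logical AND) are entirely made up for illustration; real systems use the same principle at vastly larger scale.

```python
# A toy perceptron "learns" a rule by seeing the same labeled examples
# repeatedly and nudging its weights after every mistake.
# The data and target rule (logical AND) are purely illustrative.

def train(samples, labels, epochs=50, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):                  # feed the data over and over
        for x, y in zip(samples, labels):
            total = sum(w * xi for w, xi in zip(weights, x)) + bias
            pred = 1 if total > 0 else 0
            err = y - pred                   # 0 when the guess is right
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]                        # the AND rule to be learned
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])   # matches labels after training
```

Nothing here is “understood” in a human sense; the repetition simply pushes the numbers toward values that reproduce the examples, which is exactly why feeding a model misleading examples (or misleading inputs) can steer its behavior.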
But what if it were possible for someone to trick an AI program into harming an actual person? These programs simply follow what they are told to do, unless of course they are actually sentient, which has yet to be the case. In a recent study conducted by Tencent’s Keen Security Lab, several engineers were able to take over a Tesla Model S running Autopilot and command it to switch lanes right into oncoming traffic. Of course, this was purely a test and no one was harmed, but it’s terrifying when you think about it. Per the details of the report, Keen’s researchers needed only to place stickers on the road, resembling a lane marking; the Tesla’s Autopilot system detected the stickers and changed lanes.
The basic notion of machine learning, which we touched on earlier, is feeding computers information over and over until they “learn” to use the info and act accordingly. Where AI gets tricky is if a hacker were to gain access to the backend of an AI program and instruct it to do something potentially lethal. In the Tesla example, a hacker could easily crash the car, which could be carrying anywhere from one to five passengers at any given moment.
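The stickers-on-the-road trick is an example of an adversarial input: a small, deliberate change to what the model sees that flips its decision. A minimal sketch of the idea, using a hypothetical linear “lane detector” with made-up weights (nothing like Tesla’s actual system), where a tiny crafted nudge to each pixel plays the role of the stickers:

```python
# Toy illustration (hypothetical, not Tesla's real pipeline): a linear
# classifier scores a strip of road pixels, and a small crafted
# perturbation -- analogous to Keen Lab's stickers -- flips its decision.

def score(weights, pixels):
    """Linear classifier: a positive score means 'lane marking present'."""
    return sum(w * p for w, p in zip(weights, pixels))

def adversarial_nudge(weights, pixels, eps):
    """For a linear model, push each pixel a small amount (eps) in the
    direction that most increases the score (the stickers, in effect)."""
    return [p + eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

# Hypothetical learned weights and a clean patch of road (no lane marking).
weights = [0.9, -0.4, 0.7, -0.2, 0.5]
road = [0.1, 0.8, 0.2, 0.9, 0.1]

clean = score(weights, road)                      # negative: no lane here
stickers = adversarial_nudge(weights, road, 0.3)
fooled = score(weights, stickers)                 # positive: phantom lane

print(clean, fooled)
```

Note that the attacker never touches the model’s code or weights; altering the physical environment the model observes is enough, which is precisely what made the Keen Lab demonstration so unsettling.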
Following Keen’s release of their report findings, a Tesla spokesperson stated:
“We developed our bug-bounty program in 2014 in order to engage with the most talented members of the security research community.…The primary vulnerability is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so. The findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or Autopilot system behave differently…”
The rise of AI as a means of improving how we interact with our technology has allowed society to take several leaps forward. Having said that, the risk of adversarial attacks on our global infrastructure becomes increasingly likely when machines are tasked with “knowing” what to do.