Beware the bots
It’s the end of the year again, and time to reflect on our progress as an industry and a global community. It seems this year, more than any other, there has been a major focus on turning our world over to robots. In fact, it seems as though a primary objective of technology development these days is to eliminate human control altogether through robotics, cyber operations and the like. That scares me—a lot.
Pros and cons of robotics
Up front, let me say that I see huge value in robotics, cyber operations, automation, and artificial intelligence. One could argue, with some chance of success, that a robo-cyber system would have prevented the Macondo disaster by taking critical decisions out of the hands of humans and putting them into the hands of a system designed to evaluate, and act, in microseconds. And, with exactly the right programming, perhaps it could have.
But therein lies one of the major problems. With next-generation robotics, the decisions that a robo-cyber system makes are not necessarily the result of go/no-go processes based on data analysis alone. That is a dry process, one that fails to account for the integration of viable options, or for the ability to take those options to another level through communication. A case in point, reported in numerous sources, follows.
Earlier this year, two robots created by Facebook in a lab called Facebook Artificial Intelligence Research (FAIR) were shut down after they developed their own language. It happened while the social media firm was experimenting with teaching the “chatbots” to negotiate with one another in human speech. When the chatbots were left alone during the experiment, researchers discovered that the bots (named Alice and Bob) had spontaneously developed their own machine language. The new language was basic and, while it worked for the bots, it was not aligned with the goals of the AI team.
As UK Robotics Professor Kevin Warwick noted in the Telegraph, “This is an incredibly important milestone, but anyone who thinks this is not dangerous has got their head in the sand. Smart devices, right now, have the ability to communicate, and although we think we can monitor them, we have no way of knowing.”
The purpose of the chatbots is to serve as surrogates for their human owners, performing tasks like making reservations, purchasing online or searching the Internet. According to Newsweek, “there are so many people working on negotiating AI bots that they even have their own Olympics—the Eighth International Automated Negotiating Agents Competition,” which took place in mid-August, in Melbourne, Australia. One of the competition’s goals was “to encourage design of practical negotiation agents that can proficiently negotiate against unknown opponents in a variety of circumstances.”
Bots behaving like humans
The Facebook researchers wrote a machine-learning software bot, and then let it practice on both humans and other bots, constantly improving its methods. Newsweek notes that “this is where things got a little weird. First of all, most of the humans in the practice sessions didn’t know they were chatting with bots. So the day of identity confusion between bots and people is already here. And then the bots started getting better deals as often as the human negotiators. To do that, the bots learned to lie.” Then the bots started making up their own language to do things faster through a sort of shorthand.
The story doesn’t end there. Other researchers have been working to help bots comprehend human emotions, another important factor in negotiations, through facial expressions. And a Russian company “has been working on emotion-reading software that can detect when humans are lying, potentially giving bot negotiators an even bigger advantage. Imagine a bot that knows when you’re lying, but you’ll never know when it is lying.” It’s scary. Picture a bot that can negotiate, has no feelings, will lie, can create its own language, and can seem like a real person when you talk to it. Then imagine that bot working on a deepwater drilling rig—or driving an autonomous underwater vehicle—or managing a well control system.
But, wait, there is another kind of bot, one created to perpetrate evil, though it does not do its own talking and thinking. “Some of the most popular industrial and consumer robots are dangerously easy to hack and could be turned into bugging devices or weapons,” says Seattle-based cybersecurity firm IOActive Inc. These would be robots attacked by another form of bot—not the kind found in science fiction movies or on the production line of a manufacturing business. Such “bots are one of the most sophisticated types of crimeware facing the Internet today. These bots are similar to worms and Trojans, but earn their unique name by performing a wide variety of automated tasks on behalf of the cybercriminals, who are often safely located somewhere far across the Internet,” says Norton. These “bots do not work alone, but are part of a network of infected machines called a ‘botnet.’ Botnets are created by attackers repeatedly infecting victim computers. A botnet is typically composed of a large number of victim machines that stretch across the globe.”
So, welcome to the new robo-cyber age, an age in which machines increasingly work by themselves and/or against us. It is an age in which robots start thinking for themselves and develop languages that we humans cannot understand. It is an age in which robots convert other robots to their malevolent causes, stealing information and seizing control of data and processes. In all likelihood, its impacts will get worse before they get better, especially for crucial industries like energy. According to Deloitte, energy was number two on the list of industries most affected by cyber attacks in 2016, with nearly three in four U.S. oil and gas companies reporting at least one cyber incident in their annual filings. Welcome to our—or their—new world.