There have been so many warnings and caveats about the future of AI that it almost goes without saying that humans should never blindly hand over the reins to the machines. But here’s the big problem: even human oversight of AI programs and machine learning algorithms might not be enough. A growing number of AI researchers around the globe argue that humans simply are not capable of overseeing this technology once it reaches a certain point: most of us are lazy, rely on cognitive shortcuts, and lack the technical background needed to make sense of the technology in the first place.
Are humans up for the difficult task of overseeing AI?
We can already see examples in everyday life of how humans might not be willing or able to oversee AI. Relatively simple forms of AI – such as Netflix’s ability to predict which movies you might like to watch, or Facebook’s ability to spot friends and family in photos and decide which items appear in your newsfeed – have made most of us rather lazy about exercising oversight. Do you really think you can make better movie suggestions than Netflix? Do you really think you can make better decisions than Facebook or Google?
Now, extend that thinking to areas like autonomous (i.e. self-driving) vehicles, which are powered by AI. In the classic human oversight model, a human passenger traveling in an AI-powered car would be able to step in at the last minute and avert a potential catastrophe. Your autonomous vehicle, for example, might mistake a pedestrian in a green shirt crossing the road for a green traffic signal. But are you really going to look up from your mobile phone in time to hit the “override” button and slam on the brakes?
As a society, we are not quite there yet when it comes to placing our trust in AI-powered vehicles. However, step by step, our barriers to trust in AI are falling. We now trust the algorithms to help us park our vehicles in tiny parking spots, or to keep us in the right lanes (via lane assist technology). Most people, struggling with a daily commute and unpredictable traffic patterns, would probably be willing to cede some of their control and oversight in exchange for an extra 30 minutes of sleep or a chance to catch up on their lives while machines do all the heavy work. (Deep down, we all want to be like the pampered CEO who gets driven everywhere by an uncomplaining chauffeur.)
When machines make decisions, be wary
It gets even scarier when it comes to using AI for military purposes. Most of us have been conditioned to accept the need for military drones to carry out lethal strikes on the enemy. Hey, it saves our men and women from being killed in combat, right? But those drones are still controlled at a distance by soldiers, who must give the final “kill order” before a strike is carried out. What happens when we enter the world of autonomous fighting machines, with no humans overseeing what’s happening? A massive drone strike on a suspected terrorist hangout might turn out to be a lethal strike on a civilian hospital used by insurgent fighters.
When to trust the algorithms?
As a rule of thumb, researchers say, we should only use AI-powered algorithms as long as we can trust humans to override them when necessary. In healthcare, for example, it’s perfectly OK to use AI technology like IBM’s Watson to suggest a diagnosis – as long as a human doctor is there to evaluate the final results and override the machine when necessary.
Ultimately, the future of AI will be a partnership between humans and machines. As long as humans are the masters and machines are the ones doing all the heavy lifting, things should be OK. The real problem – and the one that causes tech visionaries like Elon Musk and Bill Gates the greatest anxiety over the future of AI – is that humans might one day hand over some of their management and oversight authority to the machines. They won’t do so on purpose, of course. But just as many of us are perfectly OK with Silicon Valley algorithms determining what we see, hear and read on social media, many of us might also decide that it’s safer, cheaper or more convenient to rely on AI doctors or AI soldiers. That, of course, could have some very unpredictable consequences for society.