While artificial intelligence (AI) gets far more attention in the private sector than in the public sector, governments around the world continue to experiment with it. From Australia to the Baltics to the United States, AI is becoming an increasingly popular way for governments to boost efficiency and cut costs. But this adoption also has a dark side: sometimes it feels like the machines have taken over and the robot overlords are running the show. With that in mind, here's a brief look at the good, the bad, and the ugly of AI in government.
The good
In many ways, AI in government is simply about automating the repetitive tasks and routine actions that we expect governments to handle for us. We don't need rooms full of bureaucrats processing forms and entering data when machines can do it all, and that can dramatically reduce the staff hours required to run government. For governments forever juggling tight budgets, these cost savings are very attractive.
AI-based virtual assistants are also becoming adept at guiding users through the complex maze of government institutions. In Australia, for example, the government is experimenting with a variety of virtual assistants with names like Alex, Sam, and Melissa to help citizens find what they need online. There's even an AI-based virtual assistant called Nadia that is voiced by Australian actress Cate Blanchett. Presumably, if you feel like you're chatting with Cate Blanchett, you won't mind the tedium of filing government documents or requesting certain forms of government assistance.
The bad
OK, that's the case for AI in government. But there's also a downside to all this automation. One is a lack of transparency in how decisions are made. Today's AI algorithms are so sophisticated, and trained on so much data, that they sometimes surprise even their creators with their results. That can be good when, say, an algorithm surfaces a potential response to a public health crisis or suggests a new approach to a vexing policy question. But it can be very bad if an algorithm starts rejecting applicants for certain forms of public assistance based on biases that crept in through its training data or the assumptions of the developers who built it.
The ugly
Finally, there's the ugly: the erosion of accountability among government bureaucrats. If anything goes wrong, they no longer have to step up to the plate and accept responsibility; they can simply blame the algorithm. Say, for example, disaster relief efforts in a region are too slow, and citizens aren't getting the help they need after a major event like a hurricane, flood, or wildfire. Those citizens can no longer hold the bureaucrats to account if there's an AI algorithm to blame instead.
This is why it sometimes feels like the machines are in control. They are smarter than us, they are making decisions for us, and they are not telling us how they are making those decisions. That’s why citizens need to monitor new AI initiatives in government before – not after – they are rolled out. This will ensure that AI is serving us, and not the other way around.