You may not be aware of it, but there’s a new movement afoot, built around the concept of “algorithmic choice.” In a nutshell, this concept means that users of every social media platform should have a choice about which algorithms the platform uses, as well as more transparency into how existing algorithms shape the overall user experience.
Put another way, algorithmic choice is all about expanding the array of choices available to the consumer. If you use Facebook, for example, you don’t have any real control over what content is shown to you, or over what Facebook calculates will be of greatest appeal to you. Instead, Facebook silently gathers data about you, feeds this data into an algorithm, and out pops a piece of content that Facebook thinks will maximize your engagement on the platform. You might think you have a choice here, but you really don’t.
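For readers who like to see the mechanics, the loop described above can be sketched in a few lines of toy code. This is a deliberately simplified illustration, not any platform’s actual system: the `predicted_engagement` scoring rule and the post fields are invented for the example. The key point is that the scoring function belongs to the platform, and the user never sees it or controls it.

```python
# Toy sketch of engagement-optimized ranking (hypothetical, not real platform code).
# The platform scores each post by predicted engagement and shows the top results;
# the user has no say in how the scoring works.

def predicted_engagement(post: dict, profile: dict) -> float:
    """Hypothetical model: reward overlap with the user's inferred interests,
    weighted by how provocative the post is."""
    interest_match = len(set(post["topics"]) & set(profile["inferred_interests"]))
    return interest_match * post["outrage_factor"]

def build_feed(posts: list, profile: dict, limit: int = 3) -> list:
    # The platform picks whatever maximizes its own engagement metric.
    ranked = sorted(posts, key=lambda p: predicted_engagement(p, profile), reverse=True)
    return ranked[:limit]

posts = [
    {"id": 1, "topics": ["politics"], "outrage_factor": 3.0},
    {"id": 2, "topics": ["gardening"], "outrage_factor": 1.0},
    {"id": 3, "topics": ["politics", "gardening"], "outrage_factor": 2.0},
]
profile = {"inferred_interests": ["politics"]}

feed = build_feed(posts, profile)
# The most provocative on-interest post (id 1) is ranked first.
```

Notice that the gardening-only post sinks to the bottom even if the user might genuinely enjoy it: the metric being optimized is engagement, not satisfaction.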
Problems with social media algorithms
The problem, quite simply, is that the algorithms used by the top social media platforms don’t always work as planned. As reporter Julia Angwin explained in a recent op-ed piece for The New York Times, there are three inherent problems with current algorithms.
First of all, they tend to create filter bubbles, in which you are always shown more of what you already think and believe, instead of being exposed to new ideas and new viewpoints. This, then, leads to increased polarization of the social media space.
Secondly, algorithms lead to rabbit holes filled with misinformation, disinformation, and just plain lies. After all, algorithms are not being optimized to make society a kinder, gentler place. Instead, they are being optimized to make as much money as possible. And this often leads to the distribution of toxic and manipulative content.
And that leads us to the final problem with today’s social media algorithms – they create “engagement traps” online. In short, they are designed to get you to stay on the platform as long as possible. So they favor outrageous content over carefully reasoned content. They favor viral content. And they are designed to keep you liking, sharing, and commenting as much as possible, and that usually means a race to the bottom in terms of the quality of the content.
Is a new paradigm possible?
In her op-ed piece, Angwin suggests that a new paradigm for social media might be possible. She specifically points to the example of Bluesky, a new social media platform being designed by Jack Dorsey, one of the original co-founders of Twitter. One of the goals of Bluesky is to give users choice when it comes to algorithms, and to make the process of content creation and content sharing on the platform as transparent as possible.
So it’s easy to see how Bluesky is the type of platform that proponents of “algorithmic choice” would favor. It encourages the creation of customizable algorithms, and gives users much greater control over what content they see.
Most importantly, platforms based around algorithmic choice help to free users from the potential biases and viewpoints of the platform’s owner. Take X, formerly known as Twitter, as an example here. If you believe that Elon Musk is turning X into a “far right” social media network, but still want to stay on X because that’s where all your friends and followers are, then a measure of algorithmic choice would be invaluable in convincing you to stay. You could simply tell X to stop showing you content from the likes of Tucker Carlson or Donald Trump’s MAGA supporters – or anyone else you might find objectionable.
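To make the contrast with today’s platforms concrete, here is a toy sketch of what “algorithmic choice” could look like in code. Everything here is hypothetical (the function names and post fields are invented, and this is not Bluesky’s or X’s actual API): the difference is simply that the user, not the platform, supplies both the ranking rule and the block list.

```python
# Toy sketch of algorithmic choice (hypothetical API, not any real platform's).
# The user picks the ranking function and decides which sources to exclude.

def chronological(posts: list) -> list:
    """One user-selectable ranking rule: newest first, no engagement scoring."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def build_feed(posts: list, ranker, blocked_authors=()) -> list:
    # The user's block list is applied first, then the user's chosen ranker.
    visible = [p for p in posts if p["author"] not in blocked_authors]
    return ranker(visible)

posts = [
    {"author": "alice", "timestamp": 2, "text": "hello"},
    {"author": "pundit", "timestamp": 3, "text": "outrage!"},
    {"author": "bob", "timestamp": 1, "text": "hi"},
]

# The user opts for a plain chronological feed and blocks one source.
feed = build_feed(posts, ranker=chronological, blocked_authors={"pundit"})
# "pundit" is filtered out per the user's choice; alice's newer post comes first.
```

Because `ranker` is just a function the user chooses, swapping in a different algorithm (or a third-party one) is a one-line change, which is exactly the kind of control proponents of algorithmic choice are asking for.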
At the end of the day, the notion of algorithmic choice is not as wonky as it might sound at first. It hints at a new type of social media experience in which you have much greater control over the hours you spend online. And it would help to remove the possibility that you could be subtly manipulated by the owner of a particular social media platform into thinking as he or she does. All of that is good news for both democracy and the smooth functioning of our society.