For years now, we’ve been hearing about the misinformation problem on social media. As it turns out, we may have been misidentifying the problem all along. Instead of blaming foreign bots, conspiracy theorists, or naive social media users, we should have been blaming a small set of social media super-sharers.
Who are the super-sharers?
That’s the conclusion from an all-star group of researchers from MIT, Cambridge, and Northwestern. As part of a series of comprehensive research projects, they examined how misinformation works online, how it gets shared, and why it is so prevalent. They found, over and over again, that just a small group of “super-sharers” is responsible for 80% of the misinformation being spread.
But here comes the fun part: it turns out that these super-sharers all share several common traits. They are, by and large, older white women (age 58 or older) who skew heavily Republican. In short, they’re white Republican women with a lot of free time on their hands.
If you’re triggered by this conclusion, perhaps you should be. These days, everything seems to be politicized, and this study does strike one as being heavily politicized. Naturally, the researchers did their best to point out that it was “white” Republicans to blame, and not the kinds of diverse Republicans who are now turning out for former president Donald Trump in places like The Bronx.
As the researchers point out, it was largely the same types of super-sharers who were spreading misinformation during the COVID pandemic and the 2020 presidential election. They primarily include pundits, traditional media personalities, and social media influencers. As far as their views go, they’re likely to be anti-vaxxers and election deniers.
The problem, say the researchers, is that this relatively small group of Republican women has so much power. They are followed by nearly 1 in every 20 voters, so even a single piece of misinformation can have an outsized impact.
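To put that reach in rough perspective, here is a quick back-of-envelope calculation. The figure of roughly 160 million registered US voters is an assumption used for illustration only; it does not come from the study.

```python
# Back-of-envelope illustration of the "1 in 20 voters" reach claim.
# The registered-voter figure is an assumption for illustration,
# not a number taken from the study.
registered_voters = 160_000_000          # assumed, roughly the 2020 US figure
share_following_supersharers = 1 / 20    # "nearly 1 in every 20 voters"

followers_reached = registered_voters * share_following_supersharers
print(f"{followers_reached:,.0f} voters follow at least one super-sharer")
# -> roughly 8,000,000 voters
```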
Do super-sharers represent a threat to democracy?
As might be imagined, the researchers’ takeaway conclusion was that these super-sharers represent a threat to democracy. The implicit conclusion is that social media platforms should be doing their part to take them down or limit their reach.
But is that really warranted by their actions? As even the researchers acknowledge, these super-sharers are not sharing obviously malicious or provably false content. And they are not even spreading political propaganda, as you might expect from a bot. Instead, they seem to be spreading content that exists in a gray area. In short, it’s content that’s not overly sensational and not provably false, but that advances the “wrong” narrative.
An example of this “gray area” content would be a story about a doctor who died shortly after getting the COVID vaccine. Someone who reads only the headline might conclude that the vaccine was to blame, even though there might be a perfectly good medical explanation for the doctor’s death. When super-sharers share the story, though, they tend to highlight the potential anti-vax angle.
Time to improve the algorithms
Interestingly, the researchers found that today’s algorithms actually do a pretty good job of flagging misinformation when it appears online. Where they struggle is with the “gray area” content described above. One suggestion, then, would be to tweak the algorithms so that this gray-area content also gets flagged.
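To make the idea a bit more concrete, here is a minimal sketch of what such a tweak might look like. Every name, score, and threshold below is invented for illustration; this is not any platform’s actual algorithm, just one hedged way a “gray area” label could sit alongside an existing misinformation flag.

```python
# Hypothetical sketch of extending a misinformation filter with a "gray area" label.
# All names, scores, and thresholds are invented for illustration only;
# this is not how any real platform's algorithm works.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    falsity_score: float      # 0-1, from an assumed existing fact-check model
    sharer_strike_count: int  # how often this account's shares were flagged before
    framing_score: float      # 0-1, how strongly the share pushes a disputed claim

def flag_level(post: Post) -> str:
    """Return 'flag', 'gray', or 'ok' for a shared post."""
    # Existing behavior: provably false content gets flagged outright.
    if post.falsity_score > 0.9:
        return "flag"
    # Proposed tweak: content that is not provably false but is framed to
    # advance a disputed claim, especially from repeat sharers, gets a softer
    # "gray" label (which could mean reduced reach or an added context note).
    if post.framing_score > 0.7 and post.sharer_strike_count >= 3:
        return "gray"
    return "ok"

# Example: a factually accurate story shared with anti-vax framing
example = Post(
    text="Doctor dies days after receiving COVID vaccine",
    falsity_score=0.2,        # the story itself is not false
    sharer_strike_count=5,    # the account has shared flagged content before
    framing_score=0.85,       # the share text implies the vaccine was the cause
)
print(flag_level(example))    # -> "gray"
```

Note that in this sketch the “gray” label is deliberately softer than a full flag, which fits the caveat below: it marks content for added context or reduced amplification rather than removal.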
That being said, any algorithm tweak should not suppress free speech or censor various voices online. Having an opinion is not wrong, even if this opinion does not fit the narrative of the traditional mainstream media.