In many ways, the original premise behind the Facebook algorithm was brilliant. With the help of artificial intelligence and machine learning, Facebook wanted to show the right users the right content at the right time. For a while, that worked, both for the content in your daily Facebook feed and for the ads you saw while using the platform. But that’s no longer the case. The Facebook algorithm now appears to be completely, totally broken. So who is to blame?
Algorithmic bias
The problem, quite simply, is that algorithms are not neutral, unbiased creations. They come preloaded with the same kinds of biases as the humans who create them. Case in point: we’ve all heard stories of people who have been unfairly targeted by algorithms. In some cases, it’s people unfairly profiled by police because an algorithm scanned facial recognition data. In other cases, it’s job applicants rejected because of a bias about race or gender that the algorithm absorbed from the very beginning. If, for example, a hiring algorithm is trained primarily on the resumes of men, it may learn to assume that the best candidates for a job are men, not women.
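To make the resume example concrete, here is a minimal, purely illustrative sketch in Python. The data, feature names and numbers are all invented: it trains a simple classifier on synthetic hiring records in which men were historically favored, and shows that the model ends up putting positive weight on the gender feature itself.

```python
# Illustrative sketch only: synthetic data showing how a model trained on
# historically skewed hiring decisions can learn gender as a predictor.
# All numbers and feature names are made up for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Features: years of experience, plus a gender flag (1 = male, 0 = female).
years_experience = rng.uniform(0, 10, n)
is_male = rng.integers(0, 2, n)

# Historical label: qualified candidates were hired, but men were favored,
# so the "ground truth" the model learns from is already biased.
qualified = years_experience > 5
hired = qualified & ((is_male == 1) | (rng.random(n) < 0.3))

X = np.column_stack([years_experience, is_male])
model = LogisticRegression().fit(X, hired)

# The learned weight on is_male comes out strongly positive: the model has
# absorbed the historical preference for men, not just for experience.
print(dict(zip(["years_experience", "is_male"], model.coef_[0].round(2))))
```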
And now it appears that the Facebook algorithm behind the company’s ad delivery system carries the same kinds of biases. Experts are calling it “algorithmic discrimination,” and that’s exactly what it appears to be: the algorithm picks and chooses whom it shows ads to, based on certain assumptions about age, gender, race or ethnicity. Landlords who ran ads on Facebook, for example, weren’t reaching certain racial or ethnic groups in the broader Facebook audience. The same was true for employers and credit agencies. You might call the people never shown the ads the “algorithmic undesirables.”
What’s most interesting about all this is that even when advertisers specifically told the Facebook ad system they wanted to reach a certain target audience, the ads still did not reach those audience members in the expected numbers. The working assumption is that the algorithm found a proxy variable that let it steer ads away from certain audiences. Maybe, for example, members of certain racial groups were not shown ads for new apartments because they lived in a zip code the system deemed undesirable, and the algorithm simply decided it wasn’t worth showing those people ads for expensive apartments.
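To illustrate the proxy idea, here is a small, hypothetical Python sketch. The sensitive attribute is never given to the model, yet because the invented zip-code feature correlates with group membership, predicted ad delivery still differs sharply by group. Every number, threshold and variable name here is an assumption made for the sake of the example.

```python
# Illustrative sketch only: how an ad-delivery model that never sees race
# can still skew delivery through a correlated proxy such as zip code.
# The data, correlation strength and click rates are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical sensitive attribute (never given to the model).
group = rng.integers(0, 2, n)

# Zip code correlates strongly with group membership (residential segregation).
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical engagement data happens to favor zip code 1 for apartment ads.
clicked = (zip_code == 1) & (rng.random(n) < 0.6) | (zip_code == 0) & (rng.random(n) < 0.2)

# The model is trained only on zip code -- race appears nowhere in the features.
X = zip_code.reshape(-1, 1)
model = LogisticRegression().fit(X, clicked)

# Predicted delivery probability still differs sharply by group,
# because zip code acts as a stand-in for group membership.
scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted delivery score = {scores[group == g].mean():.2f}")
```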
Was Facebook being racist or was it just the algorithm?
Of course, Facebook claims that it was not being racist. “It’s the algorithm,” they’ll tell you. For now, the courts are buying this argument. In one high-profile case, the courts ruled that Facebook could use machine learning to fix its algorithm and avoid harsh financial penalties. By adjusting how the system handles variables for age, gender, race and ethnicity, Facebook says it can guarantee that ads will reach each group of audience members in the same proportion as that group appears in the broader population.
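One rough way to express the target Facebook describes is a demographic-parity style check: compare each group’s share of ad deliveries, and its within-group reach rate, against its share of the eligible audience. The sketch below uses hypothetical group names and counts purely to show the arithmetic.

```python
# Illustrative sketch only: comparing each group's ad deliveries against its
# share of the eligible audience. Group names and counts are hypothetical.
audience = {"group_a": 60_000, "group_b": 40_000}   # eligible users per group
shown    = {"group_a": 9_000,  "group_b": 3_000}    # users actually shown the ad

total_audience = sum(audience.values())
total_shown = sum(shown.values())

for group in audience:
    audience_share = audience[group] / total_audience   # share of eligible audience
    delivery_share = shown[group] / total_shown          # share of actual deliveries
    reach_rate = shown[group] / audience[group]          # within-group reach
    print(f"{group}: audience share {audience_share:.0%}, "
          f"delivery share {delivery_share:.0%}, reach rate {reach_rate:.0%}")
```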
Facebook deserves credit for agreeing to make the fix. But that doesn’t undo the harm that has already been caused. And it certainly doesn’t give us a lot of confidence that the same type of bias won’t occur again in the future. When the algorithms are in charge of determining what content to show and which ads to display, it’s anyone’s guess what will actually be shown. At some point, Facebook needs to stand up and accept responsibility.