Facebook to investigate racist algorithms.

Facebook is looking into biases within its systems that allow racism on the site.

[Image: an iPhone with the Facebook logo on the screen, the logo scribbled out]

Monday, 27 July, 2020

Biased system.

Facebook has admitted that there is an issue with racism on its platforms; the tech giant also owns Instagram and WhatsApp, among others. It has announced the formation of two teams to investigate company policies and the algorithms behind the platforms.

Instagram has formed the ‘Instagram Equity Team’, while Facebook gets the ‘Facebook Inclusive Product Council’. Both names sound a little dystopian, and their purpose, rooting bias out of algorithms, does sound like it’s straight from a Blade Runner sequel.

Racist code.

The teams will look for bias in algorithms such as those used for image categorisation. In 2019 there was uproar over a popular AI tool used by programmers, which was found to label people with dark skin as ‘criminals’ or with archaic racist terms.

Platforms rely on algorithms that are coded to identify images or make recommendations to users. However, those recommendations and identifications come down to the people who coded them and the data they trained on. If the developers held racial biases in the first place, the entire system is founded on biased code.
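As a minimal, hypothetical sketch (an invented illustration, not Facebook’s actual code), consider a toy tagger trained on human-applied labels: if the annotators applied a loaded label more often to one group, the model learns and repeats that association.

```python
# Hypothetical sketch only: a toy "tagger" trained on human-applied labels.
# All names and data are invented. The point: if annotators applied a loaded
# label more often to one group, the model learns and automates that bias.

from collections import Counter

# Invented training data: (group, label) pairs supplied by biased annotators.
training_labels = [
    ("group_a", "professional"), ("group_a", "professional"),
    ("group_a", "suspicious"),
    ("group_b", "suspicious"), ("group_b", "suspicious"),
    ("group_b", "professional"),
]

def train(data):
    """Learn the most common label per group -- a stand-in for a real model."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(training_labels)
print(model)  # {'group_a': 'professional', 'group_b': 'suspicious'}
# The system now tags everyone in group_b as 'suspicious' by default:
# the annotators' bias has become an automated rule applied at scale.
```

Notice that no single line of this code is ‘racist’, which is part of what makes auditing such systems hard: the bias lives in the labels and the data, not in any one obvious place.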

All voices matter?

Facebook has been criticised for allowing racist groups and fake news to spread. Most recently, Instagram has been accused of ‘suppressing black voices’ in the wake of the George Floyd protests.

This suppression is alleged to take the form of ‘shadow banning’ black users. Shadow banning means hiding a user’s content from other users without alerting the poster. It often happens when content is deemed inappropriate by the algorithm or is reported by other users.
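As a rough illustration of the concept (an invented sketch, not Instagram’s real feed logic), a shadow ban quietly filters a flagged account’s posts out of everyone else’s feed while leaving them visible to the poster:

```python
# Hypothetical sketch of shadow banning. All names are invented;
# this is not Instagram's actual feed code.

shadow_banned = {"activist_account"}  # accounts quietly flagged

posts = [
    {"author": "activist_account", "text": "Protest downtown at 5pm"},
    {"author": "food_blogger", "text": "New pasta recipe!"},
]

def build_feed(posts, viewer):
    """Return the posts a given viewer sees.

    A shadow-banned author still sees their own posts, so nothing looks
    wrong from their side -- but their content is hidden from everyone else.
    """
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

print(build_feed(posts, viewer="activist_account"))  # sees both posts
print(build_feed(posts, viewer="food_blogger"))      # sees only the recipe
```

Because the poster’s own view is unchanged, shadow banning is very hard for users to prove, which is exactly why the accusations are so contentious.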

BLM (and other) activists accuse the algorithm, and the moderators behind it, of being harsher on their content than on that of other users.

In response, Instagram’s head Adam Mosseri says the company is listening to “concerns about whether we suppress black voices…”.

The racial justice movement is a moment of real significance for our company. Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves. While we’re always working to create a more equitable experience, we are setting up additional efforts to continue this progress.

- Vishal Shah, Instagram's vice-president of product.

Tone-deaf.

Facebook isn’t the only company using the recent surge in Black Lives Matter activism to reflect on its policies. Snapchat launched a probe into racism within its own ranks after a June article accused the company of having a ‘racist culture’.

The accusation added to the controversy the company already faced after releasing a tone-deaf ‘Juneteenth slavery filter’.

Juneteenth celebrates the abolition of slavery in the US. For the day, Snapchat launched a filter that showed the user in chains, which broke and disappeared when the user smiled. It was roundly criticised and taken down.

Snapchat had also faced criticism for a Bob Marley filter that was essentially digital blackface, and for a ‘slanted eyes’ filter.

Scandals like these highlight the lack of diversity in the tech industry. If a wider range of people had been in the room when these decisions were made, would such tone-deaf features have been released?

It’s easy to see, then, how code written more than 15 years ago, when Facebook was founded, might carry the cultural or racial biases of the people who wrote it.

It’s unclear exactly how the new teams will work towards their goals, but Facebook says more information will be released soon.


By: Natalie Dunning

Natalie Dunning is a freelance writer and Media Psychology researcher based in Manchester.