On Friday, The New York Times revealed that a Facebook recommendation algorithm had asked users whether they would like to see more “primates videos” beneath a video from a British tabloid showing Black men.
The Daily Mail’s video, which is over a year old, is titled “White Man Calls Police Against Black Men in the Marina.” It shows only people; no monkeys appear in it.
Beneath the video, the prompt “See more primate videos?” appeared with Yes/No options on some users’ screens, according to screenshots posted to Twitter by Darci Groves, a former designer at the social media giant.
“It is a scandal,” she commented, calling on her former colleagues at Facebook to escalate the matter.
“This is clearly an unacceptable error,” a Facebook spokesperson said in response to AFP’s request for comment. “We apologize to everyone who saw these offensive recommendations,” he added.
He said the California-based company had deactivated the recommendation tool on this topic “as soon as we noticed what was happening in order to investigate the causes of the problem and prevent it from happening again.”
He continued: “As we have said, even though we have improved our AI systems, we know they are not perfect and that we have progress to make.”
The case highlights the limitations of the artificial intelligence technologies the platform relies on to build a personalized feed for each of its roughly 3 billion monthly users. AI is also widely used in content moderation, to identify and block problematic messages and images before they are seen.
But Facebook, like its competitors, is regularly accused of failing to fight racism and other forms of hate and discrimination.
The controversy adds to existing tensions, as several civil society organizations have accused social networks and their algorithms of contributing to the division of American society, notably in the context of the Black Lives Matter demonstrations.