Firm regrets taking Facebook moderation work

The firm contracted to moderate Facebook posts in East Africa has said that, with hindsight, it should not have taken on the work.

Several former Kenyan employees of Sama, an outsourcing company, claim they were traumatized by graphic posts.

Some of them are now bringing legal cases against the firm in Kenyan courts.

Chief executive Wendy Gonzalez said Sama would no longer take on work that involves moderating harmful content.

Former employees of the moderation hub, which the firm ran from 2019, described being traumatized by watching videos of beheadings, suicides, and other graphic material.

Former moderator Daniel Motaung said the first graphic video he saw was “a live video of someone being beheaded”.

Mr Motaung is suing Sama and Facebook’s owner, Meta. Meta says it requires all the companies it works with to provide round-the-clock support. Sama says certified wellness counselors were always on hand.

Ms Gonzalez told the BBC that the work, which never accounted for more than 4% of the firm’s business, was a contract she would not take on again. In January, Sama announced it was ending the program.

“You ask, ‘Do I regret it?’ Well, I would answer this way: if I knew what I know now, including how much energy and opportunity it would take away from the core business, I wouldn’t have entered the agreement.”

As a result of the experience, the firm now has a policy of not accepting work that involves moderating harmful content. It will also not do artificial intelligence (AI) work related to weapons of mass destruction or police surveillance.

Citing ongoing litigation, Ms Gonzalez declined to say whether she believed employees’ claims that they had been harmed by graphic material. Asked whether she believed moderation work was harmful in general, she said it was “a new area that absolutely needs to be studied”.

Sama is an unusual outsourcing company. Its original mission was to provide digital skills and an income through outsourced computing tasks for technology firms.

Visitors to the firm in 2018 saw employees from low-income areas of Nairobi earning $9 (£7) a day labeling pedestrians and street lights in videos of driving, footage that was then used to train artificial intelligence (AI) systems. Employees interviewed said the income had helped them escape poverty.

Ms Gonzalez says the company still works on similar computer vision AI projects that do not expose workers to harmful content.

“Having moved over 65,000 people out of poverty is something that I’m extremely proud of,” Ms Gonzalez said.

She argues that Africans need to be involved in the digital economy and in the development of artificial intelligence.

In the interview, Ms Gonzalez said two considerations motivated her to take on the work: that moderation is an important and necessary task to protect social media users from harm, and that African content should be moderated by African teams.

Someone from Sydney, India, or the Philippines, she argued, cannot be expected to moderate local languages effectively in Kenya, South Africa, or elsewhere.

She also revealed that she had done some of the moderation work herself.

Ms Gonzalez said that moderators at Sama earned around 90,000 Kenyan shillings ($630) per month, a decent wage by Kenyan standards.

“I did the moderation, but it’s not my job in the company to do that,” she said when asked whether she would do the work for that amount of money.

Sama also took on work with OpenAI, the company behind ChatGPT.