Former content moderators for OpenAI’s ChatGPT in Nairobi, Kenya, have filed a petition with the Kenyan government, alleging exploitative conditions, psychological trauma, low pay, and abrupt dismissal while working for Sama, a data annotation company contracted by OpenAI.

These moderators were exposed to distressing content, including graphic violence and abuse, without proper warning or support, and were paid meager wages ranging from $1.46 to $3.74 per hour. When the contract between OpenAI and Sama was terminated, the moderators were left without income and facing significant trauma. Sama claims it gave notice and offered alternative projects, but the moderators dispute this.

Although Sama stated it provided mental health support and benefits, the moderators argue that the support was inadequate. This case underscores the need for ethical treatment of AI workers and raises questions about technology companies’ responsibility for workers’ well-being during AI model training.

Against this backdrop, Real Research, an online survey app, launched a survey on AI content moderators facing mental health issues to gauge public opinion on the mental health impact and exploitative conditions faced by the content moderators working on OpenAI’s models.

Highlights:

  • 26.68% supported holding OpenAI’s outsourcing partner responsible for the disturbing working conditions in AI content moderation.
  • 33.27% considered regulation of the harmful impact of such work very important.
  • 37.78% were only vaguely aware that such demanding job positions exist.

Delving into AI content moderation reveals a complex and distressing narrative. Mophat Okinyi, a former content moderator from Kenya, has become a poignant voice among a group of four individuals who seek justice for what they view as exploitative work conditions. This resonates widely, with 52.24% of respondents fully aware, 26.42% having a faint inkling, and 21.34% unaware of AI content moderators facing mental health issues.

Okinyi’s experience casts a stark light on the grim responsibilities he undertook: poring over up to 700 text passages daily, some depicting graphic sexual violence. According to the survey results, 37.78% were vaguely aware, 32.35% were unaware, and 29.87% were quite well-versed in the existence of such positions.

Figure 1: Respondents’ awareness of similar job positions.

The Transformation of AI Content Moderation From ‘Ordinary’ to Disturbing

Initially, the moderators enjoyed a friendly environment and handled ordinary content during training. But as the project progressed, the text passages grew longer and increasingly disturbing.

The company’s transparency is called into question here: 49.67% firmly assert a lack of transparency, 40.77% suspect as much, 6.9% cautiously believe the company was honest, and a mere 2.66% stand by the company’s upfront communication.

Figure 2: Respondents’ concern about the job tasks.

Behind the scenes, the 51 moderators stationed in Nairobi were confronted with daunting tasks: sifting through texts, and sometimes images, rife with disturbing scenes of violence, self-harm, and other atrocities. Understandably, levels of concern vary, with 47.34% slightly concerned, 22.65% unconcerned, 17.39% moderately concerned, and 12.62% extremely concerned.


Complex Landscape of AI Content Moderators Facing Mental Health Issues

Who shoulders the blame for AI content moderators’ mental health issues? Sama, the outsourcing partner, draws 26.68% of the sentiment; OpenAI’s ChatGPT bears 26.05%; the moderators themselves are assigned 21.51%; and the data contributors claim 25.76%. It is a tangled web of responsibility.

Former moderators don’t merely seek resolution; they champion legislation, citing exposure to harmful content as an occupational hazard that necessitates regulation. In respondents’ eyes, 33.27% find such legislation very important, 32.18% see moderate importance, 30.32% are neutral, 3.08% consider it of little importance, and 1.15% dismiss it as not important at all.

Figure 3: The importance of legislation.

This saga reminds us that behind AI advancement are real lives and mental well-being, and that the script for responsible AI development must include both ethics and empathy.


Methodology

Survey Title: Survey on AI Content Moderators Facing Mental Health Issues
Duration: August 12, 2023 – August 19, 2023
Number of Participants: 10,000
Demographics: Males and females, aged 21 to 99
Participating Countries: Afghanistan, Algeria, Angola, Argentina, Armenia, Australia, Azerbaijan, Bahrain, Bangladesh, Belarus, Benin, Bolivia, Brazil, Brunei, Bulgaria, Burkina Faso, Cambodia, Cameroon, Canada, Chile, China, China (Hong Kong), China (Macao), China (Taiwan), Colombia, Costa Rica, Croatia, Czech Republic, Ecuador, Egypt, El Salvador, Ethiopia, Finland, France, Gambia, Georgia, Germany, Ghana, Greece, Grenada, Guatemala, Honduras, Hungary, India, Indonesia, Iraq, Ireland, Israel, Italy, Ivory Coast, Japan, Jordan, Kenya, Kuwait, Kyrgyzstan, Latvia, Lebanon, Libya, Lithuania, Malaysia, Maldives, Mauritania, Mexico, Moldova, Mongolia, Morocco, Mozambique, Myanmar [Burma], Namibia, Nepal, Nicaragua, Nigeria, Oman, Pakistan, Palestine, Panama, Peru, Philippines, Poland, Portugal, Qatar, Romania, Russia, Saudi Arabia, Serbia, Sierra Leone, Singapore, Slovakia, South Africa, South Korea, Spain, Sri Lanka, Tanzania, Thailand, Togo, Tunisia, Turkey, Turkmenistan, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States, Uruguay, Uzbekistan, Venezuela, Vietnam, Yemen, Zimbabwe.