Invisible Pillars of AI

By: Stuti Majmudar, LAW ‘27

Kenya is a country full of young men desperate for work. With unemployment among 15- to 34-year-olds at 67%, Mophat Okinyi was grateful when he was hired as a content moderator for ChatGPT through a company called Sama (formerly Samasource). In the role, Okinyi ensured that text generated by ChatGPT followed community guidelines, which meant manually parsing large volumes of graphic and sexual content and labeling it as inappropriate so that the AI model could be trained to self-regulate in the future. Sifting through graphic texts, images, and videos depicting violence and sexual abuse without any mental health support left Okinyi traumatized. His story is not unique. Across the Global South, workers like Okinyi are the invisible backbone of the AI revolution: “ghost workers” enduring grueling conditions for poverty wages while tech giants reap the profits. This system raises urgent ethical questions about exploitation, corporate responsibility, and the true cost of “progress.”

Artificial Intelligence (AI) is often portrayed as a marvel of innovation, a self-learning system capable of revolutionizing the current technological landscape. Yet a peek behind the curtain reveals a darker reality. AI does not learn on its own; humans must first build the foundations it learns from. One such element is data annotation: taking the data an AI model needs to process, such as text or images, and labeling or categorizing it so that the model can recognize patterns and apply them to future inputs. Another is content moderation, in which humans review a model’s output and flag content that violates the company’s policies, usually material involving sexual or graphic violence, to ensure the model does not generate responses at odds with the company’s values.
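To make the annotation step concrete, the short sketch below shows, in purely illustrative Python, what that labeling work produces: human-assigned labels attached to raw text, from which a program can then learn a crude filter. The example texts, labels, and function names are hypothetical and do not describe any real company’s moderation pipeline.

```python
# Purely illustrative sketch (not any company's real pipeline):
# annotators read each text and attach a label, and the labeled pairs
# become training data for a system that later flags content on its own.

labeled_examples = [
    {"text": "Here is a recipe for banana bread", "label": "safe"},
    {"text": "Graphic description of violent abuse", "label": "unsafe"},
    {"text": "How do I reset my password", "label": "safe"},
]

def build_keyword_filter(examples):
    """Toy 'training': collect words that appear only in unsafe examples."""
    unsafe_words, safe_words = set(), set()
    for example in examples:
        words = set(example["text"].lower().split())
        if example["label"] == "unsafe":
            unsafe_words |= words
        else:
            safe_words |= words
    return unsafe_words - safe_words

def flag(text, unsafe_vocabulary):
    """Flag a new text if it contains any word seen only in unsafe examples."""
    return any(word in unsafe_vocabulary for word in text.lower().split())

unsafe_vocab = build_keyword_filter(labeled_examples)
print(flag("A graphic account of abuse", unsafe_vocab))  # True
print(flag("A recipe for banana bread", unsafe_vocab))   # False
```

Real systems train statistical classifiers rather than keyword lists, but the point stands: every one of those labels has to be assigned by a person who reads the underlying material.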

While AI researchers earn six-figure salaries in Silicon Valley, workers in New Delhi doing the grunt work of training these models are paid as little as $1.46 per hour. In the Philippines, some data annotators are not even paid the minimum wage, which is merely $6 a day in some regions. The pay disparity is staggering, and the labor is relentless. This exploitation mirrors European colonial extractivism: tech companies in the Global North extract value from the Global South by exploiting weak labor laws, circumventing worker protections where they do exist, and working in tandem with governments eager for foreign investment. Yet this exploitation has not gone entirely unchallenged, as a growing push for accountability demonstrates.

For instance, in Kenya, which has decent worker protections on paper, labor unions have accused OpenAI and its contractors of violating basic worker protections, but enforcement is often lax. Similarly, in India’s booming AI outsourcing industry, contractors often ignore safety standards, knowing regulators will turn a blind eye. A damning investigation by the Weizenbaum Institute across Venezuela, Germany, Kenya, and Colombia reveals how tech companies weaponize precarity: workers are systematically gaslit into believing that speaking up against their exploitation will mean professional exile. This calculated erosion of self-worth functions as corporate discipline, paralyzing labor solidarity and extinguishing calls for dignity. Meanwhile, governments in these countries compound the harm by prioritizing foreign investment over worker welfare, trading human suffering for promises of economic growth.

The work is not just underpaid; it is deeply harmful. Every day, data annotators and content moderators like Okinyi are exposed to horrific material, from child abuse to beheadings, without adequate psychological safeguards. A lawsuit filed by Kenyan workers against Sama, a third-party contractor for companies like Meta and OpenAI, alleges that content moderators and data annotators were subjected to toxic and traumatic content without proper mental health support and were unfairly paid, revealing the brutal conditions these workers endure.

Okinyi, one of the plaintiffs, describes reviewing up to 700 text passages daily. The job’s psychological toll left him paranoid: after reading accounts of sexual abuse, he began avoiding people and projecting paranoid narratives onto those around him. His marriage collapsed when his pregnant wife left him, saying he had become a changed man. The lawsuit also alleges that data annotators and content moderators developed post-traumatic stress disorder, severe depression, and anxiety, and suffered vivid flashbacks and nightmares after moderating graphic content.

In 2023, a Kenyan judge ruled that Meta was the “true employer” of workers like Okinyi, paving the way for further lawsuits against tech giants outsourcing labor in Kenya. Later, in a landmark decision, the Kenyan Court of Appeal rejected Meta’s jurisdictional challenge, allowing content moderators like Okinyi to sue for human rights violations and unlawful dismissal in Kenyan labor courts. A separate case forced Meta to provide “proper medical, psychiatric, and psychological care” for its content moderators and data annotators. Yet the fight is far from over. Kenyan President Ruto, eager to position Nairobi as a tech hub, is siding with the tech giants. He plans to sign a bill making it harder to sue tech firms that outsource labor to Kenyan workers, leaving vulnerable employees with even fewer protections. The bill has been passed by the Kenyan Senate, though a group of 35 Kenyan tech workers is challenging it.

Labor groups across Africa, India, and Southeast Asia are pushing back against exploitative practices in the industry. In Kenya, the Content Moderators Union has been vocal in demanding better pay and mental health care for workers exposed to traumatic content. Investigations by outlets like Time and The Guardian highlight how workers like Okinyi are often left to cope alone, underscoring the urgency of systemic change. In India, the All India IT and ITeS Employees’ Union is campaigning for stronger protections for data workers. However, without pressure from consumers and governments, the workers’ efforts face an uphill battle. 

The AI industry should not thrive on exploitation. To build a more ethical future, three key steps are necessary: corporate transparency, government enforcement, and consumer advocacy. Tech companies must disclose to the public where and how they outsource labor, ensuring accountability for working conditions. Recent legislative setbacks, such as the proposed scaling back of the EU Corporate Sustainability Due Diligence Directive (CSDDD), threaten to undermine that accountability. If adopted, the changes would exempt the smaller, often more exploitative AI companies that need the most oversight, while applying the rules only to the largest firms. This makes government action in the Global South even more critical. These governments must adopt and rigorously enforce effective labor laws, penalize violations, and mandate mental health support. Finally, users of AI tools must demand fair wages and safe conditions for the workers behind the technology. Supporting organizations like the Content Moderators Union and the All India IT and ITeS Employees’ Union amplifies the voices of those affected and helps prevent others from enduring the trauma Okinyi faced.

The true cost of AI shouldn’t be measured in efficiency and innovation alone, but in the human toll it extracts. Transparency and government accountability are essential; without them the AI revolution will continue to exploit the vulnerable. As consumers, we must confront uncomfortable questions: Who pays the price for our convenience? Are we willing to prioritize humanity over unchecked progress? 

With undeniable power concentrated in the hands of private corporations, a multistakeholder approach combining corporate disclosure, regulatory enforcement, and public pressure is the only way to ensure ethical AI. Compliance frameworks must be mandatory, along with independent audits to evaluate adherence. Until then, the industry’s progress will remain founded on the suffering of workers like Okinyi.

Stuti Majmudar is a second-year law student with an interest in the intersection of law, technology, and human rights.

This blog is a part of iLIT’s student blog series. Each year, iLIT hosts a summer research assistant program, during which students may author a blog post on a topic of their choosing at the intersection of law, policy, and technology. You can read more student blog posts and other iLIT publications here.
