Introduction
The promise of AI as an automated, error-free technology often masks the unseen human labour that makes it possible. From labelling raw data to moderating harmful content, “ghost workers” form the backbone of AI ecosystems. Yet, their contributions remain invisible, underpaid, and unprotected. The debate on AI is incomplete without recognising the human cost of automation, a matter of global ethics, labour rights, and governance.
The Hidden Human Cost of AI
Why is AI’s invisible labour in the news?
AI companies, especially in Silicon Valley, outsource essential annotation and moderation work to low-paid workers in developing countries. Recent revelations of exploitative conditions, such as Kenyan workers earning less than $2 an hour for traumatic tasks like filtering violent content, have exposed the dark underbelly of AI. This has amplified global concerns about modern-day slavery, violation of labour rights, and the absence of legal safeguards in AI supply chains.
Areas of Human Involvement in AI
- Data Annotation: Machines cannot interpret meaning; humans label text, audio, video, and images to train AI models.
- Training LLMs: Models like ChatGPT and Gemini depend on supervised learning and reinforcement learning from human feedback, requiring annotators to correct errors, flag jailbreak attempts, and refine model responses.
- Subject Expertise Gap: Workers without domain knowledge label complex data, e.g., Kenyan annotators labelling medical scans, leading to inaccurate AI outputs.
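The annotation step above can be sketched with a toy example. Everything here is illustrative, not any company's actual pipeline: a handful of hypothetical text samples, human-assigned labels, and a deliberately simple word-count classifier standing in for a real model.

```python
# A minimal sketch of how human-labelled data drives supervised learning.
# The samples, labels, and classifier below are illustrative assumptions.

from collections import Counter

# Step 1: human annotators attach a label to each raw text sample.
annotated = [
    ("the product arrived broken", "negative"),
    ("terrible support, never again", "negative"),
    ("works great, very happy", "positive"),
    ("excellent quality and fast delivery", "positive"),
]

# Step 2: a toy "model" learns word-label associations from those labels.
def train(samples):
    word_labels = {}
    for text, label in samples:
        for word in text.split():
            word_labels.setdefault(word, Counter())[label] += 1
    return word_labels

# Step 3: prediction is only as good as the labels the annotators supplied.
def predict(model, text):
    votes = Counter()
    for word in text.split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

model = train(annotated)
print(predict(model, "great quality"))  # prints "positive"
```

The point of the sketch: the "intelligence" lives in the labels. If annotators mislabel data, as with untrained workers labelling medical scans, the model faithfully learns the errors.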
Are Automated Features Truly Automated?
- Content Moderation: Social media “filters” rely on humans reviewing sensitive content (pornography, beheadings, bestiality). This causes severe mental health risks like PTSD, anxiety, and depression.
- AI-Generated Media: Voice actors, children, and performers record human sounds and actions for training datasets.
- Case Study (2024): Kenyan workers wrote to U.S. President Biden describing their labour as “modern-day slavery.”
What Challenges Do Workers Face?
- Poor Wages: Under $2/hour, far below comparable global standards.
- Harsh Conditions: Tight deadlines of a few seconds/minutes per task; strict surveillance; risk of instant termination.
- Union Busting: Workers raising concerns are dismissed, with collective bargaining actively suppressed.
- Fragmented Supply Chains: Work outsourced via intermediary digital platforms; lack of transparency about the actual employer.
Why Is This a Global Governance Issue?
- Exploitation in Developing Countries: Kenya, India, Pakistan, Philippines, and China host the bulk of annotators, highlighting global North-South labour inequities.
- Digital Labour Standards: Current international labour frameworks inadequately cover digital gig work.
- Ethical Responsibility: Big Tech profits from AI breakthroughs while rendering the labour behind them invisible.
- Need for Regulation: Stricter global and national laws must ensure fair pay, transparency, and dignity at work.
Way Forward
- Transparency Mandates: Disclosure of supply chains by tech companies.
- Fair Labour Standards: Minimum wages, occupational safety norms, and psychological health safeguards.
- Recognition of Workers: From “ghost workers” to “digital labour force.”
- Global Collaboration: Similar to climate treaties, AI labour governance requires multilateral regulation.
Conclusion
Artificial Intelligence is not fully autonomous—it rests on millions of invisible workers whose exploitation challenges the ethics of the digital age. For India and the world, the future of AI must balance innovation with human dignity, equity, and justice. Without recognising and regulating this labour, the AI revolution risks deepening global inequalities.
Value Addition
- Global Frameworks and Conventions
- Conceptual Tools and Keywords
- Reports and Studies
- Indian Context
PYQ Relevance
[UPSC 2023] Introduce the concept of Artificial Intelligence (AI). How does AI help clinical diagnosis? Do you perceive any threat to privacy of the individual in the use of AI in healthcare?
Linkage: AI aids clinical diagnosis by analysing medical scans and predicting outcomes with high accuracy, but it relies on human annotators to label sensitive data. The article shows how even untrained workers in Kenya were tasked with labelling medical scans, raising concerns of reliability. Such outsourcing also heightens the risk of privacy violations in handling patient data across insecure global supply chains.