To understand the role of these workers, it's essential to look at how AI algorithms are built. Any algorithm rests on the data it uses to learn and improve. This data, whether textual, visual, or otherwise, must be annotated and analyzed by humans before it can be used to train AI models. These "annotators" work in the shadows, spending hours identifying, categorizing, and labeling thousands, even millions, of pieces of data to make them intelligible to machines.
A poignant example of this reality is that of Mophat Okinyi, whose story reveals the dark underbelly of the AI industry. Tasked with reviewing texts on sensitive subjects, Mophat was confronted daily with shocking and traumatizing content. Despite his crucial role in training AI models, his modest pay and precarious employment conditions left deep scars, even affecting his personal life. His experience is unfortunately not unique; millions of workers around the world perform similar tasks, often under difficult conditions and with little recognition.
These workers, scattered across the globe, form an invisible army building the future click by click. From the Philippines to Kenya to Venezuela, they spend their days on repetitive and tedious tasks, such as marking objects in images or verifying online content. Their work fuels the most high-profile technological advances, from autonomous cars to facial recognition and algorithmic surveillance. Without them, AI algorithms could not learn and improve, which underscores how crucial their contribution is.
Yet despite their essential role, these workers frequently face precarious conditions and inadequate pay. Wages are often low, well below international standards, and contracts tend to be temporary or informal. Many click workers must juggle irregular hours and difficult working conditions, with no guarantee of financial stability or job security. In some extreme cases, such as one documented in China, workers are exploited and forced to spend hours annotating data for derisory wages, well below the legal minimum.
This widespread precarity raises profound ethical questions about the very nature of AI and the way it is developed. Technology companies, often based in wealthy countries, take advantage of cheap, insecure labor in developing countries to train their AI models, while hiding behind a veil of secrecy and confidentiality. This opacity makes it difficult to know exactly who benefits from this work and under what conditions it is carried out.
For many click workers, these jobs represent a valuable safety net in economies that are unstable or in crisis. Yet the income is often not enough to guarantee a decent life, and many find themselves trapped in a cycle of uncertainty and dependency. Hopes of fair pay and recognition for their contribution to AI frequently go unfulfilled, leaving these workers vulnerable and at risk of exploitation.
Faced with this troubling reality, voices are being raised to demand greater transparency and accountability from technology companies. Initiatives such as the creation of unions or workers' organizations aim to defend the rights of click workers and fight against their exploitation. Yet the road to truly ethical and equitable AI remains long and fraught with obstacles.
Ultimately, recognizing and valuing the work of AI's invisible workers is absolutely necessary to ensure that technological advances benefit everyone, not just a privileged elite.
Because if AI is capable of changing our lives, we mustn't let it ruin those of the people who have worked so hard to build it.