Beyond Silicon Valley: The global fight against AI’s ecological footprint


Adoption of artificial intelligence is on the rise worldwide, but the pace is uneven. As the global economy shifts increasingly toward AI-driven production and processes, wealthier nations are reaping the benefits faster, and poorer countries risk being left further behind, exacerbating economic and social divides, the United Nations has warned.

At the same time, Silicon Valley relies on resources in nations including Chile, Kenya, and the Philippines to develop its chips, train its AI models, and build its data centers. Workers and local communities in these countries are now pushing back against the demands and practices of big tech companies, which impose enormous environmental and social costs on them, Carine Roos, a doctoral researcher at the University of Sheffield in the U.K., told Rest of World.

“Many discussions still approach AI primarily as a digital technology, but in many of these countries, AI is becoming visible through the infrastructures that sustain it: data centers, mineral extraction, energy demand, water-intensive cooling systems, and digital labor chains,” she said. 

“While much of the economic value generated by AI remains concentrated in technological centres such as Silicon Valley, many of its environmental and social costs are in these territories,” Roos said. “This has prompted communities to question how these projects reshape lives and development trajectories.”

Rest of World spoke to some of the individuals and communities standing up to AI companies. 


Rodrigo Vallejos, 28, Santiago, Chile

Since 2022, environmental activist Rodrigo Vallejos has been monitoring data centers in Santiago, including those operated by Amazon, Google, and Microsoft. Chile has nearly 70 data centers, many clustered in and around Santiago. Vallejos wants greater environmental accountability from the companies. 

While a law student, Vallejos examined hundreds of publicly available documents on data centers in the country. He found that Microsoft had received government clearance in 2023 for a $317 million data center in Quilicura. The company claimed the facility would have a cooling system that would eliminate the need for water for more than half the year. In documents submitted to authorities, however, Vallejos found that Microsoft’s cooling system would rely partially on groundwater in an already water-stressed area. Microsoft said its data centers used only “small quantities of water” for humidifying. Vallejos and his neighbors filed over 100 citizen complaints against Microsoft, which were included in discussions for a National Data Center Plan, but when the plan was published in 2024, it did not include stricter environmental requirements.

That setback didn’t deter Vallejos. Last September, he filed a complaint over the lack of information about water consumption at Google’s data center in Cerrillos. Last year, Google said the site used far less water than the year before, “or roughly the amount consumed by a golf course.”

While he awaits the government’s response, Vallejos has two objectives: no more data centers in Santiago, and “appropriate environmental compensation from the companies with data centers, since they consume large amounts of water and energy,” he told Rest of World. “The environmental consequences of data centers will be worse in the future. The important thing is that these companies fulfill their corporate environmental commitments and contribute to the water reserves’ reconstruction. If not, their environmental claims are just greenwashing.”


Tania Rodríguez, 54, Santiago, Chile

Tania Rodríguez (far right) meets with other Mosacat activists at a member’s home in Santiago, Chile.

Tania Rodríguez was a schoolteacher in Santiago, but in the past few years she has become a prominent activist against data centers. She is one of the founding members of the Socio Environmental Community Movement for Land and Water, or Mosacat, an organization that fights resource exploitation in the country.

Google built its first data center in Santiago in 2015, when few people were aware of such facilities’ environmental impact. In 2020, according to official documents, Google’s second data center was authorized to extract more than 7 billion liters of water annually. When Mosacat discovered the staggering volume of water that would be diverted to the data center, it held several demonstrations. In response, Santiago’s environmental tribunal in 2024 suspended construction until Google reassessed its environmental impact.

Rodríguez and her small team of 10–15 continue to monitor AI companies and file complaints. They also tried to be involved in the country’s National Data Center Plan, but “dropped our dialogue with the government because we realized it basically had been building the projects for the tech companies,” Rodríguez told Rest of World.

Mosacat’s campaign against Google has brought international attention to the group, connecting them with other organizations and activists fighting data centers worldwide, she said. “We’re not against Big Tech, but in favor of nature. We don’t want our countries to get steamrolled by extractivism.” 


The Town of Quilicura, Chile

On January 31, people in Quilicura — located in one of Chile’s most water-stressed regions — volunteered to answer questions that might otherwise be posed to an AI chatbot. Volunteers answered more than 25,000 questions from participants in 67 countries in real time on Quili.AI, which estimated how much water would have been used if each question had gone to a chatbot. What the organizers did not anticipate was “the thousands of deeply human, often surprising exchanges,” according to Corporación NGEN, the nonprofit that organized the event.

To a question on how to stay hopeful, a volunteer discussed the myth of Sisyphus and echoed the words of Albert Camus: “There is nothing more urgent than asking ourselves why we live.” Three local artists drew images in response to prompts, including a dog smoking a pipe, a turkey high-fiving a cat, and a French bulldog with wings.

“For communities living alongside data centers, the environmental impact of AI isn’t abstract — it’s felt daily,” said Lorena Antiman, cultural mediator at Corporación NGEN.

“Quili.AI is about awareness — specifically around casual prompting — and creating space for a broader conversation about how these systems scale responsibly in water-stressed regions,” she said. “If people pause and think before casually prompting AI — or begin asking how and where these systems operate — that’s meaningful progress.”


Olimpia Coral Melo, 35, Puebla, Mexico 

When she was 18, Olimpia Coral Melo’s ex-boyfriend shared an intimate video of her on social media without her consent. That set her off on a seven-year journey to criminalize the nonconsensual sharing of sexual content and to help protect other women from digital violence.

Assisted by Defensoras Digitales, a women’s activist group against cyberbullying and harassment, Melo was instrumental in a series of legislative reforms that led to Mexico’s Olimpia Law in 2021, which criminalizes the distribution of nonconsensual sexual imagery. Her work also helped shape the Olimpia Law in Argentina and the Take It Down Act in the U.S.

But Mexico’s Olimpia Law has had limited impact so far: Only five people have been convicted under it, a shockingly low number considering that more than 18 million people were reported to be victims of cyber harassment in 2024, over half of them women. The law also does not cover AI-generated nonconsensual sexual imagery, nor does it hold social media platforms accountable.

Melo is now focusing her activism on adding deepfakes to the Olimpia Law, and on having tech companies look at the problem with a “clear, ethical lens,” she told Rest of World. “The conversation around digital violence cannot be limited to individual responsibility of those who post or share content. The problem is structural, and digital platforms have direct responsibility. Not only do they host the content, they amplify, recommend, and often monetize it.”


Joan Kinyua, 36, Nairobi, Kenya

Joan Kinyua started a job at data annotation contractor Samasource in 2016. Kinyua and her colleagues labeled data and images for Meta, autonomous vehicle companies, and others. Despite the sometimes violent and explicit imagery they viewed, there were no mental health safeguards, and the environment was “not only uncomfortable but exploitative,” Kinyua told Rest of World. Unionizing and raising concerns, she said, were risky.

Kinyua moved to CloudFactory, another data annotation contractor, and also began doing clickwork on Remotasks for extra income. She often started at 5 a.m. to complete tasks for a few hours before her regular job. After clocking off work at 4 p.m., she sometimes hid in the washroom to complete her platform tasks on the office laptop. The tasks had strict time limits, and failing to finish within the allotted time could leave her “without a single penny for hours of effort,” she said.

In eight years of working data annotation jobs at different companies, Kinyua experienced low pay, exploitative conditions, a lack of labor protections, and gender and class biases, she said. That led her to set up the Data Labelers Association with nine others last year to push for ethical labor standards and for the invisible workforce powering AI to be recognized, respected, and protected, she said. DLA advocates for fair pay, transparency, and accountability from companies and the government.

“We are building a movement where digital labor is visible, valued, and organized, and where the human foundation of AI is finally recognized as central, not peripheral, to innovation,” Kinyua said. “We are not anti-AI. We understand the transformative power of technology, but we believe that no technological advancement should come at the expense of human dignity, fair pay, mental well-being, or labor protections.”


Code AI, Philippines

Launched in January 2025, the Coalition of Digital Employees – Artificial Intelligence, or Code AI, is an initiative of the powerful BPO Industry Employees Network that represents about 1.8 million workers in the Philippines. It was formed after a call center employee was fired for revealing to Rest of World that an AI program acted as their quality assurance manager. 

Code AI has been helping workers seek compensation after being replaced by AI, and demanding greater labor protections for call center agents, data annotators, content moderators, and other tech workers who are vulnerable to job loss from AI. Renso Bajala, who spearheaded Code AI’s initial campaign, has since become an organizer with the coalition, assisting others who’ve been fired by their employers due to AI. The group has helped about 1,000 workers demand fair compensation, and assert their right to assemble and seek legal action, he told Rest of World.

“We try to harness the collective power among employees,” Bajala said. When tech workers are laid off, “a lot of times they are in a rush to find the next job rather than speak out.”

Code AI was involved in drafting the Magna Carta for BPO Workers, a draft legislation that aims to protect their rights. While the bill does not have specific provisions against companies citing AI to lay off workers, it strengthens workers’ overall position, Bajala said. Many call center employees who were laid off now work as freelance data annotators, he said.

Still, Code AI has struggled to keep up with how quickly AI is reshaping the workforce, Bajala said. “Sometimes it feels like we’re just getting to know an issue somewhere, and in the middle of that, we suddenly have to deal with a mass layoff somewhere else.”


