The Intersection of Technology and Human Rights: Lessons from AI Development

“By sharing real-world examples of how artificial intelligence and advanced data science methods can be applied for good, we seek to inspire the reader to envision new possibilities for impactful change.”
— Juan M. Lavista Ferres, AI for Good: Applications in Sustainability, Humanitarian Action, and Health

Artificial intelligence (AI) holds immense promise for advancing human rights. From enhancing disaster response to improving healthcare access, AI-driven initiatives have demonstrated the potential to address some of the world's most pressing challenges. For instance, Mercy Corps' deployment of the AI tool Methods Matcher has enabled aid workers to access life-saving information during humanitarian crises, exemplifying AI's capacity to support vulnerable populations effectively (businessinsider.com).

However, it's crucial to recognize that AI systems are not inherently neutral. The data used to train these models often reflect existing societal biases and inequalities. When such data inform AI algorithms, there's a risk of perpetuating or even exacerbating these issues, particularly in areas like surveillance, criminal justice, and resource allocation.

By exploring both the transformative potential and the ethical pitfalls of AI, we begin to uncover how claims of neutrality often mask deeply embedded biases. These hidden assumptions are not mere technical oversights; they reflect subjective worldviews baked into the very design of AI systems. Understanding how and where bias arises, from dataset curation to model assumptions to output interpretation, reveals a broader social pattern: the illusion of objectivity often serves as a convenient shield, allowing institutions to sidestep accountability for ethically fraught decisions.

False Objectivity: The Hidden Biases in Code

AI is trained using vast amounts of data collected from real-world sources, much of which reflects human behavior, decisions, and history. People contribute directly by labeling data, designing algorithms, and choosing which datasets to use, while historical records, often biased or incomplete, form the foundation. This human involvement means AI systems inherit the values, prejudices, and blind spots embedded in the data they learn from.

Design choices in AI, from selecting which data to include, to defining what success looks like, are often presented as neutral or purely technical decisions. But these choices actually embed the designers’ assumptions about what matters, who counts, and which outcomes are acceptable. What seems like an objective standard is frequently a reflection of specific cultural, social, or political values, shaping AI systems in ways that can reinforce existing power structures while remaining invisible beneath a veneer of neutrality.
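To make the mechanism concrete, consider a minimal sketch in Python. Everything here is hypothetical and illustrative: the hiring scenario, the feature names, and the numbers are assumptions, not drawn from any real system. A classifier trained on historically skewed labels reproduces the skew through a proxy feature, even though group membership is never given to the model as an input.

```python
# Hypothetical illustration: a model trained on biased historical labels
# reproduces that bias via a proxy feature, despite never seeing "group".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # two groups with identical skill
skill = rng.normal(0, 1, n)
proxy = group + rng.normal(0, 0.5, n)    # e.g., a zip-code-like correlate

# Historical decisions favored group 0 regardless of skill.
hired = (skill + (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# The "neutral" model sees only skill and the proxy, never group itself.
X = np.column_stack([skill, proxy])
pred = LogisticRegression().fit(X, hired).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
```

The numbers are toy numbers; the pattern is the point. A model can take no protected attribute as input, look formally neutral, and still carry forward the history encoded in its training data.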

The belief in AI’s objectivity allows societies to distance themselves from the ethical consequences of these technologies. By framing algorithms as impartial arbiters, institutions can shift blame away from human decision-makers, creating a convenient loophole where responsibility becomes diffused or denied altogether. This illusion of neutrality not only obscures the values embedded in AI but also enables harmful outcomes to persist unchecked, as accountability is evaded under the guise of “just following the data.”

Automation and the Erosion of Consent

In a world increasingly shaped by invisible systems, where data flows silently and decisions are made before we even realize a question has been asked, one of the most foundational human rights, consent, has begun to quietly dissolve.

We see this play out in the ordinary: algorithmic welfare systems, like those trialed in the Netherlands or the U.S., automatically determine eligibility for critical services, often flagging individuals for fraud without human review. These systems don’t just make decisions; they redefine the terms of our participation.
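The pattern is easier to see written out. Below is a deliberately oversimplified, hypothetical sketch (the fields, weights, and threshold are invented for illustration; real systems rely on far more opaque models): an eligibility check that flags an applicant for fraud review based on a risk score, with no consent step and no human anywhere in the path.

```python
# Hypothetical sketch of an automated eligibility decision: the applicant is
# never asked anything, and no human reviews the outcome.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    household_size: int
    prior_flags: int  # earlier automated flags, not confirmed fraud

def risk_score(a: Applicant) -> float:
    # Invented weights, standing in for an opaque statistical model.
    return 0.4 * a.prior_flags + 0.3 * (a.household_size > 4) + 0.3 * (a.income < 1000)

def decide(a: Applicant) -> str:
    return "flagged_for_fraud_review" if risk_score(a) >= 0.5 else "approved"

print(decide(Applicant(income=900, household_size=5, prior_flags=0)))
# -> flagged_for_fraud_review, with no question ever put to the applicant
```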

Consent used to require a question. A form. A choice. Now, it’s often assumed. Or worse, engineered out altogether. The logic of automation is efficiency. Predictive systems are trained to anticipate our needs before we articulate them. But in doing so, they can sidestep the act of asking altogether. If a system “knows” what’s best, what role is left for personal agency?

And maybe reclaiming that agency begins with a new kind of question: not what technology can do, but what it should ask before it acts.

The Global Divide: Whose Rights, Whose Machines?

AI isn’t neutral—it’s built on choices. And those choices are rarely global.

Today, policy is shaped in Brussels, Silicon Valley, Beijing. But the tools built there are exported everywhere, often into contexts they were never designed for. Facial recognition used in refugee camps. Predictive policing rolled out in post-colonial states. Tools of power, shipped with no manual for justice. This is tech colonialism in real time.

When AI is built in the Global North and deployed in the Global South without regard for local norms, languages, or laws, it doesn’t just fail; it dominates. It imposes. Whose values are encoded in these systems? Who gets to decide what “fair” looks like? And who lives with the consequences?

Take the deployment of Kenya’s national digital ID system, Huduma Namba. Developed with influence from foreign consultants and modeled on Western biometric infrastructure, it was rolled out with little public input, raising alarm over surveillance, data security, and exclusion. Communities without formal IDs or fingerprints, often the poor, rural, or elderly, were left out of basic services entirely.

A truly global conversation about AI can't be led by a few. It must include the many, especially those at the margins, whose rights are most often overwritten in the name of innovation. If AI is to serve humanity, it must be built with humanity. All of it.

Lessons from the Edge of Development

Years of study in international relations and artificial intelligence have crystallized a few truths. Not just theories, but patterns, seen again and again across systems, borders, and communities:

1. Accountability is architectural.
You can’t bolt it on after launch.
If justice isn’t part of the blueprint, no amount of oversight will retrofit it. It has to be designed into the structure—into the data, the feedback loops, the decision paths.

2. Global ethics must be participatory.
Ethics imposed from the top, or from the West, aren’t ethics.
They’re rules. And rules without listening are rarely just. The most resilient, responsive systems are the ones built with, not for, the people they affect.

3. Transparency without power redistribution is empty.
We can’t just “open the black box” and call it a day.
If communities still have no say, no remedy, no right to refuse—what does seeing the algorithm actually change?

Beyond the Machine

AI is not just a tool—it’s a mirror. It reflects who we are, what we value, and who we’re willing to leave behind.

If we want AI to support human rights, not erode them, we have to move beyond technical fixes and toward structural change. That means centering the people most affected. Designing for dignity, not just efficiency. Asking different questions, and listening harder to the answers.

The future of AI is still being written. Let’s make sure it’s one in which justice isn’t an afterthought, but the starting point.

