Building on the insights from my previous article, "The Ghost in the Machine: How We Taught AI to See the World Through Our Eyes," which explored how AI systems inherit and amplify social biases embedded in data, this article dives deeper into why purely mathematical approaches are insufficient. To truly confront AI bias, we must embrace an ethical framework grounded in human dignity, autonomy, and democratic consent, ideas powerfully articulated by philosopher John Locke.
The Seductive Illusion of Mathematical Fairness
Consider COMPAS, the risk assessment tool used across the U.S. criminal justice system. When ProPublica investigated it in 2016, they found that while the algorithm was roughly equally accurate across racial groups, it was nearly twice as likely to falsely flag Black defendants as high-risk. Northpointe, the company behind COMPAS, countered that its algorithm satisfied predictive parity: a given risk score corresponded to the same likelihood of reoffending regardless of race.
Both were mathematically correct. Both called their system "fair." Both were wrong about what fairness actually means.
This isn't a bug in the math; it's a feature of mathematical thinking itself. Different fairness metrics often conflict, and choosing between them requires value judgments that no algorithm can make. In fact, when two groups have different base rates, it is mathematically impossible for a single classifier to satisfy both predictive parity and equal error rates at the same time. Mathematical models don't just simplify reality; they impose a particular way of seeing it, reducing human complexity to variables that can be measured and optimized.
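The COMPAS dispute can be reproduced with a few lines of arithmetic. The confusion-matrix numbers below are invented for illustration (they are not the actual COMPAS figures), but they show how one classifier can satisfy predictive parity while failing equal false positive rates:

```python
# Illustrative (made-up) confusion matrices for two groups,
# chosen to show that two common fairness metrics can disagree
# about the same classifier.

def rates(tp, fp, fn, tn):
    """Return (positive predictive value, false positive rate)."""
    ppv = tp / (tp + fp)  # of those flagged high-risk, how many reoffend
    fpr = fp / (fp + tn)  # of those who don't reoffend, how many are flagged
    return ppv, fpr

# Group A: higher base rate of the predicted outcome
ppv_a, fpr_a = rates(tp=40, fp=10, fn=20, tn=30)
# Group B: lower base rate
ppv_b, fpr_b = rates(tp=20, fp=5, fn=10, tn=65)

print(f"Group A: PPV={ppv_a:.2f}, FPR={fpr_a:.2f}")  # PPV=0.80, FPR=0.25
print(f"Group B: PPV={ppv_b:.2f}, FPR={fpr_b:.2f}")  # PPV=0.80, FPR=0.07

# Predictive parity holds: a high-risk flag means the same thing for
# both groups (PPV 0.80 vs 0.80) -- Northpointe's notion of fairness.
# Yet the false positive rate is more than three times higher for
# Group A -- ProPublica's notion of unfairness. Both calculations are
# correct; they simply measure different things.
```

Choosing which of these two numbers matters more is precisely the value judgment the math cannot make for us.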
Amazon's experimental recruiting tool demonstrated this perfectly. The system learned from hiring data that men were more likely to be hired for technical roles. When it faithfully reproduced this pattern, it wasn't being biased — it was being mathematically accurate. But accuracy in service of historical discrimination is just discrimination with better PR.
Why John Locke Matters for AI Ethics
To move beyond mathematical fairness, we need ethical foundations that prioritize human dignity over algorithmic efficiency. John Locke's political philosophy offers exactly this framework, with three principles that directly challenge how AI systems currently operate.
A Note on John Locke
John Locke (1632-1704) was an English philosopher whose ideas fundamentally shaped modern democratic thought. His concept of the "tabula rasa" (the mind as a blank slate shaped by experience) revolutionized how we understand human development and learning. More importantly for our discussion, Locke's Two Treatises of Government established the foundational principles of natural rights, government by consent, and the right to revolution that inspired the American Revolution and continue to influence democratic governance worldwide. His vision of legitimate authority as serving the people, not ruling over them, provides the perfect lens for examining how AI systems should relate to the humans they affect.
Natural Rights and Human Agency
Locke's concept of the state of nature emphasizes that humans are born free and equal, endowed with reason and the capacity for self-determination. This principle directly challenges how AI systems currently treat people: not as autonomous agents deserving respect, but as data points to be classified and optimized.
An AI system grounded in Lockean principles would prioritize human agency over algorithmic efficiency. It would recognize that people have the right to be treated as individuals, not as statistical representations of their demographic group. Human dignity cannot be reduced to variables in a model, no matter how sophisticated.
The Foundation of Consent
For Locke, legitimate power rests on the consent of the governed. In the AI context, this means communities and individuals should have meaningful input into how AI systems that affect them are designed and deployed. This isn't just about privacy notices; it's about genuine democratic participation in shaping the technologies that shape our lives.
Consider how different AI development would look if affected communities had real power to reject systems they deemed harmful: if tenants could refuse biased rental screening algorithms, if job seekers could demand transparent hiring processes, if defendants could challenge risk assessment tools that perpetuate injustice.
The principle of consent transforms AI from something done to people into something done with people.
Purpose-Driven Design
Locke reminds us that legitimate authority serves a purpose: protecting natural rights and promoting the common good. AI systems should be held to the same standard, designed with clear purposes aligned with enhancing human flourishing, not just maximizing profit or efficiency.
This means asking different questions during AI development: Does this system enhance human agency or diminish it? Does it promote equality or entrench hierarchy? Does it serve the common good or private interests? These questions can't be answered with mathematics alone; they require moral reasoning and democratic deliberation.
From Philosophy to Practice: Building Consent-Based AI
Grounding AI in Lockean principles requires new institutions and practices that embody democratic values:
Algorithmic Impact Assessments: Before deploying AI systems in high-stakes domains, organizations should conduct comprehensive assessments that include input from affected stakeholders and evaluation of whether the system's purpose aligns with democratic values.
Community Oversight Boards: For AI systems that affect entire communities, like predictive policing algorithms or school funding formulas, oversight should include representatives from affected communities with power to approve, modify, or reject systems.
Algorithmic Transparency Rights: Individuals should have the right to understand how AI systems make decisions that affect them, including information about the system's purpose, training data, known limitations, and embedded values.
Democratic Technology Assessment: Society needs new mechanisms for democratic deliberation about AI development: citizen panels, public hearings, and forums where communities can engage with the ethical implications of AI systems before deployment.
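To make the transparency proposal above concrete, here is a minimal sketch of what a machine-readable disclosure record might contain, loosely inspired by the "model card" idea. The field names and the example values are hypothetical, not drawn from any regulation or existing standard:

```python
# A hypothetical schema for an algorithmic transparency disclosure.
# Field names and example values are illustrative only.

from dataclasses import dataclass

@dataclass
class TransparencyDisclosure:
    system_name: str
    stated_purpose: str           # what the system is for, in plain language
    training_data: str            # provenance of the data it learned from
    known_limitations: list[str]  # documented failure modes and gaps
    embedded_values: list[str]    # value judgments baked into the design
    oversight_body: str           # who can approve, modify, or reject it
    appeal_process: str           # how an affected person contests a decision

disclosure = TransparencyDisclosure(
    system_name="Rental applicant screening (example)",
    stated_purpose="Rank rental applications for manual review",
    training_data="Ten years of past leasing decisions, 2013-2023",
    known_limitations=["Underrepresents applicants without credit history"],
    embedded_values=["Treats prior evictions as strongly disqualifying"],
    oversight_body="City tenant advisory board",
    appeal_process="Written appeal reviewed by a human within 14 days",
)

print(disclosure.stated_purpose)
```

The point of such a record is less the format than the obligation: every field forces the deploying organization to state, on the record, a fact or value choice that affected people would otherwise have to guess at.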
The Path Forward: Math as Tool, Not Master
Mathematical fairness metrics are like thermometers: useful for measuring something, but useless for deciding what temperature we want. They can tell us whether an algorithm treats different groups differently, but they can't tell us whether those differences are justified or how to balance competing values.
The solution isn't to abandon mathematical tools but to subordinate them to ethical principles and democratic processes. Math should serve human values, not substitute for them.
Toward AI That Reflects Who We Want to Be
AI bias will never be fully "solved" by mathematics alone. As John Locke's philosophy teaches us, technology gains legitimacy through transparency, consent, and alignment with purposes that serve human flourishing.
This isn't just about making AI less biased; it's about making AI more human. It's about creating technologies that enhance rather than diminish human agency, that promote rather than undermine democratic values, that serve rather than supplant human judgment.
The goal is not to eliminate bias entirely, an impossible task, but to create technologies that are accountable to society and resonate with a shared vision of justice. Only then can AI become a true partner in human progress.
The algorithms of tomorrow will be shaped by the choices we make today. The question is not whether we can build perfect AI, but whether we can build AI that makes us more perfect, more just, more free, more human.
The ghost in the machine is real, but it's not beyond our power to exorcise. We just have to be willing to do the work: not only the technical work, but the moral work of building a more just world.