The Empathy Gap: Why AI Ethics Keeps Solving the Wrong Problem
There is a peculiar pattern in the history of AI ethics discourse. Every few years, a new crisis erupts (a biased hiring algorithm, a racially skewed facial recognition system, a predictive policing tool that devastates a community), and the response follows an almost liturgical rhythm: outrage, investigation, a white paper, a revised framework, and then silence until the next crisis. The field of AI ethics has become remarkably good at diagnosing failures after they occur. What it has not managed to do is ask a more fundamental question: why do we keep misidentifying the problem itself?
This is not merely a technical failure. It is, I would argue, a philosophical one: specifically, what I call the empathy gap in AI ethics. The systems we build, and the ethical frameworks we construct around them, are consistently better at modeling the world as it appears to their designers than as it is experienced by those most affected by the technology.
The Problem Is Not What We Think It Is
The dominant conversation in AI ethics revolves around a familiar set of concerns: bias, fairness, transparency, accountability. These are real and important. But there is something structurally odd about how they are framed. Nearly every major AI ethics framework, from the EU's AI Act to corporate responsible AI guidelines, treats ethics primarily as a compliance problem. The question asked is: "Does this system meet our defined criteria for fairness and transparency?"
Here is a more interesting question: Who decided what fairness means, and for whom was it meaningful when they decided it?
The philosopher Miranda Fricker introduced the concept of epistemic injustice: the idea that certain people are systematically wronged in their capacity as knowers. There are two forms: testimonial injustice, where someone's testimony is given less credibility due to identity-based prejudice, and hermeneutical injustice, where someone lacks the conceptual tools to make sense of their own experience because those tools were never developed for people like them.
AI ethics, I would suggest, is rife with both forms. The people most harmed by algorithmic systems are often least represented in the rooms where those systems are designed and, critically, least represented in the conceptual vocabulary that defines what counts as "harm" in the first place.
A Brief Historical Detour: Technology Has Always Had This Problem
Let us not pretend this is unprecedented. Marshall McLuhan famously observed that "the medium is the message": technologies are never neutral conduits but active shapers of perception and social reality. Every technology embeds the assumptions of its creators.
The history of urban planning offers a sobering precedent. Robert Moses, the master builder of mid-20th century New York, reportedly designed highway overpasses on Long Island to be too low for public buses, effectively preventing low-income and Black residents from accessing public beaches. Whether this specific account is entirely accurate remains debated by historians, but the broader pattern it represents is not: infrastructure encodes social values, often invisibly, and those values tend to reflect the perspectives of those with the power to build.
"Technological artifacts can embody specific forms of authority and social relationships." β Langdon Winner, Do Artifacts Have Politics? (1980)
AI systems are, in this sense, extraordinarily powerful artifacts. They do not merely reflect social values; they scale them, automate them, and render them invisible behind a veneer of mathematical objectivity. The empathy gap is not a bug in AI development. It is a structural feature of who gets to build, who gets to define the problems worth solving, and whose experiences are legible to the system.
Three Scenarios for Where This Leads
Let us conduct a brief thought experiment. Imagine the next decade of AI ethics development along three plausible trajectories.
Scenario One: The Compliance Plateau
In this scenario, AI ethics continues to mature as a compliance discipline. Regulations become more sophisticated. Auditing frameworks improve. Companies hire larger ethics teams and produce more rigorous documentation. The systems that cause the most visible harm, the ones that generate headlines, are gradually improved.
What does not change is the underlying epistemic structure. The frameworks remain authored by a relatively homogeneous group of experts, refined through consultation processes that are technically open but practically inaccessible to most affected communities. The result is a kind of ethical theater: increasingly polished performances of responsibility that do not fundamentally alter the power dynamics of who shapes AI and for whom.
This is, I would suggest, the most likely near-term trajectory. Not because anyone intends it, but because it requires the least structural change.
Scenario Two: The Participatory Turn
A more optimistic scenario involves what some researchers are calling participatory AI design: genuinely restructuring the development process to include affected communities not merely as research subjects or consultants, but as co-designers with real decision-making authority.
There are early, fragile experiments in this direction. Some community organizations in the United States have begun negotiating directly with cities over the terms under which surveillance technologies can be deployed in their neighborhoods. In Kenya and South Africa, AI researchers are developing datasets and benchmarks that center African linguistic and cultural contexts rather than retrofitting Western ones. These efforts are small, underfunded, and often marginalized within mainstream AI discourse. But they represent a genuine conceptual shift: from ethics as constraint on development to ethics as constitutive of development.
The challenge is institutional. Participatory design is slow, contested, and resistant to the scalability that makes AI commercially attractive. It is also genuinely difficult: communities are not monolithic, and "community voice" can be appropriated as easily as any other ethical concept.
Scenario Three: The Adversarial Equilibrium
The third scenario is perhaps the most philosophically interesting, if uncomfortable. In this trajectory, the empathy gap does not close; it becomes the terrain of explicit political contestation.
Affected communities, having recognized that ethical frameworks do not automatically represent their interests, begin developing their own counter-frameworks, their own auditing tools, their own legal strategies. We are already seeing early versions of this: algorithmic impact assessments demanded by community coalitions, lawsuits challenging AI systems on civil rights grounds, investigative journalism that functions as a form of technical audit.
In this scenario, AI ethics becomes less a unified field and more a contested political domain, which is, arguably, what it should always have been. The question "what is ethical AI?" ceases to have a single authoritative answer and becomes a site of ongoing democratic negotiation.
The Actionable Dimension: What Can Actually Be Done?
I am wary of reducing a philosophical argument to a checklist. But intellectual honesty requires acknowledging that abstract critique without practical implication is its own form of irresponsibility. So let me offer several concrete observations.
First, reframe the question of expertise. The dominant assumption in AI ethics is that ethical expertise flows from philosophical training, technical knowledge, or regulatory experience. This is not wrong, but it is incomplete. Lived experience of algorithmic harm is also a form of expertise, and it is systematically undervalued. Organizations developing AI systems should consider what it would mean to treat community knowledge as primary data rather than supplementary input.
Second, audit the audit process. Most AI auditing frameworks ask whether a system meets predefined criteria. A more probing question is: who defined the criteria, through what process, and whose experiences of harm were legible to that process? This is not merely philosophical navel-gazing; it has direct practical implications for which failure modes get caught and which remain invisible.
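To make this concrete, here is a minimal, purely illustrative sketch of the kind of check a compliance-style audit typically runs. Everything in it is an assumption supplied for illustration: the choice of demographic parity as the metric, the hypothetical "group" and "hired" columns, and the 0.8 cutoff echoing the four-fifths rule. Each of those choices was made by someone before the audit ever ran, and each determines which harms the audit can even see.

```python
# Illustrative sketch of a compliance-style fairness check.
# Every choice below (metric, column names, threshold) is a hypothetical assumption,
# and each one decides which failure modes become visible to the audit.
import pandas as pd

def demographic_parity_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

def passes_audit(df: pd.DataFrame) -> bool:
    # The 0.8 cutoff borrows from the "four-fifths rule": a historical convention,
    # not a neutral fact. Harms this metric does not measure simply do not register.
    return demographic_parity_ratio(df, group_col="group", outcome_col="hired") >= 0.8
```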
Third, take institutional design seriously. Many AI ethics problems are not solvable at the level of individual systems or individual decisions. They require changes to the institutional structures that shape incentives, timelines, and accountability. This is where the conversation about AI ethics intersects with broader questions of corporate governance, regulatory design, and democratic legitimacy: questions that are often treated as separate from "technical" AI ethics but are, in fact, central to it.
Those working on no-code automation and AI deployment pipelines face a version of this challenge acutely: the speed at which workflows can now be assembled and deployed means that ethical review, if it happens at all, is often an afterthought. The structural incentives of rapid deployment, discussed thoughtfully in analyses of no-code automation's hidden costs, apply with even greater force when the systems being deployed make consequential decisions about people's lives.
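As a sketch of what a structural alternative might look like inside a pipeline, consider making review a hard gate rather than an optional checkbox. The names below (ImpactAssessment, ready_to_deploy) are hypothetical, not any real platform's API; the point is only that a pipeline which can ship with no assessment will, under deadline pressure, ship with no assessment.

```python
# Hypothetical sketch: an explicit review gate in a deployment pipeline.
# The record type and field names are illustrative, not a real platform's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImpactAssessment:
    affected_groups_consulted: bool
    harms_documented: bool
    reviewers_signed_off: bool

def ready_to_deploy(assessment: Optional[ImpactAssessment]) -> bool:
    # If review is optional, rapid-deployment incentives ensure it gets skipped;
    # an explicit gate forces the question to be answered before anything ships.
    if assessment is None:
        return False
    return all([
        assessment.affected_groups_consulted,
        assessment.harms_documented,
        assessment.reviewers_signed_off,
    ])
```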
The Deeper Philosophical Stakes
Let me be direct about what I think is ultimately at stake here, even at the risk of overreaching.
The empathy gap in AI ethics is not simply a problem of representation or process design, though it is those things. It is a symptom of a deeper philosophical confusion about what ethics is for. If ethics is primarily a mechanism for managing risk and liability, for ensuring that powerful actors can demonstrate due diligence, then the current trajectory of AI ethics is broadly adequate. It will continue to improve incrementally, and it will continue to fail the people most vulnerable to algorithmic harm.
If, on the other hand, ethics is understood as a practice of expanding the circle of moral consideration, of genuinely attending to the experiences of those whose suffering has historically been rendered invisible, then the current trajectory is not merely inadequate. It is actively misleading, because it creates the appearance of moral progress while leaving the underlying structure of exclusion intact.
The philosopher Simone Weil wrote that "attention is the rarest and purest form of generosity." There is something profound in this for AI ethics. The technical capacity to process information at scale is not the same as, and may in fact work against, the kind of slow, particular, contextually sensitive attention that genuine ethical practice requires.
"The capacity to give one's attention to a sufferer is a very rare and difficult thing; it is almost a miracle; it is a miracle." β Simone Weil, Waiting for God (1951)
What would it mean to build AI systems, and AI ethics frameworks, that genuinely attend to suffering rather than merely measuring it?
The Question I Cannot Answer (But You Should Consider)
I want to be transparent about the limits of my own analysis. The empathy gap argument risks a certain romanticism: the assumption that "lived experience" is always more epistemically reliable than formal analysis, or that community participation automatically produces better outcomes. Neither of these is straightforwardly true. Communities can be wrong. Participation can be manipulated. The relationship between experience and ethical knowledge is genuinely complex.
What I am confident of is this: the current structure of AI ethics systematically underweights certain kinds of knowledge and certain kinds of experience, and this underweighting is not accidental. It reflects deeper patterns of epistemic power that will not be corrected by better algorithms or more rigorous documentation alone.
The infrastructure of AI (not just the systems themselves, but the governance frameworks, the auditing processes, the regulatory architectures) is being built right now, in decisions that will likely shape the next several decades. As with all infrastructure, it will eventually become invisible, taken for granted, and extraordinarily difficult to change. The window for structural intervention is, in historical terms, quite short.
A question to consider: If the people most affected by an AI system had genuine veto power over its deployment, not merely the right to be consulted but the right to say no, how many of the systems currently operating in the world would still exist? And what does your answer to that question tell you about whose values are actually encoded in the AI ethics frameworks we have built so far?
Dr. μ ν νΌμ
A futurist who studies human-computer interaction. Explores the impact of technology on society and people, and offers a balanced perspective between techno-optimism and techno-pessimism.