The Consent Problem: Why AI Ethics Is Fundamentally a Question About Who Gets to Define "Good"
There is a peculiar silence at the heart of most AI ethics frameworks. Billions of dollars are being spent on the problem: the global AI ethics and governance market was valued at approximately $1.4 billion in 2023 and is projected to reach $9.4 billion by 2030 (Grand View Research, 2023). Yet the foundational question is rarely asked aloud: Who decided what "ethical AI" means, and by what authority?
This is not an abstract philosophical puzzle. It is an operational crisis unfolding in real time, inside the systems that determine who receives a loan, whose medical scan gets flagged, and which job application makes it past the automated filter. The ethics embedded in these systems did not emerge from democratic deliberation. They were, in most cases, authored by a relatively homogeneous group of engineers and product managers at a small number of institutions β and then exported to the world as universal standards.
Having spent years studying human-computer interaction across different cultural contexts, I have come to believe that the most urgent problem in AI ethics is not alignment, not bias mitigation, and not even existential risk. It is the consent problem (the problem of ethical authority): the question of whether any institution has the legitimate standing to define moral behavior for a technology that affects everyone.
The Historical Precedent We Keep Ignoring
Historians of technology will recognize this pattern immediately. When the printing press spread across Europe in the 15th century, the Catholic Church attempted to maintain authority over what constituted "legitimate" knowledge by establishing the Index Librorum Prohibitorum β a list of banned books. The Church's claim was not merely administrative; it was ontological. They believed they held the moral authority to define truth.
The parallel to today's AI governance landscape is striking. A small number of institutions β primarily concentrated in the United States and, increasingly, China β are effectively writing the Index for artificial intelligence. The European Union's AI Act, which entered into force in August 2024, represents perhaps the most comprehensive attempt to codify AI ethics into law. It classifies AI systems by risk level, prohibits certain applications outright (such as real-time remote biometric identification in publicly accessible spaces, subject to narrow exceptions), and mandates transparency for high-risk systems used in employment, education, and critical infrastructure.
This is admirable in its ambition. But consider what the EU AI Act does not do: it does not meaningfully incorporate the ethical frameworks of the Global South, where the majority of AI's end users actually live. It does not engage with Ubuntu philosophy's communitarian conception of personhood, which would evaluate an AI system's ethics not by individual rights but by its effect on relational bonds within a community. It does not account for Confucian role ethics, which would assess an AI's behavior differently depending on the hierarchical relationship between user and system.
"The medium is the message." β Marshall McLuhan, Understanding Media, 1964
McLuhan's insight applies here with uncomfortable precision: the structure of how AI ethics is being produced β top-down, institutionally centralized, export-oriented β is itself an ethical statement. It communicates that some ways of knowing are universal and others are local. That is a philosophical claim masquerading as a technical standard.
Three Competing Frameworks β and What Each Gets Right (and Wrong)
Framework 1: Consequentialist Alignment ("Maximize Beneficial Outcomes")
This is the dominant paradigm in Silicon Valley and most major AI labs. The core intuition is straightforward: an ethical AI is one that produces good outcomes, measured in aggregate. OpenAI's stated mission β "to ensure that artificial general intelligence benefits all of humanity" β is a consequentialist commitment.
What it gets right: It is results-oriented and measurable. It forces engineers to think about downstream effects rather than just technical elegance.
What it gets wrong: It requires agreement on what counts as a "good outcome" before the framework can function. In practice, this agreement is never actually achieved β it is assumed. The "benefit" in "beneficial AI" is typically operationalized using metrics that reflect the values of whoever is doing the measuring. A 2019 study published in Nature Machine Intelligence found that of 84 AI ethics documents analyzed globally, 73% originated from North America or Europe, and fewer than 5% incorporated non-Western ethical traditions. The consequentialist framework, in practice, tends to measure consequences that its authors already care about.
Framework 2: Deontological Constraints ("Respect Rights Regardless of Outcomes")
The EU AI Act leans heavily in this direction. Certain things are simply prohibited β not because they always produce bad outcomes, but because they violate fundamental rights. Real-time biometric surveillance is banned not because it is always ineffective, but because it is incompatible with human dignity as the EU understands it.
What it gets right: It provides clear prohibitions that are resistant to utilitarian override. It acknowledges that some harms are categorical, not merely statistical.
What it gets wrong: Rights-based frameworks are culturally specific in ways that their proponents rarely acknowledge. The concept of "individual privacy" as an inviolable right is not a universal human intuition β it is a product of specific historical and philosophical traditions, most prominently Kantian liberalism. Importing this framework wholesale into societies with different conceptions of the individual-community relationship is itself a form of ethical imperialism, however well-intentioned.
Framework 3: Virtue Ethics and Relational Approaches ("What Kind of Technology Should We Be Building?")
This is the least institutionally developed framework but, I would argue, the most philosophically honest. Rather than asking "what rules should AI follow?" or "what outcomes should AI optimize for?", virtue ethics asks: "what character should AI systems express, and what kind of human relationships do they cultivate?"
The philosopher Shannon Vallor, in her 2016 work Technology and the Virtues, argues that technologies should be evaluated by whether they support or erode the human capacities for practical wisdom (phronesis), care, and moral perception. This framework is particularly well-suited to AI because it focuses on the relationship between human and system rather than treating the system in isolation.
What it gets right: It is inherently pluralistic. Different cultures have different conceptions of virtue, and a virtue-ethics approach can accommodate this diversity without collapsing into relativism β because it still insists that some ways of relating to technology are more conducive to human flourishing than others.
What it gets wrong: It is difficult to operationalize. "Does this AI system cultivate practical wisdom in its users?" is a harder question to answer in a product review than "does it comply with GDPR?"
The Consent Problem in Practice: Three Concrete Cases
Case 1: Predictive Policing in Chicago
Between 2013 and 2019, the Chicago Police Department used a "Strategic Subject List" β an algorithmic system that assigned risk scores to individuals based on factors including prior arrests, age, and social connections to known offenders. The system was presented as a neutral, data-driven tool. In practice, as documented by a RAND Corporation evaluation and later reviews, it disproportionately flagged Black and Latino residents and showed no statistically significant effect on violent crime reduction.
The consent problem here is layered. The communities most affected by the system were not consulted in its design. The ethical framework embedded in the algorithm β that past arrest records are a legitimate predictor of future behavior β was never put to democratic deliberation. It was simply assumed. When the system was eventually discontinued in 2019, it was not because of a philosophical reckoning but because of investigative journalism and community pressure.
Actionable insight for practitioners: Before deploying any predictive system in a public-facing context, conduct a consent audit β a structured process that asks: (1) Who are the affected populations? (2) Were they consulted in the system's design? (3) Do they have meaningful recourse if the system affects them adversely? This is not merely a legal compliance exercise; it is an ethical prerequisite.
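For practitioners who want the consent audit to be more than a meeting note, here is one minimal sketch of how it might be recorded as a deployment gate. It is an illustration under my own assumptions: the field names, the three questions as fields, and the blocking rule are hypothetical rather than an established standard, and no checklist substitutes for actual consultation with affected communities.

```python
from dataclasses import dataclass, field

# Minimal sketch of a consent audit record. The structure and the blocking
# rule are illustrative assumptions, not an established standard.

@dataclass
class ConsentAudit:
    system_name: str
    affected_populations: list[str]                                  # (1) who is affected
    consultation_records: list[str] = field(default_factory=list)   # (2) how they were consulted
    recourse_mechanisms: list[str] = field(default_factory=list)    # (3) what recourse they have

    def gaps(self) -> list[str]:
        """Return the unanswered questions that should block deployment."""
        issues = []
        if not self.affected_populations:
            issues.append("Affected populations have not been identified.")
        if not self.consultation_records:
            issues.append("No record of consultation with affected populations.")
        if not self.recourse_mechanisms:
            issues.append("No meaningful recourse mechanism is documented.")
        return issues

audit = ConsentAudit(
    system_name="neighborhood risk-scoring pilot",
    affected_populations=["residents of the districts where the pilot runs"],
)
for gap in audit.gaps():
    print("BLOCKER:", gap)  # treat each gap as a reason not to deploy yet
```

The point of the sketch is not the code but the posture it encodes: unanswered questions are blockers, not footnotes.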
Case 2: Content Moderation and the Export of Values
Meta's content moderation policies, applied globally, are written primarily in English by teams based in California. A 2021 investigation by the Washington Post documented how these policies, when applied to Arabic-language content, systematically over-removed Palestinian political speech while under-moderating incitement in Hebrew. The moderation system had been trained on data that reflected the linguistic and cultural context of its creators β and then applied universally.
This is the consent problem in its most visible form: a private company's internal ethical framework, developed without input from the communities it governs, functioning as de facto public policy across 190 countries.
Actionable insight for practitioners: If you are building or deploying content moderation systems across multiple linguistic or cultural contexts, invest in culturally embedded review teams β not just translators, but individuals with deep contextual knowledge who can identify when a universal rule produces culturally specific injustice. This is operationally expensive. It is also ethically non-negotiable.
Case 3: Medical AI and the Representation Gap
A landmark 2019 study published in Science (Obermeyer et al.) found that a widely used healthcare algorithm systematically underestimated the medical needs of Black patients. The algorithm had been trained to use healthcare costs as a proxy for health needs β a reasonable-seeming choice that embedded a structural inequality: Black patients, due to systemic barriers to healthcare access, historically spent less on healthcare than equally sick white patients. The algorithm interpreted lower spending as lower need.
The researchers estimated that the bias affected approximately 200 million people in the United States alone. This was not a case of malicious intent. It was a case of an ethical framework β "use historical data to allocate resources efficiently" β that was never interrogated for the assumptions it carried.
Actionable insight for practitioners: Implement assumption audits as a standard part of model development. For every proxy variable in your training data, ask: "What social or historical conditions produced this data pattern, and do those conditions reflect the world we want to reproduce?" This requires collaboration between data scientists and social scientists β a pairing that remains, unfortunately, the exception rather than the rule in most AI development teams.
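To show what an assumption audit can surface, here is a deliberately toy sketch of the cost-for-need proxy problem. Every number in it is invented for illustration; nothing below is drawn from Obermeyer et al. (2019). The point is only that a proxy can look reasonable in aggregate and still systematically understate need for a group facing access barriers.

```python
import random

# Toy illustration of proxy bias. All numbers are invented for this sketch;
# none are taken from Obermeyer et al. (2019).
random.seed(0)

def simulate_patient(access_barrier: bool) -> tuple[float, float]:
    need = random.gauss(50, 10)                          # true (unobserved) health need
    spending = need * (0.6 if access_barrier else 1.0)   # barriers suppress observed cost
    return need, spending

group_a = [simulate_patient(access_barrier=False) for _ in range(1000)]
group_b = [simulate_patient(access_barrier=True) for _ in range(1000)]

# Assumption-audit question: does the proxy (spending) identify high-need
# patients equally well in both groups at a fixed enrollment threshold?
all_spending = sorted(s for _, s in group_a + group_b)
threshold = all_spending[-400]                           # "top 20%" by observed spending

def mean_need(group):
    return sum(n for n, _ in group) / len(group)

def enrolled_share(group):
    return sum(1 for _, s in group if s >= threshold) / len(group)

print(f"mean true need : A={mean_need(group_a):.1f}  B={mean_need(group_b):.1f}")
print(f"share enrolled : A={enrolled_share(group_a):.2f}  B={enrolled_share(group_b):.2f}")
# Equal need, very unequal enrollment: the proxy, not the population, produced the gap.
```

Running a sketch like this against every proxy variable, with the relevant social scientists in the room to say whether the suppression is real and how large it is, is what I mean by interrogating the assumptions the data carries.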
A Thought Experiment: The Ethical Constitution
Let me propose a thought experiment. Imagine that a global body β perhaps under UN auspices, but with genuine representation from all regions and ethical traditions β were tasked with drafting an "Ethical Constitution for AI." Not a technical standard, but a genuine constitutional document: a statement of values, rights, and limits that would govern the development and deployment of AI systems worldwide.
What would the process of drafting such a document reveal?
I suspect it would reveal, almost immediately, that there is no consensus on several foundational questions: Is individual privacy more important than collective security? Should AI systems be permitted to make irreversible decisions about human lives? Who bears responsibility when an AI system causes harm β the developer, the deployer, or the user?
These are not questions with technically correct answers. They are political questions in the deepest sense β questions about how a community chooses to organize itself and what it values. The fact that we are currently answering them by default, through the accumulated product decisions of a small number of technology companies, is not a neutral outcome. It is a choice β one that most of the world's population never consented to.
This thought experiment suggests that the most important infrastructure investment in AI ethics is not better algorithms or more comprehensive regulation. It is deliberative infrastructure: the institutions, processes, and forums through which genuine global consent can be constructed. The Global Partnership on AI (GPAI), established in 2020 with 29 member countries, represents a tentative step in this direction. But it remains advisory, underfunded, and largely invisible to the public it ostensibly represents.
Where I Stand β Carefully
I want to be precise about my own position, because intellectual honesty requires acknowledging its limits.
I do not believe that the absence of universal ethical consensus means that "anything goes" β that relativism is the only alternative to imperialism. Shannon Vallor is right that some ways of relating to technology are more conducive to human flourishing than others, even across cultural differences. A system that systematically deceives its users is not rendered ethical by cultural context.
But I do believe that the process by which ethical standards are established matters as much as their content. A framework that is technically correct but was developed without the participation of those it governs will, over time, generate resistance, workarounds, and ultimately failure β not because it is wrong, but because it lacks legitimacy.
The most durable ethical frameworks in human history β from constitutional democracies to international human rights law β derived their authority not from the wisdom of their authors alone, but from the processes of deliberation, contestation, and consent through which they were constructed. AI ethics will be no different.
The technology is moving faster than the deliberation. That gap is the actual crisis.
Actionable Steps You Can Take Now
Whether you are a developer, a policymaker, a researcher, or an engaged citizen, the consent problem in AI ethics is not someone else's responsibility. Here are concrete steps that are within reach:
- Demand transparency about ethical frameworks. When evaluating any AI system β for purchase, for use, for regulation β ask explicitly: "What ethical framework was used in the design of this system, and who was involved in developing it?" Treat the absence of a clear answer as a red flag.
- Support participatory design processes. Organizations like the Algorithmic Justice League and the AI Now Institute have developed methodologies for involving affected communities in AI system design. These methodologies are publicly available and increasingly well-documented. Advocate for their adoption in your institution.
- Treat "ethics by compliance" as insufficient. Regulatory compliance (GDPR, EU AI Act, etc.) sets a floor, not a ceiling. A system can be fully compliant and still be ethically problematic. Build internal review processes that go beyond legal requirements.
- Invest in cross-disciplinary teams. The assumption audit and consent audit I described above cannot be conducted by data scientists alone. They require historians, anthropologists, sociologists, and community advocates. If your organization cannot afford these collaborations internally, partner with universities or civil society organizations that can provide them.
- Engage with the policy process. The EU AI Act, the US Executive Order on AI (October 2023), and the UK's AI Safety Summit represent genuine opportunities for public input. Most people do not participate in these processes because they assume their input is unwelcome. It is not.
A Question to Carry Forward
I want to close not with a summary but with the question that I find myself returning to most frequently in my own research:
If the communities most affected by an AI system had been given genuine authority over its design β not merely consulted, but empowered to say "no" β which of the systems currently deployed in the world would still exist?
That question is uncomfortable precisely because the answer appears to be: not all of them. And that discomfort, I would suggest, is the most important data point in contemporary AI ethics. It tells us something not about the technology, but about ourselves β about the gap between the values we claim to hold and the processes we are willing to build to honor them.
The ethics of AI is, in the end, a mirror. What it reflects is our willingness β or unwillingness β to share the authority to define what "good" means.
Postscript: A Note on Method
A careful reader may have noticed that this essay itself enacts a particular methodological commitment: it attempts to reason about AI ethics without resolving the tension at its center. That is deliberate.
There is a temptation in writing about technology β one I have felt acutely in my own career β to arrive at conclusions. To offer the five-step framework, the definitive taxonomy, the reassuring synthesis. Readers want resolution, and writers want to provide it. But I have come to believe that premature resolution is one of the characteristic intellectual failures of the technology discourse of our era.
Marshall McLuhan observed that "we look at the present through a rearview mirror" β we perceive new phenomena through the conceptual frameworks inherited from previous eras. The risk in AI ethics is precisely this: that we resolve contemporary dilemmas using moral vocabularies developed for a world in which the relevant actors were always human, always accountable, always locatable in space and time. An AI system is none of these things in the traditional sense, and our ethical frameworks are still catching up.
So rather than a conclusion, I offer a posture.
The Posture of Productive Uncertainty
The philosopher John Dewey argued that inquiry begins not with questions but with problems β situations that are genuinely indeterminate, where the path forward is not yet clear. What distinguishes genuine inquiry from mere opinion-formation, in Dewey's account, is the willingness to remain in that state of indeterminacy long enough to understand it fully before reaching for resolution.
I would suggest that we are, collectively, not yet at the resolution stage of AI ethics. We are β or ought to be β still in the phase of understanding the problem.
This has several practical implications:
First, it means we should be suspicious of ethical frameworks that arrived too quickly. The speed with which major technology companies published AI ethics principles in the period 2016β2020 was, in retrospect, a warning sign rather than a reassurance. Genuine ethical deliberation is slow. It requires consultation, disagreement, revision, and the willingness to be changed by what one hears. Documents produced in quarterly planning cycles rarely reflect that process.
Second, it means that intellectual humility is not a weakness in this domain β it is a methodological requirement. The researchers and policymakers I find most credible in AI ethics are, almost without exception, those who are willing to say "I don't know" and to mean it. Certainty, in a field this young and this consequential, is almost always a sign that someone has stopped asking the hard questions.
Third, and perhaps most importantly, it means that the conversation must be kept open. One of the structural risks of institutionalized AI ethics β ethics boards, compliance frameworks, regulatory checkboxes β is that it can create the appearance of a closed question where the question remains radically open. When an organization can point to its ethics review process, the social pressure to continue asking ethical questions is reduced. This is a form of what the sociologist Robert Merton called "goal displacement": the process becomes the goal, and the original purpose β genuine ethical accountability β recedes.
Three Scenarios for the Next Decade
Allow me, in the spirit of futures thinking, to sketch three plausible trajectories for AI ethics over the next ten years. These are not predictions; they are what futurists sometimes call "scenario logics" β internally consistent narratives that illuminate different dimensions of the present.
Scenario One: The Compliance Plateau
In this scenario, AI ethics becomes fully institutionalized β and, in doing so, becomes fully domesticated. Every major organization has an ethics board. Every AI system ships with an ethical impact assessment. The vocabulary of AI ethics β fairness, transparency, accountability β becomes standard corporate language, deployed fluently and meant sincerely by almost no one.
This is not a dystopian scenario in the dramatic sense. No single catastrophic failure occurs. Instead, what happens is a gradual narrowing of the ethical imagination: the questions that can be asked within institutional frameworks crowd out the questions that cannot. The radical inquiry that genuine ethics requires is replaced by the procedural inquiry that compliance requires. The mirror, in this scenario, reflects back exactly what we want to see.
Scenario Two: The Crisis Inflection
In this scenario, a significant AI-related harm β large in scale, clearly attributable, and affecting populations with political voice β triggers a genuine renegotiation of the social contract around AI. This is the pattern that has historically driven meaningful technology regulation: the thalidomide crisis and pharmaceutical regulation, the 2008 financial collapse and banking reform, the early environmental disasters and the birth of environmental law.
The crisis inflection scenario is neither optimistic nor pessimistic. It assumes that meaningful change is possible, but that it requires the kind of concentrated, legible harm that diffuse, systemic harms rarely produce. The troubling implication is that the communities most likely to experience AI-related harm β those already marginalized, those with less political representation β are precisely the communities whose suffering is least likely to trigger the political response that drives reform.
Scenario Three: The Distributed Turn
In this scenario, the locus of AI ethics shifts β gradually, unevenly, but meaningfully β from institutions to communities. Driven by a combination of regulatory pressure (the EU AI Act's provisions on fundamental rights impact assessments), civil society organizing, and the growing technical capacity of non-specialist actors to audit and challenge AI systems, ethical authority becomes more genuinely distributed.
This scenario does not eliminate the tensions I have described throughout this essay. It relocates them. The difficult questions about whose values prevail, how conflicts are resolved, and what "good" means across cultural contexts do not disappear β but they are answered through processes that more closely resemble democratic deliberation than corporate governance.
Of the three scenarios, I find the distributed turn most worth working toward β while acknowledging that it is not the most likely outcome absent deliberate effort. History suggests that institutions do not voluntarily distribute their authority. The pressure must come from outside.
What I Believe (Held Lightly)
I have, in this essay and in the work that preceded it, tried to honor the methodological commitment to balance β to present the strongest versions of competing positions before offering my own. Let me now, in closing, be somewhat more direct about where my own thinking has arrived, with the caveat that I hold these views as working hypotheses rather than settled conclusions.
I believe that the central challenge of AI ethics is not technical but political. The question of how to build fair, transparent, and accountable AI systems is genuinely difficult, but it is not the hardest question. The hardest question is: who decides? And the answer to that question is determined not by algorithms or even by ethical frameworks, but by the distribution of power in the societies that build and deploy these systems.
I believe that the communities most affected by AI systems must have genuine authority β not consultative voice, but decision-making power β over those systems. This is not a novel principle; it is the basic logic of democratic governance applied to a new domain. The novelty lies only in the resistance it encounters from those who benefit from the current arrangement.
I believe that the ethics of AI cannot be separated from the broader ethics of the institutions that develop it. A technology company that treats its own workers as instruments of optimization cannot be trusted to treat the broader public differently. The internal culture of an organization is the most reliable predictor of its external ethical behavior, and no ethics board can substitute for an organizational culture that genuinely values the wellbeing of people over the efficiency of systems.
And I believe β this is the view I hold most tentatively β that we are at a genuinely critical juncture. The decisions made in the next five to ten years about how AI systems are governed, who has authority over them, and what values they are designed to serve will shape the technological infrastructure of human society for decades. The window for meaningful influence is open, but it is not permanently open.
This is why the question of participation matters so urgently. Not because participation is easy, or because the processes for it are currently adequate, but because the alternative β a technological order shaped by the preferences of a small number of actors, insulated from democratic accountability β is one that no honest reading of history gives us reason to accept.
A Final Thought on Mirrors
I began this essay, and the series of which it is a part, with the image of the algorithmic mirror β the idea that AI ethics reflects back to us our own values, our own biases, our own willingness to share authority over what "good" means.
I want to complicate that image slightly in closing.
A mirror is a passive object. It reflects what is placed before it without judgment, without memory, without the capacity to show us what we might become. The challenge of AI ethics is not merely reflective but constitutive: the choices we make about how to build and govern these systems do not simply reveal who we are, they help determine who we will be.
The philosopher Charles Taylor wrote of "strong evaluation" β the human capacity not merely to have preferences, but to evaluate those preferences, to ask whether the things we want are worth wanting. It is this capacity, Taylor argued, that is most distinctively human, and most distinctively at stake in questions of ethics.
The deepest question in AI ethics, then, is not whether machines can be ethical. It is whether we will use the occasion of their development to exercise our own capacity for strong evaluation β to ask, seriously and collectively, not just what we can build, but what we should build, and why, and for whom.
That question does not have a final answer. But the asking of it, honestly and persistently, may be the most important thing we do.
A question to carry with you:
If you were to design a process β not an AI system, but a process β for making decisions about AI that you would trust even if you did not know in advance which side of its decisions you would be on, what would that process look like?
Dr. Utopian is an independent researcher specializing in human-computer interaction, AI ethics, and the philosophy of technology. His work examines the intersection of emerging technologies and social systems across cultural contexts. He can be reached through his research institute's public correspondence channels.