The Consent Problem: AI Ethics Has Already Made Choices You Didn't Agree To
What does it mean to live under rules you never voted for, never read, and may not even know exist? This is not a hypothetical scenario. It is, arguably, the defining condition of AI ethics as it operates today. The field has spent considerable energy on questions of fairness, transparency, and accountability. But there is a prior question, one that appears to receive far less attention: who consented to any of this?
The consent problem sits at the intersection of political philosophy and AI ethics, and it may be the most structurally underexamined issue in the entire field. Not because scholars haven't noticed it (they have) but because it is genuinely difficult to resolve without dismantling assumptions about how ethical AI governance is supposed to work.
Why Consent Matters in AI Ethics, and Why It's Harder Than It Looks
In liberal democratic theory, consent is the foundation of legitimate authority. John Rawls, in A Theory of Justice, argued that just institutions are those whose principles rational individuals would agree to from behind a "veil of ignorance," not knowing their position in society. The consent framework assumes that those governed have, at minimum, a meaningful opportunity to participate in or refuse the terms being applied to them.
AI ethics, as it is currently practiced, does not obviously satisfy this condition.
When a hiring algorithm ranks your résumé, a credit-scoring model determines your loan eligibility, or a predictive policing tool flags your neighborhood for increased surveillance, you are subject to a set of ethical design choices made by engineers, ethicists, and product managers, most of whom you have never met, working in institutions you may never interact with, according to value frameworks you were never asked to endorse. The "ethical" architecture was built in. You arrived later.
"Algorithmic systems are not neutral. They embody the values of their creators and the contexts in which they were built." – Cathy O'Neil, Weapons of Math Destruction (2016)
This is not a new observation. What remains underexplored is the structural nature of the problem: the absence of consent in AI ethics is not an oversight that better communication can fix. It appears to be, in many cases, a feature of how the field is organized.
The Asymmetry of Ethical Design
Here is a thought experiment worth pausing on. Imagine a city government that decides to redesign all public spaces according to a new theory of social well-being: wider sidewalks, fewer benches, more surveillance cameras, optimized traffic flows. The designers are well-intentioned. They have read the literature. They have consulted experts. But they did not ask the residents.
Is this ethical urban planning? Most political philosophers would say no: not because the outcomes are necessarily bad, but because the process excluded the people most affected. Legitimacy, in democratic theory, derives not only from outcomes but from procedure.
AI ethics faces an analogous problem, arguably at a far larger scale. The entities making consequential ethical design choices (about what counts as "fair," what counts as "harmful," what kinds of outputs are permitted) are, in many cases, a relatively small and demographically homogeneous group. As Safiya Umoja Noble documents in Algorithms of Oppression, the communities most likely to be harmed by algorithmic systems are frequently the least represented in the rooms where those systems are designed.
The asymmetry runs deep. Those who design ethical AI frameworks typically possess:
- Technical fluency that allows them to understand what choices are being made
- Institutional access that allows them to influence those choices
- Cultural proximity to the value systems being encoded
Those subject to the systems typically possess none of these advantages. And unlike a law, which is at least publicly available, subject to legislative debate, and theoretically open to challenge, the ethical architecture of an AI system is often opaque, proprietary, and practically unreachable by those it governs.
Three Scenarios: How Consent Could Work (and Why Each Is Incomplete)
Let me propose three scenarios for how the consent problem might be addressed in AI ethics, and examine what each gets right and what it misses.
Scenario One: Informed Consent Through Disclosure
The most intuitive solution is disclosure: tell people what the system does, and let them consent or refuse. This is roughly the model behind GDPR's transparency requirements and various algorithmic impact assessment proposals.
The difficulty is that meaningful informed consent requires not just disclosure, but comprehensibility. Research on privacy policy readership suggests that most users do not read, and often cannot meaningfully interpret, the terms governing the systems they use. A disclosure that says "our hiring model uses a machine learning classifier trained on historical employment data" is technically transparent but practically unintelligible to most applicants.
There is also the problem of exit. Consent is only meaningful when refusal is a real option. If the hiring algorithm is used by most employers in your industry, or the credit-scoring model is used by most lenders, declining to participate may not be practically available. Consent under conditions of no viable alternative is, at best, a constrained form of agreement.
Scenario Two: Participatory Design and Community Inclusion
A more ambitious approach involves building consent into the design process itself: not just disclosing choices after they are made, but including affected communities in making them. This is the logic behind participatory AI design frameworks, which have gained traction in academic and some policy circles.
The case for this approach is compelling. As the analysis "AI Tools Are Now Deciding What Gets Deleted — And That's a Compliance Crisis" illustrates, when AI systems make consequential decisions, including what information survives and what disappears, the people most affected are rarely at the table when those systems are designed. Participatory design attempts to correct this structural exclusion.
The challenge is scalability and representation. Who counts as "the community"? How do you aggregate genuinely diverse preferences into coherent design constraints? And how do you prevent participatory processes from being captured by the most organized or most vocal stakeholders, rather than the most vulnerable?
There is also a deeper philosophical tension. Participatory design assumes that affected communities can meaningfully evaluate technical trade-offs: that they can, for instance, assess the difference between statistical parity and equalized odds as fairness criteria. This may be an unrealistic expectation without significant investment in what some researchers call "data literacy infrastructure."
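To make that trade-off concrete, here is a minimal sketch in Python of the two criteria just named. Everything in it is hypothetical and invented for illustration (the toy hiring data, the group labels, the function names); real audits use richer formulations. The point the sketch makes is exactly the one at issue: the same toy model can satisfy one criterion while violating the other, and seeing why requires precisely the technical fluency that affected communities are assumed to have.

```python
# Toy illustration only: data, names, and group labels are hypothetical.

def statistical_parity_gap(preds, groups):
    """Gap in positive-prediction rates between groups "A" and "B".

    Statistical parity asks: are favorable outcomes (here, being hired)
    distributed at equal rates across groups, ignoring true qualification?
    """
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))


def equalized_odds_gaps(preds, labels, groups):
    """Gaps in true-positive and false-positive rates between groups.

    Equalized odds asks: among the truly qualified (and the truly
    unqualified), are both groups treated the same?
    """
    def rates(g):
        tp = fp = pos = neg = 0
        for p, y, grp in zip(preds, labels, groups):
            if grp != g:
                continue
            if y == 1:
                pos += 1
                tp += p
            else:
                neg += 1
                fp += p
        return tp / pos, fp / neg

    tpr_a, fpr_a = rates("A")
    tpr_b, fpr_b = rates("B")
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)


# Hypothetical hiring decisions: 1 = hired (preds) or qualified (labels).
preds  = [1, 1, 0, 0, 1, 1, 0, 0]
labels = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A"] * 4 + ["B"] * 4

# Both groups are hired at a 50% rate, so statistical parity holds, yet
# qualified members of group B are rejected more often than those of A.
print(statistical_parity_gap(preds, groups))        # 0.0 -> parity satisfied
print(equalized_odds_gaps(preds, labels, groups))   # (0.5, 0.5) -> odds violated
```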
Scenario Three: Democratic Governance of AI Ethics Standards
The most structurally ambitious scenario is to treat AI ethical standards as a matter of democratic governance: subject to legislative deliberation, public comment, and electoral accountability, rather than private corporate decision-making or voluntary industry frameworks.
This is the direction some jurisdictions appear to be moving. The EU AI Act, which entered into force in 2024, represents perhaps the most significant attempt to subject AI systems to democratically enacted legal standards. It establishes risk tiers, mandatory transparency requirements, and prohibitions on certain uses, all through a legislative process, however imperfect, that involves elected representatives.
The limitation of this approach is the gap between the speed of democratic deliberation and the pace of AI deployment. By the time a regulatory framework is enacted, the technological landscape it was designed to govern may have shifted substantially. This is what I have previously called the "speed problem" in AI ethics: the field is structurally reactive, always arriving slightly after the harm has already been distributed.
There is also the question of which democracy. The EU AI Act reflects European legal traditions and value frameworks. It does not obviously represent the preferences of communities in Southeast Asia, sub-Saharan Africa, or Latin America, where AI systems built by American and Chinese companies are increasingly deployed. The consent problem, at the global scale, is also a sovereignty problem.
The "Moral Architecture" Problem in AI Ethics
There is a concept in political philosophy that Roberto Unger called "false necessity": the tendency to mistake contingent historical arrangements for inevitable structural features. AI ethics may suffer from a version of this, treating the current distribution of decision-making authority over ethical AI design as if it were natural rather than chosen.
The ethical frameworks embedded in AI systems are not discovered. They are designed. Someone decided what "fairness" means in a particular context. Someone decided which harms count and which don't. Someone decided whose values get encoded and whose get treated as edge cases. As I have argued in earlier analyses of the power problem in AI ethics, these are not neutral technical choices; they are value-laden selections made by specific people in specific institutional contexts.
The consent problem makes this visible in a particular way. If we acknowledge that AI ethical frameworks are designed rather than discovered, then the question of who designs them, and on whose authority, becomes unavoidable. And the current answer, in most cases, appears to be: a relatively small group of people, operating within commercial or academic institutions, without a robust mechanism for the affected public to participate in, contest, or revoke the choices being made on their behalf.
This connects to a broader concern about what the Eliza Play and the AI Mirror production surfaces so poignantly: we are increasingly living inside systems that reflect particular assumptions about human behavior, human value, and human flourishing, and we rarely have the opportunity to step outside the mirror and ask whether what it shows us is accurate, or whose image it was built to reflect.
What Can Actually Be Done? Practical Steps Toward Consent-Grounded AI Ethics
I want to be careful here not to offer false reassurance. The consent problem in AI ethics does not have a clean solution. But that does not mean nothing can be done. Several approaches appear to offer meaningful, if partial, improvements.
1. Algorithmic impact assessments with genuine public participation. Not just internal audits, but structured processes that bring affected communities into pre-deployment review. The Canadian Directive on Automated Decision-Making offers one model, requiring impact assessments for government AI systems, though critics note that public participation mechanisms remain limited.
2. Meaningful opt-out rights, not just disclosure. Consent requires exit options. Regulators and designers should take seriously the question of whether "consent" is genuine when refusal is practically impossible. Where exit is not feasible, stronger substantive protections, rather than procedural disclosure, may be the more honest approach.
3. Diverse epistemic communities in design. This goes beyond demographic diversity, though that matters too. It means including people who bring different knowledge frameworks, different experiences of harm, and different assumptions about what "good" looks like. The goal is not consensus, but richer contestation.
4. Sunset clauses and mandatory review. Ethical frameworks embedded in AI systems should not be permanent. Building in mandatory review periods, at which point the framework must be re-examined and re-justified, creates at least a structural occasion for revisiting choices made without adequate consent (a minimal sketch of how such a clause could be enforced follows this list).
5. Honest acknowledgment of the limits of consent. In some contexts, full consent may not be achievable, and pretending otherwise may be worse than acknowledging the gap. Where consent is structurally constrained, the ethical obligation shifts toward stronger substantive protections for the least powerful affected parties.
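On point 4: a sunset clause only binds if something enforces it. The sketch below is a hypothetical illustration of how a review deadline might travel with a deployed system as machine-checkable metadata. The record type, field names, and URL are my own invention, not an existing standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class EthicsFrameworkRecord:
    """Hypothetical governance metadata attached to a deployed AI system."""
    framework_version: str
    justification_doc: str   # where the framework's rationale is published
    adopted_on: date
    review_due: date         # the sunset clause: re-justify by this date

    def is_expired(self, today: Optional[date] = None) -> bool:
        """True once the framework has passed its mandatory review date."""
        return (today or date.today()) > self.review_due


record = EthicsFrameworkRecord(
    framework_version="2.1",
    justification_doc="https://example.org/impact-assessment-2.1",  # placeholder
    adopted_on=date(2025, 1, 15),
    review_due=date(2026, 1, 15),
)

# A deployment pipeline could refuse to serve any system whose framework is
# past review, turning "mandatory review" from an aspiration into an
# enforced precondition.
print(record.is_expired(date(2025, 6, 1)))   # False: within its review window
print(record.is_expired(date(2026, 6, 1)))   # True: re-justification overdue
```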
A Carefully Held View
I want to be honest about where I land on this, while holding the uncertainty it deserves.
The consent problem in AI ethics seems to me genuinely foundational: not a peripheral concern to be addressed after the "real" technical problems are solved, but a prior question about the legitimacy of the entire enterprise. If the ethical frameworks governing AI systems were designed without meaningful participation from those most affected, then the claim that those systems are "ethical" rests on a contested foundation.
At the same time, I am skeptical of framings that treat consent as a binary: either full democratic consent or illegitimate imposition. Most governance involves some degree of proxy, delegation, and imperfect representation. The question is not whether AI ethics can achieve perfect consent, but whether it is moving toward more legitimate, participatory, and contestable forms of governance, or away from them.
The evidence on this is, at best, mixed. There are genuine efforts to build more inclusive design processes. There are also strong institutional incentives (commercial, competitive, and reputational) that push toward faster deployment with less friction, which typically means less participation.
The honest answer is that we are, likely, somewhere in the middle: making slow, uneven, and sometimes reversible progress toward more legitimate AI ethics governance, while simultaneously deploying systems at a pace that outstrips the deliberative processes we are trying to build.
A Question to Sit With
Here is the question I want to leave with you:
If the ethical framework governing an AI system that affects your life was designed without your knowledge, by people you didn't choose, according to values you were never asked to endorse, at what point does calling it "ethical" become a category error?
This is not a rhetorical question designed to produce despair. It is an invitation to think carefully about what we actually mean when we invoke the word "ethics" in the context of AI, and whether the institutional arrangements we have built are genuinely capable of earning that name.
Tags: AI ethics, consent, democratic governance, algorithmic accountability, participatory design, technology philosophy, legitimacy
The Consent Problem: Why "Ethical AI" Needs a Democratic Theory
(Continued from previous section)
What Would Legitimate Consent Actually Look Like?
The question I posed at the end of the last section is not merely philosophical provocation. It points toward a genuine institutional design challenge: if consent is the missing foundation of AI ethics, what would it actually look like to build it in?
Here, I want to propose a thought experiment. Imagine three different models of consent in AI governance, each borrowed from an existing institutional tradition, and each carrying its own strengths and pathologies.
The first model is the informed consent model, borrowed from biomedical ethics. In this framework, individuals are told what a system does and how it affects them, and are given the option to opt out. This is the model most AI companies currently approximate, through privacy policies, terms of service, and cookie banners that almost no one reads. The problem, as any bioethicist will tell you, is that informed consent is only meaningful when the power asymmetry between the consenting party and the institution is manageable, when the information provided is genuinely comprehensible, and when the option to refuse carries no significant penalty. None of these conditions reliably hold in the current AI ecosystem. Refusing to consent to an algorithmic hiring system does not mean you get a human reviewer; it often means you are simply excluded from consideration entirely.
The second model is the democratic representation model, borrowed from political theory. In this framework, individuals do not consent to every decision that affects them; instead, they participate in choosing representatives who make decisions on their behalf, within a system of checks, balances, and accountability mechanisms. This is roughly the model that AI regulation frameworks like the EU AI Act (2024) attempt to approximate, delegating oversight to elected bodies and appointed regulators. The strength of this model is that it scales. Its weakness is that it depends on the quality of representation, and as Sheila Jasanoff has argued extensively in her work on co-production, the technical and the political are never cleanly separable. Regulators who do not understand the systems they regulate, or who are captured by the industries they oversee, produce legitimacy theater rather than legitimacy.
The third model is the affected community model, borrowed from environmental justice and participatory action research. In this framework, the communities most likely to bear the costs of a system have a formal, structural role in its design, deployment, and governance: not as consultants whose input can be acknowledged and set aside, but as co-authors with meaningful veto power. This model is the most demanding and the least commonly implemented. It is also, I would argue, the most honest about what consent actually requires when the stakes are asymmetrically distributed.
Each of these models is incomplete on its own. What genuine democratic legitimacy in AI ethics would require is something more like a layered architecture: combining elements of all three, calibrated to the nature and severity of the risks a given system poses.
The Structural Obstacles Are Not Accidental
Here is where I want to resist a temptation that I notice in much of the AI ethics literature: the temptation to frame the consent problem as primarily a design problem, solvable with the right participatory toolkit.
The obstacles to meaningful consent in AI governance are not primarily technical or methodological. They are structural and political.
Consider the incentive landscape. The organizations with the most resources to build AI systems are also the organizations with the strongest incentives to minimize the friction of governance. Participation takes time. Deliberation is expensive. Genuine veto power for affected communities creates legal and commercial uncertainty. In a competitive market where speed of deployment is a significant advantage, the rational institutional response is to invest in the appearance of participation rather than its substance: to fund ethics boards that have no binding authority, to conduct community consultations whose findings are not incorporated into design decisions, to publish responsible AI principles that carry no enforcement mechanism.
This is not cynicism about individuals. Many of the people working on AI ethics inside large technology organizations are genuinely committed to the values they articulate. But as the sociologist Robert Merton observed in his analysis of institutional behavior, good intentions operating within bad incentive structures tend to produce bad outcomes with a clear conscience. The problem is not the people; it is the architecture of incentives within which they work.
There is also a deeper epistemological obstacle, one that connects to what I have called in previous writing the "mirror problem." Meaningful consent requires that people understand what they are consenting to. But the harms of AI systems are often latent, distributed, and emergent; they do not announce themselves in advance. The communities most likely to be harmed are often the least positioned to anticipate the specific mechanisms of that harm, precisely because those mechanisms are embedded in technical systems whose logic is opaque even to their designers. You cannot meaningfully consent to a risk you cannot see, and you cannot see a risk that has not yet materialized in a form legible to your existing categories of understanding.
This is not a reason to abandon the project of consent. It is a reason to be honest about its limits, and to build governance structures that do not treat consent as a one-time transaction, but as an ongoing, revisable, and contestable relationship.
Three Scenarios for Where This Goes
Let me now offer what I think are the three most plausible trajectories for AI ethics governance over the next decade, each representing a different resolution of the consent problem.
Scenario One: Legitimacy by Accumulation. In this scenario, the slow, uneven progress I described earlier continues and gradually compounds. Regulatory frameworks mature. Affected communities develop more sophisticated capacities to engage with technical systems and advocate for their interests. Legal precedents establish clearer accountability mechanisms. Participatory design practices become more institutionalized, if never perfectly implemented. The consent problem is never fully solved, but it becomes progressively less severe as governance structures catch up with deployment realities. This is the optimistic scenario: not because it is painless, but because it is directionally correct.
Scenario Two: Legitimacy by Crisis. In this scenario, a series of high-profile, clearly attributable AI harms forces a rapid and reactive restructuring of governance: a discriminatory hiring algorithm that produces documented, legally actionable disparate impact, an autonomous system failure with unambiguous casualties, a large-scale manipulation campaign with traceable algorithmic amplification. This is historically the more common path for technology regulation: we did not get meaningful pharmaceutical regulation until thalidomide, meaningful financial regulation until the 2008 crisis, meaningful data privacy regulation until Cambridge Analytica. Crisis-driven governance is better than no governance, but it is also typically designed around the last crisis rather than the next one, and it tends to produce blunt instruments rather than nuanced ones.
Scenario Three: Legitimacy by Capture. In this scenario, the language of AI ethics is fully absorbed into corporate and state legitimacy projects without the substance. "Ethical AI" becomes a certification category, a marketing claim, and a regulatory compliance checkbox, divorced from any genuine accountability to affected communities. The consent problem is not solved; it is dissolved by being redefined out of existence. Ethics becomes what the ethics board approves, and the ethics board approves what the organization has already decided to do. This is the scenario that keeps me most intellectually alert: not because I think it is inevitable, but because the institutional incentives that produce it are the strongest of the three.
My own assessment, offered with appropriate epistemic humility, is that we are currently on a path that contains elements of all three scenarios simultaneously, in different domains and different jurisdictions. The question of which trajectory dominates is not predetermined. It is, in the most literal sense, a political question: a question about who has power, who exercises it, and in whose interests.
A Closing Reflection
Marshall McLuhan famously observed that "the medium is the message": the form of a communication technology shapes social relations independently of the content it carries. I have been thinking lately about a parallel claim for AI ethics: that the process of AI governance is the message.
When we ask whether an AI system is ethical, we are asking a question about values. But the answer we get is always shaped by the process through which it was produced. An ethics framework designed by a homogeneous group, without meaningful participation from affected communities, without genuine accountability mechanisms, and without revisability in light of emerging harms, carries a message about power and legitimacy that is independent of whatever values it formally endorses.
The consent problem, at its deepest level, is not a problem about information or comprehension. It is a problem about recognition: about whether the people most affected by AI systems are recognized as genuine moral and political agents whose participation in governance is a condition of its legitimacy, rather than a courtesy to be extended when convenient.
I do not think we have solved this problem. I am not certain we have fully understood it. But I think the question of democratic legitimacy in AI ethics (who gets to define the "ethical," by what process, accountable to whom) is the field's most important question, and the one least often asked with the seriousness it deserves.
A Question to Sit With
If consent is genuinely impossible to achieve at scale, if the complexity, speed, and opacity of AI systems make meaningful prior agreement structurally unattainable, then what alternative foundations for legitimate AI governance are available to us, and are any of them strong enough to bear the weight we are placing on them?
Tags: AI ethics, consent, democratic legitimacy, participatory governance, algorithmic accountability, technology philosophy, institutional design