AI Systems: The Invisible Legislator in Society
What does it mean when the rules governing human behavior are no longer written by humans, at least not in any way we can easily read, audit, or contest?
This is not a hypothetical question posed by science fiction. It is the operational reality of 2024. Across healthcare triage systems, credit scoring algorithms, judicial sentencing tools, and content moderation platforms, AI systems are making, or heavily influencing, decisions that shape life outcomes for millions of people. And yet the philosophical frameworks we use to evaluate these decisions remain, at best, borrowed from older traditions and, at worst, entirely absent from the conversation.
I have spent considerable time examining the intersection of technology and social philosophy, and I find myself returning to a single, uncomfortable observation: we are building legislators we cannot vote out, judges we cannot cross-examine, and moral authorities we did not elect. The question of AI ethics is, at its core, a question about political philosophy disguised as a technical problem.
A Brief Historical Detour: When Infrastructure Becomes Authority
Here is a historical example worth sitting with for a moment. Consider the history of urban planning.
In the mid-twentieth century, urban planner Robert Moses designed a series of overpasses on Long Island's parkways with unusually low clearances, too low for public buses to pass beneath them. The effect, whether intended or not, was to restrict access to Jones Beach for low-income residents who depended on public transit and who were disproportionately Black. The discrimination was not written into any law. It was encoded into concrete.
Langdon Winner, the political theorist, used this example in his 1980 essay "Do Artifacts Have Politics?" to argue a now-famous point:
"The things we call 'technologies' are ways of building order in our world... The issues that divide or unite people in society are settled not only in the institutions and practices of politics proper, but also, and less obviously, in tangible arrangements of steel and concrete, wires and semiconductors, nuts and bolts." β Langdon Winner, Do Artifacts Have Politics?, 1980
If overpasses can carry political weight, what happens when the artifact in question is not concrete but code? When it is not a bridge but a scoring algorithm that determines whether you receive a loan, a job interview, or early release from prison?
The answer, I would argue, is that the political weight becomes heavier, not lighter, because code is invisible, scalable, and far more difficult to contest than a physical structure.
The Three Layers of the Problem
To think clearly about AI ethics as a philosophical matter, it helps to disaggregate what we actually mean when we say "AI ethics." I find it useful to identify at least three distinct layers, each with its own philosophical character.
Layer One: The Epistemological Problem - What Can AI Systems "Know"?
The first layer concerns knowledge itself. Modern large language models and machine learning systems are trained on historical data. They learn patterns from the past and project them onto the future. This is epistemologically significant in a way that is often underappreciated.
Consider the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) recidivism algorithm, widely used in U.S. courts. A 2016 ProPublica investigation found that Black defendants were nearly twice as likely as white defendants to be falsely flagged as future criminals. The algorithm's defenders argued that it was statistically accurate at the population level: a given risk score corresponded to roughly the same rate of reoffense regardless of race. Its critics argued that "accuracy" built on historically biased criminal justice data simply replicates that bias with mathematical authority.
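To see how both sides could point to real numbers, it helps to make the statistics concrete. The sketch below uses invented figures, not ProPublica's data: it shows how a risk tool can look equally reliable for two groups on one measure (the reoffense rate among the people it flags) while producing a much higher false positive rate for one of them.

```python
# Illustrative only: invented cohorts showing how calibration-style parity and
# false-positive-rate disparity can coexist in the same risk tool.

def rates(outcomes):
    """outcomes: list of (flagged_high_risk, reoffended) pairs."""
    flagged = [o for o in outcomes if o[0]]
    did_not_reoffend = [o for o in outcomes if not o[1]]
    reoffense_rate_among_flagged = sum(1 for f, r in flagged if r) / len(flagged)
    false_positive_rate = sum(1 for f, r in did_not_reoffend if f) / len(did_not_reoffend)
    return reoffense_rate_among_flagged, false_positive_rate

group_a = ([(True, True)] * 60 + [(True, False)] * 40 +
           [(False, True)] * 20 + [(False, False)] * 80)
group_b = ([(True, True)] * 30 + [(True, False)] * 20 +
           [(False, True)] * 10 + [(False, False)] * 140)

for name, group in [("Group A", group_a), ("Group B", group_b)]:
    calibration, fpr = rates(group)
    print(f"{name}: reoffense rate among flagged = {calibration:.2f}, "
          f"false positive rate = {fpr:.2f}")
```

Both groups come out identical on the first measure and far apart on the second. Which measure "fairness" ought to track is exactly the dispute that the data, by themselves, cannot resolve.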
This is not merely a technical problem. It is a deep epistemological one. As the philosopher of science Sandra Harding has argued, knowledge systems are never neutral: they are always produced from somewhere, by someone, for some purpose. The question "what does the algorithm know?" cannot be separated from the question "what kind of knowledge does this algorithm recognize as valid?"
When an AI system treats historical arrest rates as a proxy for future criminal behavior, it is not simply processing data. It is making a philosophical claim about the relationship between the past and the future, and about which pasts count as evidence.
Layer Two: The Normative Problem - Whose Values Are Being Optimized?
The second layer is normative. Every optimization function encodes a value judgment. When a recommendation algorithm maximizes "engagement," it has made a choice: that engagement is the relevant metric, that more is better than less, and that the costs of maximizing engagement (polarization, addiction, misinformation spread) are externalities rather than core concerns.
Marshall McLuhan famously observed that "the medium is the message": the form of a communication technology shapes human association and action independently of the content it carries. I would extend this to argue that the objective function is the ideology. The choice of what to optimize is not a technical decision made after the ethical questions have been settled. It is the ethical decision, made early, quietly, and often irreversibly.
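A toy example makes the point concrete. The posts, the scores, and the penalty weight below are all invented; the sketch only shows how the same candidate items rank differently once the objective changes, and how the weight given to predicted harm is itself the value judgment.

```python
# Illustrative only: two objectives ranking the same invented candidate posts.

posts = [
    # (post_id, predicted_engagement, predicted_polarization_cost)
    ("calm_explainer",    0.40, 0.05),
    ("outrage_bait",      0.90, 0.80),
    ("local_news",        0.55, 0.10),
    ("conspiracy_thread", 0.85, 0.95),
]

def engagement_only(post):
    _, engagement, _ = post
    return engagement

def engagement_minus_harm(post, harm_weight=0.7):
    # The harm_weight is not discovered by the optimizer; someone chose it.
    _, engagement, cost = post
    return engagement - harm_weight * cost

print("Objective A:", [p[0] for p in sorted(posts, key=engagement_only, reverse=True)])
print("Objective B:", [p[0] for p in sorted(posts, key=engagement_minus_harm, reverse=True)])
```

Nothing in the code can say which ranking is correct. That decision sits upstream of the engineering, which is the point.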
This matters because the people who define objective functions are not a representative sample of humanity. They are, disproportionately, a small demographic cohort (young, male, highly educated, concentrated in a handful of geographic and cultural contexts) making decisions that will affect billions of people in radically different circumstances.
One practical illustration: Meta's content moderation systems, trained primarily on English-language data, have been documented to perform significantly worse in languages like Tigrinya and Amharic. During the Tigray conflict in Ethiopia, human rights organizations reported that content inciting violence in these languages was not flagged at the same rate as comparable content in English. The normative framework embedded in the system (what counts as harmful, what counts as speech) was not culturally neutral. It reflected the linguistic and cultural assumptions of its designers.
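Gaps of this kind tend to surface only when performance is broken out by language rather than averaged away. The sketch below uses entirely invented audit records; it shows only the shape of such a per-language audit, measuring recall on violating content for each language separately.

```python
# Illustrative only: per-language recall on invented moderation audit records.
from collections import defaultdict

# (language_code, model_flagged, actually_violating)
audit = (
    [("en", True, True)] * 90 + [("en", False, True)] * 10 +
    [("am", True, True)] * 40 + [("am", False, True)] * 60 +
    [("ti", True, True)] * 25 + [("ti", False, True)] * 75
)

caught = defaultdict(int)
total = defaultdict(int)
for language, flagged, violating in audit:
    if violating:
        total[language] += 1
        caught[language] += int(flagged)

for language in sorted(total):
    print(f"{language}: recall on violating content = {caught[language] / total[language]:.2f}")
```

An aggregate accuracy figure computed over all three languages would hide exactly the disparity the audit makes visible.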
Layer Three: The Legitimacy Problem - Who Has the Authority to Decide?
The third layer is the one I find most philosophically compelling, and it connects directly to my previous analyses of the consent problem in AI governance.
Even if we could solve the epistemological problem (build systems that "know" things fairly) and the normative problem (agree on whose values to optimize), we would still face the legitimacy problem: by what authority does any institution or individual claim the right to make these determinations for everyone?
This is the question political philosophers have wrestled with for centuries in the context of human governance. John Rawls asked us to imagine choosing the rules of society from behind a "veil of ignorance," without knowing our place in it. Jürgen Habermas argued that legitimate norms can only emerge from processes of genuine communicative rationality, in which all affected parties have an equal voice.
Neither of these conditions is remotely satisfied by current AI governance processes. The entities making the most consequential decisions about AI values (large technology companies, a small number of regulatory bodies, a handful of well-funded research institutes) are not accountable to the populations most affected by their choices.
"The question is not whether AI will have values. It will. The question is whether those values will be chosen through processes that are legitimate, transparent, and revisable." β Broadly attributed to discussions within the AI alignment research community
The Counterargument: Aren't Human Institutions Equally Flawed?
A fair-minded analysis requires engaging seriously with the strongest counterargument, which runs roughly as follows: human institutions (courts, legislatures, bureaucracies) are also biased, opaque, and unaccountable. Human judges show racial bias. Human loan officers discriminate. Human HR managers favor candidates who look like them. If AI systems are no worse than the humans they replace, perhaps the critique is overstated.
This argument has genuine force. I do not dismiss it.
But I think it ultimately fails for three reasons.
First, scale and speed. A biased human judge affects hundreds of cases per year. A biased algorithm, deployed across a national court system, affects millions of cases simultaneously. The magnitude of harm is categorically different, even if the type of harm is similar.
Second, legibility and contestability. A human judge can be cross-examined, appealed, and, in extreme cases, removed. Their reasoning, however flawed, is at least articulable. Many AI systems, particularly deep learning models, produce outputs that cannot be meaningfully explained even by their designers. This is not a minor technical inconvenience. It is a fundamental challenge to the rule of law, which has always depended on the principle that decisions affecting people's lives must be accountable to reasons.
Third, and most subtly, the comparison naturalizes the status quo. The fact that human institutions are also biased is an argument for improving those institutions, not for replacing them with systems that replicate the same biases at scale while adding new layers of opacity.
Toward a More Honest Framework: Three Scenarios
Let me now sketch three plausible scenarios for how the relationship between AI systems and social governance might evolve over the next decade or two. I offer these not as predictions but as analytical tools: ways of clarifying what is at stake.
Scenario One: Technocratic Capture
In this scenario, the legitimacy problem is effectively dissolved by institutional inertia. AI systems become so deeply embedded in consequential decision-making that the question of their authority is never seriously raised. Regulatory frameworks emerge, but they are largely written by the industries they regulate, a pattern we have seen repeatedly in the history of technology governance, from telecommunications to financial services. Ethics becomes a branding exercise rather than a genuine constraint. The "invisible legislator" consolidates its position.
This scenario appears, to my eye, to be the path of least resistance. It requires no deliberate choice, only the absence of one.
Scenario Two: Democratic Reclamation
In this scenario, the epistemological, normative, and legitimacy problems are taken seriously as political problems rather than technical ones. New forms of participatory governance emerge: algorithmic auditing bodies with genuine independence, mandatory impact assessments that include affected communities, and international frameworks that treat AI governance as a matter of human rights rather than industrial policy.
This scenario is not utopian. Analogues exist: environmental impact assessment, pharmaceutical approval processes, and data protection frameworks like the EU's GDPR all represent attempts to subject powerful technical systems to democratic accountability. They are imperfect, but they are revisable, which is itself a form of legitimacy.
Scenario Three: Fragmented Pluralism
In this scenario, no single global framework emerges. Different jurisdictions, communities, and contexts develop their own approaches to AI governance, reflecting genuinely different values and priorities. The EU emphasizes rights-based frameworks. China emphasizes social stability and collective welfare. Various Global South contexts push back against both, arguing that neither model adequately reflects their histories and needs.
This scenario is messy and potentially inefficient. But it may be, philosophically, the most honest acknowledgment of a simple fact: there is no view from nowhere. Every governance framework reflects particular values, and a world in which multiple frameworks compete and evolve may be more epistemically humble than one in which a single framework claims universal validity.
What Can Be Done Now, in Practical Terms
I am aware that philosophical analysis, however rigorous, can feel distant from the practical decisions facing engineers, policymakers, and citizens today. So let me offer several concrete, actionable observations.
For technologists: The choice of objective function is a moral choice. Treating it as a purely technical decision is not neutrality; it is a form of moral abdication. Insisting on explicit articulation of what your system optimizes, and for whom, is not idealism. It is professional responsibility.
For policymakers: Algorithmic transparency requirements need teeth. The right to explanation, enshrined in principle in GDPR Article 22, is meaningless if the explanation provided is a post-hoc rationalization rather than a genuine account of how the decision was made. Regulators should invest in the technical capacity to evaluate such claims independently; a minimal sketch of one such check follows this list.
For citizens and civil society: The most powerful lever available is the demand for legibility. When a consequential decision is made by or with the assistance of an AI system, you have a legitimate interest in understanding the basis of that decision. Organizations like the Algorithmic Justice League, AI Now Institute, and Access Now are doing important work in this space, work that deserves broader public support.
For researchers and philosophers: The intellectual challenge of this moment is to develop ethical frameworks that are adequate to the scale and speed of AI deployment: frameworks that do not simply apply existing categories but genuinely grapple with what is new. The philosophy of technology has too often been reactive, arriving after the fact to analyze systems already deployed. We need frameworks that can operate in real time.
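As promised above, here is a minimal sketch of one kind of independent check a regulator's technical staff might run: compare the rule a vendor offers as its "explanation" against the deployed model's actual decisions on sampled cases, and report how often the two agree. The model, the rule, and the data here are all hypothetical, and a real audit would be far more involved; the sketch only shows that fidelity is a measurable property rather than something to be taken on trust.

```python
# Illustrative only: measuring how faithfully a supplied "explanation" tracks
# a (hypothetical) black-box model's decisions.
import random

random.seed(0)

def black_box_decision(income, debt_ratio, postcode_risk):
    # Stand-in for an opaque production model.
    score = 0.6 * income - 0.9 * debt_ratio - 0.7 * postcode_risk
    return score > 0.1

def supplied_explanation(income, debt_ratio, postcode_risk):
    # The rule offered as the "explanation": approve when income exceeds debt ratio.
    return income > debt_ratio

cases = [(random.random(), random.random(), random.random()) for _ in range(10_000)]
agreement = sum(
    black_box_decision(*case) == supplied_explanation(*case) for case in cases
) / len(cases)

print(f"Supplied explanation matches the model on {agreement:.0%} of sampled cases")
```

An explanation that agrees with the model only part of the time is a rationalization, not an account of how the decision was made.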
A Carefully Held Conclusion
I want to resist the temptation to end with either optimism or pessimism, because I think both would be intellectually dishonest given the genuine uncertainty of this moment.
What I am confident of is this: the question of AI ethics is not, at its deepest level, a question about algorithms. It is a question about authority: about who has the right to make rules, on what basis, and subject to what forms of accountability. These are among the oldest questions in political philosophy, and we have never fully resolved them even for human institutions. The emergence of AI systems that exercise quasi-legislative and quasi-judicial power does not create these questions from scratch. It intensifies them, and demands that we answer them with greater urgency and greater precision than before.
As the philosopher Hannah Arendt wrote, in a different context but with words that resonate across time:
"The smallest act in the most limited circumstances bears the seed of the same boundlessness, because one deed, and sometimes one word, suffices to change every constellation." β Hannah Arendt, The Human Condition, 1958
The decisions being made today about how AI systems are designed, deployed, and governed are not merely technical decisions. They are, in a very real sense, constitutional decisions: decisions about the kind of society we are building. The fact that they are being made quietly, by a small number of actors, without broad public deliberation, is not a reason for despair. It is a reason for urgency.
The invisible legislator is writing the rules. The question is whether we will insist on being in the room.
A Question Worth Sitting With
If you discovered that a decision affecting your life β a loan denial, a job rejection, a medical triage priority β had been made by an AI system, and you were told that the system's reasoning could not be meaningfully explained, would you accept that decision as legitimate? And if not, what does your answer reveal about what you believe legitimacy actually requires?
Dr. μ ν νΌμ
A futurist studying human-computer interaction, exploring the impact of technology on society and people while keeping a balanced perspective between techno-optimism and techno-pessimism.