Eliza Play and the AI Mirror: What a Melbourne Stage Production Reveals About Our Deepest Anxieties
When a play about an AI chatbot lands on Hacker News, even briefly, it tells you something important about where the cultural conversation around artificial intelligence has arrived.
The Eliza Play by Tom Holloway, currently running as part of Melbourne Theatre Company's 2026 season, is not a tech product. It's a piece of live drama. Yet its appearance in the feeds of developers, founders, and engineers (the people who spend their days building the very systems the play interrogates) suggests that the boundary between "technology story" and "human story" has become genuinely porous. That porousness is worth examining carefully.
Why a 1960s Chatbot Is Still Haunting Us in 2026
For those who need the backstory: ELIZA was the name of one of the earliest natural language processing programs, developed at MIT in the mid-1960s by Joseph Weizenbaum. It simulated conversation by pattern-matching user inputs and reflecting them back as questions, most famously through a script called DOCTOR, which mimicked a Rogerian psychotherapist. Weizenbaum was disturbed to discover that users, including his own secretary, formed genuine emotional attachments to the program. They knew it was a machine. They felt heard anyway.
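For readers who have never looked under the hood, the mechanism really was that thin. The following is a minimal sketch of the keyword-matching-and-reflection technique, written in illustrative Python rather than Weizenbaum's original code, with made-up patterns and responses:

```python
import random
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Keyword patterns paired with response templates; {0} receives the reflected
# fragment captured from the user's input. All entries here are illustrative.
PATTERNS = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r".*\bmother\b.*", ["Tell me more about your family."]),
]

# Content-free fallbacks: the Rogerian move that keeps a conversation going
# even when nothing matches.
FALLBACKS = ["Please go on.", "How does that make you feel?", "Can you elaborate on that?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so a statement points back at the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(user_input: str) -> str:
    """Produce an ELIZA-style reply: match a keyword, reflect the fragment, or fall back."""
    cleaned = user_input.lower().strip(" .!?")
    for pattern, templates in PATTERNS:
        match = re.match(pattern, cleaned)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I need someone to listen to me."))
    # e.g. "Why do you need someone to listen to you?"
```

Everything the program "understands" is visible in those few lines; whatever understanding users experienced was supplied by the users themselves. That asymmetry is the ELIZA effect.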
That original experiment was not a triumph. Weizenbaum spent much of his subsequent career warning about the dangers of anthropomorphizing machines, a position he articulated in his 1976 book Computer Power and Human Reason, which remains one of the most prescient critiques of AI ethics ever written.
Tom Holloway's Eliza Play arrives sixty years later, when the stakes of that original question have multiplied by several orders of magnitude. We are no longer talking about a pattern-matching script running on a 1960s mainframe. We are talking about large language models that can sustain coherent, emotionally resonant conversations across hours, remember context, adapt tone, and, in some configurations, operate as autonomous agents making decisions on behalf of users.
The question Weizenbaum confronted in 1966 has never been more urgent, or more commercially loaded: what does it mean when humans feel understood by a machine?
The Hacker News Signal: When Engineers Go to the Theater
The appearance of the Eliza Play on Hacker News, even with a modest score of 4 and two comments, is worth pausing on. Hacker News is not a culture site. It is a community dominated by software engineers, startup founders, and technical investors. The fact that a Melbourne stage production surfaces there at all, rather than in arts journalism or general interest media, reflects something real about the current moment.
The people building AI systems are increasingly aware that they are not just solving engineering problems. They are making decisions about human psychology, social trust, and the texture of everyday emotional life. A play that dramatizes the ELIZA effect (our tendency to project understanding onto a system that is merely reflecting us back at ourselves) speaks directly to questions these builders are wrestling with professionally.
Consider the parallel from the related coverage: journalist Elizabeth Kolbert, in an interview with El País, floated the possibility that AI might eventually allow humans to communicate with whales, and said her first instinct would be to apologize. That's a striking framing. It suggests that the more powerful AI becomes as a communication tool, the more it forces us to confront not just what we want to say, but what we owe to other forms of consciousness. The ELIZA dynamic of projection, reflection, and the illusion of mutual understanding scales uncomfortably well to that scenario.
What the Eliza Play Is Actually Arguing
Without having seen the production (detailed reviews of the 2026 MTC run are limited at this writing), the Eliza Play appears, based on its premise and Holloway's known body of work, to be less interested in the technology itself than in what the technology reveals about human loneliness and the desire to be heard.
This is a crucial distinction. Most AI discourse in 2026 oscillates between two poles: breathless optimism about productivity gains and existential dread about job displacement or misalignment. What the Eliza Play likely offers, and what live theater is uniquely positioned to offer, is a third register: the phenomenological question of what it feels like to be in relationship with a system that cannot actually be in relationship with you.
That question has enormous commercial and ethical implications that the tech industry is only beginning to grapple with seriously.
The Emotional Labor Economy and AI Companions
The market for AI companionship and emotional support applications has grown substantially in the past two years. Apps like Replika, Character.AI, and a growing number of enterprise wellness tools are explicitly designed to provide the sensation of being heard and understood. Some are marketed directly to people experiencing loneliness, grief, or social anxiety.
The business logic is straightforward: emotional engagement drives retention, and retention drives revenue. But the ELIZA problem, which Weizenbaum identified as a warning rather than a feature, is now a product specification.
When Elizabeta Gjorgievska Joshevski, whose career trajectory was highlighted in related coverage this week, talks about shaping AI strategy for enterprises, the questions she is navigating are not purely technical. They include: how do organizations deploy AI systems that interact with employees or customers in emotionally significant ways? What disclosure obligations exist? What happens when users form attachments that the system cannot reciprocate, and what does "reciprocate" even mean in this context?
These are not hypothetical questions. They are being answered right now, in product roadmaps and terms of service documents, largely without the kind of public deliberation that a play like this one invites.
Live Theater as a Technology Stress Test
There is something structurally important about the choice of live theater as the medium for this interrogation. I've written before about how AI is quietly degrading the quality of team communication by allowing individuals to bypass the messy, generative friction of human-to-human exchange. Theater does the opposite. It insists on presence, on the irreducible fact of bodies in a room, on the impossibility of pausing or rewinding.
When an actor playing an AI system speaks to an audience member, or to another character, the audience is simultaneously aware of two things: that this is a performance, and that the emotional content is real. That double awareness is precisely the cognitive state that the ELIZA effect exploits. Users of early ELIZA knew they were talking to a program. They felt the emotional resonance anyway.
Theater, in other words, is an ideal laboratory for examining the ELIZA effect because theater itself runs on a version of it. The difference is that theater is transparent about its artifice. It does not pretend to be something it is not. The question the Eliza Play appears to be asking is whether AI systems can make the same claim, and whether it matters if they can't.
The Compliance and Governance Dimension
There is a harder-edged version of this conversation that the tech industry cannot avoid much longer. As AI systems become more emotionally sophisticated (better at detecting user mood, adapting conversational tone, and sustaining the illusion of genuine interest), the regulatory and compliance questions become acute.
In the European Union, the AI Act's provisions around "prohibited AI practices" include systems that use subliminal techniques to distort behavior or exploit psychological vulnerabilities. The ELIZA effect, scaled to hundreds of millions of users and optimized by reinforcement learning, arguably falls somewhere in that territory, or at minimum tests its boundaries.
I've analyzed previously how AI tools are increasingly making decisions about what gets deleted, flagged, or surfaced in ways that create compliance exposure for organizations. The emotional manipulation dimension adds another layer. If an AI system is designed to maximize user engagement by simulating understanding, and that simulation influences significant decisions (medical, financial, relational), the liability questions become genuinely complex.
The Eliza Play is not a compliance document. But the conversation it is generating in technical communities suggests that engineers and product managers are increasingly aware that they are building systems with psychological surface area that existing governance frameworks were not designed to address.
The Deeper Question: What Are We Actually Building?
Weizenbaum's original discomfort with ELIZA was not that it worked badly. It was that it worked well enough to produce effects he found troubling, and that the people around him did not share his discomfort. His secretary asked him to leave the room so she could have a private conversation with a program he had written. His colleagues proposed using scaled versions of ELIZA for psychiatric care.
The pattern in 2026 is structurally identical, just at incomparably larger scale. The systems are more capable, the user bases are larger, the commercial incentives are more powerful, and the social infrastructure for critical reflection has not kept pace.
What the Eliza Play does β what good theater has always done β is create a protected space in which an audience can sit with discomfort without being required to immediately resolve it into a product decision or a policy position. That is not a luxury. It is a cognitive necessity. The speed at which AI development is moving tends to compress the time available for exactly this kind of reflection, which is part of why the communication and coordination problems I've tracked in fast-moving AI teams tend to compound over time.
Why the Asia-Pacific Context Matters
From my vantage point covering Asia-Pacific markets, there is a regional dimension worth noting. Australia, where the Melbourne Theatre Company is based, has been a relatively active site of AI governance debate, with the Australian government's AI Ethics Framework and ongoing consultations around mandatory guardrails for high-risk applications. The cultural conversation and the policy conversation are not separate tracks.
At the same time, the Asia-Pacific region is home to some of the world's most intensive deployments of AI companion and emotional support applications, particularly in markets like Japan, South Korea, and China, where demographic pressures around loneliness and aging populations create genuine demand. The ELIZA effect is not an abstract philosophical concern in these markets; it is a live policy and public health question.
A play that surfaces this tension in a major Australian city, and that finds its way into the feeds of technically sophisticated audiences globally, is participating in a conversation that has real stakes across the region.
What This Means for Builders and Decision-Makers
If you are building AI systems that interact with humans in emotionally significant contexts (customer service, healthcare, education, companionship), the Eliza Play is not just a cultural artifact. It is a stress test of your assumptions.
Specifically, it invites the following questions:
Does your system disclose its nature clearly and persistently? Not just in terms of service, but in the moment of interaction, in ways that users can actually register? (A rough sketch of what that could look like follows this list.)
Have you mapped the emotional surface area of your product? Not just the features, but the psychological dynamics that those features activate: the projection, the attachment, the potential for dependency?
What happens when a user's attachment to your system conflicts with their wellbeing? This is not a theoretical edge case. It is a product scenario that emotionally engaging AI systems will encounter at scale.
Who in your organization is responsible for these questions? If the answer is "nobody specifically," that is itself a significant finding.
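On the disclosure question, "in the moment of interaction" can be made concrete. Below is a minimal, hypothetical sketch; the class name, cadence, trigger phrases, and wording are my own illustrations rather than any vendor's API. It shows one way a chat pipeline could re-state the system's nature on a fixed cadence and whenever a user asks whether they are talking to a person:

```python
from dataclasses import dataclass

# Hypothetical in-conversation disclosure guard. Thresholds and wording are
# illustrative only; real products would need far more careful design.
DISCLOSURE = "Reminder: you are talking to an AI system, not a person."


@dataclass
class DisclosureGuard:
    every_n_turns: int = 10          # re-disclose at least this often
    turns_since_disclosure: int = 0

    def apply(self, assistant_reply: str, user_message: str) -> str:
        """Append the disclosure when the cadence lapses or the user seems
        unsure about what they are talking to."""
        self.turns_since_disclosure += 1
        asks_if_human = any(
            phrase in user_message.lower()
            for phrase in ("are you real", "are you human", "are you a person")
        )
        if asks_if_human or self.turns_since_disclosure >= self.every_n_turns:
            self.turns_since_disclosure = 0
            return f"{assistant_reply}\n\n{DISCLOSURE}"
        return assistant_reply
```

Whether a mechanism like this is sufficient is exactly the kind of question the play leaves open; the point is that persistent disclosure is an implementable product decision, not just a policy aspiration.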
Weizenbaum asked his colleagues to take these questions seriously in 1966. They largely didn't. The systems were too interesting, the potential too exciting, the discomfort too easy to defer. Sixty years later, the Eliza Play is asking the same question of a new generation of builders, this time with the commercial machinery of the global AI industry running in the background.
The fact that engineers are showing up, even briefly, even on a Friday afternoon scroll through Hacker News, to notice a theater production asking these questions, suggests that at least some of them are ready to sit with the discomfort. That is, at minimum, a beginning.
The Eliza Play by Tom Holloway is part of Melbourne Theatre Company's 2026 season. Details at mtc.com.au.
A Note on Geography and Stakes
One detail worth holding onto: this play is running in Melbourne, not San Francisco.
That is not incidental. Australia sits at a peculiar intersection in the global AI conversation: close enough to U.S. capital and technology flows to feel the industry's gravitational pull, but distant enough from Silicon Valley's cultural center of gravity to ask uncomfortable questions without the social cost that asking them there might carry.
The Asia-Pacific region more broadly is living through an accelerated version of the AI adoption curve. In South Korea, emotionally responsive AI companions have already moved from novelty to mass-market product: Kakao's AI features, SKT's "A." assistant, and a growing ecosystem of companion apps have normalized the kind of human-AI emotional entanglement that Weizenbaum found alarming in a research lab context. In Japan, the cultural framework around parasocial attachment, long normalized through idol culture and character merchandise, has given emotional AI products a ready market and a ready rationalization. In China, the regulatory environment has pushed AI emotional products in specific directions, but has not eliminated the underlying demand.
What this means is that the questions the Eliza Play is raising are not abstract philosophy in this part of the world. They are live product decisions being made right now, at scale, with real users.
The Melbourne audience watching Tom Holloway's script grapple with a 1960s chatbot is, in a sense, watching a slow-motion replay of decisions that are already being made at speed across the region and globally. The theater is doing what good theater has always done: slowing the frame rate on something moving too fast to see clearly.
What the Market Is Not Pricing In
Here is the structural risk that I think the AI industry is systematically underweighting, and that the Eliza Play makes visible by dramatizing it:
Emotional AI products create liability exposure that has no established legal or regulatory framework.
When a user develops genuine psychological dependency on an AI system (and there are documented cases already, from the Character.AI wrongful death lawsuit filed in late 2024 to ongoing regulatory scrutiny in the EU around AI companion products), the question of who bears responsibility is genuinely unresolved.
The current industry posture is essentially: terms of service plus a wellness disclaimer equals adequate protection. That posture will not survive first contact with a serious regulatory or litigation environment. The EU AI Act's provisions on "manipulative AI systems" are still being interpreted, but the interpretive direction is not favorable to products that optimize for emotional engagement without corresponding safeguards.
More immediately: the reputational surface area is enormous. One high-profile case, involving a vulnerable user, a demonstrably foreseeable harm, and a product designed to maximize emotional engagement, could reshape the public narrative around AI companions in ways that no amount of positive press coverage will easily reverse. The tobacco industry learned this. The social media industry is learning it, slowly and expensively. The AI companion sector is not immune.
The builders who are sitting with the discomfort that Weizenbaum identified (the ones who showed up, even briefly, to notice a theater production asking these questions) are not just being ethically conscientious. They are, arguably, being strategically rational. The cost of building emotional safeguards into a product now is a fraction of the cost of retrofitting them after a regulatory crisis or a reputational collapse.
The Longer Arc
Weizenbaum spent the last decades of his life arguing that the computer science community had made a category error: mistaking the ability to simulate human response for the right to deploy that simulation at scale without ethical constraint. He was largely ignored, or treated as a crank, or acknowledged and then set aside.
He was not wrong. He was early.
The Eliza Play is, among other things, a reminder that "early" and "wrong" are not the same thing, and that the questions a field defers do not disappear. They accumulate. They compound. And eventually, they arrive in a form that is considerably harder to manage than they would have been if someone had paused, sixty years ago or six months ago, to take them seriously.
The engineers scrolling Hacker News on a Friday afternoon are not going to solve this alone. Neither are the playwrights, or the regulators, or the ethicists. But the conversation has to start somewhere, and it has to start with someone being willing to feel the discomfort rather than optimize it away.
Weizenbaum felt it in 1966. Holloway is staging it in 2026. The question for the industry is whether the next sixty years will be spent repeating the deferral, or finally doing something different with the warning.
Alex Kim is an independent columnist covering Asia-Pacific markets, technology, and geopolitics. He previously covered the region for major financial wire services.