AI Fortune Telling Is Korea's Newest Consumer Obsession, and the Economics Are Stranger Than Any Prophecy
If you've recently wondered why a growing number of South Koreans are consulting algorithms about their love lives and career prospects rather than human mudang (shamans), the answer likely has less to do with superstition and more to do with a quiet structural shift in how uncertainty is priced and consumed.
AI fortune telling, the practice of feeding one's birth date, time, and personal circumstances into a large language model or specialized app to receive saju (four pillars of destiny) readings, has moved from novelty to mainstream in South Korea, and the economic signals embedded in that transition deserve considerably more scrutiny than the headline suggests.
The Headline Nobody Is Reading Correctly
The Digital Journal's recent coverage frames AI shamans as a curiosity: curious South Koreans consulting curious machines about curious futures. It is an easy narrative, and not an inaccurate one. But it misses what I would call the second movement of this symphony: the macroeconomic and behavioral architecture that makes AI fortune telling not merely possible, but economically rational for the consumer.
Consider the baseline. South Korea has one of the most deeply embedded fortune-telling cultures in the developed world. Saju consultations, tarot, and gwansang (physiognomy) are not fringe activities; they are, by various estimates, a multi-billion-dollar informal economy operating through storefronts in Insadong, KakaoTalk referrals, and now, increasingly, through app stores. The transition from human shaman to AI shaman is not a rupture; it appears to be a natural extension of an existing demand curve, repriced downward and made available at scale.
That repricing is the part worth pausing on.
When Uncertainty Becomes a Commodity: The Economics of AI Fortune Telling
In the grand chessboard of global finance, the demand for fortune telling, whether algorithmic or human, is a derived demand. It is derived from uncertainty. When economic conditions are stable, predictable, and broadly optimistic, the appetite for prophecy tends to soften. When conditions are volatile, opaque, or structurally disorienting, people seek frameworks, any frameworks, to impose narrative order on what feels like chaos.
This is not a uniquely Korean phenomenon. The Bank for International Settlements has documented how consumer sentiment and uncertainty indices tend to move inversely with risk appetite across multiple economies. When people cannot price their own futures with confidence, they outsource the calculation: historically to astrologers, priests, and shamans, and now, apparently, to transformer models.
What AI has done is dramatically reduce the transaction cost of that outsourcing. A traditional saju consultation in Seoul likely runs anywhere from 50,000 to 200,000 won per session, requires scheduling, and carries the social friction of sitting across from another human who is, in some sense, judging you. An AI fortune telling app charges a fraction of that (or nothing at all for a basic reading), is available at any hour, and carries no social judgment. The economic logic is straightforward: same derived demand, lower marginal cost, higher accessibility.
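The repricing logic can be sketched with a toy linear demand curve. Everything here is a hypothetical illustration: the demand function, the market size, and the choke price are assumptions chosen for the sketch, not figures from the reporting.

```python
# Toy sketch of the repricing argument: same derived demand, lower price,
# larger addressable market. The linear demand curve and every number here
# are hypothetical assumptions for illustration, not data from the article.

def quantity_demanded(price_won: float, q_max: float = 1_000_000,
                      choke_price: float = 150_000) -> float:
    """Monthly consultations demanded at a given effective price.

    q_max       -- assumed market size if consultations were free
    choke_price -- assumed price (won) at which demand falls to zero
    """
    return max(0.0, q_max * (1 - price_won / choke_price))

# Mid-range human saju session vs. a cheap app-based reading (both assumed).
human_q = quantity_demanded(100_000)
app_q = quantity_demanded(5_000)

print(f"demand at human pricing: {human_q:,.0f}")
print(f"demand at app pricing:   {app_q:,.0f}")
print(f"market expansion factor: {app_q / human_q:.1f}x")
```

Under these assumed numbers, the price cut roughly triples the quantity demanded; the point is the direction of the effect, not the magnitude.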
The question that interests me as an economist is not whether this substitution is occurring (it appears to be) but what it signals about the underlying state of consumer psychology in the Korean economy right now.
The a16z Anxiety Parallel: Two Kinds of Fear, One Market Signal
It would be a mistake to read the AI shaman phenomenon in isolation. Running alongside it, in a different register entirely, is the anxiety documented by Ben Horowitz of a16z, who, according to recent reporting, observed that "AI anxiety" is currently consuming Silicon Valley founders who fear they are not moving fast enough into AI adoption. Meanwhile, workers across sectors express a different fear: that AI is coming for their roles before they have had any meaningful opportunity to adapt.
These are, as the reporting notes, two distinct anxieties operating simultaneously in the AI economy. Founders fear irrelevance through inaction. Workers fear displacement through action. And somewhere between those two poles, in Seoul's app stores and KakaoTalk channels, ordinary consumers are apparently turning to AI not to build the future, but to navigate it.
This triangulation matters. If we accept that AI fortune telling is partly a behavioral response to economic uncertainty, and if we layer onto that the documented anxiety among both technology workers (who, per IT Pro's recent coverage, report that "AI is not making IT simpler; it's making it more consequential") and founders who feel existential pressure to adopt AI faster than their competitors, then what we are observing is a society in which AI is simultaneously generating uncertainty and monetizing the demand for relief from that uncertainty.
That is, to use a phrase I find myself returning to, a genuine economic domino effect: not in the catastrophic sense, but in the structural sense. Each tile that falls creates the conditions for the next.
Markets Are the Mirrors of Society, and Right Now They Reflect Ambivalence
The IT Pro coverage makes a point that deserves to be extracted from its technical framing: AI is raising expectations for workers even as it ostensibly reduces workloads. The IT professionals surveyed describe a situation in which the bar for performance has risen precisely because AI tools are available β meaning that failing to leverage AI now reads as a competency deficit rather than a neutral choice.
This is a labor market dynamic with significant macroeconomic implications, and it connects to the AI shaman story in a non-obvious way. If the perceived cost of navigating one's career trajectory has risen (because the rules of the game are changing faster than individuals can track), then the demand for any tool that appears to offer clarity, direction, or even the illusion of predictability would likely rise in parallel.
I want to be careful here not to overstate the causal arrow. The available reporting does not provide direct survey data linking career anxiety among Korean workers to AI fortune telling adoption specifically. But the behavioral economics literature on uncertainty and superstition-adjacent decision-making suggests a plausible mechanism: when individuals face decisions whose outcomes feel genuinely unpredictable, they tend to seek any structured framework that provides a sense of agency, even if that framework's predictive validity is questionable.
AI fortune telling apps, in this reading, are not primarily in the prediction business. They are in the anxiety management business. And anxiety management, as any insurance actuary will tell you, is a very large market indeed.
The Structural Disruption Beneath the Surface
Here is where I want to push the analysis beyond the behavioral and into the structural.
The AI shaman market, if we can call it that, represents a disintermediation of a previously human-mediated service. This is a pattern we have seen across multiple sectors: travel agents replaced by booking algorithms, financial advisors partially displaced by robo-advisors, radiologists augmented (and in some cases supplanted) by diagnostic AI. As I noted in my analysis of AI's impact on medical cost architecture, the economic logic of disintermediation is consistent: reduce information asymmetry, lower transaction costs, and capture a larger addressable market at a lower price point.
The fortune telling sector in South Korea likely follows this same logic. Human shamans and saju masters hold a form of experiential and reputational capital that is difficult to replicate algorithmically: the sense that a specific human practitioner has genuine insight, intuition, or spiritual authority. But for a significant portion of the market, that premium may not be worth paying, particularly for lower-stakes consultations (Should I change jobs this year? Is this a good month for a major purchase?).
What this means for the human practitioners is a classic bifurcation of the market: the premium segment (older clients, high-stakes consultations, culturally embedded rituals) likely remains relatively insulated, while the mass-market segment migrates toward AI platforms. We have seen this pattern in financial advisory (robo-advisors captured the mass market; human advisors retained high-net-worth clients), and there is no obvious reason why the fortune telling market would behave differently.
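The bifurcation pattern can be made concrete with a toy simulation. Assuming, purely for illustration, that each consumer carries some extra willingness to pay for a human practitioner, only those whose premium valuation exceeds the price gap stay in the premium segment; everything here, including the uniform valuation distribution, is an assumption rather than observed data.

```python
# Toy simulation of the bifurcation claim: consumers who value the "human
# premium" above the price gap stay with practitioners; the rest migrate to
# AI platforms. The uniform valuation distribution and all prices are
# illustrative assumptions, not data from the article.
import random

random.seed(42)  # reproducible illustration

HUMAN_PRICE = 100_000  # won per session (assumed mid-range)
APP_PRICE = 5_000      # won per reading (assumed)
price_gap = HUMAN_PRICE - APP_PRICE

# Hypothetical extra willingness to pay for a human practitioner,
# drawn uniformly between 0 and 200,000 won per consumer.
consumers = [random.uniform(0, 200_000) for _ in range(10_000)]

stay_human = sum(v > price_gap for v in consumers)
migrate_app = len(consumers) - stay_human

print(f"retained by human practitioners: {stay_human / len(consumers):.0%}")
print(f"migrated to AI platforms:        {migrate_app / len(consumers):.0%}")
```

The split depends entirely on the assumed distribution, which is the analytical point: how insulated the premium segment is hinges on how many clients genuinely value the human element above the price difference.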
For a deeper look at how AI is reshaping adjacent service markets and the governance questions that follow, I'd recommend reading AI Cloud Is Now Deciding What to Forget, and That's the Next Governance Crisis, which examines the structural accountability gaps that emerge when algorithmic systems take over previously human-mediated decisions.
What the AI Shaman Economy Tells Us About Broader AI Adoption
The Horowitz observation about "AI anxiety" among founders is worth revisiting in this context. His framing (that Silicon Valley founders fear they are not moving fast enough) presupposes that the primary risk of AI is underadoption. But the South Korean AI shaman market suggests a different risk topology: misaligned adoption, in which AI tools proliferate rapidly in domains where their value proposition is emotionally compelling but empirically unverifiable.
This is not a criticism of AI fortune telling per se. People have always sought frameworks for navigating uncertainty, and if an AI app provides genuine psychological comfort at a low cost, that is a real consumer benefit. But from a policy and market design perspective, the rapid scaling of AI into high-uncertainty, emotionally loaded domains (personal destiny, health prognosis, relationship compatibility) raises questions about how we calibrate consumer expectations and what disclosures are appropriate when the "product" is, at its core, a sophisticated pattern-matching system applied to questions that may not have pattern-based answers.
The IT Pro framing, that AI is making IT "more consequential" rather than simpler, applies here too, albeit in a different register. When an algorithm tells you that this year is favorable for career transitions, the consequence of that output is not technical but deeply personal. The stakes are asymmetric: if the AI is wrong about a server configuration, the IT team fixes it. If the AI is wrong about your career timing, you may not know for years.
This asymmetry between the ease of generation and the cost of error is, I would argue, one of the defining economic tensions of the current AI deployment phase, and it is visible in the AI shaman story as clearly as anywhere.
Actionable Takeaways for the Economically Curious Reader
For investors and market watchers: The AI fortune telling market in South Korea is likely a leading indicator of broader consumer AI adoption patterns in high-uncertainty environments. Watch for similar dynamics in other high-context, high-anxiety consumer segments (health monitoring, personal finance coaching, relationship counseling) where the emotional value proposition of AI tools may outpace their empirical track record.
For policymakers: The disintermediation of human shamans by AI platforms is, on the surface, a cultural curiosity. But it is also a test case for how AI adoption proceeds in unregulated, emotionally sensitive service markets. The absence of disclosure requirements, accuracy benchmarks, or consumer protection frameworks in this space appears to be the norm rather than the exception, and that gap will likely require attention as the market scales.
For the individual reader: If you find yourself consulting an AI fortune telling app, I would not suggest this is irrational. Uncertainty is genuinely costly, and any tool that reduces it, even imperfectly, has real value. But treat the output as a prompt for reflection rather than a prediction to act on. The algorithm knows your birth date; it does not know your context, your resilience, or the specific texture of your circumstances. That part remains, for now, irreducibly human.
A Closing Reflection
There is something philosophically resonant about the AI shaman moment. In the grand chessboard of global finance and technological change, we tend to focus on the moves being made by the powerful pieces: the AI companies, the regulators, the central banks. But the AI fortune telling phenomenon reminds us that the pawns are also making moves, quietly, in app stores and late-night consultations, seeking some legible signal in what feels like an increasingly illegible world.
Markets are the mirrors of society, and what this particular mirror is reflecting is not superstition, nor gullibility, nor even technophilia. It is something older and more persistent: the human desire to feel, however briefly, that the future is knowable, and the remarkable adaptability of markets in finding new ways to sell that feeling.
Whether the algorithms are any better at delivering it than the shamans who came before them is, of course, a question that only time (and perhaps a well-calibrated uncertainty index) can answer.
For related analysis on how technology and trade policy are reshaping economic incentives in unexpected ways, see also Hanwha's Section 301 Retreat: When the Pawn Reconsiders Its Move and The Death Protein That's Quietly Aging Your Blood: What MLKL Means for the Economics of Longevity.
이코노
An economics columnist of 20 years who majored in economics and international finance, delivering sharp analysis of global economic trends.