We are living through a moral panic about AI and “the end of human creativity”. Generative systems can now write plausible short stories, compose background music, and produce concept art and film storyboards in seconds. Unions and industry bodies talk in explicitly existential terms: an “existential threat” to musicians’ incomes, a “direct threat” to film and TV writers, or the insistence that “creativity is, and should remain, human-centred”.[1]
There is something understandable about this rhetoric. If you have spent years trying to make a living as a writer, illustrator, or musician, then an algorithm trained on billions of words, images, and songs, including work much like yours, can feel like a usurper. Artists’ open letters, strikes, and lawsuits are not imaginary overreactions: there is real evidence of generative systems already displacing illustrators, graphic designers, and other commercial creatives in some sectors. [2]
But I want to approach the moment with two linked suspicions.
- Why are these jobs suddenly treated as metaphysically special - more “essentially human” than those of textile workers, typists, or human computers, whose jobs were also largely automated?
- What exactly do people think is at stake for human value here - and is existential philosophy any help in thinking about that?
The first suspicion is historical; the second is existential. They come together in an awkward way: existentialists are, in general, deeply sceptical of the idea that anyone’s job description can capture their worth.
We’ve been here before, just not with Midjourney
When the Jacquard loom spread through French silk weaving in the early 19th century, it automated not just brute force but pattern-making itself. Complex designs that once required highly skilled artisans could now be produced by semi-skilled workers using punch cards. The loom was attacked, its inventor assaulted, and it helped fuel the Luddite movement in Britain, where textile workers smashed machines they believed were destroying their livelihoods. [3]
Similarly, for much of the 19th and early 20th centuries “computer” was a job title. Teams of mostly women did the numerical work required for astronomy, ballistics, and later aerospace research; human computers at NASA’s Langley lab, including the now-famous “West Area Computers”, calculated trajectories and analysed wind-tunnel tests by hand. By the 1970s, electronic computers had displaced most of these roles. [4]
Textile design and orbital mechanics are not trivial, unskilled tasks. They involve creativity, planning, and judgement. Yet we do not typically talk about the mechanisation of weaving or the replacement of human computers as the point where “human creativity was devalued” or “human beings became obsolete”. So why the special angst around generative AI and today’s creative workers?
Part of the answer is sociological. Over the last few decades, “creative industries” have been repackaged as the spearhead of post-industrial economies, symbolic of national soft power and a promised land of meaningful work. UNCTAD’s 2024 Creative Economy Outlook notes that creative sectors contribute between 0.5% and 7.3% of GDP across countries and up to 12.5% of employment; in the UK alone, estimates typically cluster around £120–125 billion a year. [5]
These are mostly white-collar, often middle-class professions. When automation came for manufacturing workers or clerical staff, we allowed ourselves to imagine that “the knowledge economy” would be a stable refuge. We are now discovering that many of the jobs most exposed to AI are precisely higher-skilled white-collar roles - legal, financial, and yes, creative - rather than the “routine manual” work that dominated earlier automation debates. [6]
So there is a defensive reflex at work. If even writers and artists are not safe, who is? Yet this is already a misleading way to pose the question. Most of the empirical literature suggests that AI is, at present, mainly transforming tasks, not cleanly erasing entire occupations, and often in ways that workers themselves report as improving speed or quality, though with serious caveats about inequality and bargaining power. [7]
For creative industries specifically, the picture is mixed and contested. UNCTAD and others document real instances where generative tools have replaced illustrators, graphic artists, or junior designers, particularly in cost-sensitive markets like gaming and advertising [5]. At the same time, studies like Doshi et al. (2024) find that access to generative models can make individual stories more creative and better-rated, especially for less experienced writers, but also that they reduce the diversity of outputs across a group, nudging everyone toward similar solutions [8].
In other words, there is something new here: a technology that both democratises creative tooling and risks homogenising culture, while shifting bargaining power towards the owners of the infrastructure. But that is not yet a metaphysical catastrophe for “the human”. It is a political and economic problem.
This is where I think existentialism is useful. It gives us a way of talking about meaning, anxiety, and value that does not assume that “being a human artist” is a sacred essence that can only be expressed in certain professions.
Existentialism in one paragraph (and a bit)
“Existentialism” is a cluster of conversations rather than a single doctrine, but the standard caricature is not entirely wrong: it is about what it is like to be a finite, anxious, meaning-seeking creature in an often indifferent world. Four themes matter for our question:
Freedom and self-creation. Sartre’s slogan that “existence precedes essence” means, roughly, that human beings are not born with a fixed nature or purpose, but must continually make themselves through their choices.
Anxiety and possibility. For Kierkegaard, anxiety is “the dizziness of freedom”: the vertigo we feel when we realise that nothing guarantees what we will become, and that our possibilities include terrible as well as noble options.
Nihilism and revaluation. Nietzsche’s announcement that “God is dead” is not a biological claim but a diagnosis: the old moral and metaphysical certainties have collapsed, and we must learn to create values rather than receive them.
Ambiguity and the freedom of others. Simone de Beauvoir argues that we are both free subjects and constrained objects: embodied, gendered, situated beings whose projects always unfold with and against others. Ethical freedom, for her, means willing not just one’s own projects but the freedom of others as well.
Existentialists also care a great deal about art. In the existentialist aesthetics tradition, art is a way in which human beings disclose themselves and their world, a mode of freedom that invites others to take up new possibilities of seeing and acting.
But art in this sense is not limited to people whose tax return lists “novelist” or “guitarist” under occupation. It is, at least potentially, a dimension of any life.
This is the first important point: from an existentialist standpoint, there is nothing ontologically special about “creative jobs” as such. What matters is the project: how a person takes up their situation, assumes responsibility for it, and relates to the freedom of others.
So the question “what if AI replaces artists?” needs reframing. The more interesting question is: what sort of human projects become possible or impossible if certain forms of paid creative work become rarer, more precarious, or more tightly integrated with AI systems controlled by a few firms?
You are not your job (even if your landlord disagrees)
Sartre is helpful for resisting the idea that losing a creative job automatically means losing one’s value as a person.
On his view, we are always in tension between facticity (the brute facts of our situation: economic conditions, social institutions, the existence of large language models) and transcendence (our capacity to project ourselves into possibilities that are not yet real). “Existence precedes essence” is, among other things, a warning against defining yourself as nothing but a role: the waiter who plays at being a waiter so completely that he forgets his freedom is Sartre’s stock example of bad faith.
By analogy, the composer who becomes nothing but “a person whose worth is measured by how many sync deals I can sell before a generative model takes my job” is also in danger of bad faith. Their situation has undeniably changed; royalty forecasts from collecting societies now model scenarios in which AI eats 20–25% of music income by 2028 unless regulation intervenes. [1]
That facticity is not negotiable at the level of individual choice.
But the interpretation of the situation is. To treat a shift in the labour market as a proof that “human creativity doesn’t matter” is precisely the kind of hasty essence-making Sartre criticises. Jobs are one of the most visible forms our projects take, and losing one can be a profound assault on dignity, especially in societies that tether healthcare, housing, and basic status to employment. Still, from a Sartrean point of view, no algorithm can deprive you of the basic structure of your existence: that you are condemned to be free, obliged to choose, and unable to offload responsibility for what you make of your life onto a machine.
That’s the consoling part. The less consoling part is that Sartre refuses to let us blame “technology” in the abstract. The deployment of AI in workplaces is itself a human project: somebody decided to buy the system, to restructure the firm, to chase a certain margin. Existentialism is deeply opposed to the quietism that says “the market” or “the technology” simply made it happen.
So if creative jobs are being gutted in ways that leave people unable to support themselves, this is not a metaphysical tragedy; it is a political failure. And political failures are, in principle, correctable.
AI and the dizziness of freedom
It is easy to feel that AI is closing down possibilities. You train in animation for years; a model can now create acceptable storyboard art in seconds. You learn to write ad copy; a chatbot produces ten options while your coffee is still warm.
But another way to read the situation, a more uncomfortable one, is that AI is multiplying possible paths while eroding the scripts that previously told us which ones were “sensible”.
Kierkegaard describes anxiety not as mere fear (which has a determinate object) but as the vertigo that comes from recognising our freedom - the sense that we could step off the ledge, or quit the job, or abandon the script, and nothing in the structure of the world will stop us.
Generative AI accelerates this vertigo in at least two ways:
Instrumentally, by lowering the cost of trying things. A songwriter with access to AI tools can now experiment with orchestration, video, and marketing material that would previously have required whole teams. And there is evidence that such tools can, in some contexts, make individual creative outputs more polished or imaginative, especially for those with less prior expertise.
Structurally, by destabilising life-plans. If no one can tell you whether “novelist”, “illustrator”, or even “junior lawyer” will still be viable professions in twenty years, you confront more nakedly the fact that these were never guaranteed life-rafts.
Kierkegaard’s point is that this anxiety is not an enemy to be eliminated. It is the affective cost of being a free being rather than a cog. Attempts to escape it by clinging to guarantees (“AI will never be able to do X”, “the market will always reward human art”, “the robots will certainly replace us all so nothing matters”) are, on his view, forms of self-deception.
From this angle, the fear that “AI will take creative jobs” can mask a deeper discomfort: that our culture had tacitly made creative employment into a kind of secular salvation. The promise was that if you were talented and persistent enough, you could translate your inner life into a recognised role (novelist, game designer, indie musician) and thereby anchor your identity. AI does not so much destroy human worth as expose how fragile that bargain always was.
This does not tell us what to do about AI in policy terms. But it does suggest that a purely protective stance, attempting to freeze the current distribution of roles in place, is, at best, a holding pattern. At some point we will have to decide what counts as a worthwhile life when the old occupational markers are no longer reliable.
After the “death of the author”?
Someone who takes Nietzsche seriously is suspicious of reassurances that “human creativity” is safe because AI lacks consciousness, intentionality, or some other metaphysical property. Nietzsche is less interested in metaphysical guarantees than in what cultures make of their crises.
Nietzsche’s famous proclamation that “God is dead” diagnoses a situation where the highest values (truth, morality, divine order) no longer command our genuine belief, even if we keep mouthing them. The danger is not atheism per se but nihilism: the sense that nothing matters, that all values are arbitrary.
In the AI context, we can sketch an analogy. Much of modern Western culture has tacitly worshipped “creativity” as a highest value. Creative work is supposed to be self-expressive, liberating, inherently good. Now we have machines that, trained on oceans of past work, can produce convincing facsimiles of style and genre. At the same time, industrial use of generative AI threatens to flood the world with cheap synthetic images, songs, and prose - what some creators dismiss as “slop”.
This combination can make it feel as if creativity itself has been cheapened. If a model can spit out a plausible “new” Radiohead-ish track or fantasy novel cover in seconds, trained, moreover, on Thom Yorke’s actual work without his consent, then perhaps all along creativity was just pattern recombination, and there was nothing particularly noble about it.
Nietzsche’s alternative is neither to nostalgically defend “real” art against “fake” AI art nor to shrug and embrace cultural nihilism. It is to insist that values, including aesthetic ones, are achievements. They are created, contested, and re-created by people who care enough to fight for them. In The Gay Science, art is one of the highest forms of affirmation: the capacity to shape chaos into something that can be loved.
On this view, the meaning of AI for the arts is not predetermined by its technical capacities. If platforms choose optimisation metrics that reward engagement over depth, and if we collectively accept a cultural diet of cheap pastiche, then generative systems will indeed contribute to a flattening of taste and a marginalisation of difficult, idiosyncratic work. Doshi et al.’s finding that generative AI can reduce collective diversity of output is one empirical mechanism by which that flattening could occur.
But Nietzsche would likely say: that is our failure, not the machine’s. It reflects a refusal to take on the harder task of curating, commissioning, and valuing work that resists easy reproduction - whether that is live performance, local experimentation, or art that deliberately plays against the statistical grain.
In that sense, the real “death” we should worry about is not of the author but of the audience: a public that no longer exercises taste, patience, or curiosity because the cheapest possible stimulation is always a click away.
Situations, scraping, and the freedom of others
So far I have mostly been downplaying metaphysical claims about creative work. Beauvoir pushes in the opposite direction: she reminds us that how we treat creative workers materially and legally is an ethical question, precisely because their projects depend on others’ recognition.
For Beauvoir, human existence is ambiguous: we are at once free subjects and constrained objects. Our freedom is real, but always exercised in particular situations shaped by history, institutions, gender, class, and now, I would add, digital infrastructures.
“To will oneself free,” as she puts it in The Ethics of Ambiguity, is also to will others free, because our projects only take on meaning when others can take them up, respond to them, and pursue their own.
Seen through this lens, some features of the AI–creativity story look less like neutral technological progress and more like a straightforward violation of other people’s freedom:
Training data without consent. Artists, writers, musicians and news organisations have repeatedly protested the unlicensed use of their work to train generative models that then compete with them in the marketplace. Open letters signed by tens of thousands of creators, lawsuits against AI firms, and coalition campaigns like “Make It Fair” in the UK all highlight the same basic asymmetry: capital-rich tech companies extract value from copyrighted works at scale, while individual creators are left to chase uncertain licensing income. [9]
Labour without bargaining power. The 2023 WGA and SAG-AFTRA strikes were not only about residuals and day rates; they were explicitly about controlling the use of AI to generate scripts, scan actors’ bodies, and create “digital doubles”, often with proposals that would have granted studios long-term rights in exchange for a single day’s pay. [10]
Precarity in an already precarious sector. Cultural-labour scholars have long argued that creative industries are characterised by chronic insecurity, overwork, and weak social protection, often romanticised as “doing what you love”. AI arrives in this context not as a neutral tool but as a further lever for intensifying competition and pushing down rates.
From a Beauvoirian standpoint, what is objectionable here is not that AI produces images or texts as such. It is that the freedom of some (tech firms and clients seeking lower costs or faster turnaround) is being expanded at the cost of undermining the material conditions of others’ freedom.
The UNDP’s 2025 Human Development Report makes a remarkably similar point in less philosophical language: AI is not destiny but “a matter of choices”, and the distribution of its benefits and harms depends on policies around data, labour, and social protection.
If we take existentialism seriously, then, the ethical question is not “how do we preserve creative jobs because they are uniquely human?” It is “how do we arrange our institutions so that as many people as possible can pursue meaningful projects, creative or otherwise, without being reduced to mere training data or expendable ‘junior roles’ squeezed out by automation?” [11]
So: are creative jobs special?
At this point, I think we can say two things at once.
1. Creative jobs are not metaphysically special. Existentialists deny that there is any fixed human essence that only artists instantiate. Textile workers, human computers, carers, teachers, and programmers all face the same fundamental condition: they must make something of themselves in a world that did not ask for them and imposes no guaranteed script. AI may automate certain tasks, but it cannot relieve us of that responsibility.
2. The way we treat creative work is politically and ethically special. Cultural production is one of the main sites where we collectively articulate meaning, memory, and identity. It also relies on symbols that are infinitely copyable, which makes it particularly vulnerable to certain business models: scraping everything, training a model, selling access to synthetic derivatives, and externalising the costs. If these practices undermine the ability of diverse human voices to be heard and to support themselves, that is a loss not just for those workers but for the rest of us.
Existentialism cuts through both complacency and panic here.
- It undercuts the comforting thought that “my job is my essence”, which turns technological change into an attack on my being rather than my situation.
- It also undercuts the fatalistic thought that “AI will do what AI will do”. Technologies are always embedded in human projects, and we remain answerable for which projects we pursue.
In that sense, the “specialness” of creative work lies less in the sanctity of particular job titles and more in the fact that they are paradigmatic sites where freedom, recognition, and material conditions collide. They make visible what is at stake - for everyone - in how we handle AI and automation.
A few tentative implications
I’ll end with some deliberately sketchy conclusions, mainly because my own thinking in this area is still developing and, in all humility, it is too soon for anything firmer than a sketch.
1. We should stop saying that AI makes humans “worthless”. This is bad metaphysics and worse politics. AI systems, as UNDP and OECD both stress, are pattern-recognisers and generative tools that cannot frame problems or care about outcomes.
Human worth does not rest on outperforming machines at particular tasks, but on our capacity for responsibility: to ourselves, to others, and to the shared world.
2. We should be more honest about class and privilege. The panic over creative jobs reveals that many of us implicitly treated certain white-collar occupations as the “real” locus of human value, quietly discounting the experiences of those whose jobs were automated earlier. An existentialist ethics asks us to take everyone’s situation seriously, not just the cultural producers we happen to identify with.
3. We should treat AI in creative industries as a governance problem, not a metaphysical drama. Questions about training data consent, licensing, fair compensation, and the design of platforms are boringly institutional, which is precisely why they matter. Reports from bodies like UNCTAD, the BFI, and parliamentary committees already offer concrete recommendations here; the open letters and strikes are attempts by creators to assert their freedom within those structures.
4. We should rethink the relation between “doing art” and “making a living”. This is the hardest and most uncomfortable point. Existentialists insist that meaningful projects do not have to be commodified to be real. But under current conditions, time and security for such projects do depend on income. One response is to fight for sector-specific protections; another is to ask whether broader mechanisms (public funding, basic income, labour guarantees) are needed so that people can pursue creative projects without tying their existence to a shrinking number of well-paid cultural jobs. None of this is specific to AI, but AI makes the question harder to ignore.
Finally, and most speculatively: perhaps generative AI will eventually become just another tool, like the camera or the synthesiser, which initially seemed to threaten older forms of art but were eventually folded into new ones. Or perhaps it will solidify a regime in which a few platform owners control the means of cultural production in increasingly extractive ways.
Existentialism does not tell us which future will arrive. What it does insist on is that whichever future we get will be one we participated in making: by our votes, our consumption, our professional norms, and our willingness (or otherwise) to stand in solidarity with those whose situations are being transformed.
The question, then, is not whether AI will replace some creative jobs. It almost certainly will, just as earlier waves of automation replaced other forms of skilled work. The question is whether, in responding to that fact, we will treat ourselves and others as cogs or as the free, anxious, meaning-seeking beings we already are.
References
[1] Burke, K. 2024 “Music sector workers to lose nearly a quarter of income to AI in next four years, global study finds” https://www.theguardian.com/music/2024/dec/04/artificial-intelligence-music-industry-impact-income-loss
[2] Zhao, B. 2024 “Replacement of human artists by AI systems in creative industries” https://unctad.org/news/replacement-human-artists-ai-systems-creative-industries
[3] Geselowitz, M. 2019 “The Jacquard Loom: A Driver of the Industrial Revolution” https://spectrum.ieee.org/the-jacquard-loom-a-driver-of-the-industrial-revolution
[4] McLennan, S. and Gainer, M. “When the Computer Wore a Skirt: Langley’s Computers, 1935–1970” https://www.nasa.gov/history/langleys-computers-1935-1970
[5] UNCTAD, 2024 “Creative Economy Outlook 2024” https://unctad.org/publication/creative-economy-outlook-2024
[6] OECD, 2023 “Employment Outlook 2023”, https://www.oecd.org/en/publications/oecd-employment-outlook-2023_08785bba-en.html
[7] OECD, 2024 “How is AI changing the way workers perform their jobs and the skills they require?” https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/11/how-is-ai-changing-the-way-workers-perform-their-jobs-and-the-skills-they-require_842aa075/8dc62c72-en.pdf
[8] Doshi, A. and Hauser, O., 2024 “Generative AI enhances individual creativity but reduces the collective diversity of novel content” https://www.science.org/doi/10.1126/sciadv.adn5290
[9] Schor, Z. 2024 “Andersen v. Stability AI: The Landmark Case Unpacking the Copyright Risks of AI Image Generators” https://jipel.law.nyu.edu/andersen-v-stability-ai-the-landmark-case-unpacking-the-copyright-risks-of-ai-image-generators
[10] Kinder, M. 2024 “Hollywood writers went on strike to protect their livelihoods from generative AI. Their remarkable victory matters for all workers.” https://www.brookings.edu/articles/hollywood-writers-went-on-strike-to-protect-their-livelihoods-from-generative-ai-their-remarkable-victory-matters-for-all-workers
[11] ILO 2023 “The future of work in the arts and entertainment sector” https://www.ilo.org/media/252276/download?