The Jobs Nobody Could Predict
If you are reading this, there is a good chance you just read the big AI Essay that has taken the world by storm. Or maybe someone sent it to you, or you saw it trending, or a friend brought it up at dinner. Over forty million people viewed the post in its first few days. For a piece about artificial intelligence written by a tech guy, that is an extraordinary number. It broke out of the AI world and into the broader public conversation in a way that almost nothing in this space has before.
I want to talk about why.
Not just why the article exists, but why it landed the way it did, why it is resonating with people who have never thought much about AI, and why this particular moment feels different. Then I want to do something that I think this piece, as well written and important as it is, does not quite do: I want to ground the anxiety in history, because the historical record has a lot to say about where we are right now, and most of the loudest voices in AI are not saying it.
Nothing I write here is meant to diminish this piece. It is very well written, and most of what it calls out is probably close to correct or outright correct. But I want to add some context that I think matters.
It is worth noting that Matt (the author) is not a disinterested observer. He is the CEO of an AI company and an AI investor. He also faced fraud accusations in 2024 after his open-source model Reflection 70B failed to replicate its claimed benchmarks in every independent test, with some evaluators finding evidence it may have been routing responses through another company's model. He admitted he got ahead of himself. That does not invalidate his essay, but it is context worth having when calibrating how much urgency to assign to the message.
Why You Are Seeing This Article
In the AI world, something like this goes viral every single week. An essay, a demo, a benchmark result, a product launch. The cycle is relentless. But until recently, it stayed insulated inside the industry. Matt's essay is one of the first clear cases where something that went massively viral inside the AI world actually made it outside. That crossover did not happen by accident. There are macro trends driving it, and understanding those trends matters more than any single article.
Consider what has happened in just the last few weeks. Anthropic and OpenAI released new models that made previous versions feel like a different era, models so capable at writing software that the tech world immediately began asking whether SaaS as a business model is dead. If AI can build custom software on the fly, why would you pay a monthly subscription for a one-size-fits-all tool? SaaS stocks dropped. The conversation jumped from AI Twitter to the financial press to dinner tables. Suddenly, this was not an abstract debate about the future. It was showing up in people's portfolios and in their company's Slack channels. That is the kindling that was already burning when Matt's essay landed.
The Leadership Anxiety Wave, and the Incentives Behind It
In the last few months leading into the new year, something shifted in the public posture of AI industry leaders. At the World Economic Forum, in long-form essays, in interviews, the people running the biggest AI companies began expressing their own personal anxieties about the pace at which AI will replace jobs. Dario Amodei, the CEO of Anthropic, published a 20,000-word essay warning that fifty percent of entry-level white-collar jobs could be displaced within one to five years. This was not a journalist speculating. This was the person building the technology saying it out loud.
Typically, this kind of information stays visible within the industry. It circulates among people who follow AI closely, and then it gets distilled outward, slowly, to friends, families, colleagues. But the volume and the directness of these recent statements have been different. When the CEO of one of the most important AI companies tells you that half of entry-level white-collar jobs might disappear within five years, that lands differently than a think piece in a trade publication. I believe this has had a dramatic effect on increasing anxiety among people who are not in AI. And that anxiety is the soil in which Matt's essay took root.
But there is a tension here worth sitting with. These leaders are running profit-driven companies. And consider what happens when the CEO of an AI company says that AI will replace jobs within the next year or two versus saying it will take ten or fifteen years. If it is next year, there is a tremendous amount of value that will be captured at the company level in a very short window, which drives up near-term valuations. If it is ten years out, investors shrug and move on. I am not saying these leaders are being dishonest. I think they genuinely believe what they are saying. But the incentive structure is real, and it is important to be clear-eyed about it. The people telling you to be afraid are the same people who benefit financially from the urgency of that fear.
Software Was the Canary
Matt correctly calls out that the software industry has already been dramatically impacted by AI. Writing code is one of the tasks that AI does best, and the effects are visible and measurable. Microsoft's CEO has confirmed that thirty percent of the company's code is now AI-written. Burning Glass Institute data shows that the share of software development job postings requiring three years of experience or less dropped from forty-three percent to twenty-eight percent between 2018 and 2024. Stanford research found a thirteen to sixteen percent relative decline in employment for workers aged twenty-two to twenty-five in AI-exposed occupations since ChatGPT launched.
But here is the thing I want to call out that Matt briefly mentions but does not quite nail: software is a verifiable task. You write code, you run it, and you know whether it works. Very few tasks in the world offer that level of certainty. You get a clear, binary signal: the program either does what it is supposed to do, or it does not. That is why the AI labs started with software. It was not primarily to create some flywheel of AI building better AI, although that is a real and genuinely unsettling dynamic. It was because software is the easiest domain to automate well, precisely because you can verify the output programmatically.
This matters because it means the software industry is the leading edge of displacement, not the template for how every other industry will experience it. The leap from automating code to automating the messy, ambiguous, context-dependent work that most people do is enormous. It might happen, but it will not happen overnight, and it will not look the same.
The Enterprise ROI Pressure, and Why You Might Already Be Feeling It
There is a shift happening right now inside companies that many of you are probably already feeling, even if nobody has named it out loud in your organization. The era of playing with AI is over. Enterprises are done experimenting. They are now demanding real return on investment, and if you work at a company of any meaningful size, there is almost certainly a conversation happening somewhere about how to demonstrate that AI is delivering measurable value.
And here is the uncomfortable truth about how that conversation tends to go. One of the most straightforward ways to demonstrate ROI is human cost takeout. If you can say, I replaced this group of workers with an AI tool, I paid X for the tool, and I saved Y by no longer employing those people, that is a clean, measurable story. Your CFO can put it in a slide. Your board can see it in a quarter. It fits neatly into the way businesses have always measured efficiency gains.
The alternative is harder. If AI enables your team to do more in a day, or to take on work you otherwise would not have been able to do, or to move faster and produce higher quality output, those benefits are real. But they are diffuse. They are difficult to put a dollar figure on over a specific time horizon. And in a world where leadership is under pressure to show that the millions being spent on AI tools are paying off, the thing that gets measured is the thing that gets managed. Right now, headcount reduction is easier to measure than productivity enhancement.
So if you are sitting in an organization and sensing a new kind of pressure, a tightening, a quiet restructuring, a sudden interest from leadership in which tasks could be automated, that is not paranoia. That is the enterprise ROI machine starting to turn. It does not mean your job is gone tomorrow. But it does mean the conversation is happening, and it is happening with increasing urgency. And that pressure, layered on top of the leadership anxiety wave and the software canary, is a big part of why Matt's essay hit the way it did.
The Anxiety Is Real, and So Are Its Causes
All of this is why you are seeing this article, and why it resonated with people who had never previously paid attention to AI. There are macro trends converging: leadership anxiety expressed publicly and forcefully, a software industry already visibly disrupted, enterprise pressure to demonstrate ROI through headcount reduction, and a general population that is hearing about AI job displacement not from journalists but from the people building the technology. These trends do not just explain the article. They speak to a real, lived anxiety that is becoming harder to dismiss.
What History Actually Says
The one thing I do not like about Matt's essay, and about a lot of the writing from prominent AI leaders, is that they all fail to talk about historical innovations on the scale of AI and the effects those innovations had on the economy and the job market. They all say, in passing, that AI will create new jobs. But they do not go into history and explain what actually happened. They do not walk people through the world before an innovation and the world after, so that you can see the pattern for yourself. They do not name what those new jobs were. And they do not acknowledge that nobody at the time could have predicted them.
I think that is a significant omission, and I want to spend the rest of this essay filling it in. I have done extensive research on four historical transitions: the printing press, electrification, the automobile, and the internet. What the record shows is remarkably consistent, and I think it carries real implications for how we should think about AI.
The Printing Press: From 30,000 Books to Nine Million
Before the printing press, approximately 30,000 books existed in all of Europe. By 1500, more than nine million volumes were in circulation. The scribes saw it coming. Filippo de Strata, a Venetian monk, begged the Doge to ban printing in 1474, calling the press a whore and lamenting that a true writer dies of hunger. Scribes' guilds petitioned city councils across Europe. Vespasiano da Bisticci, Florence's most celebrated manuscript bookseller, closed his business around 1480 and died in 1498, his profession extinct. None of the resistance succeeded.
But the printing press did not just kill the scribe profession. It created an entirely new division of labor that nobody had named or predicted: compositors, press operators, type founders, proofreaders, editors, publishers, booksellers. By 1500, presses were running in 282 cities. Economist Jeremiah Dittmar found that cities adopting printing presses between 1450 and 1500 grew twenty to thirty-five percentage points more than comparable non-adopting cities over the following century. The roles of editor, publisher, journalist, and professional librarian evolved organically over decades. Not one person in 1475 could have named them. The publishing industry was not considered mature until roughly 1550, a full century after Gutenberg.
Electrification: Terror Before Transformation
The public reaction to electricity was not job-loss anxiety. It was visceral terror. In 1889, a lineman was electrocuted in full view of thousands of New Yorkers, triggering what historians call the Electric Wire Panic. When 600 lamplighters went on strike in New York in 1907, the cruel irony was that the few roads that stayed lit were the ones in Central Park already converted to electric. In Belgium, lamplighters smashed electric lamps to prevent their own displacement. The natural ice industry, which employed an estimated 90,000 people in New York City alone, took roughly sixty-five years to fully disappear after the introduction of electric refrigeration.
And the new workforce dwarfed the old one. Engineers went from 74,000 in 1910 to 2.3 million in 2000, a ninefold increase as a share of total employment. The electric light and power industry saw employee counts increase roughly 730 percent between 1902 and 1927. Edward Bellamy predicted in 1888 that technology would allow citizens to retire at 45 by the year 2000. He was as wrong as the lamplighters. Neither side could imagine what actually happened: a vastly expanded professional class doing work that nobody in 1890 would have recognized as work at all.
The Automobile: Killing an Economy, Building a Bigger One
A banker tried to dissuade Horace Rackham from investing five thousand dollars in Ford Motor Company in 1903, concluding that the horse is here to stay but the automobile is only a novelty, a fad. Rackham invested anyway. His five thousand dollars became 12.5 million. The dismissals were everywhere and rooted in real workflow constraints: as late as 1913, writers argued that for short hauls where many stops must be made, the gas wagon can never displace the horse. They extrapolated from current limitations and underestimated future improvements.
The horse economy was enormous. The 1890 census recorded over 90,000 people in the wagon and carriage industry, plus thousands of teamsters, tens of thousands of blacksmiths, and an uncounted army of stable hands and farriers. One quarter of U.S. farmland grew feed for horses. By 1929, the automobile ecosystem had dwarfed all of it: 330,000 in dealerships, 127,000 at service stations, 104,000 in garages.
The Internet: 2.6 Jobs Created for Every One Destroyed
In 1995, Clifford Stoll wrote in Newsweek that no online database will replace your daily newspaper. Newsweek ceased print publication in 2012. Paul Krugman predicted the internet's impact on the economy would be no greater than the fax machine's. Robert Metcalfe, the inventor of Ethernet, predicted the internet would catastrophically collapse; in 1997, he put his column in a blender and drank it on stage. The destruction was real: travel agents declined seventy percent, Blockbuster went from 84,300 employees to fewer than 11,000, newspaper classified revenue fell from 19.6 billion to 4.6 billion in a decade.
But the creation was staggering. When Stoll wrote his piece, Google, Facebook, YouTube, and the iPhone did not exist. None of the following job titles had meaning in 1995: SEO specialist, social media manager, app developer, data scientist, cloud engineer, content creator, influencer. Jeremy Rifkin predicted that ninety million jobs were vulnerable to replacement by machines. Instead, the U.S. economy added tens of millions of net new jobs. McKinsey found the internet created 2.6 jobs for every one lost. One third of all new jobs created in the U.S. over the following twenty-five years were types that did not exist previously. By 2025, an estimated 28.4 million U.S. jobs were internet-driven or internet-dependent.
The Pattern Nobody Talks About
Across all four of these transitions, the same dynamics repeat. Destruction is always countable; creation is always diffuse. The scribes could count each other's losses in real time. They could not see the publishing industry forming across hundreds of cities. The video store employees knew they were losing their jobs. Nobody told them about the millions of roles materializing in an economy that did not yet have a name. And not once, across 500 years, did a contemporary observer accurately name the major new occupational categories that would emerge. The invisibility is structural. New jobs emerge from behaviors that the technology enables at scale, and you cannot predict those behaviors until the technology is deployed.
The lag between destruction and creation is real and painful, but it compresses with each revolution. The printing press took roughly seventy to eighty years. Electrification played out over much of the twentieth century. The automobile took fifteen to twenty-five years. The internet's painful gap lasted about five to eight years. And the sentiment arc follows the same pattern every time: curiosity, peak anxiety, institutional adaptation, then normalization. The printing press took a century to complete this arc. The internet took fifteen to twenty years. If the pattern holds, AI's arc could complete in five to ten years. We are squarely in phase two, peak anxiety, right now.
Where AI Stands Now, and the Jobs Coming Into View
The Bureau of Labor Statistics is, for the first time, explicitly citing AI as a factor dampening employment growth across multiple white-collar occupations. Klarna replaced 700 customer service agents with an AI chatbot before admitting they went too far. Chegg lost ninety-nine percent of its stock value. It is worth noting, though, that Forrester found fifty-five percent of employers regret laying off workers for AI, having eliminated roles based on capabilities that do not yet exist. The destruction is real, but sometimes exaggerated.
The creation is also already happening. LinkedIn data shows AI has created 1.3 million new roles in just two years. PwC found a fifty-six percent wage premium for workers with AI skills in 2024. And the job creation mechanisms follow the historical pattern precisely: regulation, safety, trust, integration, and infrastructure. The EU AI Act is creating demand for governance leads the way GDPR spawned 75,000 Data Protection Officers. AI trainers and data annotators number in the tens of thousands. AI safety has moved from academic curiosity to corporate necessity. The historical crosswalk is direct: every prior revolution created jobs in selection, permissioning, quality control, safety standards, and institutional trust. AI is doing the same.
The Jobs Nobody Is Seeing Yet
In every prior revolution, the most important new jobs came not from the technology itself but from the second- and third-order behaviors it enabled. We can already see the early contours with AI. When AI generates the majority of digital content, nearly seventy percent of consumers say they are concerned about deception. That concern is creating an authenticity economy: content provenance verifiers, authenticity strategists, professionals whose verified human judgment becomes the scarce, premium product. As AI agents proliferate, with eighty-two percent of executives expecting them in their workforce within eighteen months, new infrastructure roles are forming around agent management, economics, and interoperability. When AI handles routine cognitive tasks, scarcity shifts to what cannot scale: empathy, physical presence, human connection. When AI gives everyone access to expert-level advice, the value of judgment and meta-expertise rises while the value of simply knowing things drops.
I will surely be wrong about some of the specific jobs I have named here. That is the whole point. Nobody gets this right in prospect. But the emerging trend that new work will form around AI is not speculation. It is what has happened, without exception, for five hundred years.
A Challenge for Everyone Reading This
History has always produced two camps: those who resist the technology and those who adopt it. In every case we have looked at, the people who resisted felt the effects far more acutely than the ones who leaned in and started learning. I challenge everyone reading this to be in the camp of adoption. Not because the transition will be painless, or because your anxiety is unfounded. Your anxiety is rational. But pessimism and anger about the technology probably will not change the fact that it is coming. Maybe there is a world where there is a great AI rejection. I would be okay with that outcome. But historically, it has never happened.
Start reskilling now. Get a paid subscription to Claude or ChatGPT and push it into your actual work, not as a search engine but as a collaborator. Think about the macro shift: there are people who get paid for what they do, people who get paid for what they know, and people who get paid for the judgment they bring to complex situations. In a world where AI can do a lot of the doing, the premium shifts upward. Everyone, in some capacity, becomes a manager of AI agents, or a manager of the people who manage the agents. The ATM paradox is instructive: when ATMs arrived, headlines predicted seventy-five percent of teller jobs would vanish. Instead, teller employment grew, because ATMs reduced the cost per branch, banks opened more branches, and tellers shifted to relationship-focused roles. The technology did not eliminate the job. It transformed it.
The Thing That Will Not Change
I want to close with something that none of the AI essays talk about, and I think it might be the most important thing of all.
At the end of the day, life is lived among people. It always has been. We do not rely on technology for the human side of work, and I do not think that is going to change. Change happens within organizations, and organizations are made of people. Your boss is a person. Your colleagues are people. The person who decides whether you stay or go during a restructuring is a person sitting across a table from you, not an algorithm.
Think about that for a second. If you get fired, your boss is going to fire you. But if your boss loves working with you, if your boss understands that great people are genuinely hard to come by, your boss is probably not going to fire you. They are going to find ways to capitalize on your skills, to redeploy you, to keep you in the organization. Like anyone who has ever tried to build something, I know how painfully difficult it is to find good people who do good work. That has always been true, and AI does not change it. Just because AI can do the work does not mean organizations are going to get rid of every great person they have. The people who are excellent at what they do, who are trusted, who bring judgment and reliability and human connection to their roles, those people have always been scarce. AI does not make them less scarce. If anything, it makes them more valuable.
So the challenge is not just to learn the tools. The challenge is to make yourself valuable and make yourself appreciated. Build relationships. Be someone that people want to work with. Bring something to the table that cannot be replicated by a language model, not because you are competing with the model, but because organizations are ultimately built on trust between humans, and that trust is earned through years of showing up and doing meaningful work alongside other people.
And beyond that, take a moment to reflect on what you actually want to do with your life. Not what job title you want, or what salary you want, but what kind of work gives you a sense of purpose. Because in a world where AI handles more and more of the routine cognitive labor, the question of what you choose to do with your time and energy becomes more important, not less. The people who will thrive are not just the ones who learn the tools fastest. They are the ones who have a clear sense of what they bring to the world that matters, and who invest in the human relationships that no technology can replace.
The jobs AI will create are invisible right now for the same reason that app developer was invisible in 2005 and YouTuber was invisible in 2004. That pattern has held for five hundred years. You do not have to be confident in your predictions to be prepared. You just have to be willing to learn, willing to adapt, and willing to stay close to the people who matter. The technology is coming. The new jobs will follow. But the relationships you build and the value you bring to the people around you have always been, and will always be, the thing that carries you through.