The First Rung
What the AI labor data actually shows, and what it implies for the next decade.
Most public conversation about AI and jobs is still stuck between two unhelpful poles. On one side, an apocalyptic register that treats every layoff announcement as proof of mass displacement. On the other, a dismissive register that points to a low headline unemployment rate and concludes that nothing is happening. Neither view survives contact with the data we now have.
The evidence is more specific, and in some ways more interesting, than either camp will admit. Generative AI is showing up in measurable employment statistics. The signal is real. But it is concentrated, not broad. It lives in a particular slice of the labor market: early-career workers, in white-collar occupations whose tasks are most exposed to language-model automation, doing work that senior employees can plausibly review at machine speed. Outside that slice, the picture is much closer to business-as-usual, with the usual macro and sectoral noise.
That narrowness is what makes the moment worth thinking about carefully. A small, sharply identified shock at the entry level of the white-collar labor market is not a curiosity. It is the leading edge of a longer adjustment, and the second- and third-order consequences are likely to be more important than the first. The first rung of the white-collar ladder is the place where firms train the people who will run things in 2034. If that rung is being quietly pulled, it has implications for talent pipelines, for the demographics of an entire cohort, for housing and family formation, and for the macroeconomic question of where displaced labor ends up.
This piece tries to do three things. First, lay out what the strongest evidence actually says, and what it does not. Second, walk through the operational mechanism — why the adjustment is showing up in hiring before wages, and what that looks like inside firms. Third, take the longer view: extrapolate the second- and third-order effects, with appropriate caution, on the apprenticeship pipeline that will shape the 2030–2033 talent market, on cohort scarring and its political consequences, and on the macro question of where the labor goes when the entry rung gets thinner.
The investment lens runs through the whole piece, but I have tried not to let it crowd out the broader question. The labor market is not just a portfolio input. It is the substrate of a society that is now being asked to absorb a new general-purpose technology faster than any of its institutions were designed for.
What the data actually says
The empirical anchor is a Stanford Digital Economy Lab working paper by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, titled Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. It uses high-frequency administrative payroll data from ADP, the largest payroll provider in the United States. That source matters. Most public commentary on AI and jobs leans on layoff press releases, public job-posting scrapes, or marketplace anecdotes. Administrative payroll data is several rungs more reliable. It captures actual paychecks for actual workers across a very large slice of the U.S. private workforce.
The headline, in the November 2025 version of the paper, is that workers between the ages of 22 and 25 in the most AI-exposed occupations have experienced a 16% relative employment decline after the widespread adoption of generative AI, even after the authors control for firm-level shocks. An earlier version of the paper, published through the Stanford Institute for Economic Policy Research in August 2025, put the figure at 13%. The newer estimate is bigger, not smaller, and the underlying pattern has held up under additional robustness checks.
The pattern matters more than the headline. The decline is concentrated among workers ages 22 to 25, not across the broader white-collar labor force. It shows up in occupations with high AI exposure and not in occupations with low AI exposure. Within the high-exposure category, it is concentrated in roles where AI is more likely to automate tasks rather than augment workers. It appears in employment quantity rather than in compensation. And it survives controls for technology-firm exposure and remote-work amenability, which would otherwise be the natural alternative explanations.
That is a tight enough fingerprint to be hard to dismiss as a generic post-2022 tech slowdown. A pure macro story — higher rates, post-pandemic hiring normalization, a tech-sector correction — would not naturally predict that 22-to-25-year-olds in highly AI-exposed jobs underperform 35-to-45-year-olds in the same occupations, inside the same firms, at the same time. It would not predict that automation-heavy jobs diverge from augmentation-heavy jobs. It would predict broad-based weakness, not a narrow pattern keyed to the parts of the economy where machine substitution is most plausible.
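The identification logic here — young versus older workers, within the same firms, across high- and low-exposure occupations — is a difference-in-differences design. A toy sketch with synthetic numbers (all figures invented for illustration; they are not the paper's data):

```python
# Toy difference-in-differences sketch of the paper's identification logic.
# All numbers are invented for illustration; they are not the paper's data.
# Rows: (age_group, exposure, period) -> employment index (pre-period = 100).

emp = {
    # Young workers (22-25)
    ("young", "high_exposure", "pre"): 100.0,
    ("young", "high_exposure", "post"): 84.0,   # the group that diverges
    ("young", "low_exposure", "pre"): 100.0,
    ("young", "low_exposure", "post"): 99.0,
    # Older workers (35-45), same occupations, same firms
    ("old", "high_exposure", "pre"): 100.0,
    ("old", "high_exposure", "post"): 98.0,
    ("old", "low_exposure", "pre"): 100.0,
    ("old", "low_exposure", "post"): 99.0,
}

def change(age, exposure):
    """Percent change in employment from pre to post for one cell."""
    pre = emp[(age, exposure, "pre")]
    post = emp[(age, exposure, "post")]
    return 100.0 * (post - pre) / pre

# First difference: exposed vs. unexposed, within each age group.
young_gap = change("young", "high_exposure") - change("young", "low_exposure")
old_gap = change("old", "high_exposure") - change("old", "low_exposure")

# Second difference: does the exposure gap open up only for the young?
did = young_gap - old_gap
print(f"young exposure gap: {young_gap:+.1f} pp")  # -15.0 pp
print(f"old exposure gap:   {old_gap:+.1f} pp")    # -1.0 pp
print(f"diff-in-diff:       {did:+.1f} pp")        # -14.0 pp
```

A broad macro shock — higher rates, a tech correction — would move both age groups' exposure gaps together and net out in the second difference. The paper's finding is that it does not net out.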
The single most important caveat is timing. In a follow-up note in February 2026, the same authors looked carefully at when the divergence began. Under their broadest specification, with firm-time fixed effects that absorb essentially all firm-level shocks, the employment decline for AI-exposed young workers becomes statistically significant only in 2024, not in late 2022 or 2023. They also argue that interest-rate movements do not appear to explain the disproportionate weakness in this group; the pattern is concentrated in the right places to look like AI rather than rates.
The honest reading is layered. From late 2022 through 2023, the white-collar entry-level market was already weak for non-AI reasons — post-ZIRP normalization, a tech-sector correction, an overhang of pandemic-era hiring. AI did not cause that weakness. What appears to have happened, beginning in 2024, is that AI exposure started to differentiate outcomes within an already-weak market. The young workers in the most exposed occupations stopped recovering at the rate their less-exposed peers did. By 2025, the divergence was large enough to be visible in administrative data and robust to firm-level controls.
That is not the simple story that “AI is taking the jobs.” It is a more carefully bounded story: in a labor market that was already adjusting for several reasons, AI is now adding a meaningful and identifiable layer, concentrated where you would expect it to concentrate. That story is harder to dramatize, but it is much more useful as a basis for thinking about what comes next.
Why hiring, and not wages
White-collar firms do not generally cut wages for their existing employees. Compensation bands, retention, internal equity, and the simple human friction of explaining a pay cut all push in the other direction. The labor adjustment, when it comes, comes through quantity. Firms hire fewer people. They delay backfills. They let attrition do the work of headcount management. They shrink new-graduate cohorts. They route work that used to go to junior employees through senior employees with AI assistance instead.
This is why the AI labor signal is showing up in employment and not in wages. It is not that AI is depressing pay for incumbent white-collar workers. It is that AI is letting firms get the same or more output with fewer new hires, and the new hires are the easiest margin to adjust quickly and quietly.
It also explains why the signal is concentrated in the 22-to-25 cohort. Workers at that age sit on the hiring margin almost by definition. They have the least firm-specific knowledge, the fewest internal advocates, the lowest accumulated context, and the weakest claim on the kinds of judgment, customer relationships, and institutional memory that are still genuinely hard to automate. They also do, disproportionately, the kinds of tasks that AI handles best: first-draft writing, summarization, code boilerplate, basic research, simple support, data cleaning, internal documentation. Senior workers historically delegated this work to junior workers in part because it was a way to get cheap output and in part because doing it was how junior workers learned. Generative AI gives senior workers a third option, and it is faster than the other two.
The result is a labor market that is not simply smaller at the entry level. It is being re-specified. NACE’s spring 2026 update reported that employers planned to hire 5.6% more graduates from the Class of 2026 than the prior year — but with sharply uneven distribution across industries, and with a near-tripling of demand for AI skills since fall 2025. More than a third of entry-level jobs surveyed now require AI skills. The entry rung is not vanishing. It is being moved up. The graduates who can use AI fluently are getting hired into smaller cohorts that absorb work formerly distributed across larger ones. The graduates who cannot are getting underemployed, dropped, or routed into roles that previously did not require a four-year degree.
The New York Fed’s recent-graduate tracker shows the same picture from the other side. Recent-graduate unemployment was about 5.7% in the fourth quarter of 2025, and underemployment — the share of recent grads working in jobs that do not require a college degree — reached 42.5%, the highest level since 2020. That is consistent with a market in which the entry-level white-collar pipeline is functioning, but functioning at smaller volumes and with much higher selectivity. The graduates who clear the bar are fine. The ones below the bar are taking jobs they would not have taken in 2019.
Inside the firms
The most useful single case study is Klarna, partly because it is the clearest public example of aggressive AI substitution in a labor-intensive function and partly because of how the story has evolved.
In February 2024, Klarna announced that its AI customer service assistant had handled 2.3 million conversations in its first month, equivalent to two-thirds of the company’s customer service chats and the work of roughly 700 full-time agents. The assistant matched human agents on customer satisfaction, cut resolution time from eleven minutes to under two, and reduced repeat inquiries by 25%. The financial results have been impressive. Klarna’s full-year 2025 revenue reached $3.5 billion, up 25% year over year, with adjusted operating profit of $65 million and 118 million active consumers. Revenue per employee, by Klarna’s own reporting in the second quarter of 2025, was approaching $1 million, nearly triple the level of two years earlier.
That is the loud part of the story. The quiet part — and in some ways the more interesting part — is that in May 2025, Bloomberg reported that Klarna’s CEO had publicly said the AI-fueled cost-cutting had gone too far, and that the company was recruiting human agents again so customers could always reach a real person. That is not a repudiation of the AI strategy. Klarna is not unwinding its assistant. It is a calibration: the company learned that there is a floor below which routine support cannot be safely automated without damaging brand trust, edge-case handling, and the complicated interactions that disproportionately matter to customer perception. The right model is hybrid, and the right ratio is somewhere between full agent staffing and the leanest possible automation. Klarna found the ratio by overshooting and pulling back.
That arc — aggressive automation, then partial rebalancing — is likely to be the template for the next several years across customer-facing functions. Gartner's October 2025 survey of 321 customer service and support leaders found that only 20% had reduced agent staffing as a result of AI, while 55% reported stable staffing levels even as their organizations handled higher customer volumes. Gartner has separately predicted that by 2027, half of the companies that attributed headcount cuts to AI will rehire staff to do similar work under different titles. Whether or not that exact prediction holds, the pattern it describes — overshoot, then partial reversal, with the productivity gains banked but the labor cuts moderated — is consistent with what is happening at the level of the firm.
The same dynamic is showing up in the business process outsourcing sector, where the model risk is more acute. BPOs sell labor at scale. If AI compresses the unit economics of routine customer support, the BPO contracts that priced agents by the seat are exposed. The offset, which the larger BPOs are pursuing aggressively, is to use AI internally to expand margin and to sell AI-enabled transformation services as a higher-value layer. Concentrix’s full-year 2025 revenue of $9.826 billion was up only 2.2% year over year, with 2026 guidance pointing to continued growth and free cash flow expansion. That is not the trajectory of a sector being eaten alive. It is the trajectory of a sector that has identified the threat and is repositioning. Whether the repositioning is successful at scale is an open question, and the answer will vary by company. The investable thesis is not “short BPO.” It is “diligence the mix” — agent-seat exposure, automation penetration, customer satisfaction metrics, the share of revenue tied to outcome-based pricing rather than seats, the trajectory of revenue per employee.
The enterprise software story is a more interesting one because the conventional bear case has not, so far, played out. The argument is straightforward: if AI lets each customer do more with fewer employees, seat-based SaaS vendors face slower seat expansion. If AI agents perform tasks that humans used to perform, the “user” of the software in some functions becomes a machine, and the per-seat licensing model breaks. That argument is not wrong. But the leading vendors have done a much better job of monetizing AI than the bear case anticipated. Workday’s fiscal 2026 results showed total revenue of $9.552 billion, up 13.1%, with subscription revenue of $8.833 billion, up 14.5%, and FY2027 subscription guidance of $9.925 billion to $9.950 billion, implying 12% to 13% growth. Workday delivered 1.7 billion AI actions across its platform in fiscal 2026 and is leaning hard into agentic AI. ServiceNow’s first-quarter 2026 subscription revenue was $3.671 billion, up 22% year over year, with remaining performance obligations of $27.7 billion (up 25%) and Now Assist customers spending more than $1 million in annual contract value growing more than 130% year over year.
These are not the numbers of a sector being structurally impaired by agents. They are the numbers of a sector that has, for now, succeeded in shifting the conversation from seats to usage, from licensing to consumption, from people to workflows. The investable question is no longer whether AI agents purchase software seats. It is whether the leading vendors can monetize AI-driven workflow execution and orchestration faster than their human seat counts mature. The answer in 2026 looks better than the bear case predicted. That could change. It will change company by company.
The staffing industry is the most exposed of the white-collar service sectors and also the easiest to over-attribute. Staffing revenues track new hiring, contractor demand, and perm-placement volumes — all of which weaken when firms reduce backfills, cut new-grad cohorts, or use AI to absorb work that previously went to contingent workers. Tech staffing, finance and accounting staffing, administrative support, legal support, and customer service staffing are all in the exposure zone. But staffing is also a deeply cyclical business that is sensitive to interest rates, IT budgets, and corporate confidence. The honest read is that AI is part of the story, particularly in junior white-collar roles, but a meaningful share of the sector’s recent weakness reflects the macro cycle. Separating the two requires job-order data by category, bill-rate trends, perm-placement mix, and management commentary that distinguishes AI substitution from cyclical delay.
Limits to the displacement thesis
The case that AI is broadly destroying jobs is weaker than it sounds, and three pieces of evidence sit directly against the strongest version of the displacement narrative.
The first is a 2023 NBER field study by Brynjolfsson, Li, and Raymond — Generative AI at Work — covering 5,179 customer support agents at a Fortune 500 firm. The finding was a 14% average productivity increase from access to a generative AI assistant, with the gains concentrated in the workers who needed them most: novice and low-skilled agents saw productivity gains of 34%, while experienced agents saw little benefit. That is the strongest single counterweight to a pure substitution story. AI, in that setting, was not replacing junior workers. It was making them dramatically more productive, and in a way that compressed the gap between novice and expert.
The second is a randomized controlled trial run by METR in early 2025, in which experienced open-source software developers worked on familiar codebases with and without AI tools. The developers using AI tools took 19% longer than those without, even though they expected and reported faster completion. The honest interpretation is not that AI tools slow down all coding work — clearly they speed up many kinds — but that for sufficiently expert workers operating in deeply familiar territory, the overhead of reviewing, debugging, and correcting AI output can offset or exceed the time saved. Productivity claims that ignore this context are overclaiming.
The third is Daron Acemoglu’s Simple Macroeconomics of AI, an NBER paper that estimates the economy-wide productivity impact of AI at no more than 0.66% of total factor productivity over ten years, under reasonable assumptions about how many tasks are AI-exposed and how cost-saving the substitution actually is. That number is far smaller than the figures implied by some industry forecasts. Acemoglu may be wrong; the assumptions can be moved. But the paper functions as an important guardrail against extrapolating from striking firm-level cases to economy-wide booms. The 2025 BLS productivity data — total factor productivity up 0.8%, labor productivity up 2.2% in private nonfarm business — is consistent with firm-level AI effects but does not yet show a broad acceleration that would distinguish an AI-driven productivity regime from a normal expansion.
Taken together, these pieces of evidence push the conclusion toward something more nuanced than displacement. AI can increase the productivity of the workers who remain. AI can occasionally reduce the productivity of the most expert users. AI’s economy-wide productivity impact, on current estimates, is probably modest. A coherent picture emerges: a labor adjustment that runs through hiring rather than wages, that is concentrated rather than broad, that increases the productivity of remaining junior workers while reducing the demand for them in aggregate, and that shows up clearly in some occupations while leaving others largely untouched.
That is the first-order story. It is the second- and third-order consequences that should command the most thinking, because they unfold over years, they are largely invisible in current data, and they are where the strategic and political stakes get serious.
The first rung and the missing 2030s
White-collar firms have always run a hidden financial product. They hire junior workers whose all-in cost — salary plus supervision, rework, and the overhead of training — exceeds, in present-value terms, the value of the work those juniors produce. The juniors accept the trade because the implicit deal is that today's underpriced labor is the cost of admission to tomorrow's senior role. The firms break even on the trade — and often more than break even — because a fraction of those juniors become the seniors who run client relationships, supervise junior cohorts, make partner, hold the institutional knowledge, and generate the firm's premium pricing power. The exact economics vary by industry. The structure does not. The entry-level salary is a down payment on the capacity to develop a specific kind of person.
Nobody articulates the trade in those terms. A managing partner at a law firm would not say “we hire twenty first-year associates so that three of them can become partners in 2034.” A consulting principal would not say “every analyst hired today is an option on a 2032 senior associate, and the strike price is the difference between paying them now and not having them later.” But that is the math behind the visible activity. The work the juniors do — the document review, the comparable-company analysis, the working papers, the boilerplate code, the deck production — is part economic output and part training rounds. Subtract the training rounds and the math changes.
This is the rung that AI is now letting firms skip. A model can do the document review. A model can pull and clean the comparable-company set. A model can write the working papers and the boilerplate code and the first draft of the deck. The cost saving is immediate and visible on the next quarter’s P&L. The cost of having skipped the training is not visible at all in 2026. It becomes visible in 2031, 2032, 2033, when the firms that used AI to thin their entry-level cohorts discover that the seniors they would have grown are not there to be promoted.
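The math behind the trade can be written down as a toy present-value calculation. Every number below is an invented assumption, chosen for illustration rather than taken from any firm; the point is the structure. The firm runs a small annual loss on each junior and is repaid, with low probability but a large payoff, by the seniors the cohort eventually produces.

```python
# Toy NPV of hiring one junior as an option on a future senior.
# All salaries, margins, probabilities, and horizons are illustrative assumptions.

def junior_option_npv(
    training_loss_per_year=30_000,   # all-in cost minus billable value, early years
    training_years=3,
    p_senior=0.15,                   # fraction of juniors who become seniors
    senior_surplus_per_year=400_000, # profit a senior generates above their cost
    senior_years=10,                 # years of senior-level surplus
    senior_start_year=8,             # promotion horizon
    discount_rate=0.08,
):
    def pv(cashflow, year):
        return cashflow / (1 + discount_rate) ** year

    # Near-term, visible: the training subsidy.
    cost = sum(pv(-training_loss_per_year, t) for t in range(1, training_years + 1))
    # Long-dated, invisible: the expected senior surplus.
    payoff = sum(
        pv(p_senior * senior_surplus_per_year, senior_start_year + t)
        for t in range(senior_years)
    )
    return cost + payoff

print(f"NPV per junior hire: ${junior_option_npv():,.0f}")
```

Thinning the cohort zeroes out the near-term loss, which is the line item anyone can see on next quarter's P&L. It also zeroes the long-dated payoff, which appears on no current statement: set `p_senior` to zero and the firm has saved the cost line while silently writing off the larger asset.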
The shape of the problem differs by profession in ways that matter, and the differences are worth working through, because the strategic implications for firms and investors look different in each.
Big Law has the most legible version of the apprenticeship. The associate-to-partner track runs roughly eight to ten years, with progressive increases in matter responsibility, client interaction, and book-of-business development. First-year associates spend most of their time on document review, basic legal research, due diligence support, and first-draft drafting under close supervision. The work is largely fungible across associates and largely automatable. Mid-level associates begin running matters under partner supervision, managing junior associates, and developing technical depth in particular practice areas. Senior associates run matters with substantial autonomy and begin generating their own pipeline. Partners bring in business and supervise.
The trade-off the firm makes is that the first two or three years of an associate's tenure are largely about building the muscles — judgment about what matters in a case, instinct about what a partner will care about, taste for what good legal writing looks like, the ability to spot the issue that nobody briefed — that the associate will need to be useful at year five and indispensable at year eight. The first-year associate is a future-senior-associate-in-training, and the training is most effective when the associate does the work themselves, gets feedback, and iterates. AI handling the document review is fine for the matter. It is less fine for the muscle-building. By year five, an associate who has spent three years checking AI's document review rather than doing the review and being checked has a different skill profile than an associate who learned the older way. By year eight, the firm needs to bet partner-track on someone whose training trajectory was different from any partner currently in the building. Most firms are not yet thinking about this seriously. The ones that are thinking about it are quietly redesigning their associate development programs while publicly continuing to thin first-year cohorts.
Audit and accounting is structurally similar with a different rhythm. Staff accountants spend their first two years doing tick-and-tie work, sample testing, working paper preparation, and basic compliance. Seniors review staff work, run smaller engagements, and develop technical specialization. Managers run engagements, manage teams, and develop client relationships. Senior managers and partners run portfolios and bring in business. The Big Four firms have been more aggressive than Big Law on AI deployment, partly because the underlying work is more standardized and partly because audit-firm economics live and die by leverage ratios. PwC, Deloitte, EY, and KPMG have all publicly committed billions to AI infrastructure, much of which is targeted at exactly the staff-level work that has historically been the entry rung. The leverage gain is real. So is the apprenticeship cost. Auditing requires a kind of professional skepticism that is, in part, learned by doing the testing yourself, finding the discrepancy, and being wrong about whether it matters. A staff accountant who reviews AI-generated working papers learns to spot the AI’s errors, which is a useful skill. They do not necessarily learn to spot a client’s errors, which is a different skill and the one the firm actually sells. The 2032 senior manager who is supposed to run the audit of a Fortune 500 client needs the second skill, and the training arc that produces it is being compressed.
Management consulting has the most leveraged pyramid in the major professional services. The analyst-to-associate-to-consultant-to-manager-to-partner arc compresses about ten years of training into a structure where the firm’s revenue model depends on charging high day-rates for senior judgment supported by armies of analysts producing decks, models, and research. Analysts do the heavy lifting on data extraction, slide production, and basic analysis. The job description is largely a description of work that current AI can do at high quality. McKinsey’s Lilli, BCG’s Deckster, and Bain’s analogous internal tools all explicitly target the analyst tier of work. The intended effect is to compress the production cycle and increase the firm’s leverage. The apprenticeship effect is that the analyst does less of the production themselves and more of the prompting, review, and synthesis. There is a plausible argument that the new work is actually better training — it requires the analyst to think about what a good answer looks like before getting it, rather than building the answer from scratch — but the argument is unproven. What is provable is that the analyst learns differently, and the senior associate of 2030 will have a different skill stack than the senior associates the firms are currently used to promoting.
Software engineering is the case that is hardest to reason about, because the technology is moving fastest and the apprenticeship structure is the least formal. A junior software engineer historically learns by working in a codebase, fixing small bugs, writing tests, building small features, and gradually being trusted with larger pieces of the system. The codebase itself is the textbook. By the time the engineer is two or three years in, they have an internal model of how the system works that lets them debug incidents, design new features, and mentor newer engineers. AI coding assistants are now writing a meaningful fraction of the code at many firms. For senior engineers, this is mostly productive: they direct the model, review the output, and apply judgment. For junior engineers, the calculus is different. If the AI is writing the code, the junior is not writing the code. They may be reviewing it, but reviewing AI-written code teaches a different skill than writing code from scratch. The risk is that a junior who spent two years prompting and reviewing builds a different mental model of the system than a junior who spent two years writing and debugging — and that the second mental model is the one that produces a useful senior engineer in year five. The METR finding cited earlier in this piece is a hint: experienced developers were slowed by AI tools partly because the cognitive overhead of reviewing model output is higher than the apparent savings. If that is true for experienced developers in familiar codebases, it has implications for what juniors are actually learning when they spend their day reviewing.
Investment banking presents the cleanest version of the substitution case, because the work is the most standardized. First-year analysts at investment banks spend their hours on pitch books, financial models, comparable-company analyses, and capital-markets updates. Goldman, Morgan Stanley, JPMorgan, and the European investment banks have all publicly discussed AI deployment that targets exactly this work, and several have explicitly mused about reducing analyst class sizes. The two-year analyst program is an unusually compressed training arc; analysts who survive it become associates with a specific kind of muscle memory around models, decks, and the rhythm of deal work. If AI compresses the model and deck production, the muscle memory has to be built somewhere else. The senior associate of 2030 either has it or doesn’t, and the firms that figure out where else to build it earliest will end up with an edge.
Across these professions the pattern is the same. AI handles the routine production. The cost saving is immediate. The training arc that depends on the routine production is interrupted. The seniors who would have emerged from that arc emerge differently, in fewer numbers, or not at all. The bill comes due in five to ten years.
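The five-to-ten-year lag can be made concrete with a toy cohort-flow model. The cohort sizes, survival rate, and promotion lag below are all invented assumptions: a firm that hires 100 juniors a year, promotes survivors to senior after six years, and halves its 2025 through 2027 intake sees the senior shortfall arrive on schedule in 2031 through 2033, long after the cost saving was booked.

```python
# Toy cohort pipeline: junior intake in year Y produces seniors in year Y + 6.
# All rates, lags, and cohort sizes are illustrative assumptions.

PROMOTION_LAG = 6     # years from hire to senior promotion
SURVIVAL_RATE = 0.30  # fraction of a junior cohort that reaches senior

def senior_output(intake_by_year, year):
    """New seniors produced in `year`, given historical junior intake."""
    hired = intake_by_year.get(year - PROMOTION_LAG, 0)
    return hired * SURVIVAL_RATE

baseline = {y: 100 for y in range(2018, 2031)}  # steady-state hiring
thinned = dict(baseline)
for y in (2025, 2026, 2027):  # AI-era cohort cuts
    thinned[y] = 50

for year in range(2029, 2035):
    full = senior_output(baseline, year)
    cut = senior_output(thinned, year)
    print(f"{year}: baseline {full:.0f} new seniors, thinned {cut:.0f}")
```

Nothing in the 2026 to 2029 window signals the problem; the output gap is invisible until the promotion lag runs out, which is exactly why quarterly cost metrics cannot see it.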
What does a redesigned apprenticeship actually look like? The firms that are thinking about this carefully are converging on a small set of design principles, none of which is publicly articulated yet but all of which are visible in their internal training programs.
The first principle is that the junior has to be in the loop, not adjacent to it. The cost-saving move that breaks the apprenticeship is the one where the senior runs the AI directly and the junior is not present. The pipeline-preserving move is the one where the junior runs the AI under the senior’s review. The economics of the second move are worse than the first in the short run, because the senior has to spend supervisory time. The economics of the second move over a decade are much better, because the junior is being trained.
The second principle is that the work itself has to change. If the junior is reviewing AI output, the review has to be substantive rather than cursory. The firms doing this well are designing review structures where the junior is responsible for catching specific kinds of errors, articulating the model’s limitations, and rebuilding pieces of the analysis from scratch when the model gets things wrong. Done seriously, this is harder training than the original work, not easier. Done as box-checking, it produces nothing.
The third principle is that the senior’s role has to change as well. In the old model, the senior reviewed the junior’s work and the review was the feedback loop. In the redesigned model, the senior is reviewing the junior’s review of the AI’s work, which is a different cognitive task and requires the senior to be much more explicit about what good judgment looks like in the domain. This is harder and slower, and it requires senior workers to articulate intuitions they may never have articulated before. Some firms will do this well. Many will not.
The fourth principle is that the metrics have to be different. Hours billed per associate, slides produced per analyst, lines of code per junior engineer — none of these mean the same thing they used to. The metrics that capture pipeline health are things like time to first independent matter ownership, retention curves at year three and year five, internal-mobility rates into mid-career roles, error rates on AI-supervised work, and the rate at which juniors graduate to true judgment work versus get stuck as supervisors of model output. None of these are currently disclosed and most are not currently tracked in any rigorous way.
The firms that build these systems will have a real advantage by 2030. The firms that don’t will be in one of two positions. The first is the position of having to hire mid-career laterally at a substantial wage premium, because their internal pipeline ran dry. The lateral mid-career market in 2031 is going to be expensive, and the supply is going to be limited, because the same conditions that produced the firm’s own hollow are producing hollows everywhere else. The second is the position of having to promote junior workers into mid-career roles before they are ready, with the predictable consequences for client work, error rates, and senior-level quality. Some firms will compromise on quality without admitting it; some will lose clients before they recognize what happened.
The honest assessment is that most firms in the exposed professions are currently in the first phase — using AI to thin junior cohorts and harvest the cost saving — and have not yet started the redesign. Some have started but are doing it badly. A small number are doing it well, and they are mostly not advertising the fact, because the visible posture in 2026 is still “AI is making us more efficient” rather than “AI is forcing us to redesign how we develop people.” The latter posture is the harder communication and the more expensive operating model. It is also the right answer for any firm that intends to exist as a going concern in the 2030s.
The Klarna pattern applies here, with a longer fuse. A meaningful share of the firms that have aggressively cut junior cohorts will, around 2029 or 2030, quietly begin to rebuild them. They will discover that the cost of the rebuild is much higher than the cost of having maintained the cohort all along, because the lateral market is tight and because the institutional knowledge of how to run a healthy training program has decayed in the intervening years. Some firms will get this rebuilding done before the cost compounds. Many will not. The firms that get the redesign right early will be paying for it now, in the form of margins that are visibly worse than their less thoughtful competitors. They will be repaid in the early 2030s, when they have the seniors and the competitors do not.
The broader strategic point is that the AI-substitution decision in white-collar firms should not be made by the people who measure quarterly cost savings. It should be made by the people who measure ten-year talent compounding. In most firms, those are not the same people, and the org chart is structured so that the cost-saver wins. That is the part of the problem that is not really about AI at all. AI is the technology that exposed an organizational pathology that was already there: the systematic underweighting of long-horizon human capital investments by management structures optimized for shorter horizons. The firms that fix the organizational problem will get the AI strategy right. The firms that don’t will discover, around 2030, why they didn’t.
Cohort scarring, demographics, and politics
If the apprenticeship pipeline argument is about the firms that hire, the cohort scarring argument is about the workers who don’t get hired. There is now a substantial economic literature on what happens to people who graduate into a weak labor market, and the findings are sobering enough to take seriously when thinking about the AI-exposed cohorts of 2025–2027.
Lisa Kahn’s 2010 paper in Labour Economics is the canonical reference. Looking at white male college graduates from 1979 to 1989, Kahn found that a single percentage-point increase in the unemployment rate at graduation reduced initial wages by 6 to 7%. The effect faded with experience but never fully closed. Even fifteen years out, the wage loss was still around 2.5% per point and statistically significant. Comparing the unluckiest cohorts (1980 and 1981, graduating into the early-1980s recession) to the luckiest (1988, graduating into the late-1980s expansion), the earnings gap nearly twenty years later was roughly 10%, with cumulative present-discounted earnings losses well over $100,000 per person. The effect was not transitory, and it operated through a specific mechanism: graduates who entered a weak market took lower-tier first jobs, and the structure of internal labor markets meant they could not fully catch up. The first job mattered, because subsequent moves were largely lateral within tier, and tier was set early.
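The shape of the Kahn result can be made concrete with a rough sketch. The parameters below are illustrative, not Kahn’s actual specification: a hypothetical $60,000 starting salary, a 3% real discount rate, a per-point wage penalty that decays linearly from 6.5% at entry to 2.5% at year fifteen and stays flat after that, and a four-point unemployment gap separating the unluckiest and luckiest cohorts.

```python
# Back-of-envelope sketch of cohort-scarring arithmetic in the style of
# Kahn (2010). All parameters are illustrative assumptions, not estimates
# taken from the paper itself.

def wage_penalty(year, initial=0.065, floor=0.025, fade_years=15):
    """Per-point-of-unemployment wage penalty in a given year of experience.

    Decays linearly from `initial` to `floor` over `fade_years`, then flat:
    the scar shrinks with experience but never fully closes.
    """
    if year >= fade_years:
        return floor
    return initial - (initial - floor) * year / fade_years

def cumulative_loss(unemployment_gap, salary=60_000, years=20, discount=0.03):
    """Present-discounted earnings loss over `years` for a graduate who
    entered a market with `unemployment_gap` extra points of unemployment.
    Assumes a flat counterfactual salary for simplicity."""
    total = 0.0
    for t in range(years):
        penalty = wage_penalty(t) * unemployment_gap
        total += salary * penalty / (1 + discount) ** t
    return total

loss = cumulative_loss(4.0)
print(f"PDV earnings loss over 20 years: ${loss:,.0f}")
```

Even with these deliberately simple assumptions, the cumulative present-discounted loss comes out well into six figures, which is consistent with the “well over $100,000 per person” magnitude the literature reports: most of the damage is front-loaded, where discounting barely bites.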
Subsequent work has confirmed the effect in other contexts. Oreopoulos, von Wachter, and Heisz documented similar patterns in Canadian data, with the largest scars falling on graduates predicted to have lower earnings to begin with. Schwandt and von Wachter, in subsequent NBER work, found that recession-cohort graduates have higher mid-life mortality, which is a striking and disturbing finding — it suggests the labor-market scar translates into stress, financial precarity, and health outcomes that compound over decades.
Now overlay the AI exposure pattern. The Stanford finding is not about unemployment in the headline sense. It is about employment quantity in a specific demographic in a specific set of occupations. But the scarring mechanism applies in an even sharper form when the issue is not “the first job pays less” but “the first job in the trajectory does not exist.” A 22-year-old computer science graduate in 2026 who cannot get a junior software engineering role, and who ends up working in a non-technical job for three years, is not just delayed on the software trajectory. They may be displaced from it entirely. Once they have a three-year gap in software experience, they are competing for re-entry against new graduates, and they are also competing against the AI-fluent workers who never left the trajectory. The probability that they fully re-enter falls with each year out.
Scale matters here. The U.S. produces roughly two million bachelor’s degree recipients per year. Of those, perhaps 1.2 to 1.5 million enter what could broadly be called white-collar entry-level employment, with the remainder going into healthcare, education, government, and other less-exposed sectors. Of the white-collar entry-level cohort, perhaps half — six to eight hundred thousand workers per year — go into roles in the high-AI-exposure occupations: software, marketing, content production, customer-facing technical work, junior finance, junior consulting, junior law, technical writing, design, and administrative-analytical work. If the Stanford pattern of 16% relative employment decline persists or grows, the affected pool is roughly one to two hundred thousand workers per year — graduates who would have been hired in the absence of the AI shock and were not. Compounded across five years, that is half a million to a million workers in the affected cohort. Across ten years, double that. These are not catastrophic numbers in the context of a U.S. labor force of 165 million. They are large enough, however, to constitute a cohort phenomenon with measurable demographic, geographic, and political consequences.
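The scale arithmetic above can be carried through explicitly. The inputs below are the essay’s own estimates; the upper end of the essay’s “one to two hundred thousand” range additionally assumes the 16% decline grows rather than merely persists, which this sketch does not model.

```python
# The affected-cohort arithmetic, with the stated ranges made explicit.
# All inputs are the estimates given in the text.

grads_per_year = 2_000_000            # U.S. bachelor's recipients per year
white_collar_share = (0.60, 0.75)     # 1.2M to 1.5M enter white-collar entry roles
exposed_share_of_wc = 0.5             # roughly half go into high-AI-exposure occupations
relative_decline = 0.16               # the Stanford 16% relative employment decline

lo = grads_per_year * white_collar_share[0] * exposed_share_of_wc * relative_decline
hi = grads_per_year * white_collar_share[1] * exposed_share_of_wc * relative_decline

print(f"Affected per year: {lo:,.0f} to {hi:,.0f}")
print(f"Over five years:   {5 * lo:,.0f} to {5 * hi:,.0f}")
```

The point of writing it out is to show how few moving parts the estimate has: one headline decline figure multiplied through three share assumptions. Anyone who disputes the conclusion should be able to say which input they dispute.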
The cohort-fertility math is particularly worth working through, because it tends to get glossed over in the public discussion. U.S. fertility has fallen from 2.1 children per woman in 2007 to roughly 1.6 in 2024, with college-educated women showing systematically lower fertility than the population average. The 2008 recession produced a measurable cohort-fertility decline in the affected age groups that, by 2024, had not been recovered. The mechanism is straightforward: economic uncertainty in early adulthood delays marriage, delays first births, and reduces total completed fertility because the years lost to delay are not recovered. A sustained AI-driven labor shock to the credentialed early-career cohort, layered on top of the existing fertility decline, has a predictable effect. Cohort completed fertility for the affected groups falls another 5 to 10 percentage points relative to the counterfactual. Across a population of one to two million workers spread over multiple cohorts, this implies between fifty thousand and two hundred thousand fewer births over the lifetimes of the affected workers. That is small relative to the 3.6 million annual U.S. birth count, but meaningful as a contribution to a fertility trajectory that already concerns demographers, and meaningful in compositional terms — the missing births are concentrated in the most-credentialed segment of the population, with implications for the long-run skill distribution that take a generation to fully emerge.
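The fertility math can likewise be made explicit. Two inputs here are assumptions not stated in the text: that roughly 55% of the affected credentialed cohort are women (college cohorts skew female), and that the “5 to 10 percentage points” is read as a 5–10% reduction against a roughly 1.6 completed-fertility baseline.

```python
# Cohort-fertility arithmetic with every input made explicit.
# female_share and the 1.6 baseline reading are assumptions added here;
# the other figures are the essay's own.

affected_workers = (1_000_000, 2_000_000)  # cumulative affected cohort
female_share = 0.55                        # assumption: credentialed cohorts skew female
baseline_fertility = 1.6                   # children per woman, 2024-era U.S. level
reduction = (0.05, 0.10)                   # relative reduction vs. counterfactual

lo = affected_workers[0] * female_share * baseline_fertility * reduction[0]
hi = affected_workers[1] * female_share * baseline_fertility * reduction[1]

print(f"Fewer births over affected lifetimes: {lo:,.0f} to {hi:,.0f}")
```

Under these assumptions the range comes out in the tens of thousands at the low end and under two hundred thousand at the high end, roughly bracketing the fifty-thousand-to-two-hundred-thousand figure in the text.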
The geographic concentration of the effect is sharp. The U.S. labor market for high-skill young workers is concentrated in roughly eight major metros: the San Francisco Bay Area, New York, Boston, Seattle, Austin, Washington, Los Angeles, and Chicago. These cities have housing markets, fiscal bases, and consumer-services economies that are priced — implicitly or explicitly — on the assumption of a sustained inflow of high-earning young residents. The Bay Area’s housing market in particular has been built on the model of a continuous stream of new tech graduates earning $150,000 to $250,000 in their first jobs and bidding up rents and home prices accordingly. New York’s financial-services analyst market, Boston’s biotech and consulting analyst market, Seattle’s tech analyst market, and Austin’s expanding tech presence all run on similar assumptions.
If the AI shock persistently reduces the inflow by 10 to 20 percent over a five- to seven-year window, the housing-market consequences are real but slow-moving. Rents in the most exposed segments — luxury apartments in core neighborhoods marketed to early-career professionals — should compress first, probably by 5 to 15 percent over a multi-year horizon. The effect on owner-occupied housing is slower because the marginal buyer is older, but the long-run downward pressure exists at the price points that depend on a stream of high-earning young first-time buyers. The least exposed segment is the supply-constrained middle tier where buyer demand exceeds available units regardless of cohort dynamics.
The fiscal consequences for the affected cities are more interesting than the housing consequences, because they compound faster. San Francisco and New York both have substantial municipal income tax exposure to high-earning young residents. New York City’s progressive income tax structure means that the top decile of earners — heavily concentrated among finance, consulting, law, and tech professionals — pays a disproportionate share of city revenue. If the inflow of young high earners thins persistently, the revenue base thins with it. Combined with the existing pressures on commercial real estate values, the fiscal squeeze on the major metros could become serious by 2029 or 2030. The political response will probably be a combination of zoning reform aimed at expanding housing supply at lower price points, increased competition between cities for the smaller pool of AI-fluent workers who do exist, and fiscal restructuring that pushes public-sector costs onto the residents who remain. None of these responses will arrive quickly. All of them will arrive eventually, and the cities that move first on housing supply will have a meaningful advantage in attracting the workers who remain.
The political dimension is the most speculative and probably the most important, and it deserves more directional commitment than the standard “consequences will happen” hedge.
The AI-affected cohort has an unusual political profile. They are educated, articulate, geographically clustered in metros that are already politically progressive, online-fluent in ways that earlier protest cohorts were not, and increasingly carrying both student debt and an identifiable economic grievance against a specific set of corporations and technologies. They are not the working-class cohort that produced the populist movements of the 2010s. They are part of what those movements were partly directed against — the credentialed urban class that benefited from globalization and from the financialization of the U.S. economy from 1990 to 2020. The AI shock is now turning that beneficiary status into something more ambiguous. The credentialed urban class is, for the first time in a generation, facing an economic threat that is not solvable through more credentials.
This produces a distinct political alignment, different in important ways from the populist movements of the 2010s. The targets are different. The 2010s populist right targeted globalization, immigration, and a generalized “elite.” The AI-cohort grievance has more specific targets: the AI labs themselves, the executives who publicly attribute headcount cuts to AI, the dominant cloud and chip providers, the SaaS firms that capture the productivity gains from labor substitution. These are concrete, identifiable, often publicly traded entities whose CEOs make public statements about workforce reduction. They are the kinds of targets that produce focused political mobilization rather than diffuse cultural backlash. The cohort itself, being educated and geographically clustered in places like San Francisco and New York, has organizational infrastructure that the 2010s working-class populist movements largely lacked.
The most likely political channels for the AI-cohort response, in rough order of probability over the next five to seven years, look something like this.
Direct AI regulation is the most probable channel. Transparency mandates for AI deployment in employment decisions, audit requirements for AI hiring tools, mandatory disclosure of AI-related workforce changes, and licensing or certification requirements for AI deployment in specific sectors are all on the table. The European Union has already moved in this direction. The United States is likely to follow within five years, partly because the AI-cohort political demand will provide political cover that did not previously exist. The shape of the regulation matters less than the fact that it is coming; once the framework is in place, the friction cost on AI-driven labor substitution rises across the economy.
Labor-market interventions are the second channel. Expanded severance requirements, lengthened advance-notice periods for layoffs, and possibly hiring quotas or tax preferences for firms that maintain entry-level employment are all low-friction policy moves that address an identifiable political constituency. The WARN Act framework already exists in U.S. law; expansion of its scope to cover AI-driven workforce changes is the kind of incremental move that is hard to oppose politically and meaningful in its operational consequences for firms that have built AI-substitution into their cost structure.
Antitrust enforcement is the third channel and possibly the most consequential for the AI-capture firms. The current U.S. antitrust posture is more aggressive than at any time since the 1980s, and an AI-cohort political movement provides additional political support for that posture. The probability of a meaningful antitrust action against one of the major AI labs in the next five years is, by my estimate, materially higher than the public market is currently pricing. The shape of any such action is hard to forecast — structural separation, conduct remedies, pricing intervention — but the directional risk is real and not yet in the multiples.
Tax policy targeting AI capital is the fourth channel. Various proposals — from a robot tax to a worker-dividend funded by AI productivity gains to a more aggressive taxation of corporate AI investment — have been discussed in academic and policy circles for years. The probability of one of them moving from policy paper to enacted legislation increases substantially with an organized political constituency demanding it. The early-2030s window, when the first AI-cohort grievances become electorally meaningful, is a plausible enactment window. The fiscal consequences of cohort scarring — reduced income tax base in the affected metros, lower long-run productivity growth, increased transfer requirements — will themselves create budget pressure that an AI capital tax could plausibly address.
Targeted student-debt relief and public-sector hiring expansion are the fifth channel, both functioning as direct relief mechanisms for the affected cohort. The 2022–2024 student-debt politics provided a template; an AI-cohort version would be more focused, more targeted, and probably more politically successful because the affected population is more geographically and demographically concentrated. Public-sector hiring as employer of last resort for displaced credentialed workers is a possibility that has been discussed in various forms — federal AI corps, expanded teaching corps, expanded public health and social services hiring — and could become serious policy if cohort scarring becomes politically salient.
Sectoral interventions are the sixth channel, particularly in professions where AI substitution intersects with existing licensing structures. Law, medicine, accounting, and architecture all have professional licensing regimes that could be used to slow AI substitution at the margin. The political dynamics here are complex — incumbent professionals have an interest in preserving their licensing rents, and they will use the AI threat to argue for tighter licensing — but the AI-cohort grievance provides additional political coalition support for tightening licensing in ways that protect human practitioners. The American Bar Association, the AICPA, and the AMA all have established lobbying infrastructure that could be activated for AI-related restrictions.
The directional prediction worth making, and one that the standard hedge avoids, is this: the political response to AI labor effects will probably be a mix of these channels, will probably arrive between 2028 and 2032, will probably be more aggressive than current public discourse anticipates, and will probably target a small number of specific corporations rather than the technology in general. The firms that have been loudest in publicly attributing workforce reduction to AI — and there are several — are most likely to be the targets. The firms that have communicated more carefully about augmentation and human-AI collaboration are likely to face less direct exposure. There is a communication risk premium that the market is not currently pricing.
The realignment implication is harder to call. The AI-cohort grievance does not map cleanly onto either of the existing major-party coalitions. It is more economically progressive than the median Democratic position on corporate accountability, but it is culturally aligned with the educated-progressive coalition rather than the working-class populist one. It could produce a further leftward shift in the Democratic Party, focused specifically on AI labs, dominant tech platforms, and the financial structures that capture AI productivity gains. It could produce a new tech-skeptical politics that cuts across left-right lines, combining AI-labor concerns with broader cultural concerns about AI’s role in information environments and child welfare. It could produce a generational cleavage that runs more sharply through age cohorts than through ideological positions, with AI-affected millennials and Gen Z aligned against older Americans whose retirement savings benefit from the AI capture firms’ equity returns.
The historical parallel that comes closest is not the populism of the 2010s but the Progressive Era of the 1900s and 1910s, when an educated middle class with an identifiable grievance against specific industrial concentrations — the railroads, the trusts, the financial powers — produced a sustained political movement that reshaped the regulatory architecture of the U.S. economy over roughly two decades. The AI-cohort grievance has structural similarities: a well-educated, geographically concentrated, politically articulate population with specific corporate targets and a clear economic interest in regulatory intervention. If the parallel holds, the 2026–2036 period should produce a wave of regulatory and structural reform that looks substantial in retrospect even if it appears piecemeal as it unfolds. The firms that read the historical parallel correctly will adjust their cost structures and their public posture early. The firms that don’t will, like the Standard Oil leadership of 1900, find themselves the political target of a movement whose force they did not anticipate.
What is genuinely uncertain is whether the political response is strong enough and timely enough to materially shift the economic adjustment, or whether it arrives too late and too incrementally to do more than redistribute the costs marginally. The historical record on political responses to economic shocks is mixed: sometimes the response is decisive (the New Deal, the post-1970s deregulation), sometimes it is symbolic and ineffectual (much of the 2008 financial crisis response). The 2030s response to AI labor effects could go either way. What is not uncertain is that AI-exposed business models will have political risk in their cost structures by 2030 that they do not currently have, and that the firms most exposed to that risk will be the ones whose public communication has most explicitly identified them as targets.
Where does the labor go?
The third large open question is the macroeconomic one. If AI is reducing the demand for entry-level white-collar labor in some occupations, the displaced workers — the ones who would have been hired in the absence of AI but were not — have to be doing something. Where they end up determines whether the AI labor adjustment becomes a productivity story, a redistribution story, or a stagnation story. The mix matters, because each path has very different second-order consequences for asset prices, monetary policy, inequality, and the political environment.
There are three broad possibilities, and the eventual outcome will be a mix of them in different proportions for different workers.
The first possibility is upward absorption. The displaced workers up-skill, develop new capabilities, and move into roles that AI is creating rather than the roles AI is reducing. AI engineering, ML research, AI safety, AI integration, AI-product management, agentic workflow design, prompt and evaluation work, data infrastructure for model training. The Stanford paper itself notes that some AI-related occupations have grown rapidly. In principle, displaced junior writers and junior coders and junior support staff could become AI workflow designers and AI evaluation specialists and AI-augmented domain experts. In practice, the timeline is the issue. Up-skilling at scale takes years. The new occupations are a fraction of the size of the old occupations. The skill profile for AI-native work is non-trivial and not evenly distributed. A small share of the displaced will end up here. It is the most visible path, but probably not the most populous one.
The second possibility is what economists call cascade displacement, and it is probably the largest near-term path. A 22-year-old computer science graduate who cannot get a junior software job in 2026 does not, in most cases, immediately become an AI engineer. They take a different job — perhaps a project manager role at a non-tech firm, perhaps a technical sales role, perhaps a teaching role, perhaps a job that does not formally require their degree. That job, in the absence of the AI shock, would have gone to someone less credentialed, who would in turn have taken a job that someone even less credentialed would have filled, and so on down the labor market.
The cascade does not produce unemployment at the headline level. It produces credential inflation and mismatch. The same job is now done by someone with a more expensive education and more capability than it strictly requires. The person at the bottom of the cascade — the one who cannot move further down — drops out of the labor force, takes informal or gig work, or stays out for an extended period. The aggregate unemployment rate may not move much, but the underemployment rate moves substantially, and the long-run distribution of who does what kind of work shifts.
The New York Fed’s 42.5% recent-graduate underemployment rate is consistent with this path being the dominant one in the current data. A meaningful share of recent graduates are working, but they are working in jobs that do not formally require a college degree. The ones who would have been doing those jobs in 2019 are presumably doing something else, or doing nothing.
The third possibility is labor force exit. Some share of the displaced workers, having tried and failed to find acceptable employment, leave the labor force entirely. They go into household production, extended education, caregiving, gig work that may or may not be counted, or simply non-employment supported by family or savings. Prime-age labor force participation in the U.S. has been recovering since 2020, but the recovery has been uneven, and a meaningful negative shock to the entry-level white-collar market could halt or reverse it for the affected cohort.
The early-period evidence is more consistent with the cascade scenario than with either of the alternatives. The Stanford finding shows displacement in entry-level work without a corresponding boom in newly-created entry-level work elsewhere. The NACE data show that AI-skilled entry-level jobs are growing fast, but from a smaller base than the conventional entry-level jobs that are flat or declining. The Fed’s underemployment data show that workers are landing somewhere, but somewhere lower than they previously would have. The exit signal is weaker but not absent: prime-age labor force participation has recovered more slowly in the AI-exposed metros than in the broader economy.
If the cascade scenario continues to dominate, the second-order economic consequences are interesting in ways that the standard discussion misses, and they are where the more important macro arguments live.
The first second-order consequence is wage compression at the bottom of the labor market. As overcredentialed workers cascade into less-credentialed jobs, they compete for those jobs against workers who would otherwise have filled them. The standard prediction is that wages in those jobs compress — not necessarily fall in nominal terms, but rise more slowly than productivity in the broader economy would imply. This prediction is testable. The combination of low headline unemployment, weak wage growth in lower-credential service occupations, and rising underemployment in higher-credential cohorts would be the signature. If it shows up clearly in 2027 or 2028 data, the cascade interpretation is supported. The implication is counterintuitive: AI substitution at the top of the credential distribution produces wage compression at the bottom, not the top. Wage growth among already-employed senior white-collar workers continues, possibly accelerates as their productivity rises through AI augmentation. Wage growth among displaced entry-level workers and the workers they cascade onto stagnates or falls. The within-firm and within-cohort wage distributions widen. The aggregate inequality story becomes more complicated, with credentials providing less protection than they used to and seniority providing more.
The second second-order consequence is a likely deepening of the Phillips curve breakdown. The relationship between unemployment and inflation, which already loosened substantially in the 2010s, becomes harder still to interpret in a cascade economy. Headline unemployment can stay low while substantial labor market slack hides in mismatch, underemployment, and labor-force participation effects. The Federal Reserve, for which full employment is one half of the dual mandate, has to figure out whether a 4% unemployment rate accompanied by 40% recent-graduate underemployment represents tight labor markets or slack ones. The honest answer is that it represents both, in different sub-markets, and the policy implications differ depending on which sub-market the Fed is most worried about. This is a hard problem. The 2010s version of this problem produced a decade of unusual monetary policy. The 2020s and 2030s version is likely to produce more of the same, with implications for asset prices that depend on which way the central bank guesses. A Fed that focuses on the headline numbers may run policy too tight from the perspective of the cascade-affected cohorts; a Fed that focuses on broader measures of slack may run policy too loose from the perspective of senior-level wage growth and asset price stability.
The third second-order consequence runs through productivity measurement. National accounts measure output per hour worked. If AI is producing genuine productivity gains that are captured by firms in the form of higher revenue per employee — the Klarna pattern — the gains show up in productivity statistics. But if a meaningful share of the AI productivity gain is captured as consumer surplus rather than as market output (because AI tools are priced near marginal cost and the welfare gain accrues to users rather than producers), national accounts miss it. And if AI deployment shifts work composition rather than expanding output — replacing junior workers without expanding what the firm produces — productivity statistics may show modest gains that understate what is actually happening at the firm level. The Acemoglu estimate of 0.66% TFP growth over a decade may turn out to be a reasonable description of the official statistics even if the underlying economic transformation is much larger. Solow’s 1987 quip that computers were “everywhere except the productivity statistics” came roughly a decade before the late-1990s productivity acceleration; the current AI productivity statistics may be in a similar early phase. They may also not be. The measurement problem has policy consequences. Central banks, fiscal authorities, and electorates all use productivity statistics to calibrate their understanding of what the economy is doing. If those statistics systematically understate an AI-driven transformation while the transformation is producing visible labor-market dislocation, the political pressure on AI-using corporations rises faster than the official productivity case for AI deployment.
The fourth second-order consequence is asset-market reallocation, which compounds across multiple horizons. In a cascade-dominant economy, the firms that capture the AI productivity gains without bearing the labor adjustment costs see margin expansion that the public market is partially pricing — the AI labs themselves, the dominant cloud and chip providers, the SaaS vendors that successfully monetize AI usage, the platform layer. The losers are firms whose business models priced labor at the credentialed level: premium service businesses with thin senior leverage, high-end staffing firms whose margin depended on apprenticeship economics, billable-hour service models that have not figured out how to reprice. The asset classes that capitalized future income streams from now-squeezed cohorts — high-end urban housing, certain consumer discretionary categories aimed at high-earning young professionals, premium subscription services, education debt held by AI-affected cohorts — face slower growth than their valuations imply. The dispersion between AI-capture firms and AI-absorption firms widens. The firms in the middle — partially exposed but not in either camp — face uncertain repricing.
The fifth second-order consequence is the political feedback loop discussed earlier. If cascade dominance is the path, the affected cohort experiences a real economic shock, and the political response builds over the late 2020s. The political response, when it arrives, has policy implications — labor regulation, antitrust, tax — that change the cost structure of AI deployment for the firms most aggressively capturing the gains. This produces an indirect channel through which the cascade scenario eventually self-corrects: the AI-capture firms face rising regulatory and political costs, the AI-absorption firms get partial relief through regulation, and the equilibrium adjusts. This is not a fast process. It probably takes the better part of a decade.
The sixth second-order consequence is fiscal. A cascade economy produces a peculiar fiscal profile: nominal employment looks reasonable, headline GDP grows, but income tax revenue is structurally weaker than the headline numbers imply, because the highest-earning cohorts are smaller than they would otherwise be. State and local revenue in the AI-exposed metros is most exposed, with the New York City and California fiscal architectures particularly vulnerable to a sustained reduction in the top-decile inflow. Federal revenue feels the effect more slowly, but the cumulative impact across a decade is meaningful. The fiscal pressure interacts with the political pressure: governments running short of revenue have an additional incentive to tax AI capital, and the AI-capture firms become natural targets for revenue-raising measures that would otherwise be politically difficult.
The upward-absorption scenario has its own second-order consequences, which are less explored in the public discussion partly because they are more comfortable. If AI’s productivity contribution is closer to the more optimistic 1.5 to 3% cumulative TFP range over a decade, then the new economic activity generated is large enough to absorb the displaced labor. The transition is messy — historical productivity transitions always are — but the eventual outcome is something closer to upward absorption with modest cascade. Real wages rise broadly. New occupations grow large enough to matter. The cohort scar is real but shorter-lived than in the pessimistic case. Asset prices benefit broadly, across both the AI-capture and AI-deployment sides of the market. Political pressure subsides because the affected cohorts experience a relatively short scar. The fiscal squeeze on the affected metros softens. The Phillips curve relationship reasserts itself, partially. The world looks more like the late 1990s than the early 1980s.
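A cumulative decade-long TFP figure is easy to misread as an annual growth rate, so it is worth making the conversion explicit. The sketch below (hypothetical illustration; only the 1.5–3% cumulative range comes from the text) shows that the optimistic case corresponds to roughly 0.15–0.3 percentage points of extra annual growth:

```python
# Convert a cumulative decade-long TFP gain into the implied annual rate.
# The 1.5%-3% cumulative range is from the text; the rest is illustrative.

def annualized_rate(cumulative_gain: float, years: int = 10) -> float:
    """Annual rate r such that (1 + r) ** years == 1 + cumulative_gain."""
    return (1 + cumulative_gain) ** (1 / years) - 1

for cumulative in (0.015, 0.03):
    r = annualized_rate(cumulative)
    print(f"cumulative {cumulative:.1%} over a decade -> {r:.3%} per year")
```

Small as those annual numbers look, compounding over ten years is what makes the difference between the two scenarios: the same displaced labor pool is absorbed or not depending on a fraction of a percentage point per year.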
Which scenario prevails depends on a small number of factors that are hard to forecast. The most important is the rate of new occupation formation: whether the AI economy generates enough genuinely new categories of work, at sufficient scale, to absorb the displaced labor at productivity levels comparable to where it came from. The historical evidence on this is mixed. The Industrial Revolution eventually generated the entire white-collar workforce that did not previously exist, but the transition took decades and the affected agricultural cohorts mostly did not personally benefit. The information technology revolution of the 1980s and 1990s generated the software, financial services, and consulting industries at scale, but the affected manufacturing cohorts mostly did not personally benefit either. The current AI transition is on a faster timeline; whether the new occupations form fast enough and at sufficient scale to absorb the labor before the cohort-scarring effects compound is the central uncertainty.
The honest assessment is that the early data are more consistent with cascade than with upward absorption, but the literature is clear that productivity transitions are non-linear and historically backloaded. The current AI productivity statistics may be in an early phase. They may also not be. We will probably know which scenario was right by 2030 or 2032, and the bets being placed today are, in effect, bets on which path materializes.
For investors, the asymmetry is worth thinking about carefully. The cascade scenario produces a narrower set of winners and a wider set of losers; the upward-absorption scenario produces the reverse. The cascade scenario also carries political and regulatory risk that the upward-absorption scenario largely avoids. Positioning for cascade therefore implies a tighter focus on the AI-capture firms with explicit hedging of regulatory risk, short or underweight positioning on AI-absorption firms with weak repositioning capability, and underweight positioning on the assets that depend on early-career credentialed cohorts being economically robust. Positioning for upward absorption implies broader exposure to AI-deploying firms across the economy, less concern about the AI-absorption losers because they recover, and less concern about the political dimension. The portfolio that is robust across both scenarios overweights AI-capture firms while diversifying regulatory exposure, underweights the AI-absorption firms least able to reposition, and avoids the assets whose value depends on the most uncertain demographic and political outcomes.
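The logic of the robust portfolio can be made concrete with a toy expected-value calculation. Every payoff and probability below is a hypothetical placeholder chosen only to illustrate the shape of the argument, not an estimate of anything:

```python
# Toy scenario analysis: expected payoff of three stylized portfolios
# under the two scenarios discussed in the text. All numbers are
# hypothetical placeholders, in arbitrary units.

portfolios = {
    # (payoff if cascade dominates, payoff if upward absorption prevails)
    "cascade-positioned": (1.5, 0.8),
    "absorption-positioned": (0.6, 1.6),
    "robust": (1.2, 1.2),
}

def expected_payoff(payoffs: tuple[float, float], p_cascade: float) -> float:
    """Probability-weighted payoff across the two scenarios."""
    cascade, absorption = payoffs
    return p_cascade * cascade + (1 - p_cascade) * absorption

for p in (0.4, 0.6):
    print(f"P(cascade) = {p}:")
    for name, payoffs in portfolios.items():
        print(f"  {name}: {expected_payoff(payoffs, p):.2f}")
```

The point of the toy model is that the robust portfolio's expected payoff is insensitive to the scenario probability, which is exactly the property one wants when, as the previous paragraph argues, that probability is the hardest input to forecast.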
The macroeconomic conclusion is that the AI labor adjustment is not just a labor market story. It is a productivity story, a monetary policy story, an asset market story, a fiscal story, and a political story, all running on different time horizons and interacting in ways that are hard to forecast in detail but predictable in direction. The cascade scenario, if it dominates, produces an economy that looks superficially fine on headline statistics but is internally maladjusted in ways that compound over years. The upward-absorption scenario, if it materializes, produces a more visible productivity boom that resolves many of the second-order problems but takes longer to arrive than the cohort experiencing the scar can reasonably wait for. The political response to either scenario, but particularly to cascade, is likely to arrive on a five- to seven-year horizon and to reshape the cost structure for AI-deploying firms in ways that are not yet priced.
This publication is for informational and educational purposes only and does not constitute financial, investment, or trading advice. The analysis, opinions, and commentary presented here should not be interpreted as a recommendation to buy, sell, or hold any security. Always conduct your own research and consult a qualified financial advisor before making investment decisions. Past performance does not guarantee future results.