**As generative AI strips “drudgery” from finance, it dismantles the apprenticeship that historically produced financial judgment, shifting real decision power from humans to models and vendors. What looks like productivity today is a slow hollowing of the human pipeline that underpins capital allocation.**

Generative AI in Finance Is Automating the Ladder, Not Just the Work

OPENING: Generative AI is not just taking over routine finance tasks; it is quietly dismantling the apprenticeship system that made the modern CFO possible. The same tools now drafting quarterly reports, investor letters, and liquidity summaries are absorbing the very work through which humans once acquired financial judgment. Current narratives frame this as a win-win: AI handles “drudgery,” humans “move up” into strategy. But finance is a domain where strategy is built out of years of close contact with messy data, tedious reconciliations, and endless draft decks. As large language models (LLMs) move into the finance function, they don’t simply save time; they compress the human learning curve and shift real interpretive power toward models, platforms, and those who own them. That shift won’t show up in this quarter’s ROI, but it will define who actually understands the numbers a decade from now.

THE EVIDENCE: AI Is Eating the Work That Trains Finance Humans

The current wave of generative AI in finance is concentrated precisely where junior and mid-level finance staff used to live. The article’s core examples are not exotic quant applications; they are the everyday, cognitively dense routines that form the backbone of the finance organization.

LLMs and generative tools are already deployed to:

  • Generate quarterly and other financial reports.
  • Draft communications to investors.
  • Produce strategic summaries for leadership.
  • Support treasury with cash, revenue, and liquidity forecasting and management.
  • Automate contract review and elements of investment analysis.

Andrew W. Lo, professor and director of the Laboratory for Financial Engineering at MIT Sloan, captures the mainstream framing: “LLMs can’t replace the CFO by any means, but they can take a lot of the drudgery out of the role by providing first drafts of documents that summarize key issues and outline strategic priorities.” That “drudgery” is exactly what analysts, managers, and aspiring CFOs previously did by hand: building models, reconciling data, writing the first pass of the narrative, and absorbing feedback.

On the treasury side, generative AI is being piloted for cash and liquidity forecasting. The article concedes that forecasting remains constrained “due to the mathematical limitations of LLMs,” but the direction of travel is clear: even where models can’t yet fully own the numeric prediction, they are starting to own the explanatory wrapper around it: the summaries, scenario descriptions, and recommended talking points.

Adoption is no longer hypothetical. Deloitte’s analysis of its 2024 State of Generative AI in the Enterprise survey finds that 19% of finance organizations have already adopted generative AI in the finance function. Even with returns running “8 points below expectations so far,” the spending curve is still bending upward: in Deloitte’s Q4 2024 North American CFO Signals survey, 46% of responding CFOs expect deployment of, or spending on, generative AI in finance to increase over the next 12 months.

The explicit rationale for this continued investment is cost control and reallocation of human effort. Respondents cite AI’s ability to “help control costs through self-service and automation and free up workers for higher-level, higher-productivity tasks.” Robyn Peters, a principal in finance transformation at Deloitte Consulting, contrasts the old, email-and-slide-based world of finance with AI-enabled workflows, arguing there is “no reason” the “human-centric experience” perfected in customer-facing industries cannot be brought into finance—and that “AI makes it a lot easier to do.”

The article even gestures at the next generation: “Future finance professionals are growing up using generative AI tools too. CFOs should consider reimagining what it looks like to be a successful finance professional, in collaboration with AI.” The implied trajectory is linear: as AI becomes native, humans will simply occupy a more strategic layer above it.

Put together, the evidence describes a concrete pattern: generative AI is being inserted at precisely the layers of finance work that used to soak up junior time, that currently sit just below the CFO, and that function as the apprenticeship ground for financial judgment. Adoption continues despite below-target ROI because the promise is not just higher margins; it is a reconfigured finance organization where much of the interpretive and narrative labor moves from humans into models.

THE MECHANISM: How “Drudgery Automation” Collapses the Apprenticeship Ladder

The structural shift is not simply “AI does low-value work, humans do high-value work.” In finance, low-value tasks and high-value judgment are not different categories of work—they are different points on the same learning curve.

Historically, the finance career path has been a grind by design. Analysts and associates spend years:

  • Gathering and cleaning data from fragmented systems.
  • Building and maintaining forecasting models.
  • Drafting endless versions of management reports and investor decks.
  • Reconciling discrepancies between operational reality and reported numbers.
  • Sitting in review meetings where senior leaders tear those drafts apart.

This is not just labor; it is how tacit knowledge accumulates. Experiencing which variances matter, which anomalies keep reappearing, what tone calms investors in a bad quarter, and how to translate raw numbers into a defensible narrative is how a future CFO forms their mental map of the business. The “drudgery” is the apprenticeship.

Generative AI directly targets that layer. When an LLM produces “first drafts of documents that summarize key issues and outline strategic priorities,” two subtle shifts occur:

  • The junior staffer no longer has to wrestle the raw material into a coherent structure; they are reacting to, editing, or lightly checking a synthetic narrative.
  • The senior leader increasingly interacts with that synthetic narrative rather than with the messy intermediate work that reveals where judgment is needed.

Over time, this compresses the experiential gradient between data and decision. Humans still sign off, but the cognitive heavy lifting—what to include, how to frame it, what to emphasize—gradually migrates into the model’s side of the ledger. The human role tilts from “author” to “approver.”

Incentives amplify this trend. Finance is under constant pressure to control costs and shorten cycles. If a tool promises faster reports, cheaper contract review, and self-service forecasting, a few percentage points of underwhelming ROI do not stop experimentation—especially when, as Deloitte notes, nearly half of surveyed CFOs still plan to increase generative AI spend. Appearing modern, keeping pace with peers, and signaling technological sophistication to boards all push in the same direction: deploy the tools where they are easiest to justify, which is exactly where work can be labeled “routine.”

But in a hierarchical profession, “routine” often overlaps with “developmental.” When the rote layers are stripped out, the finance function does not simply reveal a perfectly formed strategic core. Instead, it produces a thinner middle: fewer roles in which humans steadily accumulate pattern recognition by doing the work end to end. The promise that those humans will be “freed up” for higher-level tasks runs into a hard constraint: there are only so many truly strategic seats, and they already exist at the top of the pyramid.

A second mechanism is delegation under uncertainty. Even as the article acknowledges “mathematical limitations” in LLM-based forecasting, it still frames generative AI as promising in treasury. That combination—imperfect models plus heavy narrative capability—is particularly potent. Wherever outputs are hard to verify (e.g., multi-year forecasts, complex liquidity scenarios), organizations are tempted to lean into the fluency of AI-generated explanations. The more polished the narrative layer, the easier it is for both juniors and seniors to accept the model’s framing as reality.

Finally, platform dynamics matter. Most generative AI capabilities in finance will be delivered via large vendors or centralized internal platforms. As these tools become embedded into workflow—inside reporting systems, planning tools, and contract management—control over how financial reality is pre-processed shifts from individual practitioners to platform owners. What used to be discretionary human craft (how to structure a board pack, how to explain a variance) becomes a configurable system behavior. The human is still in the loop, but the loop has been redesigned around the model.

THE IMPLICATIONS: A Thinner Finance Middle and Opaque Capital Decisions

If this pattern holds, the most predictable outcome is not the disappearance of the CFO but the erosion of the path that produces future CFOs with deep, experience-based judgment.

In the short term, finance organizations will likely see:

  • Fewer entry- and mid-level roles tied to manual reporting, documentation, and basic analysis.
  • More roles framed as “AI-augmented” or “AI-supervisory,” where humans review and tweak model outputs rather than create from scratch.
  • Increased internal reliance on AI-generated reports and forecasts, even where underlying accuracy remains contested.

Over the medium term, a different kind of shift becomes visible. A cohort of finance professionals will rise who have never built a full quarterly report end to end, never designed an investor letter from a blank page, never personally wrestled a liquidity model into shape. Their expertise will be mediated through prompts, templates, and pre-trained behaviors. They will be faster at orchestrating tools, but poorer at reading when the tools are subtly wrong.

This matters most at the edge cases—the very situations where finance leadership is supposed to earn its keep: sudden market dislocations, geopolitical shocks, regulatory surprises. In those moments, past data is a poor guide and models trained on historical patterns are most brittle. The ability to notice when the narrative no longer fits the reality depends on humans who have internalized both the patterns and their failure modes. Automation of the apprenticeship reduces the number of such humans in the pipeline.

Power within firms also shifts. As generative AI becomes embedded infrastructure, technical teams and external vendors gain growing practical influence over how financial information is framed. Finance leaders become dependent on platform roadmaps and model behavior they do not fully control. The narrative of the business, as presented to boards, investors, and regulators, passes through systems whose defaults may converge across firms, creating a homogenized, AI-mediated financial discourse.

Externally, capital allocation becomes more opaque. If many organizations lean on similar generative tools to write investor communications, summarize risks, and propose scenarios, decision-makers across the ecosystem will be reacting to a layer of synthetic interpretation. The apparent objectivity of numbers will be wrapped in machine-authored language that subtly shapes what looks plausible, defensible, or urgent. The real shift is not that AI will “decide” alone, but that human decisions will be increasingly pre-structured by AI outputs whose training data, priors, and blind spots are hard to interrogate.

This is all happening even as Deloitte reports that realized returns from generative AI are currently eight points below expectations. In other words, the structural reconfiguration of how people learn and decide in finance is proceeding ahead of clear economic justification. By the time ROI catches up—or fails to—the training ground for an alternative, more human-centered model of financial judgment may already be hollowed out.

THE STAKES: What Happens When the Numbers Understand Us Better Than We Understand Them

What is at stake is not just productivity metrics in the finance department; it is the human capacity to understand and contest how capital moves.

For individual professionals, generative AI in finance recasts identity. The role shifts from craftsperson of financial narratives to operator of narrative machines. Career progression becomes less about slowly earning judgment through lived contact with the numbers and more about learning to supervise systems whose internal logic remains largely opaque. The satisfaction of mastery—knowing a business so well that the numbers “feel” wrong before they are proven wrong—has less room to develop.

At the organizational and societal level, as more of the interpretive layer of finance migrates into AI, the circle of humans who can genuinely read, challenge, or redirect capital decisions narrows. The finance function becomes more efficient but less legible from the inside. The paradox is stark: tools sold as freeing humans for strategy may instead produce strategists who are increasingly dependent on tools they did not train, cannot fully audit, and no longer know how to replace.

Generative AI is not removing humans from finance; it is changing what it means to be human in finance. The “drudgery” it erases is also the apprenticeship through which people once earned the authority to say no to the models. As that apprenticeship collapses, so does a key source of human leverage over the systems that move money through the world.