Here's the shift, in one sentence: when AI does the inputs, every job becomes about judgment — knowing what the right output actually is.

For thirty years, most jobs were defined by the inputs you produced: code, reports, mockups, campaigns. You were good at your job if you were good at producing your piece. AI is collapsing the cost of producing all of that to nearly zero. The work isn't gone — it moved. The job stops being "can you produce the thing?" and starts being "can you determine what the right thing is?"

The right output. The right user experience. The right cycle time. The right level of functionality. That's the work now.

Matan Grinberg, CEO of Factory, put it well in a recent McKinsey interview:

Before, someone could say, "I don't write code, so that's not my fault." Now code is more of a tool across roles... The distinction shifts away from who produces which input and toward outcomes.

The whole old operating model — business gives requirements, IT builds them, Ops runs what got built — was organized around who produced which input. When AI produces the input, the model loses its load-bearing wall. As Grinberg adds elsewhere in the same interview, "measuring success purely by engineering output might not make sense" — swap "engineering" for any function.

The four skills that matter now

If the job is determining the right output, four underlying skills produce that judgment. None are new. What's new is that they're no longer additive to production skills. They're the job.

1. Strategic framing. Which problem is actually worth solving? AI will obediently solve whatever you point it at, including the wrong thing. Fiverr CEO Micha Kaufman summarized the remaining human work in his widely read AI memo: "nonlinear thinking, judgment calls, issues that have to do with taste, making decisions, thinking about strategy."

2. Taste. Telling a good output from a mediocre one, often before you can articulate why. A thousand passable drafts are now free; the person who can push for the great one is the scarce resource.

3. User empathy. Knowing what the person actually needs — not what they ask for, and not what the model assumes by default. If you can't stand in their shoes, you can't evaluate whether the output is right.

4. Systems thinking. Seeing the second-order effects. AI is good at local optimization and bad at knowing what else breaks when you change something.

This matches what employers are already saying. The WEF Future of Jobs Report 2025 ranks analytical thinking, creative thinking, and resilience as the skills rising fastest through 2030 — with 7 in 10 employers calling analytical thinking essential right now.

The data

Gartner: 84% of companies have already set up fusion teams — multidisciplinary teams combining business and technology expertise, sharing accountability for outcomes on both sides. 43% now report outside of IT. The old wall is already coming down.

McKinsey (State of AI 2025): 88% of organizations use AI in at least one function. Only a third have scaled it. And just 6% capture meaningful EBIT impact — a group roughly three times more likely to have redesigned workflows around AI than to have bolted it onto existing ones. "Redesigning workflows" is shorthand for organizing around outcomes instead of inputs.

Why augmentation compounds, and automation corrodes

All of this sets up a strategic choice CEOs are starting to make in public. A recent HBR piece by researchers at Oxford, Stanford, and BetterUp (De Neve, Hancock, and Niederhoffer, April 2026) draws the line sharply: companies that position AI as automation — a way to replace people — hit a visible ceiling, while companies that position it as augmentation — a way to amplify the judgment of the people they already have — compound advantage over time.

Two CEO positions illustrate the fork. Early this year, Block's Jack Dorsey cut nearly half the workforce, writing that "intelligence tools have changed what it means to build and run a company." Fiverr's Micha Kaufman (quoted above) took the opposite stance — framing AI as adaptation, not replacement, and betting that his people would handle the judgment work.

The HBR authors argue these paths don't converge. They describe two J-curves:

  • Automation dips shallowly and delivers early gains from cost takeout — then corrodes. Well-being drops, "workslop" (low-quality AI output from disengaged workers) accumulates, attrition rises, and the junior talent pipeline hollows out.
  • Augmentation dips longer and deeper because it requires real investment in people, role redesign, and new human-AI coordination routines — but it compounds, because the same people who build the judgment produce the outcomes.

Their survey data shows the divergence already emerging. Workers who perceive augmentation intent report 32% lower intent to leave than those who perceive automation intent. Workers who feel forced to adopt AI produce roughly 65% more workslop than those who feel empowered to drive it.

The exemplar they cite is Aon. CEO Greg Case has publicly pledged AI literacy for all 60,000 employees and treats headcount as a cornerstone of growth — a pledge Aon made credible during Covid by holding redundancies at zero, funded by temporary cuts to executive pay. Satya Nadella made the same bet at Microsoft in 2014, reskilling the company around cloud and AI rather than downsizing it. A decade of compounding advantage is on the tape.

The thing to notice is that augmentation only works if the credible commitment is visible. The HBR authors found that 81% of senior leaders think their organization is all-in on augmentation, while only 53% of individual contributors perceive it that way — and 40% of ICs suspect the real intent is automation. That gap isn't cosmetic. Employees who suspect automation are the same employees who stop engaging, produce workslop, and leave.

Three moves for leaders

1. Rewrite your performance criteria. If people are measured on throughput — code shipped, campaigns launched, tickets closed — you're rewarding the work AI is about to absorb. Measure what you actually want: revenue moved, time compressed, satisfaction lifted.

2. Hire and promote for the four skills above. Résumés optimize for what people produced. The signals you actually need — strategic framing, taste, empathy, systems thinking — show up in how someone talks about the choices they made, not the artifacts. Rework your interview loops.

3. Make the augmentation bet credibly, or don't make it at all. You can't announce "we're investing in our people" while simultaneously cutting headcount and expect the augmentation path to work — the contradiction shows up in engagement data within weeks. Either commit to developing the judgment-holders you already have, with real investment in reskilling and role redesign, or accept that you're on the automation path and price in its long-run costs.

The uncomfortable part

Outcome ownership exposes who was actually contributing. When the measure was "did you produce your input on time," a lot of people could stay in a safe middle. When the measure is "did you pick the right thing, and did it work?" — there's nowhere to hide. That includes executives.

This is the part that stalls transformations. It's also the part that makes this one actually work.

Your job isn't what you produce anymore. The only question is whether your performance system has caught up.

Videos worth your time

  • Sequoia Capital | *Factory's Matan Grinberg and Eno Reyes Unleash the Droids on Software Development* — Grinberg at length on how AI agents change enterprise work. YouTube
  • Lenny's Podcast | *Product management theater — Marty Cagan* — the classic conversation on feature teams (inputs) vs. empowered product teams (outcomes). lennysnewsletter.com

Sources
  • McKinsey | *Paving the road for AI agents: Interview with Factory CEO Matan Grinberg* — source quotes on role blurring and measuring beyond output. mckinsey.com
  • HBR | *Why Companies That Choose AI Augmentation Over Automation May Win in the Long Run* — De Neve, Hancock, and Niederhoffer on the two J-curves, the Kaufman quote, and the Aon/Microsoft examples (April 15, 2026). hbr.org
  • McKinsey | *The state of AI in 2025* — the 88/33/6% data and the workflow-redesign finding. mckinsey.com
  • World Economic Forum | *Future of Jobs Report 2025* — the employer-side view of which skills are rising and falling through 2030. weforum.org
  • Gartner | *Fusion Teams: A New Model for Digital Delivery* — the 84% adoption number. gartner.com