The Job Description Is a Fossil.

Every hire we make is filling a role we no longer have.

Impact
March 5, 2026
Manish

Watch what happens when a Fortune 500 engineering team needs to hire a frontend engineer.

A tech lead opens a Word document. They write the role from memory. They list the technologies they remember being used six months ago. They list the responsibilities they remember being important. They list the qualifications they think a good candidate should have. They send it to a recruiter.

The recruiter reads it. Reformats it. Adds the company's standard responsibilities pulled from a JD template that was last updated in 2022. Expands the qualifications list to cover edge cases nobody asked for. Posts it to the careers site.

The candidate applies. The candidate is screened against the document. The candidate is interviewed against the document. The candidate is hired against the document.

Six weeks later, on the job, the new hire discovers the document bore only a partial resemblance to the actual work.

This is how every enterprise hires today. And it is structurally broken.

Three layers of human translation. Each one lossy.

The job description, the founding artifact of every hiring decision, is produced by translating the work three times:

  1. The tech lead's recollection of what the team is doing
  2. The recruiter's reformulation of the tech lead's memo
  3. The HR template's overlay of generic responsibilities

Each translation is lossy. By the time the JD reaches a candidate, it bears the fingerprints of all three translators and the substance of none of them. The candidate is not being evaluated against the work. The candidate is being evaluated against a fossil: a document that captured what someone remembered needing, not what the team is actually doing.

This is not a small problem. The JD anchors everything downstream. The screening rubric is built from it. The interview questions are derived from it. The assessment criteria reference it. The offer is made against it. If the founding document is a distortion, every subsequent decision compounds the distortion.

And it gets worse in regulated industries.

The same pathology, in pharma

A pharmaceutical site needs a quality assurance officer. The site head describes the role from memory. The HR generalist drafts a standard pharma QA JD. The actual evidence is nowhere in the document: the active validation protocols the site is currently working through, the recent deviation reports, the open GxP audit findings, the regulatory filing the team is racing to submit next quarter.

The candidate is hired against generic competencies. "5+ years experience in pharmaceutical QA. Knowledge of GMP. Familiarity with FDA guidance. Strong communication skills."

What the site actually needs is someone who can close out three open Form 483 observations in the next ninety days, run a computer system validation (CSV) against a specific document management system that was deployed last year, and draft responses to the citations the next FDA inspection is all but certain to produce.

None of that is in the JD. The JD describes a generic pharma QA officer. The site needs a specific person to do specific work. The hire arrives. The work does not match. Six months later, everyone is frustrated, and nobody can quite explain why.

The same pattern repeats in banking. A risk analyst is hired from a JD that describes risk analysis as a category. The bank actually needs someone who understands the specific counterparty exposures that grew last quarter, the new product the trading desk just launched, and the compliance findings the regulator flagged in its last visit. None of this is in the JD. None of it can be, because nobody wrote it down.

Stop asking humans to describe the work from memory.

The fix is not a better JD template. It is not better prompts for ChatGPT. It is not a more rigorous interview process to compensate for a flawed source document.

The fix is to invert the process.

Stop asking humans to describe the work from memory. Have an agent read the work itself.

The sources of truth about what a role actually requires are not in someone's recollection. They are in the systems where the work lives. The active sprint in Linear. The architecture documents in Notion. The codebase in GitHub. The validation protocols in the pharmaceutical document management system. The risk models in the bank's compliance database. These are the systems that know what the team is doing right now, not what someone remembers them doing six months ago.

An agent that reads these systems directly produces a different kind of job description. It names the actual modules. It lists the libraries the team is shipping with this week, not the libraries the tech lead remembers from the architecture review. It references the audit findings that are open today, not the generic GxP framework that hasn't changed in five years. It targets the role the work demands, not the role the org chart implies.

A human still approves the document before it goes live. But the document arrives at the human's desk evidence-based, not memory-based. The human is editing a draft anchored in reality, not generating fiction from scratch.
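To make the loop concrete, here is a minimal sketch, in Python, of what an evidence-based drafting pipeline could look like. The connector functions, artifact names, and JD format below are hypothetical stand-ins invented for illustration, not Exterview's implementation; the point is that every line of the draft cites a live artifact the approving human can check before signing.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str  # which system the agent read, e.g. "linear"
    item: str    # the specific artifact it read
    detail: str  # what that artifact says about the work

# Hypothetical connector stubs. A real agent would call each system's
# API; these return canned examples purely for illustration.
def read_sprint_board() -> list[Evidence]:
    return [Evidence("linear", "SPRINT-42",
                     "migrating checkout to server components this sprint")]

def read_codebase() -> list[Evidence]:
    return [Evidence("github", "web/package.json",
                     "ships with Next.js 15, TanStack Query, Playwright")]

def read_architecture_docs() -> list[Evidence]:
    return [Evidence("notion", "ADR-017",
                     "frontend is leaving the legacy design system this quarter")]

def draft_job_description(evidence: list[Evidence]) -> str:
    """Assemble a draft JD where every line cites the artifact it came from."""
    lines = ["Frontend Engineer (draft generated from live source systems)", ""]
    for e in evidence:
        lines.append(f"- {e.detail}  [source: {e.source}/{e.item}]")
    return "\n".join(lines)

# The agent drafts; a human edits and signs before anything is posted.
draft = draft_job_description(
    read_sprint_board() + read_codebase() + read_architecture_docs()
)
print(draft)
```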

This is what agent-first hiring software looks like. It is not ATS-with-AI-features. It is not a workflow tool with prompts bolted on. It is a different architecture entirely, one where the source of truth about every role is the work itself, and a human approves a JD that an agent generated from the actual evidence.

And then the role itself changes.

There is a second shift happening at the same time, and it makes the first one urgent.

The person being hired is changing too. Not in five years. Now.

In leading software engineering teams, individual contributors are no longer the primary executors of code. They are directors of agents that write, test, refactor, and ship code. The same is true in legal, where senior associates increasingly review what an agent has drafted rather than draft it themselves; in financial analysis, where junior analysts have been replaced not by other analysts but by analyst-supervised agent pipelines; and in customer support, where the human's job has shifted from resolving tickets to overseeing a fleet of agents that resolve them.

The productive worker of 2027 is not the worker who can type ten times faster. It is the worker who can decompose a task across the right agents, catch the agents when they hallucinate, choose the cheap model for the routine sub-step and the expensive one for the hard step, and produce an audit trail that survives regulatory scrutiny.
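As a toy illustration of that capability profile, here is a hedged sketch of the decompose-route-verify-audit loop the paragraph describes. The model tiers, the verify step, and the step list are assumptions invented for this example, not any vendor's actual API or pricing.

```python
import json
import time
from dataclasses import dataclass, field

# Hypothetical model tiers; the names are illustrative, not real models.
CHEAP, EXPENSIVE = "small-fast-model", "large-reasoning-model"

@dataclass
class Step:
    name: str
    hard: bool  # does this sub-step warrant the expensive model?

@dataclass
class AuditEntry:
    step: str
    model: str
    output_verified: bool
    timestamp: float = field(default_factory=time.time)

def verify(output: str) -> bool:
    # Placeholder: in practice, the director checks the agent's output
    # against source evidence to catch hallucinations.
    return bool(output)

def run_step(step: Step, audit: list[AuditEntry]) -> str:
    model = EXPENSIVE if step.hard else CHEAP
    output = f"<{model} output for: {step.name}>"  # stand-in for a model call
    audit.append(AuditEntry(step.name, model, verify(output)))
    return output

# Decompose one piece of work across the right tiers.
task = [
    Step("extract open deviation reports", hard=False),
    Step("draft the Form 483 response", hard=True),
    Step("format citations and references", hard=False),
]
audit: list[AuditEntry] = []
for step in task:
    run_step(step, audit)

# The audit trail is the part that survives regulatory scrutiny.
print(json.dumps([vars(entry) for entry in audit], indent=2))
```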

This is a new capability profile. It is not adjacent to traditional hiring evaluation. It is the central evaluation question of the next decade.

No platform answers it today. The applicant tracking systems are designed to count candidates, not evaluate them. The talent intelligence platforms are built on historical career graphs that predate agentic AI. The video interview platforms measure the human's verbal performance under standardised conditions, which is increasingly the wrong signal because the work is no longer the human's verbal output. It is the human's directorial output over a stack of agents — a capability invisible to a video interview.

The question every CHRO will face by 2027 is one no platform answers today: which of our employees and candidates can actually wield agents productively, safely, and at scale?

The agent-first hiring stack

The two shifts compound. Job descriptions need to be generated by agents that read the actual work. Candidates need to be evaluated against their ability to wield agents in the work that exists.

Both shifts require the same architectural foundation. Agents that connect natively to enterprise source systems. Evaluation surfaces designed for the human + agent stack as the unit of capability, not the lone human as the unit of capability. Compliance built in from day zero, because a hiring decision in pharma or banking that cannot be audited is a decision that cannot be made.
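One way to picture the compliance requirement: every hiring decision should reduce to a record an auditor can replay. The schema below is a hypothetical sketch of what such a record could contain, not Exterview's actual data model.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

# Hypothetical shape of an audit-ready hiring decision record: every
# signal links back to the evidence, the approving human, and the
# surface the candidate was evaluated on.
@dataclass
class DecisionRecord:
    role_id: str
    evidence_sources: list[str]  # systems the JD was generated from
    jd_hash: str                 # fingerprint of the approved JD text
    approved_by: str             # the human who signed the draft
    evaluation_surface: str      # how the human + agent stack was assessed

def record_decision(approved_jd: str) -> DecisionRecord:
    return DecisionRecord(
        role_id="QA-OFFICER-SITE-7",
        evidence_sources=["dms", "deviation-log", "audit-findings"],
        jd_hash=hashlib.sha256(approved_jd.encode()).hexdigest()[:16],
        approved_by="site.head@example.com",
        evaluation_surface="agent-directed validation work sample",
    )

print(json.dumps(asdict(record_decision("...approved JD text...")), indent=2))
```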

This is what we are building at Exterview. Our universal agent — Smaya — reads source systems directly today and is in active build for full Linear, Notion, Jira, and GitHub coverage by Q2 2026. The pharmaceutical and BFSI source connectors follow on the same architecture. We are also building EX Fluency, the dedicated evaluation surface for the human + agent stack, in closed beta this quarter.

We are doing this because the existing stack is built for a workforce that is disappearing. The applicant tracking systems, the talent intelligence platforms, the video interview tools — they were designed for a world where the candidate would, themselves, be the executing party. That world is ending. The next one needs different infrastructure.

The job description should not be the first thing a human writes about the role. It should be the last thing a human approves about the role. The work already exists in the systems where the work lives. An agent should read it. A human should sign it. A candidate should be evaluated against it.

The document that has anchored every hire for the last thirty years should no longer be a fossil.

It should be a map.

Manish Surapaneni is the CEO of Exterview AI, the first MCP-native agentic talent intelligence platform built on the Microsoft AI Cloud. The detailed technical argument behind this post is laid out in our companion paper, The Human + Agent Workforce: How Hiring Breaks. What Replaces It. Reach Manish at manish@exterview.ai.