NIST Just Admitted It Can't Keep Up With CVEs. Here's What That Means for Your Vulnerability Data.

On April 15, NIST published a straightforward announcement: the National Vulnerability Database can no longer enrich every CVE it receives. CVE submissions increased 263% between 2020 and 2025. The first three months of 2026 are running one-third higher than the same period last year. NIST enriched 42,000 CVEs in 2025, which was 45% more than any previous year, and still fell further behind.
The announcement is not a surprise to anyone who has been watching the backlog grow since early 2024. It is now official policy, and it has concrete consequences for anyone building on NVD as a data source.
What changed
Starting April 15, NIST will only enrich CVEs that meet one of three criteria:
- CVEs appearing in the CISA Known Exploited Vulnerabilities catalog, enriched within one business day
- CVEs for software used within the US federal government
- CVEs for critical software as defined by Executive Order 14028
Everything else gets listed in NVD but marked "Not Scheduled." That means no CVSS severity score from NIST, no CPE product mappings, no version range data. The CVE exists in the database but carries no structured metadata until someone requests enrichment by email and NIST decides to prioritise it.
The backlog is also being written off. Every unenriched CVE with a publish date before March 1, 2026 moves to Not Scheduled. Harold Booth, a NIST computer scientist who runs the NVD program, told VulnCon26: "our ability to keep up is just not there."
Two smaller changes came alongside this. NIST will no longer provide its own CVSS score for CVEs where the submitting CNA already provided one. It will also only re-analyse modified CVEs if the modification materially affects the enrichment data.
What this means for tools built on NVD
Most vulnerability scanners and CVE databases treat NVD as their primary source of truth. The enrichment they surface (severity scores, affected product lists, version ranges) comes from NIST analysts. When that enrichment is absent, those tools face a choice: surface the bare CVE record with no structured metadata, or skip it entirely.
Either way, coverage is no longer complete. A CVE that falls outside NIST's three priority categories appears as a gap in any tool that relies purely on NVD. For teams using those tools to make deployment decisions, that gap is invisible. They receive a false sense that their stack has been checked.
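If your tooling consumes NVD directly, a bare record is at least easy to detect: the enrichment fields are simply absent. A minimal sketch, assuming the NVD API 2.0 record shape; the sample records below are made up for illustration, not real NVD data:

```python
# Detect unenriched CVE records, assuming the NVD API 2.0 record shape.
# An analysed record carries "metrics" (CVSS) and "configurations" (CPE
# applicability); a record awaiting enrichment carries neither.

def is_enriched(cve: dict) -> bool:
    """A record with no CVSS metrics and no CPE configurations has no
    structured metadata a scanner can act on."""
    return bool(cve.get("metrics")) and bool(cve.get("configurations"))

# Illustrative samples only.
enriched = {
    "id": "CVE-2026-0001",
    "vulnStatus": "Analyzed",
    "metrics": {"cvssMetricV31": [{"cvssData": {"baseScore": 9.8}}]},
    "configurations": [{"nodes": [{"cpeMatch": [{"criteria": "cpe:2.3:a:example:lib:*"}]}]}],
}
bare = {"id": "CVE-2026-0002", "vulnStatus": "Deferred"}

print(is_enriched(enriched))  # True
print(is_enriched(bare))      # False
```

A scanner that silently skips records failing this check is exactly the coverage gap described above.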
This is not an abstract concern. FIRST has modelled scenarios where total CVE volume hits 100,000 in 2026. The Not Scheduled category is going to grow.
What this means for Attestd
The short answer is that not much changes operationally. The longer answer explains why.
Attestd has never treated NVD as a single source of truth. The pipeline combines three independent inputs: NVD for version range data, CISA KEV for active exploitation status, and LLM synthesis to collapse incomplete or overlapping range data into a usable signal per version. That architecture was a deliberate design choice. NVD data has always had gaps and the synthesis layer exists to handle them.
The CISA KEV signal, which drives the actively_exploited field in every Attestd response, is unaffected by this announcement. KEV is NIST's first enrichment priority, enriched within one business day. The highest-confidence signal in the API stays reliable.
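For anyone replicating this part of the pipeline, the KEV check itself is simple: CISA publishes the catalog as a JSON feed with a top-level "vulnerabilities" list, and membership is a set lookup. A sketch against a hand-built sample shaped like the real feed (the entries themselves are made up):

```python
# Check a CVE against the CISA KEV catalog. The sample below mimics the
# shape of the published JSON feed; the entries are illustrative only.

sample_kev_feed = {
    "catalogVersion": "2026.04.17",
    "vulnerabilities": [
        {"cveID": "CVE-2026-0001", "vendorProject": "Example", "product": "Widget"},
        {"cveID": "CVE-2025-9999", "vendorProject": "Example", "product": "Gadget"},
    ],
}

# Build a set once, then membership checks are O(1).
kev_ids = {entry["cveID"] for entry in sample_kev_feed["vulnerabilities"]}

def actively_exploited(cve_id: str) -> bool:
    return cve_id in kev_ids

print(actively_exploited("CVE-2026-0001"))  # True
print(actively_exploited("CVE-2026-0002"))  # False
```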
For CVEs that fall outside NIST's new priority criteria, Attestd still ingests and synthesises them. The confidence score reflects the quality of the underlying data honestly. A CVE with full NVD enrichment, a KEV flag, and a clean CPE mapping produces a high confidence score. A CVE with partial NVD data and no KEV flag produces a lower one. That distinction was already built into every response. NIST's announcement makes it more meaningful.
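The intuition behind that distinction can be sketched as a weighting over independent signals. To be clear, this is not Attestd's actual formula, which is not public; the weights below are invented purely to show the shape of the idea:

```python
# Illustrative only: one way to weight independent data signals into a
# confidence score. NOT Attestd's real formula; weights are made up.

def confidence(has_nvd_enrichment: bool, on_kev: bool, clean_cpe_match: bool) -> float:
    score = 0.4  # baseline: the CVE record itself exists
    if has_nvd_enrichment:
        score += 0.25  # CVSS metrics and version ranges present
    if on_kev:
        score += 0.2   # independent, fast-turnaround exploitation signal
    if clean_cpe_match:
        score += 0.15  # unambiguous product mapping
    return round(score, 2)

print(confidence(True, True, True))     # 1.0, full enrichment plus KEV plus clean mapping
print(confidence(False, False, False))  # 0.4, bare CVE record only
```

The point is that each missing upstream signal visibly lowers the output, rather than being papered over.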
The confidence score
Every Attestd response includes a confidence field between 0.0 and 1.0 reflecting how much reliable underlying data exists for that synthesis. Here is what two responses look like:
{
"product": "openssl",
"version": "3.0.7",
"risk_state": "high",
"actively_exploited": true,
"confidence": 0.97,
"last_updated": "2026-04-17T08:00:00Z"
}
{
"product": "some-product",
"version": "2.1.0",
"risk_state": "elevated",
"actively_exploited": false,
"confidence": 0.61,
"last_updated": "2026-04-17T08:00:00Z"
}
The first has a KEV flag, full NVD enrichment, and clean CPE mappings. The second has partial data. Both responses are honest about what they know. The confidence score is what tells them apart.
After April 15, the number of CVEs producing lower-confidence responses will grow. That is an accurate reflection of what the upstream data actually contains. A tool that returns confidence: 0.97 on everything is not being honest with you.
The structural problem underneath this
NIST's announcement is the clearest signal yet that centralised, manually-enriched vulnerability data does not scale. CVE volume is growing faster than any human analyst team can process and the trend is not slowing. FIRST forecasted a record 50,000 additional CVEs for 2026 before automated vulnerability-discovery tooling began accelerating submissions further. Whatever the final number reaches, the gap between submissions and human enrichment capacity is structural and widening.
The infrastructure developers and security tools rely on was designed for a different volume. It is being asked to operate in a world where AI-assisted vulnerability discovery is accelerating CVE creation faster than the institutions built to track them can respond.
Deterministic signals, confidence-weighted outputs, and multiple independent sources are not a nice-to-have in that environment. They are the only honest way to build a security data layer.
What to do now
If you are using a security tool that draws exclusively from NVD, it is worth asking specifically how it handles CVEs with no enrichment data. If the answer is "we skip them" or "we pass through whatever NVD has," the coverage you think you have is smaller than it was two weeks ago.
For developers building autonomous systems or AI agents, this raises a specific design question. If your agent makes deployment or patching decisions based on CVE data, the reliability of that data needs to be a first-class input to the decision. A signal that says "elevated risk" with a confidence of 0.6 should be handled differently from one at 0.95. Building that distinction into branching logic is more important now than it was last week.
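One way that branching might look. The thresholds and action names here are illustrative assumptions, not part of the Attestd API or its documentation:

```python
# Sketch: treating data confidence as a first-class input to an agent's
# deploy/patch decision. The 0.9 and 0.6 thresholds and the action names
# are illustrative choices, not recommendations from any vendor.

def decide(response: dict) -> str:
    if response["actively_exploited"]:
        return "block_deploy"  # KEV-backed, highest-confidence signal
    conf = response["confidence"]
    if response["risk_state"] in ("high", "elevated"):
        if conf >= 0.9:
            return "block_deploy"      # strong data: act autonomously
        return "flag_for_human"        # risk claimed, but the data is thin
    if conf < 0.6:
        return "treat_as_unknown"      # low confidence means unchecked, not safe
    return "proceed"

print(decide({"risk_state": "high", "actively_exploited": True, "confidence": 0.97}))
# block_deploy
print(decide({"risk_state": "elevated", "actively_exploited": False, "confidence": 0.61}))
# flag_for_human
```

The key design choice is the third branch: a low-confidence "clean" result is routed differently from a high-confidence one, so a missing upstream enrichment never silently becomes a green light.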
The full NIST announcement is at nist.gov. Attestd's confidence score semantics are documented at attestd.io/docs/response-fields.
Get an API key at api.attestd.io/portal/login. Free tier, 1,000 calls a month, no credit card required.