The Knowledge Crisis in Three Numbers

John Meissner

We cite three statistics on our homepage. We take these numbers seriously — they frame the problem Contor exists to solve — so we want to show our work.

Here's where each stat comes from, what it actually measures, and why it matters.


38% of executives made a wrong decision based on AI-generated content

Source: Deloitte, State of Generative AI in the Enterprise, Q3 2024

Methodology: Survey of 2,770 respondents across 14 countries, focused on enterprise AI adoption and outcomes.

What it measures: The percentage of business executives who reported making at least one incorrect decision based on content generated by AI tools — specifically, content that contained hallucinations or fabricated information.

The full picture: A related Deloitte study found that 77% of businesses express concern about AI hallucinations (Deloitte Global Future of Cyber Survey, 2024, ~1,200 cyber decision-makers). This isn't a fringe worry; it's the majority position among the people actually deploying these tools.

Why it matters for knowledge-rich organizations: When a membership association uses a generic AI chatbot to answer a member question, the chatbot doesn't know what the organization has actually published. It fills gaps with confident-sounding fabrications. For organizations where credibility is the product — technical societies, standards bodies, research consortiums — a single hallucinated answer can undermine years of trust.

This is the direct answer to "why can't we just use ChatGPT?" You can. But 38% of surveyed executives have already made at least one wrong decision based on something the AI made up.



47% of what your organization knows walks out the door when someone leaves

Source: Starmind, Productivity Drain Research Report, 2021

Methodology: Survey of 1,000+ enterprise knowledge workers across the US, Germany, and Switzerland, examining how organizational knowledge is captured, shared, and lost.

What it measures: The report found that only 53% of employee knowledge is retained and documented within the organization. We invert this — 47% is not retained — because the loss framing more accurately captures what organizations experience when a long-tenured employee leaves.

Both framings are mathematically identical. "53% of knowledge is retained" and "47% walks out the door" describe the same finding.

Why it matters for knowledge-rich organizations: Consider an association whose program director of 15 years retires. She knew which 2019 workshop produced the recommendation that shaped your 2022 policy brief. She knew that the 2017 report contradicted the 2020 update on section 4.3. She knew which member questions had already been answered and where the answers lived.

None of that was in a document. It was in her head. And now it's gone.

For organizations built on decades of accumulated expertise, this isn't an HR problem — it's an existential risk. The knowledge that makes your organization valuable is the knowledge most likely to be undocumented.



6 hours per week spent recreating work that already exists

Source: Panopto, Workplace Knowledge and Productivity Report

Methodology: Survey of 1,001 US employees, conducted in partnership with YouGov, examining how knowledge-sharing inefficiencies affect daily work.

What it measures: The report breaks down three types of wasted time:

  • 5 hours/week waiting for information from colleagues
  • 8 hours/week working without needed expertise or context
  • ~6 hours/week duplicating or recreating work that already exists somewhere in the organization

We cite the 6-hour figure specifically because it captures the recreation and synthesis problem, not the searching or waiting problem. This is staff spending the better part of a workday every week rebuilding answers, analyses, and recommendations that someone else in the organization has already produced.

Why it matters for knowledge-rich organizations: This is the stat that names the exact workflow Contor replaces. A program manager gets a member question about your organization's position on a topic. She knows the answer exists — spread across a dozen reports, three webinars, and a working group output from 2019. But there's no way to synthesize across all of them, so she starts from scratch. Two hours later, she's produced a response that covers maybe 60% of what the organization actually knows.

Multiply that by every member question, every stakeholder briefing, every board prep. That's the 6 hours.



How the three stats work together

These aren't three random data points. They describe a compounding failure:

| Problem | Stat | What Contor does about it |
|---|---|---|
| Generic AI can't be trusted with your content | 38% made wrong decisions from AI | Every answer grounded in your actual publications, with citations to every source |
| Knowledge disappears when people leave | 47% walks out the door | Institutional knowledge structured and preserved, independent of any individual |
| Staff rebuild what already exists | 6 hrs/wk recreating existing work | Contor synthesizes across your library so staff review and send instead of starting from scratch |

The progression tells a story: your AI tools can't be trusted, your knowledge is at risk of disappearing, and your staff is spending the better part of a day each week rebuilding what you already know.

That's the problem Contor exists to solve.


Have questions about these sources or our methodology? Reach out — we're happy to discuss.

Want to see knowledge synthesis in action?

Learn about Contor