47 Results, Zero Answers: The Real Cost of 'Search' in Knowledge-Rich Organizations

John Meissner

The question arrives

It's 9:14 on a Tuesday morning and the email is already waiting. A board member from a regional utility needs a briefing on how distributed energy storage affects transmission planning — specifically, what the association has published on the topic, and how a 2019 regulatory change altered the recommendations.

Lisa knows the organization has covered this. She sat in the working group session three years ago. There was a conference paper in 2017 that laid the technical groundwork, and she's pretty sure the 2023 annual report updated the numbers. And somewhere — she can picture the slide deck but not the event — there was a workshop in 2019 that addressed the regulatory piece directly.

She opens the internal document portal. The search bar blinks. She types "energy storage transmission planning" and hits enter.

The 47 results

Forty-seven results. Sorted by relevance, which in practice means sorted by how many times those keywords appear in the metadata.

The first hit is conference proceedings from 2017 — the right topic, wrong decade of data. The third is a webinar summary that mentions storage in passing but is really about demand response. Result number twelve is a working group report she recognizes, but she also knows it was superseded by a 2022 revision that doesn't appear until result thirty-one.

She opens tabs. Skims introductions. Cross-references publication dates. Copies a paragraph from the 2023 report, then realizes it contradicts a finding from the 2017 paper without acknowledging the change. She needs to figure out which one reflects the current position.

Three hours in, she has a draft. It covers the storage question and the transmission piece. It cites four sources. It's accurate, as far as she can tell.

But she knows she's missing that 2019 workshop. She tried three different keyword combinations. She scrolled through all 47 results twice. It's not there — or it's buried under a title that doesn't mention storage or transmission, because the workshop was about regulatory frameworks more broadly. She sends the response anyway. It's good enough.

The part nobody tracks

The board member gets his briefing two days after he asked for it. It's useful. It's not wrong. But it's incomplete in ways that neither he nor Lisa can fully see — because the 2019 workshop would have added context that changes the emphasis of two of her four citations.

Lisa moves on. She has eleven other things to do today, and the deeper analysis — connecting the regulatory timeline to the technical evolution across all the organization's publications — would take another half-day she doesn't have. So the briefing goes out as-is.

Next quarter, a new program associate gets a similar question from a different member. He doesn't know about Lisa's briefing. He doesn't know about the 2019 workshop either. He starts from scratch, spends four hours, and produces a response that covers roughly 60% of the same ground with a slightly different set of sources.

Nobody tracks this. There's no line item for "time spent manually synthesizing answers that already exist across our publications." But it's real. Multiply it by 15 member inquiries a month — technical questions, policy briefings, board prep, stakeholder requests — and at three to four hours apiece, you're looking at hundreds of staff hours a year spent on manual synthesis. Not research. Not analysis. Just finding and stitching together things the organization already knows.

Why "better search" doesn't fix it

The instinct is to improve the search tool. Better relevance ranking. Full-text indexing. Maybe an AI chatbot that can summarize documents.

These are real improvements. But they solve the wrong problem.

Search — even good search — answers the question "which documents contain information about X?" You get a ranked list. You still have to open them, read them, figure out which ones are current, and synthesize an answer yourself.

RAG-based tools (retrieval-augmented generation) go a step further. They retrieve chunks from your documents and generate a narrative summary. That's genuinely useful when the answer lives inside one or two documents. But Lisa's question doesn't live inside one or two documents. It spans a 2017 conference paper, a 2022 revision, a 2023 annual report update, and a 2019 workshop that the system can't find because the metadata doesn't match her keywords.
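To see why that ceiling exists, here is a stripped-down sketch of the retrieval step. The document titles, snippets, and keyword-overlap scoring are invented for illustration (a real pipeline would use embeddings and hand the retrieved chunks to a language model for the summary), but the failure mode is the same one Lisa hit: a document whose wording doesn't overlap with the query never reaches the generation step at all.

```python
# Illustrative sketch only: a toy keyword-overlap retriever standing in for the
# "retrieve relevant chunks" step of a RAG pipeline. Titles and text are invented.

docs = {
    "2017 conference paper": "energy storage technical groundwork transmission",
    "2022 working group revision": "energy storage transmission planning update",
    "2023 annual report": "energy storage deployment figures transmission planning",
    "2019 workshop": "regulatory frameworks for distributed resources",  # no keyword overlap
}

def retrieve(query: str, k: int = 3) -> list[str]:
    """Score each document by how many query terms it shares, return the top k titles."""
    terms = set(query.lower().split())
    scored = {title: len(terms & set(text.lower().split())) for title, text in docs.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

print(retrieve("energy storage transmission planning"))
# ['2022 working group revision', '2023 annual report', '2017 conference paper']
# The 2019 workshop scores zero, so it is never passed to the generation step,
# and no summary built from these chunks can account for the regulatory change.
```

Swap the toy scorer for vector similarity and the picture improves at the margins, but the pipeline still only summarizes whatever the retriever happens to surface. It doesn't reconcile the 2017 finding with the 2019 regulatory change on its own.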

Neither search nor RAG reasons across documents. Neither connects a technical finding from 2017 to a regulatory change from 2019 to an updated recommendation from 2023 and produces a coherent answer that accounts for all three. That's not a retrieval problem — it's a synthesis problem. And it requires a fundamentally different understanding of what "answering a question" means when your organization has been publishing for two decades.

For a deeper look at where retrieval-based approaches hit their ceiling, see Why RAG Isn't Enough.

What "solved" looks like

Same Tuesday morning. Same email from the board member. Same question about distributed energy storage, transmission planning, and the 2019 regulatory change.

Lisa types the question. The system draws from the organization's full publication history — not just the documents that match her keywords, but the ones connected by topic, by citation, by the conceptual thread that runs from the 2017 conference paper through the regulatory shift and into the 2023 update. It finds the 2019 workshop. It notes where the 2022 revision superseded the earlier working group report. It produces a draft response with every claim tied to a specific source.

Lisa reads it. She adjusts the tone for the audience — this board member prefers concise — and adds a sentence of context about the upcoming policy review. She sends it.

Forty minutes. Not three hours. And the response reflects what the organization actually knows — synthesized, cited, and ready to send — instead of what one person could piece together from 47 search results on a Tuesday morning.

The real question

The technology to do this exists now. The question isn't technical. It's organizational.

Does your organization treat its accumulated expertise as a searchable archive — a collection of PDFs that staff dig through when someone asks a question? Or does it treat that expertise as a working knowledge system — something that can reason across everything you've published and produce answers that reflect the full weight of what your organization knows?

For associations, research consortiums, and technical societies, the difference is the gap between being a publisher and being an authority. Publishers produce documents. Authorities produce answers.

Your organization has spent decades earning the expertise. The question is whether you're putting it to work.


Contor is AI knowledge management software built for organizations where knowledge is the product. If your team spends hours synthesizing answers that already exist somewhere in your document library, we'd love to hear from you.

Want to see knowledge synthesis in action?

Learn about Contor