What Librarians Know About Search That Your Help Center Doesn't
Librarians invented the reference interview: a structured way to figure out what someone actually needs versus what they asked for. Your chatbot skips this step entirely.
A person walks into a library and asks: "Do you have books about snakes?"
A bad librarian points to the reptile section. A good librarian asks: "What about snakes? Are you looking for a pet care guide? A field identification book? A children's picture book? Something about snake venom for a school project?"
The question "do you have books about snakes?" contains almost zero useful information about what the person actually needs. The librarian's job is to figure out the real question behind the stated question.
This is the reference interview, and it's been part of library science for over a century (formal reference services trace back to 1876). Your help center, your chatbot, and your FAQ page skip this step entirely.
The Reference Interview
Library scientists identified that people rarely ask for what they actually need on the first try. The stated question and the actual information need are different in about 50% of reference interactions (studies by Dervin, Kuhlthau, and others have replicated this consistently).
The reference interview is a structured conversation to bridge that gap. It typically involves five types of questions:
Open questions to understand the broad need. "What are you working on?" "What's the context?"
Clarifying questions to narrow the scope. "When you say 'not working,' what specifically happens?" "Which part of the process fails?"
Confirming questions to verify understanding. "So you need to export just the March data, not all of it?"
Neutral questions that don't lead the answer. "What have you already tried?" instead of "Did you try clearing your cache?" (The second assumes the solution. The first discovers what the customer already knows.)
Follow-up questions after providing information. "Did that answer your question, or were you looking for something different?"
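The five question types above can be modeled as a small, reusable structure. This is a minimal sketch: the type names come from the reference-interview literature as described here, while the enum, templates, and function names are illustrative, not part of any real library.

```python
from enum import Enum

class QuestionType(Enum):
    """The five reference-interview question types."""
    OPEN = "open"
    CLARIFYING = "clarifying"
    CONFIRMING = "confirming"
    NEUTRAL = "neutral"
    FOLLOW_UP = "follow_up"

# Example templates (illustrative, not a fixed script).
TEMPLATES = {
    QuestionType.OPEN: "What are you working on?",
    QuestionType.CLARIFYING: "When you say 'not working', what specifically happens?",
    QuestionType.CONFIRMING: "So you need to export just the March data, not all of it?",
    QuestionType.NEUTRAL: "What have you already tried?",
    QuestionType.FOLLOW_UP: "Did that answer your question, or were you looking for something different?",
}

def next_question(stage: QuestionType) -> str:
    """Pick the template for the current stage of the interview."""
    return TEMPLATES[stage]
```

In a real support flow these templates would be parameterized per ticket category, but the five-stage structure stays the same.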
What This Means for Support
Most support interactions skip the reference interview. The customer says "the export isn't working" and the agent immediately starts troubleshooting the export feature. But "the export isn't working" could mean:
The export button does nothing when clicked (UI bug).
The export produces a file but the data is wrong (data bug).
The export works but the file format is wrong (feature misunderstanding).
The customer can't find the export feature (navigation problem).
The customer is on a plan that doesn't include export (access issue).
Each of these has a completely different resolution. Without asking a clarifying question, the agent has a 20% chance of guessing correctly.
One clarifying question, "What happens when you try to export?", reduces five possibilities to one or two. That question takes 10 seconds to type and saves 5 to 15 minutes of back-and-forth troubleshooting in the wrong direction.
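The narrowing step can be sketched as a toy triage table: map the customer's answer to "What happens when you try to export?" onto one of the five possible causes. The keyword rules here are illustrative stand-ins; a real system would use a trained classifier rather than substring matching.

```python
# Hypothetical cue -> cause table for the five export failure modes.
CAUSES = {
    "nothing happens": "UI bug",
    "wrong data": "data bug",
    "wrong format": "feature misunderstanding",
    "can't find": "navigation problem",
    "not on my plan": "access issue",
}

def triage(answer: str) -> str:
    """Narrow five possibilities down using the clarifying-question answer."""
    answer = answer.lower()
    for cue, cause in CAUSES.items():
        if cue in answer:
            return cause
    return "needs another clarifying question"

print(triage("The button is there but nothing happens when I click it"))
# -> UI bug
```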
Why Chatbots Fail at This
Most chatbots are designed to answer questions, not to understand them. They take the customer's stated question at face value and match it to a response.
Customer: "How do I export my data?" Chatbot: "To export your data, go to Settings > Data > Export and click the CSV button."
If the customer's real question was "how do I export just the invoices from March," the chatbot's answer is technically correct and practically useless. The customer now has to send a follow-up message, which the chatbot may or may not handle well.
A librarian would have asked: "Which data are you looking to export? Everything, or a specific subset?"
The difference between a chatbot and a librarian is curiosity. The chatbot assumes it knows what you need. The librarian assumes they don't.
Building the Reference Interview Into AI
AI classification can do the first step of the reference interview automatically. When Supp classifies a message as "export issue," it identifies the broad category. But the classification doesn't know whether it's a bug, a feature misunderstanding, or an access issue.
The clarifying question is where AI can improve. Instead of jumping to an answer, the AI asks one clarifying question: "I'd like to help with the export. Can you tell me what happens when you try? (For example: nothing happens, wrong data, error message, can't find the button.)"
That question narrows the possibilities before a response is generated. The customer provides the missing context. The next response is accurate instead of generic.
This pattern (classify, clarify, then respond) mirrors the reference interview. Classify is the librarian hearing the question. Clarify is the reference interview. Respond is retrieving the right book from the shelf.
Most AI support systems skip the middle step. Adding it back reduces follow-up messages by 30 to 40% and increases first-contact resolution because the first response actually addresses the right problem.
The Neutral Question Problem
Librarians are trained to ask neutral questions. "What have you tried so far?" not "Did you restart your computer?" The difference matters.
Leading questions bias the customer's response. If you ask "did you clear your cache?" the customer says yes (even if they didn't) because they want to skip to the next step. You've lost diagnostic information.
Neutral questions give you honest data. "Walk me through what you've done so far" gets you the real troubleshooting history, including the steps they skipped and the things they tried that weren't on your checklist.
Train your agents (and your AI) to ask neutral questions first. The specific leading questions ("try clearing your cache") come after, as targeted troubleshooting steps, not as opening moves.
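The ordering rule, neutral first, targeted later, is simple enough to encode directly. A minimal sketch, assuming hypothetical prompt text and a single boolean flag for whether the troubleshooting history has been collected:

```python
# Illustrative prompts; the point is the ordering, not the wording.
NEUTRAL_OPENER = "Walk me through what you've done so far."
TARGETED_STEPS = ["Try clearing your cache.", "Try a different browser."]

def next_prompt(history_collected: bool) -> str:
    """Open with a neutral question; only then move to targeted steps."""
    if not history_collected:
        return NEUTRAL_OPENER  # avoid biasing the diagnostic data
    return TARGETED_STEPS[0]   # leading questions come after, as steps
```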
The Follow-Up Check
The most underused librarian technique: the follow-up check. After providing an answer, the librarian asks: "Does that answer your question, or were you looking for something else?"
In support, this looks like: "I've sent the instructions for exporting March invoices. Does that cover what you need, or is there something else I can help with?"
That question catches misunderstandings before they become follow-up tickets. If the customer says "actually, I also need to export them in PDF format," you've identified a second need in the same interaction instead of handling it as a separate ticket later.
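Because the follow-up check is the same every time, it can simply be appended to every outgoing answer. A sketch, with hypothetical wording:

```python
# The librarian's closing question, appended to every answer so
# misunderstandings surface in the same interaction, not a new ticket.
FOLLOW_UP = ("Does that cover what you need, or is there something else "
             "I can help with?")

def with_follow_up(answer: str) -> str:
    """Attach the follow-up check to an outgoing support answer."""
    return f"{answer}\n\n{FOLLOW_UP}"
```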
The follow-up check adds 10 seconds to each interaction and prevents 10 to 15% of reopened tickets. Librarians have been doing this for 70 years. Support teams can start doing it today.