You've read Talking to Humans. You know that customer discovery means getting out of the building and talking to real people. You've learned to ask about past behavior, look for patterns, and resist the urge to pitch instead of listen. And if you've done the work, you've probably gotten pretty good at it.
Then you put something in front of customers. And you stopped asking.
This is one of the most common and costly mistakes I see founders make at the Leslie eLab. It doesn't matter whether what you shipped was a prototype, an MVP, a beta, or a finished product. The mistake is the same: treating customer discovery as something you do before you build, then setting it aside once something exists to test. The thinking goes: now that people can interact with it, the market will tell me what it thinks. Users will engage or they won't. The data will speak for itself.
The data won't speak for itself. Data tells you what happened. It takes a conversation to understand why.
And when founders do remember to have that conversation, there's another trap waiting: asking customers what they want next. That conversation will fill your backlog and empty your signal. Users will request features in good faith, and most of those features will not save you if the core product hasn't solved the problem it was built to solve. The more useful question isn't "what do you want?" It's "did this actually work for you?"
Two Kinds of Feedback
There are two types of feedback worth gathering at this stage, and most founders only collect one. The first is usability feedback: did people understand how to use it? Where did they get confused? What features are missing? That feedback is valuable, and you should absolutely gather it.
But there's a different kind of feedback that's harder to get and more important at this stage: feedback on the decision. Why did you decide to keep using it, or stop? What were you expecting that you didn't get? What made you hesitate to try it in the first place? This isn't feedback on how the product works. It's feedback on whether it works for them, and why or why not.
Most founders collect the first kind and skip the second entirely.
The Three Populations You Need to Understand
The people who carry the most signal about your product fall into three groups, and you need to talk to all of them deliberately. The first group tried it and stayed. The second tried it and didn't come back. The third fits your target profile but never tried it at all.
Most founders only hear from the first group, and even then, usually only when those users volunteer feedback. The churned user didn't file a complaint. The non-adopter didn't explain their hesitation. They quietly disappeared, or never showed up, and your dashboard recorded the outcome without the reason.
But your engaged users are just as important to understand. Why did they come back? What made this worth their time? What were they trying to accomplish, and did you actually help them accomplish it? The answers tell you who your real early adopter is and what's genuinely resonating. That's the signal you want to double down on before you do anything else.
This is precisely where customer discovery and customer validation have to work together. Validation tells you whether your solution is working. Discovery tells you why. You need both, running in parallel, at every stage, not just before you build.
We close Talking to Humans with a line worth revisiting: "Don't stop talking directly to customers. Your questions will likely evolve, but no matter what stage you are in, you'll usually find that your best insights will come from talking to real people and observing real behavior." That's not a warm sendoff. That's an instruction.
The Questions Evolve. The Practice Doesn't.
Before you build, you're asking: does this problem exist, does it hurt enough, and are we talking to the right people? Once something is in front of customers, the questions split in two directions. On one side: why did you come back? What made this worth your time? On the other: why did you stop using it? What were you expecting that you didn't get? What made you hesitate? These are still discovery questions. They just come after something exists to react to.
The mechanics are the same ones you learned from Talking to Humans. Stay one degree of separation from your closest allies. Ask about past behavior, not hypothetical future behavior. Listen more than you talk. The difference is who you're talking to and what you're trying to learn.
If your retention is soft, find five people who tried your prototype or product once and didn't return. If conversion is lower than expected, find five people who fit your target profile and didn't engage. And if something is working, find five people who keep coming back and understand exactly why. You will learn things your analytics cannot surface, and you will learn them faster than any experiment can reveal them on its own.
Putting something in front of customers is a milestone. It is not a research method.