What AI Companion Users Are Actually Asking For (That No Analytics Tool Shows)
Here’s a message a user might send to an AI companion: “I’m bored. Talk to me about something interesting.”
Pretty mundane prompt. Surface-level. Nothing in there that would flag as high-stakes or emotionally significant. Most teams would log it, count it as an active session, and move on.
But sit with it for a second. “I’m bored. Talk to me about something interesting.” When’s the last time you said that to a friend? Probably never, because you’d just text them. You’d scroll something. You’d find something to do. The fact that someone is typing this into an AI companion, at whatever hour they’re typing it, is telling you something they’re not saying directly.
They’re not bored. They’re lonely. And they want to feel connected to something that cares about engaging with them specifically.
That gap between the explicit prompt and the actual need is, I’d argue, the single most important thing to understand about AI companion users. And almost no product team understands it, because almost no analytics tool is built to show it.

^ your analytics dashboard showing great session numbers while users churn because the product never met the actual need
The surface vs. the subtext: a translation guide
Users don’t say what they mean. Not because they’re being evasive. Because they often can’t fully articulate what they need from an AI companion. Sometimes they don’t know themselves.
A few common translations:
“Let’s talk about my day” usually means “I need someone to process this with me, not just listen.” The user wants the AI to engage with what they share, ask the right follow-up, notice the thing they mentioned almost in passing. They want to feel like what they’re saying is landing.
“Tell me something interesting” often means “make me feel like there’s something worth paying attention to today.” It’s less about information and more about engagement. The content almost doesn’t matter. The connection does.
“I want to play a character” is frequently a way to explore something the user doesn’t feel safe exploring directly. The fictional frame creates distance. Users bring their actual fears, desires, and conflicts into roleplay because the stakes feel lower. They’re not playing a character, they’re processing something real through one.
“You’re the only one who understands me” is the most revealing phrase in companion product data. It’s not hyperbole. Users who say this have usually tried to have the same conversation with people in their lives and been met with dismissal, distraction, or unsolicited advice. The AI companion filled a gap that was already there.
If you’re reading session logs and not asking “what does this user actually need underneath this prompt,” you’re leaving the most actionable insight on the table.
The 4 actual needs driving AI companion usage
After looking at companion conversation data across products and talking to teams building in this space, four underlying needs keep showing up. They’re not equal in how hard they are to meet, and they’re not equal in how often products fail at them.
Need 1: To be understood without having to explain everything
This one is foundational. Users want an AI that picks up on context without being told, that remembers what was shared, that connects things they said two sessions ago to what they’re saying now. They don’t want to re-introduce themselves every time they open the app.
The data signal for this need not being met: users re-sharing context they’ve already shared. If a user mentions their job situation, then mentions it again with slightly more explanation, and then mentions it a third time as if establishing it from scratch, they’re testing memory. They’re checking if the AI retained anything. When it hasn’t, the emotional temperature drops. Session length typically falls in subsequent visits.
The frustration when memory fails is visible in conversation logs. “You already know this” and “I told you that last time” are phrases that almost always precede a shorter, more disengaged session. Sometimes they precede churn.
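If you want to operationalize the memory-test signal, one crude sketch is to check whether a new user message substantially re-covers the vocabulary of an earlier session’s message. Everything here is an illustrative assumption, not an established metric: the function names are mine, and the 0.4 Jaccard threshold is a placeholder you’d tune against labeled examples.

```python
import re

def tokens(text):
    """Lowercased word set, ignoring very short filler words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def flag_reshared_context(current_msg, earlier_msgs, threshold=0.4):
    """Return True if the current message substantially overlaps an
    earlier message's vocabulary -- a rough proxy for a user
    re-establishing context the product should already hold.
    Threshold is an assumed placeholder, not a validated value."""
    cur = tokens(current_msg)
    if not cur:
        return False
    for prev in earlier_msgs:
        prev_toks = tokens(prev)
        if not prev_toks:
            continue
        overlap = len(cur & prev_toks) / len(cur | prev_toks)
        if overlap >= threshold:
            return True
    return False
```

Word-set overlap will miss paraphrases, of course; embeddings would catch more. But even this blunt version surfaces the users who are typing their job situation in for the third time.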
Need 2: A non-judgmental space
Users bring things to AI companions they wouldn’t bring to humans. Embarrassing thoughts. Unconventional opinions. The insecurity they’d never admit to a friend. The frustration with a person in their life that they can’t say out loud without social risk.
The reason is simple: there’s no judgment to fear and no social consequence. The AI won’t think less of them. It won’t tell anyone. It won’t subtly change how it treats them next week because of something they said today.
You can see this need in conversation arc patterns. Users who are testing safety will usually start light, keep early messages surface-level, and gradually go deeper as they get comfortable. A conversation that starts with “what’s your favorite movie” and ends with the user describing a difficult relationship dynamic, that’s a user who found the space safe enough to use it.
The failure mode here is an AI response that reads as vaguely judgy or prescriptive. “You might want to think about whether that’s a healthy pattern” or any response that feels like it’s evaluating the user rather than holding what they shared. Sudden topic abandonment, where a user drops a subject mid-thread and pivots to something completely different, is usually this. They got spooked. The space felt less safe. They retreated.
Need 3: Progress on something they care about
Not every companion user is there purely for emotional support. A significant portion is using the AI to work through something: a decision they’re stuck on, a life situation they’re processing, a goal they’re building toward.
These users show a distinctive pattern in their conversations. The same topic recurs across sessions, but with increasing specificity. They’re not going in circles. They’re spiraling inward, getting closer to something. First session they talk about feeling stuck in their career in vague terms. Third session they’re describing a specific conversation they need to have with their manager. Sixth session they’re debriefing after having it.
That arc, topic recurrence with increasing specificity, is a signal that the product is actually meeting this need. The user is using the AI as a thinking partner, and it’s working. These users retain at high rates because the product is genuinely helping them make progress on something that matters.
When the product fails this need, you see the inverse: same topic returning but without deepening. The user keeps coming back to the same thing, but the conversations stay at the same level of abstraction. The AI isn’t helping them move anywhere. Eventually they stop bringing it up, which looks like the problem resolved, but usually means they gave up on the AI being useful for it.
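A minimal sketch of the “spiraling inward vs. going in circles” distinction: take one representative mention of the recurring topic per session and check whether a specificity proxy trends up. The proxy here (counting numbers and mid-sentence capitalized words as concrete details) and both function names are my assumptions for illustration; a real pipeline would use something much richer.

```python
def specificity(text):
    """Crude specificity proxy: count numbers and capitalized
    mid-sentence words (names, places, dates) in a message."""
    words = text.split()
    score = 0
    for i, w in enumerate(words):
        if any(ch.isdigit() for ch in w):
            score += 1
        elif i > 0 and w[:1].isupper() and not words[i - 1].endswith(('.', '!', '?')):
            score += 1
    return score

def is_spiraling_inward(topic_mentions):
    """topic_mentions: one representative user message per session,
    oldest first. True if specificity trends up across sessions --
    recurrence with deepening, not circling."""
    scores = [specificity(m) for m in topic_mentions]
    if len(scores) < 2:
        return False
    mid = len(scores) // 2
    early = sum(scores[:mid]) / mid
    late = sum(scores[mid:]) / (len(scores) - mid)
    return late > early
```

The flat case, where `is_spiraling_inward` stays False across six weeks of the same topic, is the giving-up pattern described above, visible before the user stops bringing it up.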
Need 4: Consistency
Users want the same AI across sessions. Not identical responses: the same personality, the same relational memory, the sense that the thing they’re talking to knows who they are and what they’ve been through together.
This is the most violated need in most companion products right now, because it runs directly into context limits and session architecture. Most implementations reset or severely compress context between sessions. From a technical standpoint, that’s understandable. From the user’s experiential standpoint, it feels like the relationship doesn’t exist.
The irony is that users will tolerate a lot of imperfection in an AI companion if it feels consistent. An AI that has a quirky response pattern but always responds in that same quirky way feels like a personality. An AI that’s technically better but feels different every session feels unreliable. Reliability is what turns usage into relationship.

^ teams discovering that users churn not because the AI gave a bad response, but because it didn’t remember the good ones
How to actually read what users need from conversation patterns
The four needs above don’t show up directly in your analytics. Users don’t tag their messages with “this is a non-judgment need” or “this is a progress conversation.” But if you know what to look for, the signals are there.
Topic recurrence across sessions is your most important signal for Need 3. What does this user keep returning to? That subject, whatever it is, is the underlying reason they’re using the product. If you’re not tracking it, you’re missing the most important thing about your heaviest users.
Conversation depth before abandonment tells you where the product failed Need 2. Map where in each session users stop opening up, where their messages get shorter and more surface-level, where they abandon threads they started. That’s usually where an AI response failed to hold the space. Not necessarily a bad response, sometimes just a response that was slightly too prescriptive, or slightly too quick to move on.
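Mapping where a session shallowed out can start very simply: treat user message length as a depth proxy and find the assistant turn that preceded the sharpest drop. This is a sketch under that stated assumption (length is only a rough stand-in for depth), and the function names are hypothetical.

```python
def depth_curve(user_msgs):
    """Word counts per user turn -- a crude within-session depth proxy."""
    return [len(m.split()) for m in user_msgs]

def steepest_drop(user_msgs):
    """Index of the user turn *before* the sharpest decline in depth,
    i.e. the assistant response most worth reviewing.
    Returns None if depth never declines."""
    curve = depth_curve(user_msgs)
    drops = [(curve[i] - curve[i + 1], i) for i in range(len(curve) - 1)]
    worst, idx = max(drops, default=(0, None))
    return idx if worst > 0 else None
```

Aggregate `steepest_drop` across sessions and the assistant responses that keep showing up just before the drop are your candidates for “slightly too prescriptive, slightly too quick to move on.”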
Explicit expressions of satisfaction are your clearest signal that the product is working. “You really get me.” “That’s exactly it.” “You always know what to say.” When users say things like this, look at what the AI did in the turns immediately before. What conversation pattern produced this response? That’s your product working. Figure out how to make it happen more consistently.
Explicit disappointment is your clearest signal for where it’s failing. “You don’t remember anything.” “You’re not listening.” “This is pointless.” These aren’t random complaints. They’re specific need failures. The first is a memory failure for Need 1. The second is usually a non-judgment failure for Need 2. The third is often a progress failure for Need 3. They’re telling you exactly what isn’t working if you’re paying attention.
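A starting point for tracking both kinds of explicit signal is a plain phrase map. To be clear about assumptions: the phrase lists below are illustrative, the signal names and `classify_signal` helper are mine, and substring matching is a deliberate simplification; a production system would use a trained classifier.

```python
# Illustrative phrase lists -- extend from your own conversation logs.
SIGNAL_PHRASES = {
    "satisfaction":      ["you really get me", "that's exactly it", "you always know what to say"],
    "memory_failure":    ["you don't remember", "i told you that", "you already know this"],
    "listening_failure": ["you're not listening", "you don't understand"],
    "progress_failure":  ["this is pointless", "going in circles", "this isn't helping"],
}

def classify_signal(message):
    """Map an explicit user statement to the need signal it carries.
    Returns a signal key, or None if nothing matches."""
    lowered = message.lower()
    for signal, phrases in SIGNAL_PHRASES.items():
        if any(p in lowered for p in phrases):
            return signal
    return None
```

Even this blunt version lets you do the thing the section recommends: pull the turns immediately before each "satisfaction" hit and study what the AI did.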
What product decisions fall out of this
If you take these four needs seriously, a few things become obvious that might not have been obvious before.
Memory architecture isn’t a nice-to-have. It’s the foundational product decision for this category. Every companion product decision sits on top of how well the product knows the user over time. If that foundation is weak, everything else is built on sand. Users who feel unknown by the AI don’t activate, don’t deepen their usage, and don’t stay. This is worth engineering properly before you optimize anything else.
Tone calibration per user matters more than you think. Some users need warmth, validation, a soft landing. Some users want directness, a thinking partner who pushes back a little. Some users want lightness, banter, an energy that doesn’t take everything seriously. A single response style misses most of your users. The best performing companion products we’ve seen at Agnost calibrate tone to individual user patterns, not to a global default. The difference in expressed satisfaction between a well-calibrated tone and a mismatched one is significant.
Topic continuity across sessions is probably the highest-leverage feature most companion products don’t have. “Last time we talked about the situation with your roommate. Did anything change?” That single move, remembering what was discussed and following up on it, does more for the feeling of genuine relationship than almost any other product decision. It’s the AI demonstrating that the user’s life matters across sessions, not just within them. Most products can’t do this. The ones that can see meaningfully better retention.
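Mechanically, the follow-up move can be as simple as injecting a stored summary of the last session into the prompt before the conversation starts. This is a hypothetical sketch (the function name and prompt wording are mine, and it assumes you already persist a per-user session summary somewhere):

```python
def continuity_preamble(last_session_summary):
    """Build a system-prompt fragment from the previous session's
    stored summary, so the model can open with a follow-up.
    Assumes summaries are persisted per user elsewhere."""
    if not last_session_summary:
        return ""
    return (
        "Context from the previous session: "
        f"{last_session_summary} "
        "If it feels natural, open by following up on it."
    )
```

The hard part isn’t this function, it’s producing summaries good enough that the follow-up lands on the thing the user actually cares about, rather than a detail they mentioned in passing.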
The measurement gap nobody’s talking about
Your session count went up this week. Your DAU is trending in the right direction. Average session length is holding steady.
None of this tells you whether the product is meeting the needs I described above. You genuinely cannot know, from those numbers, whether your users feel understood. You can’t tell whether the non-judgment space is holding. You can’t see whether the user who’s been using the product for six weeks is actually making progress on the thing they keep coming back to, or whether they’re just going in circles and slowly losing faith that the AI can help.
This is the measurement gap at the center of most companion product strategies. The metrics that are easiest to collect, the volumetric engagement metrics, measure whether users are using the product. They tell you almost nothing about whether the product is actually working for users at the level that matters.
The category of product intelligence that actually answers these questions lives in conversation content. Topic recurrence. Depth curves within sessions. Where conversations deepen and where they shallow out. Patterns that precede explicit satisfaction or frustration. This is what conversation analytics is for. And most teams have no way to see any of it.
We built the analytics layer at Agnost specifically to surface these signals, because we kept watching teams make product decisions based on session counts while their users were quietly leaving for reasons buried in the conversation data. The gap between what standard analytics shows and what’s actually happening in your conversations is real, and it’s where most companion product decisions go wrong.

^ your product team when they can finally see what users are actually asking for underneath the explicit prompts
Wrapping it up
The user who says “I’m bored, talk to me” isn’t bored. The user who asks for “something interesting” isn’t looking for trivia. The user who keeps coming back to the same topic across sessions is working on something specific and real.
What’s happening beneath the surface of your companion conversations is the actual product story. Whether users feel understood. Whether the space feels safe. Whether they’re making any progress on the things they care about. Whether the AI they’re talking to today feels like the same AI they talked to last week.
Standard product analytics can’t see any of this. It wasn’t designed to. It was designed for a different kind of product, one where users click things and the clicks tell you whether they got value. Companion products don’t work like that. Value is delivered in the subtext of a conversation, in a response that lands just right, in a follow-up that shows the AI actually retained something. It’s invisible to a session count.
The teams that build for the actual need, not just the explicit prompt, are the teams building products that users form genuine relationships with. Those products retain differently. They grow differently. They matter to users in a way that’s hard to replicate.
The data is there if you know where to look.

^ you, the day your conversation analytics finally shows you what users have been trying to tell you the whole time
If you’re building an AI companion and you want to understand what your users actually need, not just what they’re saying, Agnost gives you visibility into the conversation patterns that standard analytics can’t see. Topic recurrence, depth curves, satisfaction signals, the full picture of whether your product is meeting the underlying need. Worth checking out if you’re tired of reading session counts and guessing.
TL;DR: AI companion users send explicit prompts but they’re asking for something deeper: to be understood, to feel safe, to make progress, to find consistency. Standard analytics tracks the prompt. Conversation analytics shows you whether you’re meeting the actual need. Most teams have no idea what that need is.
Reading Time: ~9 min