Entry #11 · Apr 14, 2026

Your customers are asking AI about you right now. You just can’t see it. We can.

We’ve spent three months building an AI site for Genymotion, an Android emulator company. The AI site makes their product show up in ChatGPT, Claude, and Perplexity answers. Citation rate went from 14% to 83%. We’ve written ten articles about the technical side: crawler behavior, sitemap structures, bot architectures.

This article is different. This one is about what we’re learning about the people who ask the queries.

Something new is happening

People are evaluating products through AI now. They ask: “What’s the best Android emulator?” “How much does it cost?” “Does it work on my Mac?” These conversations happen in ChatGPT, Claude, and Perplexity, and they often end with a recommendation that pushes the user in one direction or another. And the company whose product is being discussed has no idea it happened.

When people use AI answer engines, you see no page views and no form fills. Your Google Analytics sees nothing. The conversation happens and a decision gets made in a place the company can’t see.

We think we are able to see a significant part of that traffic.

Two windows into the same audience

We have two data sources for Genymotion, and together they give us the view of this audience that we’re sharing here.

The Rozz chatbot sits on Genymotion’s website and captures what visitors ask. It’s an answer engine that knows the website, the help center, and the product documentation. In March it handled 667 conversations representing over 1,300 questions. Each conversation records what the person wanted to do, what problem they had, and whether they got an answer. This channel is direct: the user types a question, the chatbot answers.

The AI site sits at rozz.genymotion.com and serves structured content to AI platforms. When someone asks ChatGPT about Genymotion and ChatGPT looks up a page from the AI site to answer, we see it in the CloudFront logs. We don’t see the user’s question directly, but we see which pages were fetched. And since the pages are Q&A pages generated from real chatbot conversations, literally hundreds of them, the page titles are the questions themselves. By clustering fetches by timing and IP, we can reconstruct multi-turn sessions.

We’re not saying that every ChatGPT conversation about Genymotion hits the AI site. Many are likely answered from training data alone or via the search index. But we still see a lot of queries: 3,830 fetches in March, grouped into over 2,500 sessions. What we see are the conversations where ChatGPT needed to look something up. Those tend to be the specific, current questions: pricing, compatibility, recent releases.
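The clustering step can be sketched in a few lines. This is a simplified illustration, not Rozz’s actual pipeline: the log entries, field layout, and the 30-minute gap threshold are all assumptions made for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical fetch records: (client IP, UTC timestamp, fetched page path).
# Real CloudFront logs carry many more fields; these three are enough
# to demonstrate the grouping idea.
LOGS = [
    ("203.0.113.7", "2026-03-02T09:14:05", "/qa/free-personal-use"),
    ("203.0.113.7", "2026-03-02T09:15:40", "/qa/saas-vs-desktop-pricing"),
    ("203.0.113.7", "2026-03-02T09:18:02", "/qa/pricing-breakdown"),
    ("198.51.100.9", "2026-03-02T09:14:30", "/qa/macos-availability"),
]

# Assumed session boundary: a gap longer than this starts a new session.
SESSION_GAP = timedelta(minutes=30)

def reconstruct_sessions(logs):
    """Cluster fetches into sessions: same IP, consecutive gaps under SESSION_GAP."""
    by_ip = defaultdict(list)
    for ip, ts, page in logs:
        by_ip[ip].append((datetime.fromisoformat(ts), page))
    sessions = []
    for ip, fetches in by_ip.items():
        fetches.sort()  # chronological order per IP
        current = [fetches[0]]
        for prev, nxt in zip(fetches, fetches[1:]):
            if nxt[0] - prev[0] > SESSION_GAP:
                sessions.append((ip, current))
                current = []
            current.append(nxt)
        sessions.append((ip, current))
    return sessions

for ip, fetches in reconstruct_sessions(LOGS):
    pages = [page for _, page in fetches]
    print(ip, len(pages), "pages:", " -> ".join(pages))
```

Because each AI-site page title is itself a question, the ordered page list per session reads like a transcript of what the user asked, which is what makes the Madrid and West Coast narratives below reconstructible.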

What we’re seeing in the ChatGPT sessions

This week, ChatGPT fetched 1,323 pages from the AI site during approximately 500 live user sessions. Here are some of the sessions we reconstructed.

A pricing evaluation from Madrid.

Eleven pages fetched across five rounds, about five minutes total. The session started with root access topics, then moved to the free personal use Q&A, then the SaaS vs Desktop pricing comparison, then the full pricing breakdown. The page sequence tells a story: someone went from exploring the product to asking “can I get it free?” to “ok, what does it actually cost?”

A 22-minute session from the US West Coast.

Four rounds. A 17-minute gap between rounds two and three. The last page fetched was the pricing Q&A. We don’t know what happened during those 17 minutes, but the session ended on the purchase question.

The same macOS question, twice, from two continents.

Two independent sessions on the same day — one from the US, one from Europe — both fetched the “Is Genymotion available for macOS?” Q&A page. This question appears in the logs daily. ChatGPT doesn’t seem to know the answer from training and looks it up every time.

Three VirtualBox bug reports on the same day.

Three separate sessions, three different regions, all asking about the same problem: “I upgraded VirtualBox and Genymotion no longer works.” This could be an interesting product signal for the support team.

A CLI runbook session.

Eleven pages, four rounds. The user pulled both the gmtool and gmsaas CLI runbooks, the step-by-step command references we added to the AI site specifically for developer tooling workflows. We hope coding tools will use these runbooks more, so we can develop them into a sales channel.

What 667 chatbot conversations reveal

The ChatGPT session data shows us what questions the AI is looking up. The chatbot data on the website shows us something different: who’s actually visiting, and what they want to do.

Genymotion’s marketing targets enterprise buyers. CI/CD automation. Mobile security testing. Cloud deployment at scale. Here’s what people actually told the chatbot they wanted to do in March:

User goal · Conversations
Use Genymotion Cloud · 14
Get a free license key · 9
Know prices and billing plans · 9
Install on Windows step by step · 7
Set the language to Chinese · 6
Install on PC to use Instagram · 5
Play Minecraft on Steam · 3
Run TikTok on PC · 3
Download eFootball · 2
Update WhatsApp · 2

And the enterprise-buyer goals?

User goal · Conversations
Create an API token in the SaaS portal · 4
Perform native MITM with TLS interception · 4
Run automated UI tests · 2
QR provisioning for a company-owned device · 1

Out of 667 conversations, fewer than 20 showed clear commercial intent. The chatbot had a 95.5% satisfaction rate and 88.7% resolution rate — it works well. But it’s mostly answering questions from hobbyists and individual users trying to run mobile apps on their PC. The enterprise audience the marketing team targets represents less than 5% of the conversations.

We wouldn’t have known this without the chatbot data. The website analytics would show page views. The chatbot shows intent.

What we think this means

There are two things happening here that we think matter beyond this case study.

AI is THE discovery and evaluation channel, and it’s invisible to most companies.

The pricing evaluation from Madrid, the 22-minute deliberation from the US West Coast, the macOS compatibility checks happening daily from multiple continents: these are real product evaluations. Some of them lead to purchases, some don’t. But the company has no visibility into them without an AI site generating logs.

This is new. A year ago, these conversations would have been Google searches that showed up in analytics. Now they’re AI conversations that might not lead to any traffic on your human site, but that we can notice (at least in part) on the AI site.

The combination of chatbot + AI site creates a feedback loop that a website alone can’t provide.

The chatbot tells you who’s visiting your website and what they want. The AI site tells you what questions AI platforms are looking up about you. Together, they reveal what audience is actually showing up — and it might not be the one your marketing team is targeting.

What we’re trying next

Now that we can see the conversations, the question becomes: can we influence them?

If 95% of AI-mediated conversations are about free usage and mobile gaming, and 5% are about enterprise CI/CD, is there a way to shift that ratio? Can the AI site prioritize business use cases so that AI platforms are more likely to recommend Genymotion for enterprise workflows rather than just confirming it exists for hobbyists?

We think so. We’re working on it now. That’s the next article.

Get this for your company

Rozz gives you visibility into the AI conversations happening about your product — and the tools to influence what AI recommends.

AI site + chatbot + analytics. One infrastructure that turns invisible AI conversations into a measurable channel.

$997/month | AI site + chatbot + analytics

Book a call  |  See how it works  |  rozz@rozz.site

Data source: Rozz chatbot conversation logs on Genymotion.com and CloudFront access logs for rozz.genymotion.com, March 1 – April 14, 2026. ChatGPT sessions reconstructed by clustering page fetches on IP and timing. Intent classification from the chatbot’s own conversation analytics.