Two channels, two audiences. The hobbyists are on your website. The buyers are on the AI.
Three weeks ago we ended an article with a question. If 95% of AI-mediated conversations about Genymotion are about free usage and mobile gaming, can we shift that ratio toward enterprise? We made changes to the AI site and we just looked at the data.
Traffic on the website chatbot hasn’t shifted. On the AI site, it has. Buyer/professional content reads went from 35.9% to 45.0% (+9.1pp), while the chatbot stayed flat at a 7–8% buyer rate. Two channels, two audiences.
Where we left off
In article #11 we shared traffic numbers. On the Genymotion website, the on-site chatbot recorded 667 conversations in March. Fewer than 20 of them showed clear commercial intent, under 5%. The rest were people trying to run TikTok on their PC, set their device language to Chinese, play Minecraft on Steam. The marketing team targeted CI/CD teams, mobile security testers, and cloud-deployment buyers. Those audiences were less than 5% of the conversations the chatbot recorded.
That same article also showed a different pattern on the AI side. A pricing evaluation from Madrid: eleven ChatGPT-User fetches across five rounds, ending on the full pricing breakdown. A late-night deliberation from the US. macOS compatibility checks from multiple continents. These were buyer-pattern sessions, all happening in ChatGPT. The website never recorded them.
We asked at the end of that article whether the AI site could push the ratio. Could we get AI platforms to recommend Genymotion to buyers, not just describe it to hobbyists?
We can now answer that with three more weeks of data. We compared a 14-day window in mid-March (the article #11 baseline) with a 14-day window from April 21 to May 4. Both for the chatbot on genymotion.com and for the AI site at rozz.genymotion.com.
Chatbot rate: flat
The Rozz chatbot now classifies every conversation with a buying_intent flag (yes/no). It’s a broader definition than the manual “clear commercial intent” we used in March, so the absolute numbers are higher. Applied to both windows, it gives us a comparable measure.
| 14-day window | Conversations | Classified | buying_intent='yes' | Rate |
|---|---|---|---|---|
| Mar 11 – 24 | 338 | 335 | 24 | 7.2% |
| Apr 21 – May 4 | 273 | 210 | 17 | 8.1% |
Flat, plus or minus a percentage point. Classifier coverage was partial in the late-April window because the pipeline is still rolling out, but it applied uniformly to whatever it classified, so the rates are comparable.
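As a sketch of how the window rate is computed (field names are illustrative, not the actual pipeline schema): unclassified conversations are excluded from the denominator, which is what keeps the two windows comparable despite partial classifier coverage.

```python
def buyer_rate(conversations):
    """Rate of buying_intent='yes' among classified conversations only.

    Conversations the classifier hasn't reached yet (coverage is partial
    during rollout) are excluded from the denominator, so the rate stays
    comparable across windows.
    """
    classified = [c for c in conversations if c.get("buying_intent") is not None]
    buyers = [c for c in classified if c["buying_intent"] == "yes"]
    return len(buyers) / len(classified) if classified else 0.0

# Mar 11-24 window: 338 conversations, 335 classified, 24 buyers
march = ([{"buying_intent": "yes"}] * 24
         + [{"buying_intent": "no"}] * 311
         + [{}] * 3)  # 3 unclassified
print(round(buyer_rate(march) * 100, 1))  # 7.2
```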
The broader monthly picture: February 7.9%, March 7.6%, April 10.3% (with classifier coverage still partial). The chatbot’s buyer rate has stayed between 7% and 10% for months. Same hobbyist questions about Minecraft, TikTok, and Chinese language packs. The website’s audience composition is essentially what it was when we wrote article #11.
If the goal was to convert the chatbot’s hobbyist audience into a buyer audience, that didn’t happen. The website is still mostly hobbyists.
The AI site shifted toward enterprise
We classified every page that ChatGPT-User fetched in both windows by what the page actually contains. Pricing pages, free-tier pages, and license pages count as “buyer / professional”. Install guides, system requirements, and “I can’t find the Play Store” content count as “free user”. We set aside discovery pages (the homepage, topic listings) because they’re navigation, not content.
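A minimal sketch of that path-based classification. The real mapping is maintained by hand; these keyword lists are examples drawn from the article, not the full rule set.

```python
# Illustrative classifier for AI-site content reads; keyword lists
# are examples only, not the production mapping.
BUYER_HINTS = ("pricing", "license", "cloud", "ci-cd", "security")
FREE_HINTS = ("install", "requirements", "play-store", "troubleshoot")
DISCOVERY = ("/", "/topics")  # navigation, excluded from both clusters

def classify_read(path: str) -> str:
    if path in DISCOVERY:
        return "discovery"
    p = path.lower()
    if any(h in p for h in BUYER_HINTS):
        return "buyer"
    if any(h in p for h in FREE_HINTS):
        return "free-user"
    return "other"

print(classify_read("/qna/what-pricing-plans-are-available-for-genymotion"))
# buyer
```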
| Audience cluster (content reads on the AI site) | Mar 11–24 | Apr 21–May 4 | Δ |
|---|---|---|---|
| Buyer / professional (pricing, cloud, CI-CD, security testing) | 264 (35.9%) | 385 (45.0%) | +9.1pp |
| Free-user (install help, troubleshooting, requirements, compatibility) | 294 (39.9%) | 270 (31.5%) | −8.4pp |
Six weeks ago, free-user content was the bigger bucket. Today, buyer/professional content is.
Some of this shift is supply-driven, by design: we added enterprise content between the two windows. The new security-testing page (use-burp-suite-with-genymotion-desktop) got 89 reads in 14 days; in March, that page didn’t exist. Cloud product pages grew from 48 to 71 reads. CI/CD content went from 5 to 12. If you publish more enterprise pages, AI bots will fetch more enterprise pages.
What isn’t supply-driven is what stayed flat. Reads of what-pricing-plans-are-available-for-genymotion were 42 in March, 47 in late April. The existing buyer pages aren’t being read dramatically more. The likely explanation: before the enterprise pages existed, ChatGPT-User had nothing on the AI site to fetch for enterprise queries. Once they existed, AI bots fetched them. On May 1–2, three separate sessions paired the new Burp Suite page with i-recently-upgraded-virtualbox-and-genymotion-no-longer-work. Mobile pen-testers were hitting a real bug.
What we changed on the AI site
In the six weeks between the two windows, we shipped changes to how the AI site ranks and presents Q&As. Our numbers indicate that ChatGPT-User fetches the homepage of rozz.genymotion.com for roughly 25% of its retrievals. We chose to change two things that AI platforms are known (or at least assumed) to read: what’s at the top of the page, and what’s inside the FAQPage JSON-LD.
Three weeks ago, the homepage led with “is genymotion free?” The FAQ was ranked by raw retrieval count from CloudFront logs. Whichever Q&As AI bots fetched most often got promoted. Hobbyist queries dominated, so hobbyist Q&As led the homepage. When AI platforms read the homepage to answer enterprise queries, they found mostly hobbyist content.
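For reference, the FAQPage JSON-LD that AI crawlers read follows schema.org’s structure. A minimal block looks like this (the question is taken from the article; the answer text is a placeholder):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What pricing plans are available for Genymotion?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer text goes here."
      }
    }
  ]
}
```

Whatever selector populates `mainEntity` decides what an AI platform sees first, which is why unifying it with the HTML FAQ mattered.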
We made three sets of changes.
Buying-intent ranking. The chatbot already classifies every conversation by buying_intent. We applied that signal to the AI site. If a Q&A’s origin conversation is tagged buyer, the Q&A gets a ranking boost in the FAQ selector. We started at +0.25 on Apr 17. The first round didn’t shift the rankings enough, so we raised it to +0.5 the same day. On Apr 20 we unified the HTML FAQ and the FAQPage JSON-LD to use the same boosted selector. Before that, they were picked independently. The JSON-LD just took the first 10 Q&As in database order. That was wrong. The JSON-LD that AI bots read directly was the least curated part of the site.
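A sketch of the unified selector with the buying-intent boost. Function and field names are illustrative, and we assume a base relevance score normalized to [0, 1] (the article only gives the boost values; a flat +0.5 implies a normalized base).

```python
BUYER_BOOST = 0.5  # started at +0.25 on Apr 17, raised to +0.5 the same day

def rank_qnas(qnas, top_n=10):
    """Single ranking used by BOTH the HTML FAQ and the FAQPage JSON-LD.

    Before Apr 20 the JSON-LD took the first 10 rows in database order;
    unifying both surfaces on one boosted selector fixed that.
    """
    def rank_score(q):
        boost = BUYER_BOOST if q.get("buying_intent") == "yes" else 0.0
        return q["score"] + boost  # "score": normalized retrieval signal

    return sorted(qnas, key=rank_score, reverse=True)[:top_n]

qnas = [
    {"slug": "is-genymotion-free", "score": 0.9, "buying_intent": "no"},
    {"slug": "pricing-plans", "score": 0.6, "buying_intent": "yes"},
]
print([q["slug"] for q in rank_qnas(qnas)])
# ['pricing-plans', 'is-genymotion-free']
```

The boost lets a buyer-tagged Q&A with moderate traffic outrank a hobbyist Q&A with heavy traffic, which is exactly the ratio shift we were after.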
A “For Enterprise Buyers” section, then a reversal. On Apr 17 we also added a dedicated “For Enterprise Buyers” section to the homepage and llms.txt. Five days later we audited what it actually contained. Only ~27% of the Q&As in that section were actually about enterprise topics. The rest were consumer content the buying_intent classifier had over-tagged: install/uninstall Linux, “light use not gaming,” Bluestacks comparisons. We removed the section on Apr 22. Labeling enterprise content has a precision problem when the upstream signal is noisy. Ranking the same content higher in the shared FAQ doesn’t have that risk. A borderline Q&A that gets boosted is still inside a generic FAQ. It’s not under a heading that overstates its category.
Editorial overrides. On Apr 21 we shipped two manual controls. One pins specific Q&As to the top of the FAQ. The other promotes specific topics to the front of the topic directory. The Genymotion homepage now leads with Network & Security Config, Mobile Test Automation, CI/CD Automation, and Cloud Deployment Options. That ordering is a curated decision, not an algorithmic one. The algorithm picks the long tail. Editors pick the front.
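The pin override can be sketched as a pass on top of the algorithmic ranking (names illustrative): pinned Q&As come first in the editor’s order, and the ranked list fills in behind them.

```python
def apply_pins(ranked, pinned_slugs):
    """Editors pick the front; the algorithm picks the long tail.

    ranked: Q&As in algorithmic order; pinned_slugs: editor-chosen slugs.
    """
    by_slug = {q["slug"]: q for q in ranked}
    pinned = [by_slug[s] for s in pinned_slugs if s in by_slug]
    rest = [q for q in ranked if q["slug"] not in set(pinned_slugs)]
    return pinned + rest

ranked = [{"slug": "is-genymotion-free"},
          {"slug": "minecraft-on-steam"},
          {"slug": "ci-cd-automation"}]
print([q["slug"] for q in apply_pins(ranked, ["ci-cd-automation"])])
# ['ci-cd-automation', 'is-genymotion-free', 'minecraft-on-steam']
```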
Two structural cleanups also mattered. Each Q&A used to appear in 4–7 topic cards because of keyword-based inheritance. 86% of Q&As were duplicated across multiple topics. A new classifier (Apr 22) puts each Q&A in 1–2 canonical buckets. Duplication is now at 2%. And on Apr 23 the topic taxonomy became persistent across crawls. URLs stopped changing on every regeneration. That’s the URL-churn problem we wrote about in article #12.
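The duplication figure (86% down to 2%) is simple to measure: the share of Q&As assigned to more than one topic card. A sketch, with illustrative field names:

```python
from collections import Counter

def duplication_rate(assignments):
    """Share of Q&As appearing in more than one topic.

    assignments: list of (qna_slug, topic_slug) pairs.
    """
    per_qna = Counter(slug for slug, _ in assignments)
    duplicated = sum(1 for n in per_qna.values() if n > 1)
    return duplicated / len(per_qna) if per_qna else 0.0

# Keyword inheritance put the same Q&A in several topics:
old = [("q1", "gaming"), ("q1", "install"), ("q1", "windows"),
       ("q2", "pricing")]
print(duplication_rate(old))  # 0.5
```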
Claude Code activity grew
In article #9 we documented the first ever Claude-User session on the AI site: 14 requests over six days in late March, mostly Claude Code. We treated it as an early signal.
The signal continued. In the past two weeks we logged 26 Claude-User hits. 24 came from Claude Code. Two sessions are notable:
| May 1, 19:49 UTC | May 2, 10:28 UTC |
|---|---|
| 6 fetches in 46 seconds | 19 fetches in 70 seconds |
| Index → Cloud Deployment Options → QnA index → what-pricing-plans-are-available | Index → Cloud Deployment Options → Virtual Device Management → Android Dev Integration → costs-of-cloud-and-billing → can-i-run-my-apk → arm-support-saas → credit-card-trial → simultaneous-devices → saas-vs-desktop → how-can-i-run-locally → cloud-marketplace-pricing → gpu-arm-support → bluetooth → desktop-requirements |
The second session matches a procurement evaluation pattern. Cloud deployment, billing model, ARM-on-SaaS, trial requirements, SaaS-versus-Desktop economics, marketplace pricing. Someone (or something acting on behalf of someone) traversed the site in the order a procurement evaluator would: cost, scaling, technical compatibility, cost again from a different angle.
In the entire 14-day window, Claude-User fetched zero install-help pages, zero troubleshooting pages, zero “can’t find” content. Of 11 actual content reads (excluding navigation), 50% were enterprise-evaluation pages and 33% were cloud-product pages. The sample is small (n=11), but it’s consistent with every other measurement.
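Sessions like the two above are reconstructed from raw CloudFront log lines by grouping an agent’s consecutive fetches. A sketch, assuming a 5-minute idle gap as the session boundary (our illustrative choice; CloudFront provides no session concept):

```python
from datetime import datetime, timedelta

GAP = timedelta(minutes=5)  # assumed idle threshold between sessions

def group_sessions(hits):
    """hits: list of (timestamp, path) for one user-agent, sorted by time.

    Starts a new session whenever the gap since the previous request
    exceeds GAP.
    """
    sessions, current = [], []
    for ts, path in hits:
        if current and ts - current[-1][0] > GAP:
            sessions.append(current)
            current = []
        current.append((ts, path))
    if current:
        sessions.append(current)
    return sessions

t0 = datetime(2026, 5, 1, 19, 49)
hits = [(t0, "/"),
        (t0 + timedelta(seconds=20), "/topics/cloud-deployment-options"),
        (t0 + timedelta(hours=1), "/qna/what-pricing-plans-are-available")]
print(len(group_sessions(hits)))  # 2
```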
So, did the buyer ratio shift?
It depends on the channel. The AI site shifted. The website didn’t. The two channels produced different results.
The genymotion.com website still draws its old audience. Hobbyists, gamers, language-pack tweakers, the same 7–8% buyer rate as before. Nothing the AI site did changed who arrives at genymotion.com.
On the AI side, different sessions are now appearing: the Claude Code procurement session, the mobile pen-tester pairing Burp Suite with VirtualBox troubleshooting. They asked an AI a question. The AI fetched the AI site. We’re now trying various ways to trace actual conversions — stay tuned!
This is what’s new. The AI site is read by an audience that barely registers in the website’s numbers, where free users dwarf everyone else. The marketing team’s enterprise targets (CI/CD, mobile security, cloud at scale) do appear in the AI site’s logs.
Buyers query AI platforms. AI platforms read the AI site to answer them.
What we cannot yet claim
We see what AI platforms read. We don’t see what they say. Did ChatGPT and Claude actually recommend Genymotion in their answers during those sessions we reconstructed? Did the buyer who asked “what cloud emulators integrate with our CI pipeline” get “Genymotion” as the answer? Those are separate measurements. The citation tracker handles them. We’ll come back to it later.
A sample-size note. Claude-User content reads are small. We’re calling it a representative observation, not a statistical claim. The ChatGPT-User numbers are much higher, with 856 content reads in 14 days. They show the same pattern.
One more honest note. We can’t say from this data that buyers stopped visiting the website. We can say that the buyer-pattern sessions we see in AI logs don’t appear in the chatbot logs. Would those buyers have visited the website in a pre-AI world? Is AI now substituting for that visit? We’d need cross-deployment data to answer with confidence.
Get this for your company
Rozz gives you visibility into the AI conversations happening about your product, and the tools to influence what AI recommends.
$997/month | AI site + chatbot + analytics
→ Book a call | → See how it works | → rozz@rozz.site
→ Data source: On-site chatbot logs for genymotion.com (buying_intent classifier) and CloudFront access logs for rozz.genymotion.com. Two 14-day windows: Mar 11 – 24, 2026 (baseline) and Apr 21 – May 4, 2026 (post-changes). ChatGPT-User and Claude-User reads classified by content audience cluster.