What the AI Site reveals about AI-mediated discovery
ChatGPT-User made 681 visits to Genymotion’s AI site in one week. By grouping those visits into sessions using IP hashes and timing, we can reconstruct how conversations in ChatGPT unfold.
We’ve spent several months tracking AI citations: running queries against the ChatGPT API, counting how often Genymotion appeared in responses, watching the trend. That method helps measure whether your content exists in the model’s internal knowledge and its retrieval index. This week we found a way to look from the user’s perspective.
Session data: March 3–10
| Metric | Value |
|---|---|
| ChatGPT-User visits | 681 |
| Reconstructed sessions | 168 |
| Content sessions (non-index) | 127 |
| Pages fetched | 587 |
| Avg pages per content session | 4.6 |
| Multi-turn sessions | 38 (30% of content sessions) |
| Unique questions identified | 109 |
The session reconstruction is imperfect: we use heuristics to infer behavior, but the patterns that emerge are coherent enough to be meaningful. If you have followed previous weeks, you may also have noticed that the number of ChatGPT-User visits has gone down; that’s because we refined how we count visits.
The signal: ChatGPT-User in CloudFront logs
ChatGPT-User is a documented OpenAI bot that fetches web pages during response generation. It’s distinct from GPTBot (which crawls content for training) and OAI-SearchBot (which builds the retrieval index). ChatGPT-User shows up at response time, when a human uses ChatGPT (not the API), and it queries the AI site in real time.
Because the AI site is hosted on CloudFront, every ChatGPT-User request is recorded in our logs with a timestamp, a URI, and an IP hash. We used those three pieces of information to reconstruct multi-turn ChatGPT sessions.
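Extracting those three fields and filtering to ChatGPT-User can be sketched like this. The parser reads CloudFront’s `#Fields:` header to locate columns rather than hard-coding positions; the sample log lines are fabricated for illustration, and the IP values stand in for hashes.

```python
from urllib.parse import unquote

def parse_cloudfront_lines(lines):
    """Yield one dict per CloudFront standard-log entry, using the
    #Fields: header to map column names to values."""
    fields = []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
            continue
        if line.startswith("#") or not fields:
            continue  # skip other comment lines and anything before the header
        yield dict(zip(fields, line.rstrip("\n").split("\t")))

def is_chatgpt_user(entry):
    # CloudFront percent-encodes the User-Agent; decode before matching.
    return "ChatGPT-User" in unquote(entry.get("cs(User-Agent)", ""))

# Fabricated two-entry log: one ChatGPT-User fetch, one ordinary browser hit
sample = [
    "#Fields: date time c-ip cs-uri-stem cs(User-Agent)",
    "2026-03-03\t10:00:01\tab12\t/index.html\tMozilla/5.0%20ChatGPT-User/1.0",
    "2026-03-03\t10:00:02\tcd34\t/index.html\tMozilla/5.0",
]
hits = [e for e in parse_cloudfront_lines(sample) if is_chatgpt_user(e)]
```

Parsing the header instead of assuming column order keeps the sketch robust if the log configuration changes.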
The result is a session model. Each session has one or more turns. Each turn corresponds to a fetch event in which ChatGPT-User pulls a set of pages to respond to something. We then analyze page contents to infer what the user was asking about at that turn.
Finding 1: ChatGPT-User fetches multiple pages per turn
A naive model of AI retrieval assumes one query maps to one fetch: the bot finds the best page, reads it, answers. The log data doesn’t support that. In the sessions we can reconstruct, ChatGPT-User pulls an average of 4.6 pages, typically in a tight burst, before moving on.
Pricing questions generated the most fragmented fetch patterns. Across the week, four near-duplicate Q&A pages covering pricing were fetched a combined 47 times:
| Page | Visits |
|---|---|
| what-pricing-plans-are-available-for-genymotion | 18 |
| what-are-genymotion-s-pricing-options | 15 |
| what-are-the-pricing-options-for-genymotion | 7 |
| what-are-the-costs-for-using-genymotion-saas | 7 |
The same pattern appeared for macOS compatibility (3 pages, 38 combined visits) and Google Play installation (4+ pages, 30+ combined visits).
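Rolling those near-duplicate slugs up into topic totals is straightforward; a sketch, where the topic-to-keyword mapping is an illustrative assumption rather than our production logic:

```python
from collections import Counter

# Illustrative topic -> slug-substring mapping (assumed, not exhaustive)
TOPIC_KEYWORDS = {
    "pricing": ("pricing", "costs"),
    "macos": ("macos",),
    "google-play": ("google-play", "play-store"),
}

def combined_visits(uris):
    """Sum visit counts across near-duplicate Q&A slugs that share a topic."""
    totals = Counter()
    for uri in uris:
        for topic, keys in TOPIC_KEYWORDS.items():
            if any(k in uri for k in keys):
                totals[topic] += 1
                break  # count each fetch under one topic only
    return totals

demo = combined_visits([
    "/what-pricing-plans-are-available-for-genymotion",
    "/what-are-the-costs-for-using-genymotion-saas",
    "/is-genymotion-compatible-with-macos",
])
```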
One interpretation: ChatGPT-User is verifying and consolidating across sources, not just reading the first relevant result. If that’s right, it validates an important aspect of our AI site design: covering multiple facets of a topic with distinct Q&A pages, phrased to match the varied ways users ask questions in the chatbot. This isn’t redundant information; it may be exactly what the bot is looking for, whether complementary detail or confirmation across sources.
Finding 2: 24% of sessions hit the index and stop
40 of 168 sessions fetched only /index.html and stopped. No Q&A pages, no content pages, no further navigation.
The previous index was an infrastructure page that listed API endpoints, content counts, and navigation links. But ChatGPT-User doesn’t use JSON APIs: it GETs HTML pages. When it arrived at that index without enough signal to decide which page to fetch next, the session ended on our end: no further turns or page fetches were visible, as if the bot had moved on.
So we adapted our index page to open with a product description and a topic directory with inline descriptive summaries, giving the bot enough context to proceed. Over the coming weeks we’ll check whether this reduces the share of index-only sessions.
Finding 3: 30% of content sessions are multi-turn
Nearly a third of content sessions involved a second or third fetch cluster attributable to the same session. In effect, we can follow a meaningful conversation… one that happened in ChatGPT, not in our own chatbot.
| Date | Turns | Pages | Duration | Fetch pattern |
|---|---|---|---|---|
| Mar 3 | 4 | 11 | 248s | Web emulator → installation → pricing → user guide |
| Mar 4 | 3 | 21 | 95s | macOS compatibility → pricing → Play Store setup |
| Mar 6 | 4 | 6 | 637s | SaaS templates → template count → index → user guide |
| Mar 7 | 2 | 9 | 1,377s | Linux install + KVM errors → ARM transition |
The March 4 session fetched 21 pages across three turns in 95 seconds. The March 7 session had a 23-minute gap between turns; we can almost see the user stepping away, trying something, and returning with a follow-up question.
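The per-session figures in the table above can be derived mechanically; a sketch, assuming each session is a list of turns and each turn a list of (timestamp, uri) fetches with datetime timestamps (the demo session and slugs are fabricated):

```python
from datetime import datetime

def session_stats(session):
    """Return (turns, pages, duration_seconds) for one reconstructed session.
    Duration runs from the first fetch to the last fetch."""
    fetches = [f for turn in session for f in turn]
    duration = (fetches[-1][0] - fetches[0][0]).total_seconds()
    return len(session), len(fetches), duration

# Fabricated two-turn session spanning 95 seconds
demo = session_stats([
    [(datetime(2026, 3, 4, 9, 0, 0), "/is-genymotion-compatible-with-macos")],
    [(datetime(2026, 3, 4, 9, 0, 40), "/what-are-genymotion-s-pricing-options"),
     (datetime(2026, 3, 4, 9, 1, 35), "/play-store-setup")],
])
```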
Multi-turn sessions reveal something that citation tracking (with metrics such as Share of Citations) can’t: the sequential nature of AI-mediated discovery. The fetch patterns show natural progressions: from compatibility to pricing to setup, or troubleshooting to related technical questions. If we get these right, what are the chances that the AI will recommend us?
Get This for Your Site
ROZZ builds this infrastructure automatically. AI site. Q&A pages from your chatbot. Schema.org markup on every page. Session analytics derived from weekly log analysis.
$997/month | 168 sessions reconstructed in one week
→ Book a call | → See how it works | → rozz@rozz.site
→ Data source: CloudFront access logs for rozz.genymotion.com, March 3–10, 2026. Session reconstruction based on IP hash grouping and timing heuristics. Bot classification based on User-Agent strings.