Entry #9 · Mar 31, 2026

Selling in Claude Code: From Pricing to Implementation in 10 Seconds with an AI Site

A developer asked Claude Code how much Genymotion costs. Ten seconds later, it was showing them how to set it up. Both answers came from our AI site. The pricing page and command-line instructions were structured for exactly this: clean, extractable content that Claude Code can fetch and act on in the same session.

The logs show what happened during a March 27 session hitting the Genymotion AI site. At 18:30:56 UTC, Claude Code fetched /qna/what-pricing-plans-are-available-for-genymotion.html. At 18:31:06 UTC, it fetched /runbooks/gmsaas-cli-runbook.html. No website visit. No marketing fluff. A developer evaluated a product and started implementing it inside the same terminal window. Claude-User has finally arrived on the AI site: 14 requests from 10 unique sessions in six days, and 12 of them came from Claude Code.

Key findings

  • Claude-User: 14 requests (Mar 25–30), first-ever direct Claude citations from the AI site
  • 12 of 14 requests came from Claude Code (user-agent: claude-code/2.1.83-84)
  • 2 of 14 came from claude.ai web search (Claude-User/1.0)
  • The pricing Q&A page was fetched 6 times from 6 different sessions
  • The gmsaas CLI runbook was fetched 3 times
  • Time from ClaudeBot mass crawl to first Claude-User retrieval: 5 days
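The data-source note at the end of this entry says bot classification is based on User-Agent strings. A minimal sketch of how that can work is below; the patterns and category labels are illustrative assumptions, not the exact rules behind these numbers.

```python
import re

# Illustrative User-Agent patterns for the bots in this report.
# Order matters: claude-code agents also contain "Claude-User",
# so the more specific pattern is checked first.
BOT_PATTERNS = [
    (r"claude-code/[\d.]+", "Claude Code", "Citation"),
    (r"Claude-User", "Claude-User (web)", "Citation"),
    (r"ChatGPT-User", "ChatGPT-User", "Citation"),
    (r"OAI-SearchBot", "OpenAI SearchBot", "Search index"),
    (r"GPTBot", "OpenAI GPTBot", "Training"),
    (r"ClaudeBot", "ClaudeBot", "Training"),
    (r"PerplexityBot", "PerplexityBot", "Search index"),
    (r"Bytespider", "ByteSpider", "Training"),
]

def classify(user_agent):
    """Return (bot_name, category) for a raw User-Agent, or None if unmatched."""
    for pattern, name, category in BOT_PATTERNS:
        if re.search(pattern, user_agent):
            return name, category
    return None
```

With these patterns, `classify("Claude-User (claude-code/2.1.84)")` resolves to Claude Code rather than generic Claude-User, which is the distinction the per-channel counts above depend on.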

The data

All LLM bots (Mar 24–31, 2026)

| Bot | Requests | Category | Change vs. prior week |
| --- | --- | --- | --- |
| ChatGPT-User | 1,008 | Citation | Stable (~1,200 plateau) |
| ByteSpider | 337 | Training | Up from 91 |
| OpenAI SearchBot | 123 | Search index | Stable |
| OpenAI GPTBot | 64 | Training | Up from 37 |
| ClaudeBot | 33 | Training | Down from 958 (post-crawl monitoring) |
| PerplexityBot | 24 | Search index | Stable (post-indexing) |
| Claude-User | 14 | Citation | First appearance |
| Meta AI | 9 | Training | Stable |

Claude-User content breakdown

| Content type | Requests | % |
| --- | --- | --- |
| Q&A page | 6 | 42.9% |
| Homepage (index) | 3 | 21.4% |
| Runbooks | 3 | 21.4% |
| GEO page | 1 | 7.1% |
| robots.txt | 1 | 7.1% |

What Claude Code is

Claude Code is Anthropic’s terminal-based AI coding assistant for reading, editing, debugging, and automating work inside codebases, and it has become one of the fastest-rising developer tools in 2026. Estimates put Claude Code at over 20 million daily installs, with some reports saying it exceeded 60% developer adoption within nine months.

The user-agent string in our logs confirms the tool and version: Claude-User (claude-code/2.1.83) and Claude-User (claude-code/2.1.84). Different developers running different versions of Claude Code, each showing up independently in the logs.
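Telling those versions apart is a one-line extraction. A hedged sketch, assuming the agent string shape shown in our logs:

```python
import re

def claude_code_version(user_agent):
    """Extract the Claude Code version from a Claude-User agent string.
    Returns None for non-Claude-Code agents, e.g. claude.ai web search."""
    m = re.search(r"claude-code/([\d.]+)", user_agent)
    return m.group(1) if m else None
```

Grouping requests by the extracted version ("2.1.83" vs. "2.1.84") is how we separate independent developers running different builds.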

Claude Code doesn’t operate like a web chatbot. A ChatGPT-User session happens in a browser tab. The developer reads the answer, then opens a new tab to visit the docs, then goes back to their editor. Claude Code collapses that into one step: the answer, the documentation, and the implementation all happen inside the terminal.

Using the terminal as a sales channel is, to our knowledge, genuinely new and incredibly powerful.

The sessions

Every Claude-User session tells a story.

Session D: 10 seconds (March 27)

| Time (UTC) | Page | User agent |
| --- | --- | --- |
| 18:30:56 | /qna/what-pricing-plans-are-available-for-genymotion.html | claude-code/2.1.84 |
| 18:31:06 | /runbooks/gmsaas-cli-runbook.html | claude-code/2.1.84 |

Two things made this possible.

First, the content is structured for AI retrieval. The pricing answer lives on a dedicated Q&A page with Schema.org QAPage markup—one question, one complete answer, extractable in a single fetch. But the page never reaches Claude Code directly. Claude Code’s WebFetch tool passes the HTML through Haiku, a small fast model that summarizes the content and enforces a 125-character limit on direct quotes. Haiku can’t reproduce the pricing table verbatim—it paraphrases, condenses, and sometimes invents helpful framing that wasn’t in the original. What Claude Code actually sees is a lossy summary, not the page itself. Without the structured Q&A format, that summary would be even more lossy—fragments from a marketing site with no clear answer to extract. With it, Haiku has a clean signal: one question, one complete answer, and it consistently preserves the pricing tiers even through summarization.
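For readers unfamiliar with QAPage markup, the shape is roughly the following. This is a minimal sketch of Schema.org QAPage JSON-LD in the "one question, one complete answer" structure described above; the answer text is a placeholder, not the live page's pricing content.

```python
import json

# One question, one accepted answer: the structure that gives a
# summarizing model a clean signal to preserve.
qa_page = {
    "@context": "https://schema.org",
    "@type": "QAPage",
    "mainEntity": {
        "@type": "Question",
        "name": "What pricing plans are available for Genymotion?",
        "answerCount": 1,
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Placeholder: the complete pricing answer goes here.",
        },
    },
}

# Embedded in the page head as a JSON-LD script tag.
json_ld = (
    '<script type="application/ld+json">\n'
    + json.dumps(qa_page, indent=2)
    + "\n</script>"
)
```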

Second, the AI site was deliberately built with Claude Code in mind. The gmsaas CLI runbook isn’t a page that existed on Genymotion’s main website. It’s a step-by-step command reference that Rozz generated specifically for developer tooling sessions—the kind of content a developer needs when they’re already in their terminal. The Q&A pages link to relevant runbooks, so an AI agent navigating from a pricing question can discover the implementation path. That’s why Claude Code went from “how much does it cost?” to “here are the setup commands” in 10 seconds: the content was designed to support that exact transition.

Session C: 12 minutes (March 26)

| Time (UTC) | Page | User agent |
| --- | --- | --- |
| 17:13:15 | /qna/what-pricing-plans-are-available-for-genymotion.html | claude-code/2.1.84 |
| 17:18:38 | /runbooks/gmsaas-cli-runbook.html | claude-code/2.1.84 |
| 17:25:28 | /index.html | claude-code/2.1.84 |

Session F: claude.ai web search (March 30)

| Time (UTC) | Page | User agent |
| --- | --- | --- |
| 07:59:59 | /robots.txt | Claude-User/1.0 |
| 08:00:00 | /runbooks/gmsaas-cli-runbook.html | Claude-User/1.0 |

The only non-Claude Code session. The user agent is Mozilla/5.0 ... compatible; Claude-User/1.0—claude.ai’s web search feature. Someone used Claude in their browser, asked about Genymotion’s CLI, and Claude’s web search fetched the runbook.

Two Claude channels are now open: Claude Code (developer terminal) and claude.ai (web chat).
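The sessions above were reconstructed from timestamped request logs, as the data-source note describes. A simplified sketch of that grouping: hits sharing a user agent and separated by less than a gap threshold fall into one session. The 30-minute threshold is an assumption, and the real reconstruction also leans on version matching to separate developers who share an agent string.

```python
from datetime import datetime, timedelta

GAP = timedelta(minutes=30)  # assumed session-gap threshold

def reconstruct_sessions(requests):
    """requests: time-sorted (iso_timestamp, user_agent, path) tuples.
    Returns a list of sessions, each a list of fetched paths."""
    sessions = []
    last_seen = {}  # user_agent -> (session_index, last_hit_time)
    for ts, ua, path in requests:
        t = datetime.fromisoformat(ts)
        if ua in last_seen and t - last_seen[ua][1] <= GAP:
            idx = last_seen[ua][0]      # continue the open session
        else:
            sessions.append([])         # start a new session
            idx = len(sessions) - 1
        sessions[idx].append(path)
        last_seen[ua] = (idx, t)
    return sessions
```

Run against Session D's two fetches, 10 seconds apart on the same claude-code/2.1.84 agent, this collapses them into a single session: pricing question, then implementation path.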

What this means for developer tools

The website is no longer the interface. The AI agent is.

For three months, ChatGPT-User has been the citation channel. Someone opens ChatGPT, asks about Android emulators, gets an answer citing Genymotion. That’s discovery and evaluation. The developer still needs to leave ChatGPT, visit the website, read the docs, go back to their editor, and start building.

Claude Code removes every step between “I want this” and “I’m building with this.” The developer is already in their codebase. When Claude Code fetches the pricing page, the answer appears in the terminal. When it fetches the CLI runbook, the setup commands are right there. The next step isn’t “open a browser.” The next step is writing the integration.

Your content is no longer read by users on a webpage. It is consumed by AI tools inside workflows—where the output is working code, not a bookmark to visit later.

This is what the AI site was built for. The pricing Q&A page answers the purchase question in structured, extractable format. The gmsaas CLI runbook provides step-by-step commands. The topic taxonomy links them so an AI agent can navigate between evaluation and implementation in a single session. Rozz built this infrastructure for Genymotion: dedicated Q&A pages, executable runbooks, Schema.org markup, and a topic structure that AI agents can traverse. Claude Code consumed it. So did ChatGPT-User. So did PerplexityBot.

The Claude pipeline is complete

Three stages. Each one observable in the logs.

| Stage | Bot | What happens |
| --- | --- | --- |
| 1. Crawl + index | ClaudeBot | Content collected, knowledge base updated |
| 2. Monitoring | ClaudeBot | Robots.txt + sitemap checks, freshness assessment |
| 3. Live retrieval | Claude-User | Real-time fetches during user conversations |

We’re now observing stage 3. The full timeline:

| Date | Event |
| --- | --- |
| Mar 2 | ClaudeBot sweeps topic pages (“reading the map”) |
| Mar 10–17 | ClaudeBot regresses to monitoring only (0 Q&A pages) |
| Mar 20 | ClaudeBot mass crawl: 577 requests triggered by per-topic sitemapindex |
| Mar 25 | Claude-User first appearance: pricing Q&A fetched from Claude Code |

Five days from mass crawl to live retrieval.

Claude-SearchBot: answered

In article 8, we asked why Claude-SearchBot had never visited the AI site. The answer is now clear: Anthropic doesn’t need a separate search bot. ClaudeBot handles both training and search indexing. Claude-User is retrieving content without Claude-SearchBot ever visiting.

| Role | OpenAI | Anthropic |
| --- | --- | --- |
| Training + indexing | GPTBot + OAI-SearchBot | ClaudeBot (one bot, both functions) |
| Live retrieval | ChatGPT-User | Claude-User |

Perplexity: 25% and climbing

Perplexity’s citation rate rose from 17% to 25%. PerplexityBot’s deep indexing event (March 10–15) is producing results two weeks later. The crawl-to-citation pattern has now been validated on three platforms:

| Platform | Deep indexing | First citations | Lag | Current citation rate |
| --- | --- | --- | --- | --- |
| ChatGPT | Jan 7 (547 req) | Late January | ~3 weeks | 83% |
| Perplexity | Mar 10–15 (511 req) | ~Mar 24 | ~2 weeks | 25% (up from 17%) |
| Claude | Mar 20 (577 req) | Mar 25 | 5 days | Early (14 requests) |

Where we are

Three months ago, one platform cited Genymotion: ChatGPT, at 14%.

Today:

ChatGPT: 83% citation rate. ~1,000 requests/week, stable. The proven channel.

Perplexity: 25% citation rate. Up from 17%. Growing.

Claude: live retrieval confirmed. 14 requests in 6 days, 12 from Claude Code. Developers evaluating and implementing inside their coding sessions.

Same infrastructure. Same content. Same Schema.org markup. Three platforms, three timelines, same outcome.

The next question: does Claude-User follow the same growth curve as ChatGPT-User? ChatGPT went from 42 requests in January to 1,200/week by March. Claude-User is at 14 in its first week. The developer audience using Claude Code may be smaller than ChatGPT’s general audience—but the sessions are higher intent. A developer asking about pricing inside their terminal is closer to a purchase decision than someone asking ChatGPT in a browser.

We’re iterating. Stay tuned.

Get This for Your Site

This is what the Genymotion AI site was built for. Structured Q&A pages that answer purchase questions. Executable runbooks with step-by-step commands. Clean, extractable content linked by topic taxonomy so AI agents can navigate from evaluation to implementation in one session.

Claude Code consumed that content. ChatGPT-User has been consuming it for months. PerplexityBot indexed all of it. Rozz builds this automatically.

$997/month | ChatGPT at 83%. Perplexity at 25%. Claude Code sessions starting.

Book a call  |  See how it works  |  rozz@rozz.site

Data source: CloudFront access logs for rozz.genymotion.com, March 24–31, 2026. Bot classification based on User-Agent strings. Claude-User session reconstruction from timestamped request logs and user-agent version matching.