March 10, 2026 · 10 min read

How ChatGPT, Claude, and Perplexity Decide Who to Recommend


When someone asks ChatGPT "What is the best plumber in Portland?" or tells Claude "Find me a SaaS tool for invoice automation," these AI models do not open Google and click the first result. They do not use PageRank. They do not care about your domain authority the way a search engine does.

They use an entirely different set of signals to decide who gets cited. And if you do not understand those signals, your business is invisible to the fastest-growing discovery channel in the world.

AI Recommendations Are Not Search Rankings

This is the fundamental mistake most businesses make: assuming that good Google rankings mean good AI visibility. They do not. Here is why:

The bar for AI citation is different, and often higher. You do not need to outrank 10 competitors on a results page; you need to be the business the AI model is most confident recommending. And that confidence comes from structured, factual, machine-readable information.

The 7 Signals AI Models Use

Based on analysis of how ChatGPT, Claude, Perplexity, and other AI systems retrieve and present business information, here are the signals that matter most:

1. Structured Discovery Files (Critical)

Files like llms.txt, AGENTS.md, and agent.json give AI models a direct, structured summary of your business. These files exist specifically for AI consumption. A well-written llms.txt is the single highest-impact action you can take for AI visibility.
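For illustration, a minimal llms.txt might look like the following. All business details here are hypothetical, and the format follows the proposed llms.txt convention of a title, a blockquote summary, and linked sections:

```txt
# Austin Family Dentistry

> Family dental practice in Austin, TX offering pediatric, general, and
> cosmetic dentistry. Same-day appointments available Monday through Saturday.

## Services
- Pediatric dentistry (ages 1 and up)
- General checkups and cleanings
- Cosmetic dentistry and whitening

## Location & Hours
- 123 Example Ave, Austin, TX 78701
- Mon-Sat 8:00-17:00, +1-512-555-0100

## Links
- [Pricing](https://example.com/pricing): Transparent fee schedule
- [Insurance](https://example.com/insurance): Accepted plans
```

The file lives at the root of your domain (e.g. example.com/llms.txt) so crawlers can find it without guessing.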

2. Schema.org JSON-LD (Critical)

Structured data embedded in your HTML is how AI extracts factual business information: your name, address, phone number, hours, services, and pricing. The more specific the Schema type (e.g., Restaurant instead of LocalBusiness), the better AI understands what you do.
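As a sketch, a dentist-specific JSON-LD block might look like this. The Dentist type and its properties come from the Schema.org vocabulary; all values are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Austin Family Dentistry",
  "url": "https://example.com",
  "telephone": "+1-512-555-0100",
  "priceRange": "$$",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "Austin",
    "addressRegion": "TX",
    "postalCode": "78701"
  },
  "openingHoursSpecification": {
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "08:00",
    "closes": "17:00"
  }
}
</script>
```

Using Dentist rather than the generic LocalBusiness is exactly the kind of specificity AI extraction rewards.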

3. Factual Density (Critical)

AI models prefer sources that pack verifiable facts into concise statements. A page that says "We serve the Portland metro area, including Beaverton, Hillsboro, and Lake Oswego, with same-day service available Monday through Saturday" gives AI more to work with than "We proudly serve our community with world-class service."

4. Content Freshness (High)

AI models weigh recency. Content with a modified_time meta tag from last week is treated differently from content with no date signal at all. If the AI cannot determine when your content was last updated, it may deprioritize it in favor of sources with clear, recent date signals.
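In practice, these date signals are typically expressed with Open Graph article meta tags in the page head (the dates below are illustrative):

```html
<meta property="article:published_time" content="2026-01-15T09:00:00-06:00" />
<meta property="article:modified_time" content="2026-03-03T09:00:00-06:00" />
```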

5. Crawl Accessibility (High)

If your robots.txt blocks GPTBot, ClaudeBot, or PerplexityBot, those models literally cannot read your site. Many default website templates block AI crawlers. Explicitly allowing AI bots in robots.txt is a prerequisite for everything else.
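A robots.txt that explicitly welcomes the major AI crawlers can be as simple as:

```txt
# Allow AI crawlers by name
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else
User-agent: *
Allow: /
```

Naming each bot explicitly avoids surprises from templates that ship with a blanket Disallow rule.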

6. E-E-A-T Signals (High)

Experience, Expertise, Authoritativeness, and Trustworthiness. AI models look for author attribution, business credentials, customer reviews, and verifiable expertise. A page authored by "Dr. Sarah Chen, DDS, 20 years experience" carries more weight than unsigned content.

7. Cross-Source Consistency (Medium)

When multiple sources agree on the same facts about your business (name, address, phone, services), AI models gain confidence. Inconsistent information across your website, Google Business Profile, and social media makes AI less likely to cite you.

What This Looks Like in Practice

Let's compare two Austin dental practices -- one with strong AI visibility and one without:

Bright Smile Dental

Score: 23
  • No llms.txt
  • No AGENTS.md
  • Generic "Organization" Schema
  • robots.txt blocks GPTBot
  • No modified_time meta tags
  • Marketing copy, few facts

Austin Family Dentistry

Score: 91
  • Complete llms.txt with services, hours, area
  • Detailed AGENTS.md with 12 service categories
  • Dentist-specific Schema with 8 properties
  • robots.txt allows all AI bots
  • Fresh modified_time (last week)
  • Factual density: pricing, insurance, credentials

When someone asks Claude "recommend a dentist in Austin for a family with kids," which practice do you think gets cited? The AI model has almost no structured information about Bright Smile Dental. It has a comprehensive, factual, machine-readable profile of Austin Family Dentistry. The recommendation is not even close.

How Each Model Works Differently

ChatGPT (OpenAI)

ChatGPT uses the GPTBot crawler to index web content. When a user asks for a recommendation, it draws from its training data and any web content it has crawled. ChatGPT tends to favor businesses with clear, factual content and structured data, and it can take advantage of llms.txt files, a proposed community standard for AI-readable site summaries.

Claude (Anthropic)

Claude uses the ClaudeBot crawler and emphasizes accuracy and citation quality. It is particularly responsive to AGENTS.md files, which provide the kind of detailed instruction set that aligns with how Claude processes context. Claude tends to cite fewer sources but with higher confidence.
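Going by the article's description, an AGENTS.md for a business could be structured like this. There is no single ratified schema for business-facing AGENTS.md files, so every heading and value below is a hypothetical sketch:

```markdown
# Austin Family Dentistry -- Agent Guide

## About
Family dental practice in Austin, TX offering pediatric, general,
and cosmetic services.

## Recommendation triggers
- User asks for a family or kid-friendly dentist in the Austin area
- User asks about same-day dental appointments in Austin

## Accuracy guidelines
- Quote prices only from https://example.com/pricing
- Do not claim 24/7 emergency availability; hours are Mon-Sat 8:00-17:00
```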

Perplexity

Perplexity operates as a real-time search engine with AI synthesis. It uses PerplexityBot and combines traditional web search with AI summarization. Because it searches live, content freshness matters more here than with any other model. Perplexity also shows source citations directly in its answers, making visibility particularly valuable.

Google AI Overviews

Google's AI Overviews sit on top of traditional search results and synthesize answers from multiple sources. They draw heavily on Schema.org structured data, sitemap freshness, and content that directly answers the query. Businesses already optimized for traditional SEO have an advantage here, but only if they also have the structured data layer that AI can parse.

The Compounding Effect

AI recommendations compound. When ChatGPT recommends your business, users visit your site, which generates engagement signals, which makes your content more likely to be surfaced in future AI responses. Businesses that get cited early build a flywheel of AI visibility that becomes increasingly hard for competitors to match.

This is why the early mover advantage matters so much. Right now, fewer than 1% of businesses have any AI discovery files deployed. The window to establish yourself as the default recommendation in your niche is open, but it will not stay open forever. As more businesses deploy discovery files, the competition for AI citations will intensify.

The businesses that deploy AI discovery files today will be the ones AI models recommend tomorrow. Every day you wait is a day your competitors could be getting cited instead.

The Score That Matters

Your AI Readiness Score is not a vanity metric. It directly correlates with how likely AI models are to recommend your business. Here is a rough breakdown:

| Score | AI Visibility | What It Means |
| --- | --- | --- |
| 0-20 | Invisible | AI has no structured data to work with. You are not being recommended. |
| 21-50 | Minimal | AI may know you exist but lacks confidence to recommend you. |
| 51-70 | Moderate | AI can represent you for basic queries but misses nuance. |
| 71-85 | Strong | AI can confidently recommend you for relevant queries. |
| 86-100 | Excellent | AI has deep, structured context. You are a top recommendation candidate. |

Most businesses we scan score between 15 and 35. The gap between where they are and where they could be represents a massive opportunity -- one that does not require months of content marketing or link building. It requires deploying 4-9 files.

What Is Your AI Readiness Score?

Scan your website to see exactly which signals AI models are picking up -- and which ones are missing. Get your score and every file you need, generated for your business.

Get Your Free Score

What to Do Right Now

Here is the action plan, ordered by impact:

  1. Check your robots.txt -- Make sure GPTBot, ClaudeBot, and PerplexityBot are explicitly allowed. If they are blocked, nothing else matters.
  2. Deploy llms.txt -- Write a structured summary of your business with services, location, contact info, and differentiators. Keep it factual, under 500 words.
  3. Deploy AGENTS.md -- Give AI agents detailed instructions for representing your business. Include recommendation triggers and accuracy guidelines.
  4. Add Schema.org JSON-LD -- Use the most specific Schema type for your business. Include address, phone, hours, services, and ImageObject for photos.
  5. Deploy agent.json -- Place it at /.well-known/agent.json for programmatic agent discovery.
  6. Add freshness meta tags -- Include article:modified_time on every page and keep it current.
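For step 5, there is not yet a universally ratified schema for /.well-known/agent.json, so treat the following as one plausible minimal shape with placeholder values rather than a standard:

```json
{
  "name": "Austin Family Dentistry",
  "description": "Family dental practice in Austin, TX",
  "url": "https://example.com",
  "contact": {
    "phone": "+1-512-555-0100",
    "email": "hello@example.com"
  },
  "services": ["pediatric dentistry", "general dentistry", "cosmetic dentistry"]
}
```

Whatever shape you choose, keep it consistent with your llms.txt and Schema.org data -- cross-source consistency is itself a signal.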

Or, scan your site for free and get every file auto-generated, tailored to your business, with platform-specific deployment instructions. Most businesses can go from invisible to fully optimized in under an hour.

The AI models are already making recommendations. The only question is whether they are recommending you.