Dynamic Agent-Readiness Scoring: Why Static Checklists Fail
Most "is my site agent-ready?" tools run the same checklist against every website on the internet. A plumber in Omaha gets graded against the same list as Stripe, OpenAI, and the New York Times. Missing a public API catalog? That's a fail. Don't have a bespoke skills discovery index? That's a fail. No middleware attaching RFC 8288 Link headers? You lose more points.
This is backwards. Agent readiness is not a fixed checklist. It depends on who you are, what platform you're on, how mature your site is, and who your audience is. The whole point of this field should be making your site easy for agents to understand — not passing an arbitrary exam written for a company that doesn't look anything like you.
So we rebuilt our scoring engine from scratch. Starting today, every scan on AgentSEO.guru produces a BusinessContextProfile — a four-dimensional description of your site — that drives every decision the platform makes afterwards: what items apply, how heavily they're weighted, which files we generate, and even the tone of the recommendations we write.
The four dimensions we detect
1. Business type
20 verticals: restaurant, SaaS/tech, law firm, ecommerce, plumber, creative pro, informational blog, and more. A restaurant needs reservations data; a SaaS needs an API catalog. Weights adapt.
2. Platform capabilities
Five granular dimensions per platform: can you add files? set HTTP headers? run middleware? inject structured data? modify robots.txt? A Squarespace site is not a Vercel deployment — stop pretending it is.
3. Maturity signals
Do you have an API? booking? payments? reviews? a catalog or menu? directory presence? deep content? These signals gate whether certain protocol items even apply to you.
4. Audience persona
Technical, non-technical, or mixed. Your recommendations and your deployment guides are written in the voice your audience actually speaks — and you can toggle between voices on demand.
Why this matters: the N/A concept
The biggest visible change is a new score status: not_applicable. If an item can't possibly apply to your site — for example, an API catalog for a brochure plumbing site with no API — it is marked N/A and excluded from your score entirely. It doesn't count as a fail. It doesn't get a red indicator. It doesn't pretend to be something you should go fix.
This is a hard requirement for scoring to be useful. A 62/100 that penalises you for missing items you couldn't possibly deploy isn't a score — it's an insult dressed up as feedback.
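The key mechanic is that N/A items leave the denominator, rather than counting as zero. A minimal sketch, with an illustrative item shape that is not the real engine's:

```typescript
// Sketch: "not_applicable" items are removed from the denominator entirely,
// instead of counting as failed points. Names and weights are illustrative.
type Status = "pass" | "fail" | "not_applicable";

interface ScoredItem { id: string; status: Status; weight: number }

function score(items: ScoredItem[]): number {
  const applicable = items.filter(i => i.status !== "not_applicable");
  const total = applicable.reduce((s, i) => s + i.weight, 0);
  if (total === 0) return 100; // nothing applicable means nothing to penalise
  const earned = applicable
    .filter(i => i.status === "pass")
    .reduce((s, i) => s + i.weight, 0);
  return Math.round((earned / total) * 100);
}

// A brochure site: the API catalog is N/A, so it cannot drag the score down.
const brochure = score([
  { id: "json-ld",     status: "pass",           weight: 3 },
  { id: "robots-txt",  status: "pass",           weight: 2 },
  { id: "api-catalog", status: "not_applicable", weight: 3 },
]);
// brochure === 100
```

Had the N/A item been counted as a fail, the same site would score 63 for a file it could never deploy.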
Dynamic weights by profile
Beyond simply removing inapplicable items, we re-weight the categories that remain based on what actually moves the needle for your profile:
- Restaurant / local service: JSON-LD (LocalBusiness schema), robots.txt with Content-Signal, directory presence, reviews. Higher weight on structured data and trust signals. Near-zero weight on API-layer items.
- Public-API SaaS: RFC 9727 API catalog, `/.well-known/agent-skills/discovery`, OpenAPI spec, markdown negotiation. Full weight on the protocol layer. Lower weight on LocalBusiness JSON-LD.
- Content publisher: `llms-full.txt`, markdown negotiation, freshness signals, Content-Signal directives. Weighted toward what makes long-form content easy for LLMs to cite.
- Ecommerce: Product schema, catalog surface area, payments and checkout signals, reviews, feed discoverability. API catalog enters the mix if a public API is detected.
Only what your platform can actually deploy
Static tools love to generate a Cloudflare Worker middleware snippet and a custom HTTP-header rule for a Wix site. That isn't helpful — it's discouraging, and it's the reason non-technical users give up on agent readiness entirely.
Our generator now filters files by your platform's real capabilities. If your host can't set HTTP headers, we don't produce a Link-headers snippet. If it can't run middleware, we skip the markdown-negotiation bundle. What you get is a tight ZIP of files your stack can actually deploy, plus a COVERAGE.md at the root that documents every file we shipped and every file we intentionally skipped, with the reason.
This is the opposite of over-promising. And it's what lets us give closed-builder sites a confident 88/100 — because they don't lose points for not being Vercel.
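The filtering step amounts to declaring, per candidate file, which platform capabilities it needs, then partitioning into shipped and skipped. A sketch under assumed names; the real generator's file list and capability keys will differ:

```typescript
// Sketch: each candidate file declares the capabilities it requires;
// generation is gated on the detected platform. All names are assumptions.
interface Caps { addFiles: boolean; setHeaders: boolean; runMiddleware: boolean }

interface FileSpec {
  name: string;
  requires: Array<keyof Caps>;
}

const CANDIDATES: FileSpec[] = [
  { name: "llms.txt",             requires: ["addFiles"] },
  { name: "link-headers.snippet", requires: ["setHeaders"] },
  { name: "markdown-negotiation", requires: ["runMiddleware"] },
];

function plan(caps: Caps): { shipped: string[]; skipped: string[] } {
  const shipped: string[] = [];
  const skipped: string[] = [];
  for (const f of CANDIDATES) {
    (f.requires.every(r => caps[r]) ? shipped : skipped).push(f.name);
  }
  return { shipped, skipped };
}

// A closed-builder host: static files yes, custom headers and middleware no.
const wix = plan({ addFiles: true, setHeaders: false, runMiddleware: false });
// wix.shipped -> ["llms.txt"]; the two platform-gated snippets land in skipped
```

The skipped list is exactly what feeds the COVERAGE.md report described below, so nothing disappears silently.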
Two-voice recommendations
Once we know your audience persona, the recommendation engine writes in two voices:
| Voice | Example recommendation |
|---|---|
| Non-technical | "Add a page that lists your services and hours in plain English. AI models will use this to answer questions about you. We've already written one — paste the text from llms.txt into your site footer or upload the file to your root." |
| Technical | "Serve /llms.txt with Content-Type: text/plain; charset=utf-8. Reference it in <link rel="llm-content"> and the RFC 8288 Link header. Spec: llmstxt.org. Cloudflare Worker and Vercel redirect snippets included." |
Users on mixed or non-technical sites default to plain-English copy with a "Show technical detail" toggle that reveals the implementation underneath. Technical sites see the technical voice by default. Nothing is hidden — the information is the same. The delivery is appropriate.
Coverage report: radical transparency
Every ZIP now includes a COVERAGE.md at the root listing two things:
- Every file we generated, where to deploy it, and why it matters for your profile.
- Every file we skipped, with a plain-English reason. Example: "Skipped `api-catalog.json` — we did not detect a public API for your site. If you add one later, re-scan and we will generate this for you."
We think this is the single most honest thing a tool in this space can do. If a file doesn't help you, we don't ship it and we tell you why.
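To make the two-list structure concrete, here is the sort of excerpt a COVERAGE.md might contain. The wording and file set are illustrative, assembled from the examples above, not a verbatim report:

```markdown
# Coverage report

## Shipped
- llms.txt — deploy at your site root. Gives models a plain-text summary to cite.
- robots.txt (Content-Signal block) — replace your existing robots.txt.

## Skipped
- api-catalog.json — we did not detect a public API for your site.
  If you add one later, re-scan and we will generate this for you.
- link-headers.snippet — your platform does not support custom HTTP headers.
```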
The Agent Protocol Layer
This release also formalises five items we now score and generate as a dedicated protocol layer, each profile-gated:
- Link response headers (RFC 8288) — advertise related resources on every HTML response.
- Content-Signal in robots.txt — explicit `ai-train`, `search`, and `ai-input` preferences, editable per site from your dashboard.
- Markdown content negotiation — respond to `Accept: text/markdown` with a pre-rendered markdown version of the same page.
- API catalog (RFC 9727) — `/.well-known/api-catalog` for sites with a public API.
- Agent Skills Discovery — a machine-readable index of the agent-callable actions your site actually supports.
Each is only scored and generated when the profile supports it. A brochure site is never penalised for not having an API catalog.
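The negotiation item above hinges on one small check: does the request's `Accept` header ask for `text/markdown`? A minimal sketch of that check as a pure function; this is an illustration of the mechanism, not our actual worker code:

```typescript
// Sketch: decide whether a request negotiates for markdown.
// Splits the Accept header into media ranges and ignores q-parameters.
function negotiatesMarkdown(acceptHeader: string | null): boolean {
  if (!acceptHeader) return false;
  return acceptHeader
    .split(",")
    .some(range => range.trim().split(";")[0] === "text/markdown");
}

// negotiatesMarkdown("text/markdown;q=0.9, text/html") === true
// negotiatesMarkdown("text/html,application/xhtml+xml") === false
```

A middleware that passes this check would serve the pre-rendered `.md` variant and fall through to HTML otherwise; a production version would also honour q-values when ranking multiple acceptable types.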
How this compares to static-checklist tools
| Dimension | Static checklist tools | AgentSEO.guru |
|---|---|---|
| Scoring | Same checklist for every site | Dynamic weights based on BusinessContextProfile |
| Items that don't apply | Counted as fails | Marked N/A and excluded from score |
| Generated files | Always the same set, whether deployable or not | Only files your platform can deploy, with COVERAGE.md |
| Recommendations | Single voice (usually technical) | Two-voice, audience-aware, with toggle |
| Platform guides | "Here's a Cloudflare Worker" regardless of your host | Tailored to your detected platform; honest "not supported" messages |
See your dynamic score
Paste your URL. We'll detect your business, platform, maturity, and audience — and score you against criteria that actually apply to your site.
Run a free scan

What's next
Dynamic scoring was the hardest rewrite we've done. It touched the analyzer, every generator, the recommendations engine, the ZIP pipeline, and most of the UI. We're treating it as a foundation, not a milestone — the next releases will add per-site preferences for Content-Signal directives (live today in Settings), per-platform deploy automation, and more protocol-layer items as the Cloudflare / W3C / IETF drafts firm up.
If you're an agency, a power user, or just curious how the profile is computed for your site: every report now includes a "Your Site In A Nutshell" card in the overview tab that exposes the detected profile in full. Scan yours and have a look.
As always, feedback is welcome at support@agentseo.guru.