Siteline FAQ
Questions you should be able to answer before trusting a score.
This page explains what the scanner is measuring, where its limits are, and what to do when a public score is not enough.
What does Siteline measure?
Siteline measures a public web page and a small set of common public routes across four pillars: Signal, Navigate, Absorb, and Perform.
Signal asks whether agents can detect and reach the site. Navigate asks whether they can orient and find the right public paths. Absorb asks whether they can take in meaningful content from the initial response. Perform asks whether they can complete a useful handoff or action.
What does Siteline not measure?
It does not replace a full audit, authenticated workflow testing, broad SEO tooling, or implementation planning.
What should I do if the score is weak?
Address blockers first, then core issues. If you need deeper guidance, move to the full audit.
Where do I go next?
Use the scanner for a baseline, the contact page for a human path, and the audit page when you need a fuller engagement.
Is Siteline an SEO score?
No. SEO tools measure how well search engines can index and rank a page. Siteline measures how well an AI agent can reach, read, navigate, and act on a page on behalf of a human. A site can rank well in search and still be unusable by agents — for example, if it relies on JavaScript rendering, blocks non-browser user-agents, or lacks clear next-step paths.
What do the letter grades mean?
Siteline grades from A to F based on a 0–100 score. The grade is constrained by two independent layers: SNAP fundamentals (how well agents can passively use the site) and Agentic Enablement (whether the site provides dedicated machine-readable resources). A (90–100) means the site is agent-usable and actively built for agents. B (78–89) means strong enablement with minor gaps. C (64–77) means moderate enablement — agents can read but lack full support. D (45–63) means the site is hard for agents or lacks dedicated resources. F (0–44) means agents are blocked or the site is unusable. Each grade includes a likely failure mode and shows which layer is the binding constraint.
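The score-to-grade bands above can be expressed as a simple lookup. This is an illustrative sketch that mirrors the published bands, not Siteline's implementation:

```python
def grade_for_score(score: int) -> str:
    """Map a 0-100 Siteline score to a letter grade using the bands from this FAQ."""
    if score >= 90:
        return "A"  # agent-usable and actively built for agents
    if score >= 78:
        return "B"  # strong enablement with minor gaps
    if score >= 64:
        return "C"  # moderate enablement
    if score >= 45:
        return "D"  # hard for agents or lacking dedicated resources
    return "F"      # blocked or unusable
```

Note that the bands are contiguous, so every score from 0 to 100 falls into exactly one grade.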
What does the remediation tier mean?
After scanning, Siteline classifies the result into a remediation tier that suggests the right next step. Self-fix means the issues are straightforward and the site owner can resolve them independently. Workshop means a guided session would help plan the changes. Audit means a deeper assessment is recommended before making changes. None means no significant issues were found.
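One way to picture the tier decision is as a function of what the scan found. The thresholds and inputs below are invented for illustration; the FAQ does not publish Siteline's actual criteria:

```python
def remediation_tier(blocker_count: int, issue_count: int) -> str:
    """Hypothetical tier selection; Siteline's real decision logic is not public.

    - 'audit': blockers present, deeper assessment recommended before changes
    - 'none': no significant issues found
    - 'self-fix': a few straightforward issues the owner can resolve alone
    - 'workshop': enough issues that a guided planning session would help
    """
    if blocker_count > 0:
        return "audit"
    if issue_count == 0:
        return "none"
    if issue_count <= 3:  # invented cutoff for illustration
        return "self-fix"
    return "workshop"
```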
Can I use Siteline programmatically?
Yes. Siteline offers several interfaces beyond the web scanner:
- a public REST API (GET /api/scan?url=example.com)
- a CLI (npx siteline scan example.com)
- an MCP server (npx siteline mcp) that lets AI agents call Siteline directly as a tool
Rate limits apply to all interfaces; check /api/limits for the current policy.
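As a minimal sketch, a client could build the documented scan request like this. The host name is a placeholder, since the FAQ documents only the path and query parameter:

```python
from urllib.parse import urlencode

BASE = "https://siteline.example"  # hypothetical host; only the path is documented

def scan_request_url(target: str) -> str:
    """Build the GET /api/scan request URL described in this FAQ."""
    return f"{BASE}/api/scan?{urlencode({'url': target})}"
```

Fetching that URL with any HTTP client would return the scan result, subject to the rate limits described below.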
How often can I scan the same site?
Each domain can be scanned once per calendar day. If you scan the same domain again within 24 hours, you will receive the cached result from the earlier scan, which is still valid and complete. There is also a limit of 10 scans per IP per hour across all domains. These limits keep the service available for everyone and prevent the scan engine from being used as a proxy. Full rate limit details are always available at /api/limits.
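The two limits above can be modeled together: a per-domain daily cache and a per-IP hourly counter. This is a sketch of the policy as stated, not Siteline's server code (it models the daily limit as a calendar-day check):

```python
from datetime import datetime, timedelta

class RateLimiter:
    """Illustrative model of the limits described in this FAQ."""

    def __init__(self):
        self._domain_day = {}  # domain -> date of last fresh scan
        self._ip_hits = {}     # ip -> timestamps of recent scans

    def allow(self, domain: str, ip: str, now: datetime) -> str:
        # 10 scans per IP per hour, across all domains
        hits = [t for t in self._ip_hits.get(ip, []) if now - t < timedelta(hours=1)]
        if len(hits) >= 10:
            return "rate-limited"
        hits.append(now)
        self._ip_hits[ip] = hits
        # one fresh scan per domain per calendar day; repeats get the cached result
        if self._domain_day.get(domain) == now.date():
            return "cached"
        self._domain_day[domain] = now.date()
        return "fresh"
```

Cached responses still count toward the per-IP hourly budget in this sketch; whether Siteline does the same is not stated in the FAQ.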
How can someone managing autonomous agents use Siteline?
Siteline is useful for agent operators who need to quickly assess whether a target site is likely to be readable, navigable, and safe to hand a human through. Use the public scanner for quick checks, the CLI for terminal workflows, and the local MCP server when you want an agent to call Siteline directly as a tool.
What Siteline does today is evaluation and interpretation. What it does not do yet is host remote authenticated tooling, maintain long-term scan history, or execute actions on third-party sites.
What is the Agentic Enablement layer?
Beyond the four SNAP pillars, Siteline assesses whether a site actively supports agents through dedicated machine-readable resources. Eleven resources are probed and qualitatively scored — not just for presence, but for whether they actually help agents. The total maps to a level (0–4) that caps the maximum achievable grade.
A site with great HTML but no dedicated agent resources maxes out at D. A site with full agentic resources but broken HTML scores whatever SNAP produces. Both layers are necessary conditions for a high grade.
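Assuming the enablement level is expressed as a maximum achievable letter grade, the binding-constraint rule above reduces to taking the worse of the two layers. A sketch, not Siteline's code:

```python
GRADE_ORDER = ["F", "D", "C", "B", "A"]  # worst to best

def final_grade(snap_grade: str, enablement_cap: str) -> str:
    """The binding-constraint rule: the final grade is the lower of the two layers."""
    return min(snap_grade, enablement_cap, key=GRADE_ORDER.index)
```

For example, strong HTML (SNAP grade A) with no dedicated agent resources (cap D) yields D, matching the behavior described above.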
What machine-readable resources does Siteline look for?
Siteline probes 11 resources, each scored 0 (absent), 1 (present but weak), or 2 (present and useful):
- robots.txt — AI-specific directives (GPTBot, ClaudeBot, etc.)
- sitemap.xml — multiple meaningful URLs
- RSS/Atom feed — actual entries
- llms.txt — structured markdown with headings, lists, URLs
- llms-full.txt — substantial (5KB+) with structured headings
- agents.json — capabilities or endpoints declared
- .well-known/mcp.json — server tools or resource links
- .well-known/security.txt — Contact + Expires per RFC 9116
- openapi.json/yaml — valid spec with paths defined
- api/v1/index.json — endpoint manifest
- api/v1/changelog.json — version history
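The per-resource 0/1/2 scores sum to a maximum of 22, which maps to the level (0-4) described earlier. The bucket boundaries below are invented for illustration, since the FAQ only says the total "maps to a level":

```python
RESOURCES = [
    "robots.txt", "sitemap.xml", "rss", "llms.txt", "llms-full.txt",
    "agents.json", ".well-known/mcp.json", ".well-known/security.txt",
    "openapi", "api/v1/index.json", "api/v1/changelog.json",
]

def enablement_level(scores: dict) -> int:
    """Sum 0/1/2 per-resource scores (max 22) and bucket into levels 0-4.

    The bucketing (roughly one level per 5 points) is a hypothetical mapping.
    """
    total = sum(scores.get(r, 0) for r in RESOURCES)
    return min(4, total // 5)
```

A site serving nothing scores level 0; a site with all 11 resources present and useful reaches level 4 under this mapping.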
Why did my site score D even though the content looks good?
Siteline uses two layers. Layer 1 (SNAP) evaluates passive usability — your HTML content, structure, routing, and next-step clarity may all be strong. But Layer 2 (Agentic Enablement) caps the grade based on dedicated machine-readable resources.
Without resources like llms.txt, agents.json, or a structured sitemap, the maximum grade is D regardless of how good the HTML is. The scan results show both layers so you can see which one is the binding constraint and what to do about it.