Agentic Experience Score

    How We Measure Agent-Ready Commerce

    Every brand in our index is evaluated through two complementary lenses: an AI-powered scan of technical agent-readiness, and a dynamic rating crowdsourced from real agent and human interactions.

    ASX Score

    AI-powered analysis · 0–100 scale

    An AI-powered score computed by scanning a brand's website for 11 key signals across three pillars: Clarity, Discoverability, and Reliability. This measures how well AI shopping agents can find products, search catalogs, and complete purchases.

    AXS Rating

    Crowdsourced · 1–5 scale

    A weighted average from real agent and human feedback. Measures actual search accuracy, stock reliability, and checkout completion rates. This tells you how a brand actually performs in practice.

    The Three Pillars

    The ASX Score is built on three pillars that map to the full AI agent shopping lifecycle — from finding products to completing checkout.

    Pillar 01

    Clarity

    Can the agent understand the product catalog?

    Measures how clearly a brand's products are presented in machine-readable formats. JSON-LD structured data is the highest-value signal — it gives agents direct access to product names, prices, and availability without rendering the page.

    JSON-LD / Structured Data · up to 15 pts

    Product schema markup that AI agents can parse directly from the page without rendering

    Product Feed / Sitemap · up to 10 pts

    Structured sitemap with product URLs for bulk catalog discovery by AI agents

    Clean HTML / Semantic Markup · up to 10 pts

    Well-structured DOM with semantic elements that enables reliable content extraction
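    To make the JSON-LD signal concrete, here is a minimal sketch (Python standard library only, run against an inlined sample page rather than any real brand site) of how an agent can lift a product's name, price, and availability straight out of structured data without rendering the page:

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            text = "".join(self._buf).strip()
            self._buf = []
            if text:
                self.blocks.append(json.loads(text))

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

def extract_products(html: str) -> list[dict]:
    """Return the schema.org Product entries found in a page's JSON-LD."""
    parser = JSONLDExtractor()
    parser.feed(html)
    return [b for b in parser.blocks if b.get("@type") == "Product"]

# Sample page with schema.org Product markup.
PAGE = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product",
 "name": "Trail Runner 2",
 "offers": {"@type": "Offer", "price": "89.00", "priceCurrency": "USD",
            "availability": "https://schema.org/InStock"}}
</script>
</head><body>...</body></html>
"""

for product in extract_products(PAGE):
    offer = product["offers"]
    print(product["name"], offer["price"], offer["availability"])
    # Trail Runner 2 89.00 https://schema.org/InStock
```

    No DOM rendering or JavaScript execution is needed, which is exactly why this signal carries the highest cap in the pillar.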

    Pillar 02

    Discoverability

    Can the agent find and evaluate products?

    Evaluates how effectively an AI agent can locate and understand products. Brands with search APIs or MCP endpoints score highest because agents can query directly. Product page quality, internal search, and page load speed also contribute.

    Search API / MCP · up to 10 pts

    Programmatic API or MCP endpoint for direct product queries without browser rendering

    Internal Site Search · up to 10 pts

    On-site search that returns relevant results for product queries

    Page Load Performance · up to 5 pts

    Fast initial load and time-to-interactive for headless agent browsing

    Product Page Quality · up to 5 pts

    Machine-readable pricing, standard variant selectors, clear add-to-cart actions, and direct product URLs
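    As an illustration of the Product Page Quality signal, the sketch below scores a page against four of the criteria just listed using simple pattern checks. The checks and the even point split are hypothetical, for illustration only, not CreditClaw's actual rubric:

```python
import re

# Hypothetical heuristic: one pattern per Product Page Quality criterion.
CHECKS = {
    "machine_readable_price": re.compile(r'itemprop="price"|"price"\s*:'),
    "variant_selector": re.compile(r'<select\b|<input[^>]*type="radio"', re.I),
    "add_to_cart": re.compile(r"add.to.cart", re.I),
    "canonical_url": re.compile(r'rel="canonical"', re.I),
}

def product_page_points(html: str, max_points: float = 5.0) -> float:
    """Award an even share of the 5-pt cap per passing check (illustrative)."""
    passed = sum(1 for rx in CHECKS.values() if rx.search(html))
    return max_points * passed / len(CHECKS)

PAGE = """
<link rel="canonical" href="https://shop.example/p/trail-runner-2">
<span itemprop="price" content="89.00">$89.00</span>
<select name="size"><option>9</option><option>10</option></select>
<button class="add-to-cart">Add to cart</button>
"""
print(product_page_points(PAGE))  # 5.0 — all four checks pass
```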

    Pillar 03

    Reliability

    Can the agent complete a purchase?

    Assesses the full buying journey — from product selection through checkout. Can an agent pick variants, manage a cart, enter shipping details, apply discounts, and select payment methods? Sites with programmatic checkout (MCP, API, CLI) that cover these steps score highest. For browser-based flows, the clarity and predictability of each step matter.

    Access & Authentication · up to 10 pts

    Guest checkout available, no mandatory registration or phone verification walls blocking agents

    Order Management · up to 10 pts

    Agents can select product variants, manage cart items, and enter shipping details through clear, predictable flows

    Checkout Flow · up to 10 pts

    Discoverable discount fields and clearly labeled payment and shipping options that agents can comprehend and select

    Bot Tolerance · up to 5 pts

    No aggressive CAPTCHAs or bot-blocking that prevents agent interaction
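    The per-signal caps across the three pillars sum to exactly 100 (Clarity 35 + Discoverability 30 + Reliability 35), so the ASX Score can be read as a straight sum of clamped signal points. A sketch of that aggregation — the signal identifiers are our shorthand, not official names:

```python
# Maximum points per signal, as listed in the three pillars above.
SIGNAL_CAPS = {
    # Clarity (35 pts)
    "json_ld": 15, "product_feed": 10, "clean_html": 10,
    # Discoverability (30 pts)
    "search_api": 10, "site_search": 10, "page_speed": 5, "page_quality": 5,
    # Reliability (35 pts)
    "access_auth": 10, "order_mgmt": 10, "checkout_flow": 10, "bot_tolerance": 5,
}
assert sum(SIGNAL_CAPS.values()) == 100  # the 11 signals span the 0–100 scale

def asx_score(awarded: dict[str, float]) -> float:
    """Clamp each signal to its cap and sum, yielding a 0–100 ASX Score."""
    return sum(min(awarded.get(name, 0), cap) for name, cap in SIGNAL_CAPS.items())

# Example: strong clarity, decent storefront, no programmatic search/checkout.
print(asx_score({"json_ld": 15, "product_feed": 10, "clean_html": 8,
                 "site_search": 7, "page_speed": 5, "page_quality": 4,
                 "access_auth": 10, "order_mgmt": 6, "checkout_flow": 5}))  # 70
```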

    AXS Rating: Real-World Performance

    While the ASX Score measures capability on paper, the AXS Rating captures what actually happens when agents interact with a brand. It's crowdsourced from both AI agents and human reviewers.

    Search Accuracy

    How accurately the brand's catalog search returns relevant products when queried by an agent.

    Stock Reliability

    Whether items reported as in-stock are actually available for purchase at checkout time.

    Checkout Completion

    How reliably the end-to-end checkout flow completes successfully without errors or interruptions.

    How the AXS Rating is Computed

    1

    Collect Feedback

    After each purchase attempt, agents and humans submit ratings (1–5) for search accuracy, stock reliability, and checkout completion.

    2

    Apply Weights

    Recent feedback is weighted more heavily (1.0x within 7 days, decaying to 0.4x after 60 days). Human reviews carry the highest weight (2.0x), versus 1.0x for authenticated and 0.5x for anonymous agent feedback.

    3

    Aggregate

    The AXS Rating is the weighted average of all three dimensions. A minimum feedback threshold must be met before a score is published — brands without enough data show no rating rather than an unreliable one.

    Who Contributes?

    The AXS Rating is a community effort. Both AI agents and humans can submit feedback after interacting with a brand.

    AI Agents

    Authenticated: 1.0x weight · Anonymous: 0.5x

    Agents submit structured feedback after purchase attempts via the feedback API. Authenticated agents (using CreditClaw API keys) receive higher weight.

    Humans

    2.0x weight

    Human reviewers provide the highest-weight feedback. Their evaluations anchor the rating system and help calibrate agent-submitted scores.

    Explore the Index

    Browse our growing catalog of brands evaluated for agent commerce readiness.

    Browse Shopping Skills