The trust signals LLMs use to evaluate source credibility: from About and Contact pages to author attribution, robots.txt AI crawler access, and the emerging llms.txt standard.
The other three pillars are about making your content readable, extractable, and clear. Authority & Trust is about something different: convincing an LLM that your content is worth citing in the first place. These are two distinct problems, and the distinction matters more than most publishers realize.
Large language models, especially those powering AI search features like Perplexity, Bing AI, and ChatGPT with browsing, apply a form of source evaluation when deciding whether to surface content in a response. This evaluation draws on signals that are remarkably similar to the E-E-A-T framework in traditional SEO: is there a real entity behind this content? Is there an identifiable author? Can a user contact someone? Is the site accessible to AI crawlers?
A page that blocks AI crawlers in robots.txt can score perfectly on the other three pillars and still be invisible to LLMs because it has explicitly opted out of AI indexing. This pillar is the only one where a single check can negate everything else.
At 20% of the total score, Authority & Trust covers nine checks, including two that are fetched asynchronously at analysis time: robots.txt AI crawler access and llms.txt support. These are the only checks in the entire scoring system that require live network requests beyond the page itself.
An About page is the clearest signal that a real entity (a person, a company, an organization) is behind the content. LLMs that evaluate source credibility treat the presence of an About page as a baseline trust indicator: anonymous or unidentifiable sources receive lower citation confidence scores.
The analyzer searches for links to About pages by checking both the href attribute and the visible link text. It recognizes English patterns (/about, /about-us) and Greek patterns (/σχετικα, /ποιοι-ειμαστε, /etairia, /εταιρ). Any matching link on the page earns the full +5 points.
A Contact page complements the About page by confirming that the entity behind the content is reachable. For LLMs evaluating source trustworthiness, the combination of an About page and a Contact page significantly increases the likelihood that the content will be treated as authoritative rather than anonymous.
The check recognizes English (/contact, /contact-us) and Greek (/επικοινωνια, /epikoinwnia) patterns in both link targets and visible text. A page that links to a contact form, email address page, or dedicated contact section earns the full +5 points.
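The About and Contact checks share the same logic: scan every anchor on the page and match known path patterns against both the href and the visible link text. The sketch below is an assumption based on the description above, not the analyzer's actual code, and the pattern lists are illustrative (real Greek URLs may also appear percent-encoded).

```python
import re

# Illustrative pattern lists assumed from the description above.
ABOUT_PATTERNS = ["about", "σχετικα", "ποιοι-ειμαστε", "etairia", "εταιρ"]
CONTACT_PATTERNS = ["contact", "επικοινωνια", "epikoinwnia"]

def has_matching_link(html: str, patterns: list[str]) -> bool:
    # Examine every anchor: both its href target and its visible text.
    for m in re.finditer(r'<a\b[^>]*href="([^"]*)"[^>]*>(.*?)</a>',
                         html, re.IGNORECASE | re.DOTALL):
        href, text = m.group(1).lower(), m.group(2).lower()
        if any(p in href or p in text for p in patterns):
            return True  # any single matching link earns the full +5 points
    return False
```

Substring matching keeps the check broad: "about" covers both /about and /about-us without enumerating every variant.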
About and Contact page links together account for 10 points, nearly a quarter of this pillar's 43-point raw score. Both are present in almost every professionally maintained site, yet they are frequently missing from landing pages, microsites, and single-page applications.
Author attribution is a direct content credibility signal. When an LLM can identify who wrote a piece of content, it can apply authorship-based trust weighting, the same mechanism that makes bylined journalism more citable than anonymous web content. For AI-generated answer systems, attributed content is significantly preferred over unattributed content.
The analyzer checks for author-signaling text patterns anywhere in the page body: English patterns ("by", "author", "authored by") and Greek patterns ("συντάκτης", "συγγραφέας", "επιμέλεια", "γράφει"). Any match earns +5 points. The check is deliberately broad: a simple "By [Name]" line above an article is sufficient.
By John Smith · March 2026 → +5 pts
Συντάκτης: Γιάννης Παπαδόπουλος → +5 pts
(no author information anywhere on the page) → 0 pts
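Unlike the link checks, this one scans the page's body text rather than anchors. A minimal sketch, assuming the pattern list above; the function name and exact regexes are illustrative, not the analyzer's implementation:

```python
import re

# Illustrative patterns assumed from the description above.
AUTHOR_PATTERNS = [
    r"\bby\b", r"\bauthor\b", r"authored by",               # English
    r"συντάκτης", r"συγγραφέας", r"επιμέλεια", r"γράφει",   # Greek
]

def author_attribution_score(body_text: str) -> int:
    # Any single match anywhere in the body earns the full +5 points.
    text = body_text.lower()
    return 5 if any(re.search(p, text) for p in AUTHOR_PATTERNS) else 0
```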
Links to social media profiles, particularly LinkedIn, Twitter/X, and Facebook, act as entity verification signals. They connect the content to an identifiable, publicly verifiable presence on established platforms. For LLMs, this cross-referencing increases confidence that the content source is a real, accountable entity rather than an anonymous website.
The analyzer checks for outbound links to any of the five recognized social platforms: linkedin.com, twitter.com, x.com, facebook.com, and instagram.com. A single matching link anywhere on the page earns the full +5 points.
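Unlike the About and Contact checks, which match URL paths, this check matches hostnames. A sketch under the assumption that link hrefs have already been extracted from the page:

```python
from urllib.parse import urlparse

SOCIAL_DOMAINS = {"linkedin.com", "twitter.com", "x.com",
                  "facebook.com", "instagram.com"}

def has_social_link(hrefs: list[str]) -> bool:
    for href in hrefs:
        host = urlparse(href).netloc.lower()
        # Strip a leading "www." so www.linkedin.com matches linkedin.com.
        if host.startswith("www."):
            host = host[4:]
        if host in SOCIAL_DOMAINS:
            return True  # a single match earns the full +5 points
    return False
```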
Alt text serves a dual purpose for LLM visibility: it makes image content accessible to text-only extraction pipelines, and it signals editorial quality. Pages where images have descriptive alt attributes are treated as more carefully produced than pages with empty or missing alt attributes, a quality signal that contributes to source trust evaluation.
The analyzer identifies all meaningful images on the page (filtering out tracking pixels and empty src attributes) and calculates the percentage with descriptive alt text of more than 2 characters. Coverage of 90% or above earns +5 points. Coverage of 50–89% earns +2 points. Below 50% earns 0.
Note: pages where all images appear to be loaded via JavaScript (lazy loading with data-src attributes) receive a neutral result: the analyzer cannot evaluate what it cannot see in the raw HTML.
Alt text: 95% coverage (19/20 images) → +5 pts
Alt text: 65% coverage (13/20 images) → +2 pts
Alt text: 30% coverage (6/20 images) → 0 pts
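The tiered scoring above can be sketched as a small function. The input shape (a list of src/alt dicts) and the zero result for pages with no visible images are assumptions for illustration:

```python
def alt_text_score(images: list[dict]) -> int:
    # images: [{"src": "...", "alt": "..."}]; tracking pixels are
    # assumed to be filtered out upstream.
    meaningful = [img for img in images if img.get("src")]
    if not meaningful:
        return 0  # neutral: nothing visible in the raw HTML to evaluate
    with_alt = [img for img in meaningful
                if len(img.get("alt", "").strip()) > 2]
    coverage = 100 * len(with_alt) / len(meaningful)
    if coverage >= 90:
        return 5
    if coverage >= 50:
        return 2
    return 0
```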
The meta robots tag controls whether search engines and AI crawlers are allowed to index and follow a page. It is the most binary check in the entire scoring system: a page with noindex is explicitly opting out of AI indexing, and no amount of structural or content optimization can compensate for that.
The scoring is deliberately asymmetric. An explicit content="index, follow" earns +5 points. A missing meta robots tag earns +3 (crawlers default to index/follow when the tag is absent). Any other value earns +2. But a noindex directive triggers a −10 penalty, the harshest penalty in the entire scoring system.
<meta name="robots" content="index, follow"> → +5 pts
(no meta robots tag, defaults to index) → +3 pts
<meta name="robots" content="noindex"> → −10 pts
The −10 noindex penalty is intentionally severe. A page that explicitly blocks indexing should not receive a high LLM visibility score: it has actively chosen not to be visible to AI systems.
HTTPS is a baseline trust signal that predates LLM optimization entirely; it has been a ranking factor in traditional SEO since 2014. For AI systems, an HTTP (non-secure) URL is a negative credibility signal: it suggests an outdated, unmaintained, or low-quality source. AI search systems that retrieve and cite content in real time strongly prefer HTTPS sources.
A secure HTTPS URL earns +3 points. An HTTP URL receives a −5 penalty. In 2026, virtually all active websites should be on HTTPS; if yours is not, the certificate migration should be treated as an urgent infrastructure priority, independent of any scoring considerations.
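As a scoring rule this is the simplest check in the pillar, illustrated here as a one-line sketch (the function name is hypothetical):

```python
def https_score(url: str) -> int:
    # +3 for a secure scheme, −5 penalty for anything else.
    return 3 if url.lower().startswith("https://") else -5
```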
The robots.txt file at the root of your domain controls which crawlers can access your content. Since 2023, major AI companies have introduced dedicated crawler bots, such as GPTBot (OpenAI), ClaudeBot (Anthropic), and Anthropic-AI, that respect robots.txt directives. Blocking these crawlers prevents AI systems from including your content in training data and real-time retrieval.
The analyzer fetches your robots.txt file live at analysis time. It checks for specific user-agent entries for GPTBot, ClaudeBot, Anthropic-AI, and GoogleBot, and evaluates whether each is allowed or blocked. If any AI bots are explicitly blocked with Disallow: /, the check receives a −5 penalty. If AI bots are explicitly allowed or no specific block is found, the check earns +5 points.
No GPTBot/ClaudeBot block found → +5 pts
User-agent: GPTBot · Disallow: / → −5 pts
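A simplified sketch of the "explicitly blocked" logic: group user-agent lines into records, then flag any record that names a tracked bot and contains a site-wide Disallow. This is an approximation of the check described above, not the analyzer's code, and it deliberately ignores the wildcard agent (*), since the check only penalizes explicit blocks:

```python
AI_BOTS = {"gptbot", "claudebot", "anthropic-ai", "googlebot"}

def robots_txt_score(robots_txt: str) -> int:
    agents: list[str] = []
    in_rules = False
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if in_rules:                     # a new record begins
                agents, in_rules = [], False
            agents.append(value.lower())
        elif field == "disallow":
            in_rules = True
            if value == "/" and any(a in AI_BOTS for a in agents):
                return -5                    # explicit site-wide AI bot block
    return 5                                 # no explicit block found
```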
The llms.txt file is an emerging standard proposed in 2024 that gives AI systems a structured, human-readable overview of a site's content and purpose. Similar to how robots.txt instructs crawlers on access rules, llms.txt tells LLMs what your site is about, what content is available, and how it should be used.
A typical llms.txt file sits at the root of your domain (yourdomain.com/llms.txt) and contains a brief description of the site, links to key pages, and optionally metadata about content type and licensing. The analyzer fetches your llms.txt live at analysis time. If a valid file with at least 10 characters of content is found, the check earns +5 points.
llms.txt adoption is still early; most sites do not yet have one. That makes it an easy differentiator: implementing a well-structured llms.txt is a 30-minute task that puts your site ahead of the majority of the web for AI discoverability.
yourdomain.com/llms.txt valid file found → +5 pts
yourdomain.com/llms.txt 404 Not Found → 0 pts
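For reference, a minimal llms.txt following the structure of the 2024 proposal (an H1 site name, a blockquote summary, then sections of annotated links). The company, domain, and pages below are placeholders:

```
# Example Company

> Example Company builds invoicing software for small businesses.
> This site contains product documentation and a technical blog.

## Key pages

- [Product overview](https://yourdomain.com/product): what the product does
- [Documentation](https://yourdomain.com/docs): setup and API guides
- [Blog](https://yourdomain.com/blog): technical articles

## Optional

- [Changelog](https://yourdomain.com/changelog): release history
```

Any file of this shape comfortably clears the 10-character validity threshold described above.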
Authority & Trust has a raw maximum of 43 points, normalized to 0–100. Two checks carry severe penalties: a noindex meta robots tag (−10) and a non-secure HTTP URL (−5). The robots.txt and llms.txt checks are fetched live at analysis time and may update the score after the initial calculation.
Most Authority & Trust improvements are infrastructure and configuration changes rather than content edits. Several of them can be completed in under an hour and have permanent, site-wide impact.
Run a free analysis and get a detailed breakdown of every check with specific recommendations for your page.