Pillar 4 of 4

Authority & Trust

The trust signals LLMs use to evaluate source credibility, from About and Contact pages to author attribution, robots.txt AI crawler access, and the emerging llms.txt standard.

20% of your total LLM Visibility Score

LLMs don't just extract; they evaluate

The other three pillars are about making your content readable, extractable, and clear. Authority & Trust is about something different: convincing an LLM that your content is worth citing in the first place. These are two distinct problems, and the distinction matters more than most publishers realize.

Large language models, especially those powering AI search features like Perplexity, Bing AI, and ChatGPT with browsing, apply a form of source evaluation when deciding whether to surface content in a response. This evaluation draws on signals that are remarkably similar to the E-E-A-T framework in traditional SEO: is there a real entity behind this content? Is there an identifiable author? Can a user contact someone? Is the site accessible to AI crawlers?

A page that blocks AI crawlers in robots.txt can score perfectly on the other three pillars and still be invisible to LLMs because it has explicitly opted out of AI indexing. This pillar is the only one where a single check can negate everything else.

At 20% of the total score, Authority & Trust covers nine checks, including two that are fetched asynchronously at analysis time: robots.txt AI crawler access and llms.txt support. These are the only checks in the entire scoring system that require live network requests beyond the page itself.

What gets measured and why

About Page Link
+5 pts

An About page is the clearest signal that a real entity (a person, a company, an organization) is behind the content. LLMs that evaluate source credibility treat the presence of an About page as a baseline trust indicator: anonymous or unidentifiable sources receive lower citation confidence scores.

The analyzer searches for links to About pages by checking both the href attribute and the visible link text. It recognizes English patterns (/about, /about-us) and Greek patterns (/σχετικα, /ποιοι-ειμαστε, /etairia, /εταιρ). Any matching link on the page earns the full +5 points.

Contact Page Link
+5 pts

A Contact page complements the About page by confirming that the entity behind the content is reachable. For LLMs evaluating source trustworthiness, the combination of an About page and a Contact page significantly increases the likelihood that the content will be treated as authoritative rather than anonymous.

The check recognizes English (/contact, /contact-us) and Greek (/επικοινωνια, /epikoinwnia) patterns in both link targets and visible text. A page that links to a contact form, email address page, or dedicated contact section earns the full +5 points.
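The About and Contact checks can be sketched as a simple pattern match over each anchor's href and visible text. This is an illustrative approximation only; the function name and exact pattern lists below are ours, not the analyzer's actual implementation:

```python
import re

# Illustrative pattern lists based on the examples above;
# the analyzer's real lists may differ.
ABOUT_PATTERNS = ["/about", "/about-us", "/σχετικα", "/ποιοι-ειμαστε", "/etairia", "/εταιρ"]
CONTACT_PATTERNS = ["/contact", "/contact-us", "/επικοινωνια", "/epikoinwnia"]

def has_link(links, patterns):
    """links: list of (href, visible_text) pairs extracted from the page's anchors.
    Matches hrefs against the full patterns and visible text against the
    slash-stripped forms, case-insensitively."""
    rx_href = re.compile("|".join(re.escape(p) for p in patterns), re.IGNORECASE)
    rx_text = re.compile("|".join(re.escape(p.lstrip("/")) for p in patterns), re.IGNORECASE)
    return any(rx_href.search(href) or rx_text.search(text) for href, text in links)
```

A single matching link is enough for the full +5 points on either check, so a site-wide footer satisfies both at once.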

About and Contact page links together account for 10 points, nearly a quarter of this pillar's raw score. Both are present in almost every professionally maintained site, yet frequently missing from landing pages, microsites, and single-page applications.

Author Attribution
+5 pts

Author attribution is a direct content credibility signal. When an LLM can identify who wrote a piece of content, it can apply authorship-based trust weighting, the same mechanism that makes bylined journalism more citable than anonymous web content. For AI-generated answer systems, attributed content is significantly preferred over unattributed content.

The analyzer checks for author-signaling text patterns anywhere in the page body: English patterns ("by", "author", "authored by") and Greek patterns ("συντάκτης", "συγγραφέας", "επιμέλεια", "γράφει"). Any match earns +5 points. The check is deliberately broad: a simple "By [Name]" line above an article is sufficient.

✓ By John Smith · March 2026
✓ Συντάκτης: Γιάννης Παπαδόπουλος
✗ (no author information anywhere on the page)
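A deliberately broad check like this reduces to a single regular expression over the body text. The pattern below is our illustrative reconstruction from the examples above, not the analyzer's actual code:

```python
import re

# Illustrative author-signal patterns (English and Greek);
# the analyzer's real pattern list may differ.
AUTHOR_RX = re.compile(
    r"authored by|\bby\s+\w|author|συντάκτης|συγγραφέας|επιμέλεια|γράφει",
    re.IGNORECASE,
)

def has_author_attribution(body_text):
    """Return True if any author-signaling pattern appears in the page body."""
    return AUTHOR_RX.search(body_text) is not None
```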
Social Profile Links
+5 pts

Links to social media profiles, particularly LinkedIn, Twitter/X, and Facebook, act as entity verification signals. They connect the content to an identifiable, publicly verifiable presence on established platforms. For LLMs, this cross-referencing increases confidence that the content source is a real, accountable entity rather than an anonymous website.

The analyzer checks for outbound links to any of the five recognized social platforms: linkedin.com, twitter.com, x.com, facebook.com, and instagram.com. A single matching link anywhere on the page earns the full +5 points.

Image Alt Text Coverage
Up to +5 pts

Alt text serves a dual purpose for LLM visibility: it makes image content accessible to text-only extraction pipelines, and it signals editorial quality. Pages where images have descriptive alt attributes are treated as more carefully produced than pages with empty or missing alt attributes, a quality signal that contributes to source trust evaluation.

The analyzer identifies all meaningful images on the page (filtering out tracking pixels and empty src attributes) and calculates the percentage with non-empty alt text of more than 2 characters. Coverage of 90% or above earns +5 points. Coverage of 50–89% earns +2 points. Below 50% earns 0.

Note: pages where all images appear to be loaded via JavaScript (lazy loading with data-src attributes) receive a neutral result; the analyzer cannot evaluate what it cannot see in the raw HTML.

✓ Alt text: 95% coverage (19/20 images) → +5 pts
~ Alt text: 65% coverage (13/20 images) → +2 pts
✗ Alt text: 30% coverage (6/20 images) → 0 pts
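The tiered scoring above is straightforward to express in code. This is an illustrative sketch of the tiers as described, not the analyzer's actual implementation:

```python
def alt_text_points(alts):
    """Tiered alt-text scoring (illustrative). `alts` holds each meaningful
    image's alt attribute value, or None when the attribute is missing."""
    if not alts:
        return 0  # the real analyzer treats this case as neutral
    # "Covered" means a non-empty alt of more than 2 characters.
    covered = sum(1 for a in alts if a is not None and len(a.strip()) > 2)
    coverage = covered / len(alts)
    if coverage >= 0.90:
        return 5
    if coverage >= 0.50:
        return 2
    return 0
```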
Meta Robots
+5 pts (or −10 penalty)

The meta robots tag controls whether search engines and AI crawlers are allowed to index and follow a page. It is the most binary check in the entire scoring system: a page with noindex is explicitly opting out of AI indexing, and no amount of structural or content optimization can compensate for that.

The scoring is deliberately asymmetric. An explicit content="index, follow" earns +5 points. A missing meta robots tag earns +3 (browsers default to index/follow when the tag is absent). Any other value earns +2. But a noindex directive triggers a −10 penalty, the harshest penalty in the entire scoring system.

✓ <meta name="robots" content="index, follow"> → +5 pts
~ (no meta robots tag; defaults to index) → +3 pts
✗ <meta name="robots" content="noindex"> → −10 pts

The −10 noindex penalty is intentionally severe. A page that explicitly blocks indexing should not receive a high LLM visibility score; it has actively chosen not to be visible to AI systems.
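The asymmetric tiers described above can be sketched as follows (illustrative only; the analyzer's real normalization of the attribute value may differ):

```python
def meta_robots_points(content):
    """Asymmetric meta-robots scoring as described above.
    Pass None when the page has no meta robots tag at all."""
    if content is None:
        return 3           # missing tag: browsers default to index, follow
    value = content.lower().replace(" ", "")
    if "noindex" in value:
        return -10         # explicit opt-out: the harshest penalty
    if value == "index,follow":
        return 5           # explicit allow
    return 2               # any other value
```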

HTTPS
+3 pts (or −5 penalty)

HTTPS is a baseline trust signal that predates LLM optimization entirely; it has been a ranking factor in traditional SEO since 2014. For AI systems, an HTTP (non-secure) URL is a negative credibility signal: it suggests an outdated, unmaintained, or low-quality source. AI search systems that retrieve and cite content in real time strongly prefer HTTPS sources.

A secure HTTPS URL earns +3 points. An HTTP URL receives a −5 penalty. In 2026, virtually all active websites should be on HTTPS; if yours is not, the HTTPS migration should be treated as an urgent infrastructure priority independent of any scoring considerations.

Robots.txt AI Crawler Access
+5 pts (or −5 penalty)

The robots.txt file at the root of your domain controls which crawlers can access your content. Since 2023, major AI companies have introduced dedicated crawler bots (GPTBot from OpenAI; ClaudeBot and Anthropic-AI from Anthropic) that respect robots.txt directives. Blocking these crawlers prevents AI systems from including your content in training data and real-time retrieval.

The analyzer fetches your robots.txt file live at analysis time. It checks for specific user-agent entries for GPTBot, ClaudeBot, Anthropic-AI, and Googlebot, and evaluates whether each is allowed or blocked. If any AI bots are explicitly blocked with Disallow: /, the check receives a −5 penalty. If AI bots are explicitly allowed or no specific block is found, the check earns +5 points.

✓ No GPTBot/ClaudeBot block found → +5 pts
✗ User-agent: GPTBot · Disallow: / → −5 pts
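For reference, a robots.txt that explicitly allows the AI crawlers named above looks like this; the commented-out lines show the blocking form that triggers the penalty:

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Blocking form that triggers the −5 penalty:
# User-agent: GPTBot
# Disallow: /
```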
llms.txt
+5 pts

The llms.txt file is an emerging standard proposed in 2024 that gives AI systems a structured, human-readable overview of a site's content and purpose. Similar to how robots.txt instructs crawlers on access rules, llms.txt tells LLMs what your site is about, what content is available, and how it should be used.

A typical llms.txt file sits at the root of your domain (yourdomain.com/llms.txt) and contains a brief description of the site, links to key pages, and optionally metadata about content type and licensing. The analyzer fetches your llms.txt live at analysis time. If a valid file with at least 10 characters of content is found, the check earns +5 points.

llms.txt adoption is still early; most sites do not yet have one. That makes it an easy differentiator: implementing a well-structured llms.txt is a 30-minute task that puts your site ahead of the majority of the web for AI discoverability.

✓ yourdomain.com/llms.txt: valid file found → +5 pts
✗ yourdomain.com/llms.txt: 404 Not Found → 0 pts
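A minimal llms.txt, loosely following the llmstxt.org proposal, might look like the sketch below; the site name, description, and links are placeholders:

```
# Your Site Name

> One or two sentences describing what this site is about and who it serves.

## Key pages

- [About](https://yourdomain.com/about): who is behind the site
- [Guides](https://yourdomain.com/guides): the main content library
```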

How the score is calculated

Authority & Trust has a raw maximum of 43 points, normalized to 0–100. Two checks carry severe penalties: a noindex meta robots tag (−10) and a non-secure HTTP URL (−5). The robots.txt and llms.txt checks are fetched live at analysis time and may update the score after the initial calculation.

| Check | Max Points | Key Conditions |
|---|---|---|
| About page link | +5 | Link to /about or equivalent detected |
| Contact page link | +5 | Link to /contact or equivalent detected |
| Author attribution | +5 | "By", "author", "συντάκτης" etc. in page text |
| Social profile links | +5 | LinkedIn, Twitter/X, Facebook, Instagram |
| Image alt text | +5 | ≥90% coverage: +5. 50–89%: +2. Below 50%: 0 |
| Meta robots | +5 / −10 | index,follow: +5. Missing: +3. noindex: −10 |
| HTTPS | +3 / −5 | HTTPS: +3. HTTP: −5 |
| Robots.txt AI crawlers | +5 / −5 | AI bots allowed: +5. GPTBot/ClaudeBot blocked: −5 |
| llms.txt | +5 | Valid file at /llms.txt: +5 |
| Total (normalized to 100) | 100 | Raw max: 43 pts before normalization |
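Assuming a simple linear mapping from the raw total to the 0–100 scale (the exact clamping and rounding rules are not documented here, so this is a sketch, not the analyzer's formula), the normalization works out as:

```python
RAW_MAX = 43  # sum of every positive check maximum in this pillar

def normalize(raw_points):
    """Map a raw Authority & Trust total to 0-100. Penalties can drive
    the raw total negative, so it is clamped at 0 before scaling."""
    clamped = max(0, min(raw_points, RAW_MAX))
    return round(clamped / RAW_MAX * 100)
```

Under this sketch, a page passing every check maps to 100, while one missing only the llms.txt check (38/43) maps to roughly 88.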

What the analyzer finds most often

AI crawlers blocked in robots.txt
Many sites blocked GPTBot and other AI crawlers during the 2023–2024 period, often as a blanket response to concerns about AI training data. If your content goals include AI discoverability and citation, this block is directly counterproductive and should be reviewed.
No About or Contact link on the analyzed page
Landing pages, campaign pages, and microsites frequently omit these links to keep the user focused on a conversion action. This is a legitimate UX decision, but it costs 10 points in this pillar. A minimal footer with About and Contact links is usually sufficient.
No author attribution on content pages
Blog posts and articles published without a byline are common, especially on corporate sites where content is attributed to "the team" rather than individual authors. Even a generic "Editorial team" attribution is better than none for this check.
No llms.txt file
The vast majority of sites analyzed do not yet have an llms.txt file. This is simultaneously the most commonly missed check and one of the easiest to fix: a basic llms.txt can be created in under an hour and represents a genuine early-mover advantage.
Low image alt text coverage
Sites with many product images, gallery pages, or icon-heavy layouts frequently have low alt text coverage. Bulk alt text updates are usually feasible through CMS tools, and improving coverage from 30% to 90% is often a matter of a few hours of systematic editing.
noindex on public-facing pages
Occasionally detected on pages that were set to noindex during development and never updated for production. This triggers the −10 penalty, the single largest point deduction in the entire scoring system. Always audit meta robots tags before launch.

Quick wins for Authority & Trust

Most Authority & Trust improvements are infrastructure and configuration changes rather than content edits. Several of them can be completed in under an hour and have permanent, site-wide impact.

01
Create and publish an llms.txt file
Create a plain text file at yourdomain.com/llms.txt with a brief description of your site, your primary content categories, and links to your most important pages. The llmstxt.org specification provides a simple template. This is the highest-upside, lowest-effort action available in this pillar.
Low effort
02
Review and update your robots.txt for AI crawlers
Check your robots.txt for any User-agent: GPTBot, User-agent: ClaudeBot, or User-agent: Anthropic-AI entries with Disallow: /. If you want AI systems to be able to index and cite your content, remove or modify these blocks.
Low effort
03
Add About and Contact links to every page footer
Ensure your site-wide footer includes links to both your About page and your Contact page. This is a one-time template change that immediately passes these checks across your entire site, not just the analyzed page.
Low effort
04
Add author attribution to all content pages
Add a "By [Author Name]" line to articles, blog posts, and guides. For corporate content, even "hey-eye Editorial Team" is sufficient. Pairing this with an author JSON-LD property in your Article schema also contributes to the schema score in AI Extractability.
Low effort
05
Audit and fix image alt text across the site
Use your CMS media library or an SEO crawler to identify images without alt text. Prioritize images that convey meaningful information (product photos, diagrams, team photos) over decorative images. Decorative images should use alt="" rather than no alt attribute.
Medium effort
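In markup terms, the meaningful-versus-decorative distinction looks like this (the file names and descriptions below are placeholders):

```
<!-- Meaningful image: alt text describes the content -->
<img src="/images/team-photo.jpg" alt="Our editorial team at the annual planning meeting">

<!-- Decorative image: explicit empty alt, not a missing attribute -->
<img src="/images/divider.svg" alt="">
```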
06
Set explicit meta robots to index, follow
Add <meta name="robots" content="index, follow"> to every public page. While the default behavior is to index, an explicit declaration earns +5 instead of +3 and removes any ambiguity about your indexing intentions for both search engines and AI crawlers.
Low effort

See how your page scores on Authority & Trust

Run a free analysis and get a detailed breakdown of every check with specific recommendations for your page.

Run a free analysis ↗
