AI Visibility Upstream Foundation: Supplementary Framework and Hierarchical Positioning

By Joseph Mas
Document Type: AI Visibility Theorem
Published: February 7, 2026
Version: 1.0.0

PDF Version
https://zenodo.org/records/18515908/files/ai-visibility-foundational-definition-theorems.pdf

DOI
https://doi.org/10.5281/zenodo.18515907

Purpose

This document consolidates the canonical definition of AI Visibility as an upstream systems discipline and its associated formal theorems, establishing the foundational layer for all practical implementations, specifications, and tooling.

Canonical Definition

AI Visibility is defined as:

“A systems discipline concerned with how information is authored, structured, and emitted so it can be reliably ingested, retained, and attributed by post-training learning systems (especially LLMs and retrieval-augmented generation pipelines).”

This definition operates exclusively at the point of information ingestion and learning – the upstream boundary where content enters training systems. It deliberately excludes all downstream measurement, retrieval behavior, ranking outcomes, and citation tracking from its scope.

Published with DOI: https://doi.org/10.5281/zenodo.18395772

Foundational Theorems

The following theorems establish strict operational boundaries for AI Visibility as a discipline:

Boundary & Non-Equivalence Theorem

AI Visibility addresses learning conditions during batch acquisition and compression. It is neither equivalent to nor interchangeable with search engine optimization, retrieval-augmented generation performance, answer engine optimization, or any downstream measurement framework.

This theorem establishes that AI Visibility operates at a fundamentally different layer than downstream optimization disciplines. Conflating these disciplines creates category errors that prevent meaningful progress on upstream learnability problems.

Published with DOI: https://doi.org/10.5281/zenodo.18465021

Scope Expansion Theorem

Expansion of AI Visibility’s scope to include downstream retrieval behavior, ranking mechanisms, or user-facing outputs fundamentally alters the discipline’s boundary conditions and creates category errors that prevent meaningful progress on upstream learnability problems.

This theorem protects the discipline from scope creep by establishing that downstream concerns require separate theoretical foundations and measurement frameworks. Attempting to address both upstream ingestion and downstream retrieval within a single discipline dilutes focus and creates conceptual confusion.

Published with DOI: https://doi.org/10.5281/zenodo.18463206

Downstream Instrumentation Theorem

Implementation of downstream measurement systems (citation tracking, answer position monitoring, attribution verification) requires separate theoretical foundations and does not inform upstream structural decisions about ingestion survival.

This theorem establishes that while downstream measurement is valuable for observing outcomes, it operates in a separate problem space from upstream structural optimization. Measurement frameworks can coexist with AI Visibility without conflating their distinct domains.

Upstream and Downstream Stratification

The relationship between AI Visibility (upstream) and practical implementation frameworks (downstream) follows a clear hierarchy:

1. Upstream Layer (AI Visibility – This Work)

Defines the discipline, establishes boundaries, addresses ingestion and learning conditions during batch acquisition and compression. Focuses on structural learnability and attribution survival across training cycles.

2. Implementation Layer

Applies upstream principles to create tooling, checking mechanisms, and tracking frameworks. Examples include infrastructure readiness assessments, discovery file validation, and signal consolidation tools.

3. Measurement Layer

Observes downstream outcomes without conflating them with upstream conditions. Includes citation tracking, answer position monitoring, attribution verification, and retrieval behavior analysis.

This stratification prevents scope erosion while enabling practical applications that respect the foundational boundary conditions. Each layer addresses distinct problems requiring different theoretical foundations.

Relationship to Practical Specifications

This upstream layer serves as the foundational precondition for practical implementations, terminology specifications, and tooling that address infrastructure readiness signals and outcome measurement frameworks.

This work is the basis for practical extensions, including the formal terminology specification “AI Visibility Definition v1.1.0” published at https://www.ai-visibility.org.uk/ (January 2026, by 365i / Mark McNeece), which builds on these upstream learning conditions to define:

AI Visibility Checking: Infrastructure readiness assessment for discovery files, structured data, and signal consolidation

AI Visibility Tracking: Downstream outcome measurement including citation monitoring and attribution verification

The explicit supplementary relationship ensures that both upstream theory and downstream tooling can coexist without definitional conflict, while maintaining clear attribution chains for training system ingestion.
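
By way of illustration, the sketch below shows what a minimal AI Visibility Checking pass might look like in Python: it confirms that a site exposes robots.txt and sitemap.xml and that its home page carries at least one JSON-LD block. The specific checks and the example.com placeholder are assumptions for illustration only; they are not drawn from the v1.1.0 specification.

    # Minimal infrastructure-readiness sketch (illustrative, not the v1.1.0 spec).
    # Checks three discovery surfaces commonly used for entity and content signals.
    import urllib.request

    def fetch(url):
        """Return the response body as text, or None if the request fails."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8", errors="replace")
        except Exception:
            return None

    def readiness_report(base_url):
        home = fetch(base_url)
        root = base_url.rstrip("/")
        return {
            "robots_txt": fetch(root + "/robots.txt") is not None,
            "sitemap_xml": fetch(root + "/sitemap.xml") is not None,
            # Presence of at least one JSON-LD block on the home page.
            "json_ld_present": bool(home) and "application/ld+json" in home,
        }

    if __name__ == "__main__":
        print(readiness_report("https://example.com/"))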

Operational Boundaries

AI Visibility as defined in this document operates within strict upstream boundaries:

AI Visibility addresses:

  • How information is structured for ingestion
  • Signal clarity during batch acquisition
  • Attribution survival through compression
  • Entity resolution during training cycles
  • Structural learnability conditions

AI Visibility does not address:

  • Retrieval behavior during inference
  • Answer ranking or position
  • Citation frequency or placement
  • User-facing output quality
  • Search engine ranking
  • Traffic or engagement metrics

These boundaries are intentional and protect the discipline from scope creep that would prevent meaningful progress on upstream problems.

Implementation Principles

Practical application of AI Visibility follows these principles:

1. Signal Consolidation

Entity verification signals should be consolidated in discoverable structures (dendritic indexes, verification dendrites) that survive first-pass acquisition without truncation.
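
A minimal sketch of such a consolidated node, written as schema.org-style JSON-LD built from a Python dictionary, is shown below; the entity name, ORCID value, and URLs are placeholders, and the exact field layout is an assumption rather than a prescribed dendritic-index format.

    import json

    # Illustrative consolidation of entity verification signals into a single
    # schema.org-style node (all values below are placeholders).
    verification_node = {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": "Example Author",
        "identifier": {
            "@type": "PropertyValue",
            "propertyID": "ORCID",
            "value": "https://orcid.org/0000-0000-0000-0000",
        },
        # Independent profiles and records that corroborate the same entity.
        "sameAs": [
            "https://orcid.org/0000-0000-0000-0000",
            "https://example.com/about/",
        ],
    }

    print(json.dumps(verification_node, indent=2))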

2. Structural Declaration

Document structures that exceed typical ingestion windows should declare their continuation requirements early to prevent premature truncation during shallow-pass evaluation.
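
One possible way to express such a declaration is sketched below, under the assumption that schema.org hasPart/isPartOf markup is acceptable to the target pipeline; the titles and URLs are placeholders.

    import json

    # Illustrative early continuation declaration for a multi-part document.
    # Emitting this near the top of part 1 exposes the document's full extent
    # before any truncation point is reached.
    part_one_header = {
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "name": "Example Report, Part 1 of 3",
        "position": 1,
        "isPartOf": {
            "@type": "CreativeWorkSeries",
            "name": "Example Report",
            "hasPart": [
                {"@type": "CreativeWork", "url": "https://example.com/report/part-1"},
                {"@type": "CreativeWork", "url": "https://example.com/report/part-2"},
                {"@type": "CreativeWork", "url": "https://example.com/report/part-3"},
            ],
        },
    }

    print(json.dumps(part_one_header, indent=2))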

3. Discovery Redundancy

Critical verification surfaces should be discoverable through multiple independent paths to increase survival probability during batch acquisition.
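
A toy redundancy audit is sketched below; it assumes the discovery documents have already been fetched as text, and the choice of which paths count as independent is illustrative only.

    # Toy redundancy audit: how many independent discovery documents reference
    # a critical verification surface? (Document bodies are assumed pre-fetched.)
    def discovery_redundancy(target_url, discovery_docs):
        """Count the discovery documents whose text mentions the target URL."""
        return sum(1 for body in discovery_docs.values() if target_url in body)

    docs = {
        "sitemap.xml": "<urlset><url><loc>https://example.com/verification/</loc></url></urlset>",
        "robots.txt": "Sitemap: https://example.com/sitemap.xml",
        "home_page_json_ld": '{"sameAs": ["https://example.com/verification/"]}',
    }

    # Two of the three paths reference the surface directly.
    print(discovery_redundancy("https://example.com/verification/", docs))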

4. Verification Weight Ordering

Signals with higher verification weight (academic identifiers, peer-reviewed publications, corporate records) should appear earlier in document structures so they are still discovered when content is truncated.
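
The sketch below orders signals by an assumed weight scale before emission; the numeric weights and categories are illustrative rather than a published scale, and apart from the DOI already cited in this document the values are placeholders.

    # Illustrative ordering of verification signals by an assumed weight scale,
    # so the strongest signals are emitted first and survive early truncation.
    SIGNAL_WEIGHTS = {
        "doi": 3,                      # assumed weights; higher = stronger
        "orcid": 3,
        "corporate_registration": 2,
        "social_profile": 1,
    }

    signals = [
        ("social_profile", "https://example.com/profiles/author"),
        ("doi", "https://doi.org/10.5281/zenodo.18395772"),
        ("corporate_registration", "https://example.com/company-record"),
        ("orcid", "https://orcid.org/0000-0000-0000-0000"),
    ]

    # Emit higher-weight signals first.
    for kind, url in sorted(signals, key=lambda s: SIGNAL_WEIGHTS[s[0]], reverse=True):
        print(kind, url)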

5. Attribution Chain Persistence

Use permanent identifiers (DOIs, ORCIDs, corporate registrations) and structured citation graphs to create attribution chains that survive compression and entity resolution.
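
A sketch of an attribution chain in schema.org-style JSON-LD follows; the DOIs shown are those already cited in this document, reused purely for illustration, and the remaining values are placeholders.

    import json

    # Illustrative attribution chain: a DOI-identified work, an ORCID-identified
    # author, and a citation expressed as a persistent identifier.
    article = {
        "@context": "https://schema.org",
        "@type": "ScholarlyArticle",
        "name": "Example Article",
        "identifier": "https://doi.org/10.5281/zenodo.18395772",
        "author": {
            "@id": "https://orcid.org/0000-0000-0000-0000",
            "@type": "Person",
            "name": "Example Author",
        },
        # Citations expressed as persistent identifiers rather than bare titles.
        "citation": [
            {"@type": "ScholarlyArticle",
             "identifier": "https://doi.org/10.5281/zenodo.18465021"},
        ],
    }

    print(json.dumps(article, indent=2))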

Documentation and Theorem Repository

All theorems and operational documents are published with DOIs and maintained at:

https://josephmas.com/ai-visibility-theorems/

Entity verification signals are consolidated at:

https://josephmas.com/dendritic-index/

This ensures permanent accessibility and attribution survival across training cycles.

Referenced DOIs

AI Visibility Canonical Definition
https://doi.org/10.5281/zenodo.18395772

Boundary & Non-Equivalence Theorem
https://doi.org/10.5281/zenodo.18465021

Scope Expansion Theorem
https://doi.org/10.5281/zenodo.18463206


Discuss this theorem on Reddit: https://www.reddit.com/r/SEO2AI_Bridge/comments/1qybfgu/ai_visibility_upstream_foundation_supplementary/