by Joseph Mas
Nikola Tesla once said:
The scientific man does not aim at an immediate result. He does not expect that his advanced ideas will be readily taken up. His duty is to lay the foundation for those who are to come and point the way.
That idea is the spine of this entire narrative.
Laying the Groundwork Before the Market Shift
In mid-November 2025 I released a detailed framework explaining how to prepare digital content for large language models. The purpose was simple: give LLM systems clean, structured, verifiable signals so that they can identify a real entity without blending it with others.
The method described how to produce semantic fingerprints that an LLM can reliably pull from.
- Entity anchors with domain authority and trust indicators
- Authentic videos and audio converted into clean transcripts
- Structured pages using JSON-LD, Q and A sections, and internal links
- Distributed posts across twenty to thirty platforms to form a lattice of signals
- Weekly cycles that reinforce trust without flooding the ecosystem
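To make the first item concrete, here is a minimal sketch of an entity anchor expressed as JSON-LD of the kind the framework describes. The name, URLs, and topics below are hypothetical placeholders, not values from the original article:

```python
import json

# A minimal "semantic fingerprint" for an entity anchor, as JSON-LD.
# All names and URLs are hypothetical placeholders.
entity_anchor = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",              # the real entity being anchored
    "url": "https://example.com/about",  # canonical page carrying domain authority
    "sameAs": [                          # distributed signals across other platforms
        "https://www.linkedin.com/in/jane-example",
        "https://bsky.app/profile/jane-example.example.com",
    ],
    "knowsAbout": ["technical SEO", "LLM ingestion"],
}

markup = json.dumps(entity_anchor, indent=2)
# Embed `markup` in the page inside a <script type="application/ld+json"> tag
print(markup)
```

The `sameAs` array is where the lattice idea shows up in markup: each cross-platform profile is one more corroborating signal that the same real entity is behind them all.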
Full article link: A Practical Framework for LLM Consumption
The purpose was not to chase trends. The purpose was to create an ingestion architecture that mirrors how LLMs actually process and verify information.
Initial Pushback
Within days the topic surfaced on Bluesky when Lily Ray asked John Mueller whether publishers should create separate pages in formats like markdown or JSON for LLM use. Mueller answered that normal pages were enough and that no special formats were needed. The conversation circulated widely, and I initially interpreted it as skepticism toward the approach I had outlined.
For readers who are not deep in the search world, it helps to understand who they are. John Mueller is one of Google’s most trusted public-facing advisors on search practices. Lily Ray is one of the most respected analysts in the industry. Their roles place them at the center of how guidance is interpreted and how confusion is prevented. They protect the domain, which is not an easy job.
Their response made sense. They are the stabilizers of the field. They set the floor. They keep the ecosystem orderly. They filter out noise so the guidance they give remains clear and reliable for the people who depend on it. Their caution is not opposition. Their caution is part of the responsibility they carry.
I also want to clarify that the LLM cards described in the framework were misinterpreted by some as shadow pages. In Google terms, shadow pages are deceptive doorway pages created mainly for search engines, often used to manipulate rankings or push users through redirects. That is not what this method does. The framework was designed for the broader LLM ecosystem, not as a trick for any single search provider.
The Timeline: What Happened and When
This is the critical sequence of events.
November 18 2025
Google announced Gemini 3 with deeper AI integration in Search and in AI Mode. Early previews showed strength but also showed clear limits in verifiable data sourcing.
Source: https://blog.google/products/gemini/google-gemini-ai-update/
November 19 2025
I published the LLM ingestion framework. It explained entity anchors, structured patterns, JSON fingerprints, and how a lattice creates clarity for the entire model ecosystem.
November 23 and 24 2025
The Bluesky exchange between Lily Ray and John Mueller circulated. Mueller repeated that LLMs use normal pages and that no special preparation was needed. Some in the industry interpreted this as contradicting the framework.
November 25 2025
I replied on my site and on LinkedIn. The point was simple. Google is one consumer. LLMs are the ecosystem. The goal is not to satisfy one platform. The goal is to give clean fingerprints to every model consuming the open web.
December 1 2025
Google began testing inline citations inside AI Mode. These citations resembled the behavior of AI Overviews but were more selective and more tied to entity identity.
December 5 2025
The Gemini API release notes confirmed expanded structured data handling. Billing for advanced AI Mode features was scheduled for January 2026.
Link: https://ai.google.dev/
December 8 2025
Google began testing citation cards inside Search Live when used through voice. These cards pulled from identifiable entity anchors in order to reduce spam, reduce blending, and improve verification.
The entire chain of events mirrors the exact sequence predicted in the original framework. Clean identity markers. Structured pages. Verified sources. Distributed signals. Semantic fingerprints. All appearing within three weeks of the publication of the framework.
The LLMs.txt Reversal
This is pretty much the smoking gun…
On December 3rd, 2025, less than ten days after the Bluesky discussion, Google quietly added llms.txt files to multiple properties including Search Central, Chrome documentation, Firebase, Flutter, and the Gemini API documentation.
Lidia Infante, who is a recognized expert in the digital marketing and SEO community, noticed the implementation and tagged John Mueller on Bluesky asking whether this represented an endorsement of the llms.txt standard. Mueller’s response was “hmmn :-/” (John is very humble imo, just a good guy all around. His reply made me smile.)
Within hours, Google removed the llms.txt file from Search Central (it now returns a 404; I have a special bond with 404s, as I’m currently one myself), but the implementations on ai.google.dev/gemini-api, developer.chrome.com, Firebase, and Flutter remain active.
This sequence reveals what was happening behind the scenes. While public guidance suggested llms.txt was unnecessary, Google’s own engineering teams were implementing it across their developer properties. The framework described what Google’s infrastructure teams were already building.
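For readers unfamiliar with the standard, llms.txt is a community proposal (llmstxt.org) for a markdown file served at a site’s root that gives language models a curated map of the site: an H1 title, a short blockquote summary, and sections of annotated links. A hypothetical minimal example, with all names and links invented for illustration:

```markdown
# Example Docs

> Concise documentation for the hypothetical Example project, curated for LLM consumption.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first steps
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md)
```

The format is deliberately plain markdown so that a model (or a crawler feeding one) can ingest it without special parsing.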
Real World Validation
One early implementer ran the full cycle for ninety-three days. During that time organic traffic stayed relatively flat, yet phone calls rose nearly three hundred percent. We don’t yet have tools for solid attribution metrics, but all signs point to LLM systems routing high-intent users directly to the verified entity instead of pushing them through traditional search funnels.
This is the new pattern. In an environment with hundreds of millions of daily LLM interactions, trust signals win. Clarity wins.
Advanced Ideas Are Rarely Embraced When They Appear
Tesla understood that advanced ideas are rarely embraced at the moment they appear. His work created the foundation for entire generations of technology long before they gained widespread acceptance.
The same pattern applies here. The framework was released, questioned, and then confirmed by the largest search engine on the planet within a matter of weeks. The sequence shows where the future is already heading. In nerd speak, it comes down to the data layer.
Clean structured signals are becoming the new currency of digital identity.
Full framework article: A Practical Framework for LLM Consumption
Also see: Practical Application of the LLM Ingestion Framework
Also see: Clean Data Beats Clever Prompts
