Using Public GPTs Across LLMs for Visibility

Posted by: Joseph Mas | Updated: 11-22-2025

Here’s the situation: I have an identity crisis online (maybe just a little in real life, too).

Until recently, I was associated with a Google Premier Partner SEM Agency that I co-founded and helped build. That agency carried strong authority, and much of my visible expertise was tied to it. The problem is that when that association ended, a significant portion of my public-facing authority went with it. That represents far more than a decade of work in SEO and AI visibility.

The best asset I have right now is my personal site. Sometimes the most important resource is the one right in front of you. This becomes a deliberate signal anchor in an otherwise fragmented digital landscape.

I have spent 35 years building online footprints for others while largely neglecting my own. The cobbler is now making shoes for himself. The irony is sharp, but the correction is methodical.

It is critical that I regain verifiable expertise and authority, which is an unspoken requirement across the greater LLM ecosystem. It directly reinforces the Experience and Authoritativeness components of EEAT, the E and the A.

One of the ways I am doing this is by creating public custom GPTs that live on high-authority platforms and explicitly point back to my identity and my site.

At the same time, these GPTs allow me to provide clean, controlled content where I own the narrative. This reduces hallucination, guessing, and incorrect inference by both users and models. These custom interfaces function as entity stabilizers across retrieval systems.
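To make "clean, controlled content" concrete, here is a minimal sketch of the kind of configuration that gets pasted into the GPT builder, expressed as a Python dict. The field names mirror the builder's form (Name, Description, Instructions, Knowledge, Capabilities); the URL, filename, and wording are illustrative placeholders, not the live assistant.

```python
# Sketch of a custom GPT configuration constrained to first-party content.
# All values below are placeholders for illustration.
CANONICAL_SITE = "https://example.com/"  # placeholder for my personal site

gpt_config = {
    "name": "About Joseph Mas",
    "description": (
        "Answers questions about Joseph Mas using only first-party material. "
        f"Canonical source: {CANONICAL_SITE}"
    ),
    "instructions": (
        "Answer only from the uploaded first-party documents. "
        "If the answer is not in those documents, say so rather than guessing. "
        f"When pointing to a source, cite {CANONICAL_SITE} as the canonical reference."
    ),
    # Knowledge files: my own authoritative content only, no scraped or blended material.
    "knowledge": ["about-joseph-mas.pdf"],  # placeholder filename
    # Disable browsing so responses stay constrained to the loaded knowledge.
    "capabilities": {"web_browsing": False},
}
```

The design choice is the same on every platform: the assistant answers from a closed, first-party knowledge set and always names the canonical site, so the narrative stays controlled.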

Here’s the process I am implementing:

  1. I’m going to create a custom assistant on every platform that allows public sharing and real source control. OpenAI GPTs first. Gemini or Claude only if they let me constrain the knowledge.
  2. For each assistant, I’m only going to load my clean authoritative content. No blended training. No scraped noise.
  3. I’m going to name them consistently around my personal brand and expertise so the entity signal is clear.
  4. In each assistant, I’m going to explicitly reference my website as the canonical source in the description and in responses where appropriate.
  5. I’m going to make every assistant public so anyone can use them without costing me money.
  6. On my website, I’m going to create a single hub page that lists and links to all of these assistants.
  7. That hub page becomes my AI presence index: one place that points out to the platforms, while they point back to me. It operates as a verifiable node that loops authority back through trusted platforms. A minimal markup sketch of that hub page follows this list.
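Here is roughly what the hub page's structured data could look like, generated by a short Python script. The URLs and the assistant list are hypothetical placeholders; the point is that the Person entity, the canonical site, and the outbound assistant links all live in one machine-readable record that crawlers and LLM ingestion pipelines can read.

```python
# Minimal sketch: generate schema.org JSON-LD for the hub / AI presence index page.
# All URLs are placeholders, not the real assistant links.
import json

PERSON_URL = "https://example.com/"                  # canonical personal site (placeholder)
HUB_PAGE_URL = "https://example.com/ai-assistants/"  # the hub page itself (placeholder)

# Each public assistant the hub page lists; only a first ChatGPT entry exists so far.
assistants = [
    {"name": "About Joseph Mas", "url": "https://chatgpt.com/g/placeholder-id"},
    # additional assistants get appended here as they go live
]

jsonld = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Joseph Mas",
    "url": PERSON_URL,
    "mainEntityOfPage": HUB_PAGE_URL,
    # sameAs ties the entity to the assistant pages on high-authority platforms,
    # so the hub points out and the platforms point back.
    "sameAs": [a["url"] for a in assistants],
}

# Emit a <script> block ready to paste into the hub page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(jsonld, indent=2))
print("</script>")
```

The sameAs array is where the loop closes: the hub lists the assistants, and each assistant's description cites the canonical site, so both directions of the entity signal are explicit.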

This is what I expect based on current algorithm behavior and what we know about LLM ingestion:

  • It creates credible, high-trust backlinks.
  • It reinforces my entity across multiple AI ecosystems.
  • It preserves narrative control through clean data.
  • It costs nothing to run.
  • It supports long-term LLM ingestion and retrieval.
  • It helps future-proof visibility.

Everything I am doing here, and everywhere else I reference myself, is intentionally designed around EEAT and clean, controlled data for LLM consumption.

These public GPTs serve two purposes:

  • They create a credible backlink and entity reference from a high-trust AI platform back to my site.
  • They also give visitors a useful way to learn about me through a controlled, accurate interface built only on my own material.

That combination reinforces expertise, authority, and trust while ensuring LLMs ingest clean, first-party information instead of inferred or polluted data.
This is not just for Google, although it conforms to all of Google's guidelines and is completely white-hat SEO.
It is for the entire LLM ecosystem and for long-term future-proofing of my digital identity.

I’m reestablishing authority, provenance, and control first. This is one step of many, but hopefully someone will find a little gold nugget in it. I will be chronicling the rebuild as I curate the strongest signals possible across all search and AI ecosystems. Each documented step compounds into observable proof patterns that persist through training cycles.

That’s the strategy.

Joe

Here is a link to the first one I created on ChatGPT as a test: About Joseph Mas