It’s a big deal that Google has finally published some official information on how to optimize for AI-Search, aka AI-SEO, or what some are calling AEO (Answer Engine Optimization) or GEO (Generative Engine Optimization). While I personally hate both AEO and GEO as acronyms, that’s only because of the amount of bullshit people try to sell you under those acronyms.
There’s a lot of misinformation out there, and the “GEO/AEO” clean up has already begun. More on that at the bottom.
Too Lazy to Read Google’s Posts?
If you’re too lazy to read Google’s official posts, I’ve got you with the TL;DR:
- SEO is still the foundation for AI search, mostly because of RAG (Retrieval-Augmented Generation).
  - The only thing you have to know here is that all the prominent LLMs are built on RAG. But WTF is RAG?
  - LLMs like ChatGPT, Claude, Perplexity, etc. basically validate their answers by cross-referencing them with Google, Bing, YouTube, etc.
  - Also, LLM models are frozen, so RAG is the part you can optimize for.
  - And you optimize for RAG with SEO. But think beyond Google: any platform with a search bar counts.
- Google sees optimizing for AI search as SEO, so they’re still calling it SEO.
  - Aside from the GEO snake oil salesmen, this is what the community has been pushing for.
  - So kudos to Google for at least doing us one solid.
- Create non-commodity, “people-first” content.
  - I’ll also add: there’s no such thing as “LLM or AI preferred content.”
  - And anyone selling you on the concept that there is is full of it, because that’s not how LLMs get trained.
  - In short, LLMs learn by digesting massive volumes of data, much like your years of academia. And if you’re distributing content to multiple surfaces, you’ve likely got the content portion of “LLM content” covered.
  - LLMs are not like Google or other search engines. Search engines index pages and match them to search queries; LLMs don’t index anything. Instead, they form a general consensus based on their training data.
- Ignore most “GEO/AEO” hacks like chunking and llms.txt.
  - Chunking is basically the process of taking large pieces of information and breaking them into smaller pieces and pages.
  - GEO experts claim it’s a critical step for Large Language Models (LLMs) and RAG (and remember, RAG is SEO) to enable models to process vast information within their token limits.
  - GEO experts will bullshit you all day long, but chunking is often idiotic because it destroys context, resulting in AI that lacks depth, gives robotic answers, or hallucinates.
  - While it is necessary to manage context window limits, the practice often leads to mediocre outputs, especially when the AI confuses concepts because it has lost the “big picture.” So don’t chunk.
  - llms.txt was a theory a lot of us tested, myself included. But so far the data shows it doesn’t do much; it’s just another document that LLM bots can use as training data, and it will never outweigh the vast amounts of other data the models are trained on.
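If RAG is still fuzzy, the retrieve-then-generate loop the bullets describe can be sketched in a few lines. Everything here is hypothetical: naive keyword overlap stands in for a real search index (Google, Bing, a site’s own search bar), and the LLM call is stubbed out.

```python
# Minimal sketch of the retrieve-then-generate (RAG) loop. Keyword
# overlap is a toy stand-in for a real search/retrieval index.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def answer(query, corpus):
    """Build the prompt a frozen LLM would receive: retrieved docs + question."""
    context = retrieve(query, corpus)
    # The model weights never change; only what lands in `context` does,
    # which is why the retrieval side is the part you can optimize.
    return "Context:\n" + "\n".join(context) + "\nQ: " + query

corpus = [
    "SEO best practices for people-first content",
    "A recipe for sourdough bread",
    "How Google ranks helpful content",
]
prompt = answer("how does google rank content", corpus)
```

The takeaway from the sketch: whatever ranks in the retrieval step is what the frozen model gets to read, which is exactly why optimizing for RAG is just SEO.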
Do You Want the Longer Answers from Google?
Well, you’re gonna have to get them directly from Google, because I ain’t got time to write way more in-depth info haha, so do yourself a favor and just go here:
- Latest Google Documentation on AI Optimization
- Google also updated spam policies: Clarifying that spam policies apply to generative AI responses in Google Search
The GEO/AEO Cleanup Has Begun
Oh, I almost forgot… my LinkedIn ramblings about businesses that bought into shady GEO tactics. I provide an example on my page that you should read; it basically mirrors exactly what Lily Ray is talking about with regards to programmatic AI content being risky.