The reason I named my company "Holistic" is that we use every vertical of search and optimization.
I never defended or advocated one method of optimization over another.
Read on to understand why #ChatGPT can't replace real #SEO.
A thread with 7 Insights. 🧵1/7
1. Don't be afraid of #AI #content, but understand one universal fact: semantics outperform random patterns.
Reason: LLMs work with "reward" systems. They calculate the most probable next word and choose it; each new sentence is built from the previously selected one.
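As a rough sketch of what "calculates the most probable word and chooses it" means, here is a toy greedy next-token picker; the vocabulary and probabilities are invented for illustration, not taken from any real model.

```python
# Toy illustration of greedy next-token selection.
# The vocabulary and probabilities are invented for this example;
# a real LLM scores tens of thousands of tokens at every step.

def greedy_next_token(probabilities: dict[str, float]) -> str:
    """Pick the single most probable continuation and nothing else."""
    return max(probabilities, key=probabilities.get)

# Hypothetical distribution after the prefix "search engines rank ..."
next_token_probs = {
    "pages": 0.41,       # most probable, so it is always chosen
    "documents": 0.27,
    "entities": 0.19,
    "queries": 0.13,
}

print(greedy_next_token(next_token_probs))  # -> "pages", every time
```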
2. There are certain ways of increasing the relevance and quality of information within specific search query contexts.
Reason: LLMs miss most of these contexts and drift outside the query semantics most of the time.
3. LLM outputs carry statistical signatures, which are called "watermarks."
Reason: A watermark in the language style signals the identity of an author and their expression. If the content doesn't come from a real expert, algorithms have no reason to trust the quality of the #information.
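One published way such statistical signatures are checked is a "green list" test: a pseudo-random subset of the vocabulary is derived from each previous token, and watermarked text over-uses that subset. The hash rule, tiny vocabulary, and 0.5 split below are simplified assumptions for illustration, not any search engine's or model provider's actual detector.

```python
# Simplified sketch of statistical watermark detection in the spirit of
# public "green list" watermarking research. Hash rule, vocabulary, and
# threshold are assumptions for illustration only.
import hashlib

VOCAB = ["the", "search", "engine", "ranks", "relevant", "pages", "for", "queries"]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Derive a pseudo-random 'green' subset of the vocabulary from the previous token."""
    seed = hashlib.sha256(prev_token.encode()).digest()
    scored = sorted(VOCAB, key=lambda t: hashlib.sha256(seed + t.encode()).digest())
    return set(scored[: int(len(VOCAB) * fraction)])

def green_fraction(tokens: list[str]) -> float:
    """Share of tokens that fall in the green list of their predecessor."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

text = ["the", "search", "engine", "ranks", "relevant", "pages"]
# Around 0.5 is expected for ordinary text; watermarked output would score much higher.
print(f"green-token fraction: {green_fraction(text):.2f}")
```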
4. Any website can #rank higher than any other website, as long as it provides higher relevance and consolidation.
Reason: Search engine decision trees are ruthless. In other words, 20 years of expert sources can be removed from the SERP in one day. ++
Always maintain vigilance and use every SEO practice with every mindset. Do not feel bound to one label, white hat or black hat. As I say, "Black is the new white".
5. Do not think that ChatGPT, or any other system, will be able to replace proper semantics that come from human intelligence.
Reason: Humans are able to create new and deeper associations. ++
LLMs can only follow the "reward system", chaining the most probable words within a given discourse. If "X" has the highest probability after "Y", despite "Z" being more relevant to the search query, the LLM will take the wrong direction because of its dataset.
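A toy contrast between "highest probability after Y" and "highest relevance to the query"; all numbers are invented purely to make the divergence visible.

```python
# Toy contrast between what a language model prefers (probability given the
# previous token) and what a query needs (relevance). All numbers are invented.

candidates = {
    # token: (probability after "Y", relevance to the search query)
    "X": (0.62, 0.20),
    "Z": (0.18, 0.85),
    "W": (0.20, 0.40),
}

by_probability = max(candidates, key=lambda t: candidates[t][0])
by_relevance = max(candidates, key=lambda t: candidates[t][1])

print(f"model picks: {by_probability}")   # "X" -- follows the dataset statistics
print(f"query needs: {by_relevance}")     # "Z" -- follows the query semantics
```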
6. LLMs can’t understand web search engines.
Reason: ChatGPT or any other automated system can’t outwit the search engine engineers.
Reason: Complex Adaptive System Algorithms are able to modify themselves based on the current environment. ++
Thus, engineers can always configure their decision trees, weights, dimensions, and clusters.
LLMs can only work on a fixed dataset; the moment that dataset is refreshed, the model behind it is outdated.
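A hypothetical sketch of why tunable weights beat a frozen dataset: the feature names and numbers below are made up, but changing one weight instantly reorders the results, which a model trained on yesterday's data cannot follow.

```python
# Hypothetical ranking function with engineer-tunable weights.
# Feature names and values are illustrative, not real ranking signals.

def score(doc: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of ranking features; the weights can be changed at any time."""
    return sum(weights[f] * doc.get(f, 0.0) for f in weights)

docs = {
    "old_expert_source": {"relevance": 0.55, "historical_trust": 0.95},
    "new_focused_source": {"relevance": 0.90, "historical_trust": 0.30},
}

weights_before = {"relevance": 0.4, "historical_trust": 0.6}
weights_after = {"relevance": 0.8, "historical_trust": 0.2}  # engineers re-tune the tree

for label, w in [("before", weights_before), ("after", weights_after)]:
    ranking = sorted(docs, key=lambda d: score(docs[d], w), reverse=True)
    print(label, ranking)  # the 20-year source drops to second place "in one day"
```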
7. Semantic Content Networks and Microsemantics are the future.
Reason: In Q1, the course will be available to you. You will see how “Comparative Ranking” works, and you will feel the decision-making processes of search engines through micro-changes. ++
-In 2022, I told you to focus on Natural Language Generation, and it happened.
-In 2023, F-O-C-U-S on "Information Density, Richness, and Unique Added Value" with Microsemantics.
I call the collection of these "Information Responsiveness".
1/40 🧵.
1. PageRank Increases its Prominence for Weighting Sources
Reason: #AI and automation will bloat the web, and the real authority signals will come from PageRank and Exogenous Factors.
Expert-like AI content and real expertise are differentiated by historical consistency.
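For reference, the core of PageRank is a short power iteration; the four-page link graph below is made up, and only the 0.85 damping factor follows the original paper.

```python
# Minimal PageRank power iteration over a tiny made-up link graph.
# Damping factor 0.85 follows the original paper; the graph is illustrative only.

def pagerank(links: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks) if outlinks else 0.0
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

graph = {
    "expert_hub": ["article_a", "article_b"],
    "article_a": ["expert_hub"],
    "article_b": ["expert_hub", "article_a"],
    "orphan_ai_page": ["expert_hub"],  # nothing links to it, so it keeps only the base rank
}

for page, r in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {r:.3f}")
```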
2. Indexing and relevance thresholds will increase.
Reason: A bloated web creates the need for unique value added through real-world expertise and organizational signals. Knowledge domain terms and #PageRank will be decisive for the future of a web source.
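A hypothetical indexing threshold, assuming a bare word-overlap cosine similarity as the relevance score (a real engine uses far richer signals): documents below the cutoff simply never enter the index.

```python
# Hypothetical indexing threshold: only documents above a minimum relevance
# score get indexed. Plain word-overlap cosine similarity stands in for the
# far richer signals a real search engine uses.
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

KNOWLEDGE_DOMAIN = "semantic seo topical authority pagerank relevance"
THRESHOLD = 0.30  # assumed cutoff; raising it shrinks the index

candidates = [
    "topical authority and semantic seo improve relevance",
    "ten generic tips rewritten from other blogs",
]

for doc in candidates:
    score = cosine(doc, KNOWLEDGE_DOMAIN)
    verdict = "indexed" if score >= THRESHOLD else "filtered out"
    print(f"{score:.2f}  {verdict}: {doc}")
```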