Happy Friday folx - we're ranting about pagination.
We're going to break down the problem pagination was supposed to fix, the problems it ended up creating, and why I want to kill it with fire. Something something #passageindexing
And fun. And wacky. And evolving. More people started connecting to the internet, owning websites, publishing content, selling stuff online (who would trust that?!), and much more.
2/n
Then there were at least three problems:
- Folx created more content; it was no longer feasible to put it all in one URL
- People broke up their content into chunks for better user experience or ad monetization
- Inventory-driven websites were confusing search engines
3/n
- Blog posts that went from 10 to 10,000 pages
- The clickbait listicle you created with pages of slides and ads on each page
- The 1,000 SKUs you have on your web store
Pagination to the rescue
4/n
In theory, search engines would now treat the content and ranking signals of a paginated series as one document, with backlinks to the group consolidated.
In theory this was beautiful. Search engines would tell users "oh this stuff is so long, but it's on page 3, go there" (rough markup sketch below).
5/n
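For reference, the "hint" itself was just a pair of link tags in the head of each page in the series. Here's a minimal sketch of how you might generate them; the /page/N URL pattern and the function name are my own placeholders, not anything tied to a specific CMS:

```python
# Minimal sketch of emitting rel="prev"/"next" hints for a paginated series.
# The /page/N URL pattern is hypothetical; adapt to your own URL structure.

def pagination_link_tags(base_url: str, page: int, total_pages: int) -> list[str]:
    """Return the <link> tags that would go in <head> for this page of a series."""
    def page_url(n: int) -> str:
        # Page 1 is the series root; later pages get page/N appended.
        return base_url if n == 1 else f"{base_url}page/{n}/"

    tags = []
    if page > 1:
        tags.append(f'<link rel="prev" href="{page_url(page - 1)}">')
    if page < total_pages:
        tags.append(f'<link rel="next" href="{page_url(page + 1)}">')
    return tags

# Example: page 3 of a 10-page series.
for tag in pagination_link_tags("https://example.com/blog/", 3, 10):
    print(tag)
```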
But remember - pagination was a strong hint. People got it wrong sometimes. Search engines would get it wrong. Heck, search engines would give up on finding the full series really often.
I honestly can't think of a time when I felt "oh, page 2 would have better results"
6/n
When you mix hreflang, canonicals, faceted nav, and pagination together what do you get?
A lot of SEO fun that's really common. (mobile-only websites, AMP, or dynamic rendering = more fun)
We have some tools: sitemaps, robots, parameter handling.
Depending on the content format, users either love or hate pagination. Remember infinite scrolling/parallax scrolling? With lazy loading and speed improvements, scanning and scrolling web content isn't painful... until speed becomes an issue.
11/n
Rel prev/next has been outgrown by the problems it was meant to solve. Featured snippets are better at highlighting excerpts in long-form content, and the soon-to-arrive "passage indexing" is basically the follow-up shot that finishes it off.
Pagination needs to die.
12/n
You want to fix orphaned pages? Add page features to templates that encourage deeper linking relevant to the user's needs. Or heck, fix your faceted nav issues.
Hello to the friends following me this Friday - today's topic is "implicit vs explicit search intent."
When you've been around the SEO industry for a while you start hearing new terms emerge. Search intent, user intent, keyword intent. As usual, I'm going to start with a bit of history.
1/n
Informational, navigational, and transactional queries - this categorization has been around for as long as I've been doing SEO, and it takes an internet search to appreciate how old it actually is - 2002. You can still split up queries this way, but it's not great.
Today we’re going to rant about “microsites,” a broad term for creating a “new” website experience. We'll cover the why, some pros, some cons, and probably go off topic.
1/n
Microsites can be subdomains, brand-new domains, or even subfolders of a brand - the telling sign is generally that the tech stack differs (not always true). A quick hop on builtwith.com and you'll often see React apps powering parallax experiences (RIP Flash websites).
3/n
Happy Friday - let's rant about keyword density, LSI keywords, internal/external links per N words, and word count - vernacular you want to get out of your processes sooner rather than later.
1/n
Keyword density, usually defined as the number of times a keyword shows up divided by the total number of words on the page, expressed as a percentage, is a product of the keyword-stuffing years of SEO.
e.g. I heard SEO Expert Chad recommends pages should have a keyword density of 2%!
2/n
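The arithmetic really is as basic as it sounds. A minimal sketch, assuming a single-word keyword and a naive tokenizer (multi-word phrases would need different handling):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of a single-word keyword divided by total words, as a percentage."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100 * hits / len(words)

copy = "Buy blue widgets. Our blue widgets are the best blue widgets."
print(f"{keyword_density(copy, 'widgets'):.1f}%")  # 3 of 11 words -> ~27.3%
```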
Search engines were still using the keywords meta tag for rankings, the number of times a keyword was repeated on a page had a huge influence on whether it would rank, and there were algorithms that would pick up keyword stuffing. Yo, it's not pre-2K or the early 2000s anymore.
3/n
The subdomains vs subfolders/subdirectories debate just won't die. Search engines have evolved their treatment of "what is a website" over time, and yet the debates cling to old case studies.
At some point in this rant, we're going to talk about ccTLDs, sub-subdomains, and subdomain-plus-subfolder combinations with ccTLDs, because #teamsubfolders uses the same argument for everything.
2/N
In the early days of the internet, the concept of a "website" held that subdomains were separate entities from the "home" site. This article on website boundaries from Bing is worth revisiting: blogs.bing.com/webmaster/nove…
Websites are "leasing" subdomains/subfolders to rank stuff.
3/N
Happy Friday friends. Today we're going to rant about the trials and tribulations of data accuracy for digital marketers, why data-based goal setting is hard, and why you can never fully fix either of these things, using the fabled "Bounce Rate" as an example.
1/n
A common mistake in digital marketing is not understanding, fundamentally, how your website data or 3rd-party competitor website data is being collected. How are SimilarWeb, SEMrush, and Ahrefs collecting data? How do GA, GSC, Bing, Piwik, Adobe, and HubSpot collect data?
2/n
I can't count the number of times different implementations of Google Analytics have gone awry - double tags, a scraper site using your GA ID, missing tags, non-standard implementations that weren't working, deprecated implementations, code in the wrong place...
3/n
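If you want a quick smell test for the double-tagging case, one rough approach is to pull a page's raw HTML and count how many analytics IDs appear in it. This is only a sketch under assumptions - the URL is a placeholder, the ID patterns are approximate, and a single legitimate gtag.js install can mention its ID more than once - not a real audit:

```python
# Rough sketch: fetch a page and count UA-/G- style analytics IDs in the HTML.
# Multiple distinct IDs (or one ID repeated many times) is a hint worth checking,
# not proof of double tagging.
import re
from collections import Counter

import requests

def find_analytics_ids(url: str) -> Counter:
    html = requests.get(url, timeout=10).text
    # UA-1234567-1 style (Universal Analytics) and G-XXXXXXXXXX style (GA4) IDs.
    ids = re.findall(r"\b(UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{6,12})\b", html)
    return Counter(ids)

counts = find_analytics_ids("https://example.com/")  # placeholder URL
for tracking_id, n in counts.items():
    flag = "  <- appears more than once, worth a closer look" if n > 1 else ""
    print(f"{tracking_id}: {n}{flag}")
```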