I think I've probably put more thought into this than most, as it has always mattered a lot to me, when programming #SearchLove, that we amplified great advice.
So, a 🧵
Aside from just putting on a good conference, I believe this *really matters* in the real world, because I think that steps back in performance are at the root of a lot of underperforming SEO initiatives: searchpilot.com/resources/blog…
...and as SEO gets harder ( searchpilot.com/resources/blog… ) it also gets more confusing, and harder to tell the good from the bad.
1. We have to accept that we can never assess correctness perfectly, and come at it probabilistically. For me, that means for any given piece of advice thinking about a) how surprising is it? and b) how risky is it?
2. For both a) and b), my starting point is first principles:
- Does it seem like something that could be true?
- Does it seem like something that should be true?
- Does it seem like something Google would want to be true?
That last one is interesting - it's not always obvious whether Google *wanting* something to be true would increase my assessment of whether it's *likely* to be true.
If it's in an area where they have full information (e.g. "contents of title tag") and clear algorithmic understanding (e.g. kw matching) then broadly only things Google wants to be true will be true.
If it’s in an area where they have less information (e.g. “intention of someone creating this link”) or less algorithmic understanding (e.g. “actual expertise of this author”) then it’s quite likely that the louder Google is saying it, the more sceptical you should be.
3. Riskiness. Riskiness is in the eye of the beholder (though generally anything @ohgm suggests is worth bucketing in this group e.g. ohgm.co.uk/laundering-irr… ).
I want to think about this in the context not only of my own risk, but the risk of those who follow / trust me. I want to consider:
- Scale - is this going to have an effect sitewide or just on a single page?
- Reversibility - how likely is it that if this goes wrong, we can undo the impact?
- Nature of the downside - a potential drop in clickthrough rate is less severe than a potential penalty, for example
4. Then you need to think about what you want to do with this information about surprisingness and riskiness. Here’s what I do:
4. a) Surprisingness - the more surprising, the more I seek to validate the *process* by which someone came to the conclusion that they are referring to. I want to see more data, interrogate the logic of the conclusion, think about what could have confounded it, etc
4. b) Riskiness - the more risky, the more I seek to hear a breadth of experiences. Does this always hold? Has anyone experienced the downside I am scared of? Can I socialise the result a bit?
5. Scale of surprisingness matters. I want to differentiate between surprising-but-maybe-true and so-surprising-i-suspect-this-is-false. For the latter, most of the time I ignore it. Occasionally, I get sucked down the rabbit hole and attempt to debunk it.
6. Ultimately, if something is surprising, but not particularly risky: try to understand why it’s surprising, file it under “things to try”. Feel comfortable sharing while saying I’m not sure about this / it seems surprising
7. If something sounds risky, but is not surprising: if trying for myself, test in the safest way possible (consider scale, duration, reversibility). Consider sharing with health warning
8. Risky and surprising: express scepticism, avoid testing until I can get my head around it
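Points 6-8 above amount to a small decision matrix. As a toy sketch (the labels and phrasing here are mine, condensed from the points above, not an exact quote of the advice):

```python
def triage(surprising: bool, risky: bool) -> str:
    """Condense the surprising/risky decision matrix into one lookup.

    A toy illustration of the four cases described above; real judgement
    is probabilistic and graded, not boolean.
    """
    if surprising and risky:
        return "express scepticism; avoid testing until you understand it"
    if surprising:
        return "file under 'things to try'; share with caveats"
    if risky:
        return "test in the safest way possible; share with a health warning"
    return "adopt / share as usual"
```

The point of writing it out this way is just to show that the two axes are independent: each combination gets its own distinct response.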
9. It is a repeated game, so you can consider who’s sharing it - but you shouldn’t rely on reputation so much as *your* assessment over time of the quality of their information.
10. BUT you should pay close attention to whether someone is truly vouching for a piece of information or just RT’ing. In rough (decreasing) order of credibility:
i. Their own personal work
ii. Done by their team / with their oversight
iii. Shared explicitly with strong endorsement that suggests it matches their experience or they’ve dug into it
iv. They are “just” sharing it
11. Even for someone I trust greatly, if they are “just” sharing someone else’s information, I treat that as a very weak signal and it doesn’t add a lot of trust above my default internal snarky scepticism. Many people who do exceptional work share things for many reasons
...including that they haven’t read it, it vaguely sounded like it fit their priors, they like the person who wrote it, or they owed them a favour
This advice is probably more useful to folks with at least some experience. If you’re earlier in your career, focus on listening to a few folks you really trust, read widely, and do your own research
If you like this kind of thing, you are likely to like @juliagalef 's podcast rationallyspeakingpodcast.org (and I'm going to read her book: "the scout mindset")
I'm not at all sure about the title (the power on the marketers' side is very distributed and subject to prisoner's dilemma-type issues) but the main article got me thinking in a few places. Most notably...
...being interested in @dannysullivan 's previous view that "I’ve wished for years that Google would let site owners have something like a “Yes, I’m really sure I want you to use my title tag” tag."
I know a lot of folks who started out in SEO, and are now in marketing leadership positions.
One challenge is that it’s hard to stay plugged in to SEO news, but you still have oversight of the SEO channel.
Does this sound like you? Here’s what you need to know:
As ever, there is a lot of bad information and rumour, so this is all based on the large number of tests we get to run at @SearchPilot. Here's what Google is *really* doing:
1. JavaScript. Probably the biggest change of recent years.
I stopped complaining about the challenges with understanding how Google parses robots.txt and made (a version of) their open source parser available on the web instead: distilled.net/resources/free…
Stopped complaining *for now*, I should say
My tool does have differences compared to the old Search Console one (because the SC one is wrong) and compared to the open source tool (because that doesn't capture all Google crawler subtleties). I explain it all in the post
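Those parser differences are easy to demonstrate. Python's standard-library `urllib.robotparser` (not the tool from the post, just a convenient stdlib example) applies rules in file order, first match wins, whereas Google's parser follows longest-match-wins semantics, so the two can disagree on the same robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt where rule order vs. rule length matters
rules = """\
User-agent: *
Disallow: /private/
Allow: /private/public-page
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Both parsers agree on the simple cases:
rp.can_fetch("Googlebot", "https://example.com/public")          # allowed
rp.can_fetch("Googlebot", "https://example.com/private/secret")  # disallowed

# The divergent case: stdlib hits "Disallow: /private/" first and blocks
# this URL, while Google's longest-match parser would honour the more
# specific Allow line and permit it.
rp.can_fetch("Googlebot", "https://example.com/private/public-page")
```

This kind of subtlety is exactly why the web version of the parser exists: two "correct-looking" implementations can give different answers.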
OK. Here we go - thread of answers to questions that came up during my #FOS19 presentation in Amsterdam today - about SEO / CRO / full funnel testing (read more here: distilled.net/resources/anno… ) cc @basvandenbeld
Q: how do you test the homepage of a website?
A: although you can run CRO tests on a homepage, SEO (and hence full funnel) tests require a site section with multiple pages with a similar template. For a homepage, you can only really do before/after tests. [contd]
The techniques I described are mainly applicable to large websites with large site sections (e.g. ecommerce, real estate, travel, jobs, large brick+mortar chains etc). In these cases, most organic traffic is not to the homepage
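The mechanics of such a test can be sketched as a deterministic split of a template section's pages into control and variant groups - here by hashing URLs. This is a generic illustration under my own assumptions, not SearchPilot's actual implementation:

```python
import hashlib

def bucket(url: str) -> str:
    """Assign a page (not a user) to control or variant, deterministically.

    SEO split-tests divide similar *pages* between groups so that crawlers
    see each URL consistently on every visit; hashing the URL gives a
    stable, roughly even split.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

# Hypothetical product pages from one template section
pages = [f"https://example.com/products/{i}" for i in range(6)]
groups = {url: bucket(url) for url in pages}
```

Determinism is the key property: unlike a CRO test, you cannot randomise per visitor, because Googlebot must always see the same version of a given page.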