* GA BR (Google Analytics Bounce Rate)
* GA ToP (Google Analytics Time on Page)
* GA ToS (Google Analytics Time on Site)
* SERP CTR (Search Engine Result Page Click Through Rate)
are NOT (direct/indirect) ranking factors!
>>>
3/*
For starters, only a % of sites use Google Analytics,
so there'd be a huge data hole.
And each of these metrics is ambiguous/noisy,
with many possible reasons behind any given value,
(a page with the weather - quick visit, leave - does not mean the page sucks or is irrelevant!)
>>>
4/*
"But, but ... correlation!?"
Yes.
There are various studies that point to things like high Bounce Rates correlating with low rankings.
But that is correlation - not causation!
People may "bounce" from those pages for various reasons (some are ranking factors).
>>>
5/*
Metrics like BR or short "Dwell times" (time on page/site) may indicate an issue - or may not -
so they are not actual signals in themselves!
Further, as they are non-descriptive, using them on their own would be pointless.
>>>
6/*
So we have a bunch of useless metrics,
and half-baked questions/logic.
But to make matters worse, we have semantics, and the fact that Google doesn't really want to tell us some things.
>>>
7/*
When "we" say "ranking factor" - we usually mean anything that can cause/contribute to a positional change of a listing in the SERPs.
But to a Googler, it can mean that,
or it can mean a specific type of signal, with a certain influence, at a set stage in the algo!
>>>
8/*
This means they can honestly answer such questions - yet give any of a number of potential answers.
(This can be further compounded by changes, and even not knowing (not everyone is told everything!))
So asking questions and getting answers can be problematic!
So what does G use? No stupid metrics - instead, SERP data (which G has 100% of, unlike GA Bounce Rate!).
>>>
10/*
Pogo stick, Short click and Long click.
Pogo sticking covers Bounce Rate,
but a specific sub-type of bounce:
bouncing back to the SERP!
The user searches, clicks, returns to the search.
>>>
11/*
Then we have Short and Long clicks.
This covers "Dwell time" - how fast the pogo-stick action happens, if at all (some people search, click and don't return).
But even these aren't well-formed concepts,
and miss things out!
Now, unfortunately G+ has gone - which sucks,
as I beat this topic to death (several times) on that platform.
There are multiple potential data points that could be used, and the more of them, the more informative the result.
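The pogo-stick / short-click / long-click distinction above could be sketched like this (the 30-second cutoff is entirely invented - Google has never published thresholds):

```python
def classify_click(returned_to_serp: bool, seconds_on_page: float,
                   short_cutoff: float = 30.0) -> str:
    """Rough sketch of the short/long click idea.

    A pogo-stick is the user bouncing back to the SERP; how quickly
    they bounce is what separates a "short" from a "long" click.
    The cutoff value here is a made-up illustration, not a known figure.
    """
    if not returned_to_serp:
        # Some people search, click, and never come back at all.
        return "long click (no return)"
    return "short click" if seconds_on_page < short_cutoff else "long click"

print(classify_click(True, 4))      # quick bounce back to the SERP
print(classify_click(True, 180))    # stayed a while, then returned
print(classify_click(False, 0))     # never returned
```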
>>>
14/*
User session
Search session
Query
SERP type
What gets clicked
Whether they return
If so, how fast
Do they click another
Do they refine/alter the Query
Do they do a different search
Time
Quantity
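As a sketch, that list could be captured in a single interaction record (all field names are my guesses for illustration, not any actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SerpInteraction:
    """One hypothetical record of a user's interaction with a SERP.

    Mirrors the data points listed above; none of this is a real
    Google structure - it's just a way to think about the signals.
    """
    user_session: str                 # ties interactions to one user session
    search_session: str               # a run of related searches
    query: str
    serp_type: str                    # e.g. "web", "local", "news"
    clicked_position: Optional[int]   # None = nothing was clicked
    returned_to_serp: bool            # did they come back?
    seconds_before_return: Optional[float]  # if so, how fast
    clicked_another: bool             # did they click another listing?
    refined_query: Optional[str]      # tweaked version of the query, if any
    new_search: Optional[str]         # a different follow-up search, if any
    timestamp: float                  # "Time"
```

The more of these fields you have per interaction - and the more interactions ("Quantity") - the more informative the aggregate becomes.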
>>>
15/*
Now, I know the majority of you reading this (thank you for getting this far :D)
are looking at that list and going either "hmmmm" or "ahhh".
Stop
Look at the list again, and start thinking "negative",
not just "positive".
>>>
16/*
Almost everyone jumps to the idea that a higher CTR and no return to SERP could mean a positive ranking gain.
It may be more viable to work it inversely!
Further, there are several things we may be able to gain from SERP interactions!
>>>
17/*
Appeal?
We know that, in most cases, there's a natural decline in CTR by descending position (Pos. 3 gets a lower CTR than Pos. 2, which is lower than Pos. 1 etc.).
What if people skip the first one or two, and a larger % of users click on lower listings?
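One way to picture that "appeal" signal - a toy sketch, with entirely made-up baseline CTR-by-position figures (not from any real study):

```python
# Hypothetical "normal" CTR curve by SERP position - invented numbers.
BASELINE_CTR = {1: 0.30, 2: 0.16, 3: 0.10, 4: 0.07, 5: 0.05}

def appeal_anomalies(observed_ctr, baseline=BASELINE_CTR, tolerance=1.5):
    """Flag positions pulling far more clicks than their slot normally
    gets - e.g. users skipping listings 1-2 for something lower down."""
    flags = []
    for pos, ctr in sorted(observed_ctr.items()):
        expected = baseline.get(pos)
        if expected and ctr > expected * tolerance:
            flags.append(pos)
    return flags

# Position 3 out-performing its slot while 1 and 2 under-perform:
print(appeal_anomalies({1: 0.18, 2: 0.12, 3: 0.21, 4: 0.06, 5: 0.04}))
# → [3]
```

A larger-than-normal click share at a lower position is exactly the kind of deviation from the expected curve that could say something about the listings above it.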
>>>
18/*
Relevance?
We know that some search sessions are more than single-shot - we search ourselves, and sometimes refine the query.
What if the user does their search, clicks on a listing, comes back and tweaks the query a little?
What about what they click on then?
>>>
19/*
Suitability?
G don't just use the Search box data - they attach additional info to queries, such as language and location etc.
What if the click behaviours differ by device/resolution etc.?
(note the difference between Desktop/Mobile at times?)
>>>
20/*
So rather than leaping to the assumption that clicks improve rankings,
consider the potential of what non-clicks could mean, or clicks and then changes.
Think of all those:
* Search Suggestions
* People Also Ask
* those weird keywords you drop after algo updates
>>>
21/*
(And yes - "non-clicks" ... and Googlers saying "we don't use clicks" ... see how that could work? :D)
Consider Ambiguous queries (ones that could cover several meanings (jaguar), or several intents (pizza)),
and how G can decide what to show (or not), and where.
>>>
22/*
Then we have the question of data and processing.
But G don't have to do this for all queries.
They can do it for high-volume ones (surface the most satisfactory results).
They can do it for ambiguous ones (identify the most common sense being sought).
etc.
>>>
23/*
But ... what about Click Farms and faking it?
Remember that list (tweet 14)?
At the bottom were Time and Volume.
If "last click" was a factor, you'd want to see that as the case a set %, and see it supported over time.
>>>
24/*
Now imagine the number of clicks you'd need for popular search terms to make a significant difference to the "normal" click distribution figures.
And, if they were using "non-clicks", that scales more!
And you'd have to do it over a period of time.
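The back-of-envelope maths here can be sketched out - every figure below is invented purely for illustration:

```python
def fake_clicks_needed(daily_impressions: float, current_ctr: float,
                       target_ctr: float) -> float:
    """Extra clicks per day needed to move an observed CTR to a target.

    Each faked click also adds an impression (the bot has to load the
    SERP), so we solve (C + f) / (I + f) = target for f, where
    C = current clicks and I = current impressions.
    """
    clicks = daily_impressions * current_ctr
    return (target_ctr * daily_impressions - clicks) / (1 - target_ctr)

# A popular term: 100k impressions/day, nudging CTR from 4% to 6% ...
per_day = fake_clicks_needed(100_000, 0.04, 0.06)
print(round(per_day))        # ~2128 extra clicks, every single day
print(round(per_day) * 30)   # ~63,840 over a month, just to sustain it
```

And that's before considering "non-clicks", consistent distribution across positions, or keeping it up indefinitely.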
>>>
25/*
And, if you wanted to retain whatever ranking influence you might obtain - you'd have to sustain those (non-) clicks (else the rankings could revert, or you could potentially fall further!).
See how far more complicated it could be than just
CTR
?
>>>
26/*
Now, I was fortunate enough to have @methode answer a few questions when he did a Reddit AMA 3 years ago (was it really that long ago?)
Proper (and continuous!) research should yield insights into motivation/cause, and locations.
You should be able to utilise "personas" (or demo-, firmo- and psycho-graphics) to identify additional locations and probable channels/sources.
Failing that, use Search for questions!
Search for the same things your consumers do,
and you'll likely find where they go for info ... and where you should be!
(providing answers, running ads, providing sponsorships etc.)
Do searches for Product/Service -brand, and see what comes up. Or +Comp. brand!
>>>
Originally, Keywords were THE thing.
Meta Keywords and string matching.
Other SEs came along, things evolved, and Meta Keywords basically died.
Yet the term remained.
Though how they are used has evolved,
the way they are used for research hasn’t really.
>>>
3/*
As competition for “keywords” got harder,
new terms came to the fore:
* Head term
* Longtail
* And then Mid-tail joined in
As more businesses went online, and more sites, pages and content appeared - it became harder to rank for the shorter “keywords”.
>>>
+ When looking at TLDs for Domain Names, check for confusion points (same name, different TLD etc.)
>>>
+ Sort the HTTP > HTTPS out, and pick either www or non-www - then get the 301s sorted out from day one.
+ Own your Name! Make sure you own a domain with your Brand, and you have social profiles for it (same for unique product names etc.).
Same for Directories.
>>>
Definition:
Internal links are links between your own pages,
within the same “site”,
(this may be the same subdomain, or across subdomains, depending on structure/organisation).
>>>
3/*
:: Features ::
Links often consist of:
* Location
* Content
* Attributes
Location:
The location may be a URL (different page),
a Fragment (specific point in the current page, or specific text (Fragment Directive)),
or both URL+Fragment (a part/text on a different page).
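Those three location forms can be pulled apart with Python's standard `urlsplit` (the example paths and fragment names are invented):

```python
from urllib.parse import urlsplit

# A link's "location" may be a URL, a fragment, or URL + fragment.
examples = (
    "/guides/internal-links",           # different page
    "#features",                        # point in the current page
    "/guides/internal-links#features",  # part of a different page
)

for href in examples:
    parts = urlsplit(href)
    page = parts.path or "(current page)"
    target = parts.fragment or "(top of page)"
    print(f"{href!r}: page={page}, target={target}")
```

(Text fragments - the Fragment Directive, `#:~:text=...` - ride in the fragment portion too, so the same split applies.)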
Heads up, Thread about Internal Links incoming,
(Which I'm grateful for the 3% lead, as I've half written it already :) (and my eldest tried to spike it towards Keywords, knowing I've not touched that topic yet- sod!))
24 tweets in ...
... maybe I should split it?
:D
Erm - apparently, I have to stop there!
Did you know ...
Twitter has a 24-tweet series limit?
(Did anyone?)
I can add more tweets once I've posted the rest!
So I think I may split it into 2 threads,
(as reading through 20+ tweets has to be painful for most people)