Unfortunately, digital has treated some of us a little too well.
It's led to a (somewhat false!) belief that we can tag and track everything,
and attribute appropriately etc.
The reality is - no method of attribution is perfect.
There's always data loss, confusion, mis-association, contamination etc.
>>>
>>>
You also have to grasp the concept of contributions.
Certain things may not result in immediate action.
Thus the concepts of Soak and Top of Mind etc.
You don't watch a TV Advert about laundry detergent,
then rush out and buy some.
But you know that brand!
>>>
>>>
Which leaves you with the problem:
how do you know if those adverts are increasing sales?
You have to be tracking before the ads run,
so you know your baseline figures.
You then compare the figures pre/post ad run,
over days/weeks/months.
>>>
>>>
Then it's basic math.
New figure - Old figure = difference.
The problem there is ... you seldom have linear/stable figures.
Sales tend to vary depending on not only the season,
but also the time of the month and day of the week.
So you have to compare like for like...
>>>
>>>
It's no good comparing by date
(12th of X vs 12th of Y),
as more often than not - those are different days of the week!
So remember you may need to nudge one of the value sets along a little to get proper alignment
(2nd Monday of X and Y, last Saturday of X and Y etc.)
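That "nudge along to align" step can be sketched as a small Python helper (a minimal sketch; the function name and example dates are just illustrative):

```python
from datetime import date, timedelta

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Return the n-th given weekday of a month (Mon=0 .. Sun=6)."""
    d = date(year, month, 1)
    # advance to the first occurrence of the target weekday
    d += timedelta(days=(weekday - d.weekday()) % 7)
    # then jump forward (n - 1) whole weeks
    return d + timedelta(weeks=n - 1)

# e.g. compare the 2nd Monday of June vs the 2nd Monday of July
a = nth_weekday(2022, 6, 0, 2)  # -> 2022-06-13
b = nth_weekday(2022, 7, 0, 2)  # -> 2022-07-11
```

Comparing `a` against `b` keeps you on the same weekday and the same position in the month, rather than the same calendar date.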
>>>
>>>
And similarly, one of the biggest Analytics fails I see,
regularly!,
is people comparing Month to Month,
without looking at the same 2 months from the prior year!
You need to know what the general difference is,
before you look for a specific difference!
>>>
>>>
X = 1,000
Y = 2,000
Your ads did 1,000 (+100%) ?
X 2021 = 300
Y 2021 = 500
X 2022 = 1,000
Y 2022 = 2,000
Now do the math!
(400(ish), not 1,000 (depending on method))
It's not perfect,
but it tends to be more accurate!
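The arithmetic above, as a sketch of one possible adjustment method (assumed here: scaling this year's pre-ad figure by last year's X-to-Y ratio; other methods give slightly different numbers, hence the "(ish)"):

```python
# Naive lift vs a year-over-year baseline adjustment.
x_2021, y_2021 = 300, 500      # same two periods, prior year (no ads)
x_2022, y_2022 = 1_000, 2_000  # this year (ads ran between X and Y)

naive_lift = y_2022 - x_2022             # 1,000 - ignores baseline growth
expected_y = x_2022 * (y_2021 / x_2021)  # scale X 2022 by last year's X->Y ratio
adjusted_lift = y_2022 - expected_y      # ~333 - far less flattering!

print(naive_lift, round(adjusted_lift))
```

The point isn't the exact method - it's that the prior-year figures tell you what growth to expect anyway, so you only credit the ads with the growth beyond that.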
>>>
>>>
You can also test/verify ... by running holdbacks/cessations (much easier on digital than more traditional channels).
Pause a campaign in a specific region,
or via a specific channel etc.
Do it for a few hours/days (depending on the size of the campaign/volume of actions etc.).
>>>
>>>
If you've got the timing right,
you'll see the blip in your data.
(Do NOT expect the blip to be "live" for most campaigns ... pausing it at 5pm on Tuesday may not result in drops at that time ... but may influence Wednesday-Friday etc.)
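Spotting that blip can be as simple as comparing the holdout window against the surrounding days (a toy illustration with made-up numbers; real data is noisier and the lag caveat above applies):

```python
# Hypothetical daily conversion counts; days 3-4 are the paused (holdout) window.
daily = [120, 118, 122, 95, 90, 119, 121]
holdout = daily[3:5]
baseline = daily[:3] + daily[5:]

baseline_avg = sum(baseline) / len(baseline)
holdout_avg = sum(holdout) / len(holdout)
drop_pct = 100 * (1 - holdout_avg / baseline_avg)

print(round(drop_pct, 1))  # -> 22.9
```

A drop of that size during (or shortly after) the pause, against a stable baseline, is decent evidence the campaign was contributing.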
>>>
>>>
So - again - none of it is perfect.
But deeper/longer tracking,
with proper analysis,
permits you to see what is more likely to be valuable,
even if indirectly!
>>>
>>>
I learned this from a poorly converting ad run in a paper.
Different telephone numbers ... barely any calls on one of them.
It was decided that they could cut that ad
(they did decide to invest the spend in the other 2 ads though!).
Over the month, calls fell.
>>>
>>>
Put the cut ad back.
After a few weeks and a bit, calls rose.
That ad did not generate much directly.
But it did get people's attention!
It indirectly contributed.
You'd be amazed how much that happens with Digital
(esp. SEO and Middle of Funnel content etc.).
>>>
>>>
So ...
Ensure you use UTMs.
Make sure your web tracker captures First as well as Last (and ideally, the in-betweens).
Annotate start/end of campaigns.
Run little holdout tests.
Analyse results with stronger comparisons.
Accept it's never going to be perfect.
:D
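For the "use UTMs" item: tagging is just appending the standard `utm_*` query parameters to your landing URLs (a minimal sketch; the URL and parameter values here are made-up examples):

```python
from urllib.parse import urlencode

# Standard UTM parameters - values are illustrative only.
params = {
    "utm_source": "newsletter",
    "utm_medium": "email",
    "utm_campaign": "spring_sale",
}
url = "https://example.com/landing?" + urlencode(params)

print(url)
```

Consistent source/medium/campaign naming is what makes the later pre/post comparisons possible at all.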
2. Different mediums (text, image, video, audio)
tend not to count against each other.
So you can create 1 of each, for each target term/query (that's 3+)
3. Indeed, Intent makes a difference.
But it's not just Nav -vs- Trans -vs- Comms -vs- Info!
There's Edu vs Opinion, News etc.
>>>
4. In some cases, there are also SERP SubListings.
If G sees multiple relevant pages (for term+intent),
and that there is a structure/flow between those pages,
you might get nested listings (so not exactly competing!)
(tends to require a "match" and "deeper match").
... (relevance) ... and G does seem to associate "topic" with "site" (both loosely used terms).
Which means you have topical-authority derived from content.
(JM has said about having a site that talks about X, it will struggle to rank for Y (if unrelated).)
>>>
So we end up with some confusion - the same word,
for what are similar concepts,
arrived at via different methods.
Then we have the "accepted language" issue.
If you ask a detailed question,
you may see a Googler sidestep the question,
because you said "rank" not "rerank".
:: *sigh* - someone found the cookie-cutters! ::
*checks calendar*
@NicheSiteLady & @NicheDown
It's called "cookie-cutter content";
when you basically copy a piece of content, change a tiny % of it, to rank for n+ terms.
Now, I know it says "affiliate",
but it applies for just about any type of site,
be its monetisation via
* direct sales
* ad-rev
* affiliate payments
* referral fees
(The term MFA (made for ads/affiliates) used to be applied.)
So, the problem is - though it can (does!) work,
(bad Google, bad!),
it's possible that G will catch it at some point,
and may hammer a site for it
(so please - at least give people a warning!).
There are ways to handle it "better",
with reduced risk.
Good content design is grounded in knowing what the user wants, and how they want it.
(I know the general rule is stuff like "write for an 8th grader" or "16yo" etc. - but that doesn't work when your audience is literally brain surgeons!)
The audience's language and knowledge "levels"
define things like whether you can include:
* abbreviations
* topic/industry terms/jargon
* sentence length
* sentences per paragraph
* distance between references
* overall length of content
etc.