In many cases, there are competitors for topic(s) you want to target
You have several options:
1) Find terms that aren't competed
2) Find terms that are weakly competed
3) Provide more value
4) Produce multiple inter-supporting pieces
: Non-competed :
This is often harder than it sounds.
It's also complicated by semantic-clustering.
(the "exact" phrase may not be competed, but there are likely umpteen variations that are!).
Further, in the vast majority of cases,
these will be (very!) low volume
>>>
>>>
: Weak competition :
The problem you'll find here is that these terms are often a mix of low-volume and low-value.
(It's logical - it's not like people will ignore high-value, high-traffic terms :D)
You'll have to cover many to make it worthwhile.
>>>
>>>
: Greater value :
If you're going to compete, go in swinging!
(No! - I'm not advocating copying competitors and throwing in a few extras!)
You can:
* go deeper
* go broader
* provide more data points/info
* use more suitable language
* make it easier to consume
* use a different medium
>>>
>>>
: Multiple pieces :
Fight smarter (and harder :D).
Build up a topical/relevance stack - produce content for different stages of the journey, cover multiple pain points, produce for different "roles" and tasks etc.
Interlink and maximise topicality!
>>>
>>>
Sadly - there is no single "best" approach,
it depends on the market/SERPs and your resources (inc. topic knowledge/content creation skills).
But - the more of those things you do,
the greater your chances of winning "something"
(be it higher or newer rankings/traffic)
>>>
>>>
But you do need to factor in costs.
Sometimes, the "low-hanging fruit" is actually cost-inefficient (the time/effort/cost to target those terms doesn't really pay off (for a long time!)).
So prioritise based on value/return to the business,
and balance against "ease".
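If it helps to make that concrete, here's a minimal sketch in Python (the term names, value estimates and effort scores are made-up placeholders, and value-per-effort is just one possible heuristic):

# Hypothetical terms, with rough estimates of business value and effort to rank.
terms = [
    {"term": "term-a", "value": 5_000, "effort": 10},  # high value, hard to win
    {"term": "term-b", "value": 800,   "effort": 2},   # modest value, fairly easy
    {"term": "term-c", "value": 150,   "effort": 1},   # the "low hanging fruit"
]

# Score each term by value per unit of effort; highest scores first.
for t in sorted(terms, key=lambda t: t["value"] / t["effort"], reverse=True):
    print(t["term"], round(t["value"] / t["effort"], 1))

# term-a 500.0
# term-b 400.0
# term-c 150.0

Note how the "easiest" term can still score lowest once value is factored in.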
>>>
>>>
Also factor in what strengths you already have
(stronger ranking pages/terms, with higher quality/converting traffic, that produce more revenue (margin or ad-views/affil.clicks etc.)),
and see if you can leverage those more/better,
and build upon those foundations.
>>>
>>>
And don't forget - stacking isn't uni-directional.
You can stack with different mediums
(text, image, video, audio, slides etc.),
and with different content formats for different audiences
(interviews/perspective/news, data/research, edu/guide, comparison/review etc.)
>>>
>>>
If all of that is a bit of a muddle for you,
try picturing a fruit tree.
Big, juicy fruits at the top,
lower down, the fruit tends to be smaller and less juicy.
One side of the tree produces more than the other.
Some fruits are withered, bug-eaten etc.
Pick with care :D
Relative figures (grown by X%, gain by N%, return of Y%)
don't mean squat unless a real figure is provided alongside (ideally the initial/base figure; failing that, the final one, and people can do the number crunching).
Why is it important?
>>>
>>>
Perceptual skew.
Go from 10 to 100
Go from 100 to 500
Go from 5,000 to 6,000
Each of those increases by a smaller %,
but by a larger absolute figure.
It's easy to have impressive relative figures (such as ROI) when you start from a low base!
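Here's the arithmetic spelled out, as a small Python snippet using the figures above:

# Relative vs absolute change for each of the example jumps.
pairs = [(10, 100), (100, 500), (5_000, 6_000)]
for start, end in pairs:
    pct = (end - start) / start * 100
    print(f"{start:>5,} -> {end:>5,}: +{pct:,.0f}% relative, +{end - start:,} absolute")

#    10 ->   100: +900% relative, +90 absolute
#   100 ->   500: +400% relative, +400 absolute
# 5,000 -> 6,000: +20% relative, +1,000 absolute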
Pray/Hope
4️⃣ ... it's for the right audience
5️⃣ ... it's something that contributes to your business goals
6️⃣ ... it's a relevant topic that aligns with your site
Simply copying is Not a good move
(nor is it a "strategy"!).
Most sites already have a load of content that ranks for a ton of irrelevant terms that do Nothing for the business!
Which means you're wasting resources
(at the least, time and effort, maybe money)
>>>
>>>
Investing a modicum of time/effort at the start to check the topic/intent, nature/purpose and suitability of the pieces could help you avoid such wastage.
It's this sort of shit that has led to the internet being filled with highly similar, largely redundant crap!
You will see the same sort of pattern whenever there is a mass ingestion of content
that fails to get traction/acquire links,
often due to being low quality (inc. cookie-cutter/dupes):
1) New content is added
2) G discovers new content
3) G crawls new content
4) G indexes new content
5) G ranks the new content for X number of terms
>>>
>>>
6) Site gets a huge uplift in traffic
7) G sees little traction/satisfaction
8) G sees little/no link acquisition
9) G starts to tag content as questionable
10) Pages start to drop in rankings
11) Pages start to get deindexed
12) Traffic falls
So, G have invested Millions into advancing their algorithms, developing cutting-edge approaches to parsing content and identifying important information.
But …
… rather than using it for "Ranking"
… they chose to steal Traffic.
And this isn't the first time Google have committed such an offence.
Every few years, they make changes to the SERPs,
to benefit "their users".
> Knowledge Panel
> Featured Snippets
> People Also Ask
> FAQs
Each such addition - reduces traffic ...
>>>
>>>
... from the very sites that G are obtaining that content from!
And the plans for their AI addition (#Magi) are no different.
It will utilise content from 1+ sources
and present it to "their users"
(similar to a Featured Snippet, but as a generated composite).
The problems here are:
1) Correlation
2) Shallow observation
3) Copycats
4) Many factors are small/tiny, and don't show any visible impact at the top of the SERPs for high-volume/high-value terms