Now ... I know some people claim G can't do a "people database", and thus can't really do "authors",
but personally - I don't see an issue.
Technically, it should be smaller than the Link Graph (fewer people than sites, fewer edges etc.).
But not all qualifications are public.
So G would have only a sample.
They could use association and inference.
Similar to the concept of Authority sites and Distance for links - they could look at mentions/citations/links from such sites to "people" (thus why mutual interlinking between works/profiles is important).
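To make that "distance from Authority" idea concrete, here's a minimal sketch: a toy mention/citation graph and a breadth-first search counting hops from a set of seed authority sites. The graph, node names, and seed set are all invented for illustration; this is not Google's data model or algorithm.

```python
from collections import deque

# Toy mention/citation graph: node -> nodes it mentions, cites, or links to.
# Mixes "sites" and "people" profiles purely for illustration.
MENTION_GRAPH = {
    "trusted-journal.example": ["dr-jane-doe", "acme-research.example"],
    "acme-research.example": ["dr-jane-doe"],
    "dr-jane-doe": ["dr-jane-doe/blog"],
    "dr-jane-doe/blog": ["dr-jane-doe"],  # mutual interlinking between profile and works
    "random-blog.example": ["unknown-author"],
}

SEED_AUTHORITIES = {"trusted-journal.example"}  # assumed seed set


def distance_from_seeds(graph, seeds):
    """Breadth-first search: minimum hops from any seed to each reachable node."""
    dist = {node: 0 for node in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist


if __name__ == "__main__":
    for node, hops in sorted(distance_from_seeds(MENTION_GRAPH, SEED_AUTHORITIES).items()):
        print(f"{node}: {hops} hop(s) from a seed authority")
    # "unknown-author" never prints: it's unreachable from the seed set.
```

The mutual interlinking point falls out of the structure: every reciprocal edge between a profile and its works creates shorter paths back to whatever seeds can reach that cluster.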
But that doesn't cover things like "reviews".
Not unless G are scoring people with some form of "trust" metric.
That's a fair bit of resources,
vs simply parsing content looking for "I" and "we" etc.
But I'd be happy for someone at @googlesearchc to say I'm wrong ;)
Shallow/(relatively) Simplistic parsing of text,
or a more complicated record/system tracking and scoring people?
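For contrast, the "shallow parsing" option is trivially cheap. A rough sketch of what that could look like - a regex over first-person pronouns and a simple ratio over tokens; the pronoun list and the scoring are assumptions for illustration, not anything Google has confirmed:

```python
import re

# Crude signal that content is written in the author's own voice.
# The pronoun list is an assumption, purely for illustration.
FIRST_PERSON = re.compile(r"\b(i|we|my|our|me|us)\b", re.IGNORECASE)


def first_person_ratio(text: str) -> float:
    """Share of whitespace-separated tokens that are first-person pronouns."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return len(FIRST_PERSON.findall(text)) / len(tokens)


sample = "In my tests, I found that we could double crawl efficiency."
print(f"first-person ratio: {first_person_ratio(sample):.2f}")  # ~0.27
```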
And honestly - people really should know.
Either people are getting confused with EEAT,
and thinking they must hire expensive people etc. to gain an edge,
or you've got a collection of data on people that the public should know about,
or it's just smoke and simple parsing.
So, G have invested millions into advancing their algorithms, developing cutting-edge approaches to parsing content and identifying important information.
But …
… rather than using it for "Ranking"
… they chose to steal Traffic.
And this isn't the first time Google have committed such an offence.
Every few years, they make changes to the SERPs,
to benefit "their users".
> Knowledge Panel
> Featured Snippets
> People Also Ask
> FAQs
Each such addition - reduces traffic ...
>>>
>>>
... from the very sites that G are obtaining that content from!
And the plans for their AI addition (#Magi) are no different.
It will utilise content from 1+ sources,
and present it to "their users",
(similar to a FS, but a generated composite).
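For illustration only, here's what a "generated composite" could look like in toy form: take the best-matching sentence from each of several source pages and stitch them into one answer box. The sources, the keyword-overlap scoring, and the query are all invented; nothing here is documented Magi behaviour.

```python
# Toy "generated composite": pull the best-matching sentence from each source
# page and stitch them into a single answer box.

SOURCES = {
    "site-a.example": "Crawl budget is the number of URLs Googlebot will fetch. It varies by site.",
    "site-b.example": "You can influence crawl budget by improving server response times.",
    "site-c.example": "Unrelated marketing copy with no answer in it.",
}

QUERY_TERMS = {"crawl", "budget"}


def best_sentence(text, query_terms):
    """Return (overlap score, sentence) for the sentence sharing the most query terms."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return max((len(query_terms & set(s.lower().split())), s) for s in sentences)


composite = []
for url, text in SOURCES.items():
    score, sentence = best_sentence(text, QUERY_TERMS)
    if score:  # only keep sources that actually answer the query
        composite.append(f"{sentence}. (source: {url})")

print(" ".join(composite))
```

Note who loses in that output: the user gets the answer on the SERP, and the cited sites get a label instead of a visit.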
The problems here are:
1) Correlation
2) Shallow observation
3) Copy cats
4) Many factors are small/tiny, and don't show any visible impact at the top of the SERPs for high-volume/high-value terms
Instead, there has been a significant shift to:
* what contributes to the businesses goal(s),
* what is readily achievable with the resources available,
* how to present needs/requirements to get them implemented,
* understanding of resource allocation
>>>
>>>
As time goes by, SEO grows from being a specialised and highly misunderstood "extra",
to a core aspect of marketing,
and is causing businesses to mature (as there's a shocking number of them that don't grasp marketing in general either!).
It's the inverse of "time served"
(another thing that always irritated me - the assumption that because someone has done the job for X years, they are good at it!).
Some people are smart, others sharp, others invested time/effort in learning/research years ago.
>>>
>>>
Some people may have passive experience (family business, spent lots of time soaking up insights from someone in that role etc.).
Others are able to transfer/adapt what they have,
and apply it sideways to something else.
Though markets may differ,
industries respond differently,
audiences react in various ways ...
... there are "safer bets", and "more probable" performers.
We know this based on experience (and even prior tests).
So you go with those as your baseline!
>>>
>>>
You then divvy up the resources,
and allocate a % to "the wilds" - those that you have no experience/knowledge of, those that seem a little out there, those you may have doubts over.
Those still need testing
(if only to shut someone in a chair up!).
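As a rough sketch of that split - the 80/20 ratio, the hours, and the tactic names are all illustrative assumptions, not a recommendation:

```python
# Illustrative split: most hours to "safer bets", a fixed slice to "the wilds".
BUDGET_HOURS = 100
WILDS_SHARE = 0.2  # assumed; pick whatever slice the business can stomach

safer_bets = ["title rewrites", "internal linking", "content refresh"]
the_wilds = ["entity markup experiment", "odd SERP-feature idea"]

wilds_hours = BUDGET_HOURS * WILDS_SHARE
baseline_hours = BUDGET_HOURS - wilds_hours

for tactic in safer_bets:
    print(f"baseline  | {tactic}: {baseline_hours / len(safer_bets):.1f}h")
for tactic in the_wilds:
    print(f"the wilds | {tactic}: {wilds_hours / len(the_wilds):.1f}h")
```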