Thursday, May 24, 2018

10 Facts You Think You Know About SEO That Are Actually Myths

SEO is rife with misinformation, misunderstandings, and misconceptions. This is in no small part due to Google operating as something of a black box in order to limit gaming of the SERPs.
However, in recent times, Google has taken noticeable steps to become more transparent through increased activity in the SEO community.
Whether it’s regular Webmaster Hangouts, speaking engagements at conferences, or insightful metaphors on Twitter, the likes of John Mueller, Gary Illyes, and Danny Sullivan at Google are helping to dispel the myths of SEO with facts.
To further banish these myths, I’ve put together a list of 10 common misunderstandings about Google and SEO and why they’re wrong.

Myth 1: Google Penalizes Duplicate Content

The duplicate content penalty doesn’t exist. Google doesn’t penalize sites for having duplicate content.
Google understands that duplicate content is a natural part of the web and aims to index the highest quality and most relevant page so that searchers aren’t repeatedly presented with the same content in the search results.
Unless a site is trying to manipulate rankings and is entirely made up of duplicate content, the worst case scenario resulting from duplicate content is that similar pages are folded together in the index and an alternative version of the page is shown instead.
SEO professionals can provide search engines with a number of signals as to which page they want indexed, including correct use of canonicals, sitemap inclusion, and internal links pointing to the preferred page.
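
To make those signals concrete, here is a minimal sketch in Python (using the requests and beautifulsoup4 libraries; the URLs are hypothetical placeholders, not anything from this article) that checks whether a set of duplicate pages all point their canonical tags at the same preferred version:

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical example: the version we want indexed, plus near-duplicate URLs.
    PREFERRED = "https://www.example.com/widgets/"
    DUPLICATES = [
        "https://www.example.com/widgets/?sort=price",
        "https://www.example.com/widgets/?utm_source=newsletter",
    ]

    def canonical_of(url):
        """Fetch a page and return the href of its rel=canonical tag, if any."""
        html = requests.get(url, timeout=10).text
        tag = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
        return tag["href"] if tag else None

    for url in DUPLICATES:
        canonical = canonical_of(url)
        status = "ok" if canonical == PREFERRED else "MISMATCH"
        print(f"{status}: {url} -> canonical {canonical}")

Consistent canonicals across duplicates, combined with sitemap inclusion and internal links to the preferred URL, give Google an unambiguous picture of which page to index.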

Myth 2: Google Respects the Canonical URL as the Preferred Version for Indexing

Just because you set a URL as the preferred version for indexing via a canonical tag, it doesn’t mean that this page is the one that Google will select for indexing.
The rel=canonical tag is treated by Google as a signal for the preferred page, not a directive, and isn’t always respected.
Such instances can be found in the new version of Google Search Console in the Index Coverage report under the flag ‘Submitted URL not selected as canonical’.
Google may choose a page other than the one you have selected as the canonical when they judge another page in a set of duplicates to be a better candidate to show in search.
In such cases, it’s worth considering whether the canonical page you have selected is actually the one you want indexed. If it is, then you will need to check that the signals discussed previously (sitemaps, internal linking, etc.) are pointing to your preferred version.
The key is to ensure you’re sending Google consistent signals as to the preferred version of the page.
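
As one way of auditing that consistency, the sketch below (Python again; the sitemap URL is a hypothetical placeholder) checks that every URL submitted in an XML sitemap declares itself as its own canonical, so the sitemap and canonical signals don’t contradict each other:

    import xml.etree.ElementTree as ET
    import requests
    from bs4 import BeautifulSoup

    SITEMAP_URL = "https://www.example.com/sitemap.xml"  # hypothetical placeholder
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    # Pull every <loc> entry out of the sitemap.
    root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
    submitted = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

    for url in submitted:
        html = requests.get(url, timeout=10).text
        tag = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
        canonical = tag["href"] if tag else None
        if canonical != url:
            # A submitted URL that canonicalizes elsewhere sends Google mixed signals.
            print(f"Inconsistent: {url} declares canonical {canonical}")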

Myth 3: Quality Updates Result in Algorithmic Penalties

In a recent interview, former Google engineer Fili Wiese spoke about the myth of algorithmic penalties:
“One misconception that often exists is around things like Google Panda or Phantom as the industry called it, or Fred. Quality updates basically. But people think those are penalties or algorithmic penalties (when they’re not).
The thing is there is no such thing as an algorithmic penalty, it’s actually a recalculation. It’s like a big black box with the formula inside, you put something in, something comes out, and what comes out is the rankings and what goes in is your website.
The algorithm changes are just basically changes within the black box, which means that what comes out on the other side is slightly different now. Does that mean you’ve been penalized? No. It may feel like it, but you’re not penalized.”
This is a subtle difference that Wiese raises, but an important one in understanding how Google’s search algorithms operate.

Myth 4: Google Has 3 Top Ranking Factors

This was big news in March 2016 when Andrei Lipattsev announced that links, content, and RankBrain made up the top 3 Google ranking factors.
However, Mueller has since dismissed this statement in a Webmaster Hangout, saying that it isn’t possible to determine the most important ranking factors because they change from query to query and from day to day.
It isn’t helpful to focus on individual ranking signals because search engine algorithms are too sophisticated for this to be a useful way of conceptualizing how they work.
Instead, SEO pros should focus on optimizing their sites to improve user experience, match user intent and, more broadly, improve site quality while keeping up to date with Google’s latest developments.

Myth 5: Google’s Sandbox Applies a Filter When Indexing New Sites

A further misconception comes in the form of how Google treats new sites in the index. There is a long-held belief among some in the SEO community that Google applies a filter to new websites in order to stop spammy sites from ranking soon after launch.
Mueller put the Google sandbox to bed in a Webmaster Hangout, when he said that there was no such filter being applied to new sites.
He did, however, say that there is a set of algorithms that may look similar to a sandbox, but that they attempt to understand how the website fits in with others trying to rank for the same queries.
This, in some cases, may mean pages rank higher or lower for a period of time while Google’s algorithms work out how they fit in with competing pages.

Myth 6: Use the Disavow File to Maintain a Site’s Link Profile

One staple of an SEO’s responsibilities has historically been pruning a site’s backlink profile by disavowing low-quality or spammy links.
Over the years Google’s algorithms have gotten better at understanding these types of low-quality backlinks and knowing when they should be ignored. As a result, the need for SEO pros to maintain and update a disavow file has diminished significantly.
At BrightonSEO in September 2017, Illyes stated that if backlinks are coming in organically to a site, it’s extremely unlikely that the site will receive a manual action. Illyes went on to say that he doesn’t have a disavow file for his own personal site.
Now the disavow file is only recommended when a site has received a manual action, in order to have the offending links ignored.
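
For reference, the disavow file itself is just a plain text file uploaded through Search Console, with one entry per line: either a full URL or a domain: rule, with # marking comments. A minimal example (the domains and URL are placeholders):

    # Links related to a manual action for unnatural inbound links
    domain:spammy-directory.example
    domain:paid-links.example

    # Individual offending pages
    http://blog.example.net/2016/03/widget-comment-spam.html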

Myth 7: Google Values Backlinks from All High Authority Domains

Successful link building should be judged by the authority and relevance of the backlinks pointing to the target site. Still, backlinks from high-authority domains are highly sought after regardless of how relevant they are to the target site.
However, another insight gleaned from Illyes’ BrightonSEO Q&A revealed that Google takes into account the context of backlinks, meaning that SEO pros should perhaps give more importance to link relevance when going after links.
Illyes thinks there is value in fixing internal and external links, but it is important to keep context in mind. If a poor-quality article (which has nothing to do with your site) links to your site, Google will ignore it because the context doesn’t match.

Myth 8: Google Uses Page Speed as a Major Ranking Signal

Google has used site speed as a ranking signal since 2010, and intuitively you might assume it is incorporated as a key part of the algorithms, especially as speed has become such an important topic in SEO.
However, Mueller has explained that, while there are plans to introduce a speed update later in 2018, Google only uses speed to differentiate between slow pages and those in the normal range. In fact, DeepCrawl (disclosure: I work for DeepCrawl) found that Googlebot will crawl and index pages that take up to three minutes to respond.
Speed may also indirectly influence rankings through user experience feedback, such as visitors bouncing from a page that takes too long to load, but for now Google’s use of speed in its algorithms remains rudimentary.
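
If you want a rough read on which pages fall outside that normal range, a simple timing check like the sketch below can help (Python with requests; the URLs and the one-second threshold are illustrative assumptions, not published Google thresholds):

    import requests

    # Hypothetical pages to test; the threshold is an arbitrary benchmark,
    # not a figure Google has published.
    URLS = [
        "https://www.example.com/",
        "https://www.example.com/widgets/",
    ]
    SLOW_SECONDS = 1.0

    for url in URLS:
        response = requests.get(url, timeout=180)
        elapsed = response.elapsed.total_seconds()  # time until response headers arrived
        label = "SLOW" if elapsed > SLOW_SECONDS else "ok"
        print(f"{label}: {url} responded in {elapsed:.2f}s")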

Myth 9: Fred Was an Algorithm Update Related to Link Quality

Google continuously updates its search algorithms, at an average rate of two to three changes per day. The Fred algorithm update in March 2017 was thought to be an update related to link quality.
However, Illyes has made it clear that there was no specific algorithm update like Panda or Penguin – in fact, he called the ranking fluctuations Fred as a joke.
Illyes went on to say that 95-98 percent of these ongoing updates are not actionable for webmasters. Fluctuations will always happen, but you should focus on having a high-quality site with lots of people talking about your brand via links, social networks, etc.

Myth 10: Crawl Budget Isn’t an Issue

Crawl budget is a complex and much-debated topic, but it is overlooked by some who overestimate Google’s ability to crawl all the pages on a given site.
For small to medium sites (up to around 200,000 pages), Google can typically crawl all of the pages. However, crawl budget is a pressing issue for those managing large enterprise sites, who need to ensure that their important pages are being crawled regularly.
One way to check whether Google has crawled the vast majority of your site is to see if Googlebot is hitting lots of 404 pages, as this indicates that most of the important pages have already been crawled.
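
One way to run that check is to tally Googlebot requests by status code in your server’s access log. Here is a minimal sketch (Python; the log path is a hypothetical placeholder, the regex assumes a common/combined log format, and matching on the user-agent string alone can be fooled by spoofed bots):

    import re
    from collections import Counter

    LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path; adjust for your server

    # Matches the request and status fields of a common/combined format log line,
    # e.g. "GET /some/page HTTP/1.1" 404
    LINE_RE = re.compile(r'"[A-Z]+ \S+ HTTP/[\d.]+" (?P<status>\d{3})')

    status_counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            if "Googlebot" not in line:  # user-agent check only; bots can spoof this
                continue
            match = LINE_RE.search(line)
            if match:
                status_counts[match.group("status")] += 1

    total = sum(status_counts.values())
    for status, count in status_counts.most_common():
        print(f"{status}: {count} requests ({count / total:.1%} of Googlebot crawls)")

A rising share of 404s in Googlebot’s crawl is a reasonable hint that it has already worked through the important, indexable pages.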

Conclusion

I hope this post has helped to kill off some of the long-held sacred cows of SEO. When it comes to SEO, there are facts and myths. Make sure you’re focusing on facts, not fairy tales!

Reference: https://www.searchenginejournal.com/seo-myths-facts/253609/
