Why isn’t my website showing up on Google? When will it be indexed? Why is my website missing from search results? These are all very common questions. No surprise there: more visibility in Google search results means more traffic to your site, more inbound leads, higher sales, and so on.
The reasons why you’re not seeing your website in Google search results can vary. Perhaps your site is not SEO friendly? Read on to learn the most common causes.
[Infographic: five reasons why your site isn’t showing in Google, by Alan O’Rourke]
First of all: time. The most common reason a page isn’t appearing in Google is that not enough time has passed since it was published. Google needs time to find a page and index it; only then can it show the page in search results.
How do you find out if a page is indexed? Type site: followed by the page URL in Google.
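For example, to check a hypothetical blog post, you could search for:

site:www.example.com/blog/new-post.html

If Google returns the page, it is indexed. If there are no results, it most likely isn’t indexed yet.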
Or check it directly in Google Search Console
Type your page URL into the search bar at the top of the Search Console dashboard and check its index status.
Less competitive content may appear at the top of search results shortly after being indexed. For competitive topics, strong content will most probably already be ranking, which means you will need time and backlinks to your page before it becomes visible in Google.
Links are invariably one of the most important factors in page quality evaluation. Search engines can find a page if it has incoming links. The higher the domain score (Pulno Domain Power), the faster Google finds the page, and articles posted on such a website are likely to rank higher. Obviously, this is a great generalization; however, sites without backlinks are less likely to appear in top positions.
In order to be indexed, a page needs incoming links. Links to individual pages may be placed in the sitemap file, provided you regularly update it. However, it is advisable to also have the links directly on the website (on the homepage and other pages).
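For illustration, a minimal sitemap file (assuming the common convention of an XML sitemap.xml placed at the root of the domain; the URL is an example) could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/red-boots.html</loc>
    <lastmod>2020-02-01</lastmod>
  </url>
</urlset>

Submitting the sitemap in Google Search Console helps Google discover the listed URLs faster.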
The robots.txt file may be used to block search engines from crawling certain pages on your website. The file is located at the root of the website host, e.g.:
https://www.populationof.net/robots.txt
If the robots.txt file looks like this
User-agent: *
Disallow: /example/
and the page address is:
https://www.website.com/example/pageA.html
then this page won’t be crawled by the search engines. There are exceptions (e.g. a page with external links pointing directly to it may still end up in the index), but in general, pages blocked in the robots.txt file are not indexed by search engines.
An HTTP header is part of the response the server sends to browsers and search engine bots along with a page; it is not part of the HTML code itself. You can specify the pages you don’t want indexed by setting the X-Robots-Tag value to noindex in the HTTP header.
X-Robots-Tag: noindex
This way you let the search engines know that the page should not be indexed. It is a good way to keep non-HTML files, such as PDFs, out of the index.
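How you set this header depends on your server. Here is a minimal sketch for Apache (assuming mod_headers is enabled) that adds the header to all PDF files:

<FilesMatch "\.pdf$">
  # Tell search engines not to index any PDF file served from this site
  Header set X-Robots-Tag "noindex"
</FilesMatch>

Other servers, such as nginx, have equivalent directives for adding response headers.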
When communicating with the search engine, the server sends an HTTP response status code. The 200 code means the request has been processed successfully on the server. If your page returns a code starting with:
3 (3xx) - the request is redirected to another URL;
4 (4xx) - the page could not be found or accessed (a client error);
5 (5xx) - the server failed to process the request (a server error).
If the status code is anything other than 200, the page will most likely not be indexed properly.
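You can check the status code a page returns with any HTTP client. A quick sketch in Python, using the requests library (the URL is an example):

import requests

# Fetch the page without following redirects to see the raw status code
response = requests.get("https://www.example.com/red-boots.html", allow_redirects=False)
print(response.status_code)  # 200 means the page was served successfully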
The canonical parameter is used to point search engines to the preferred page that should be indexed.
If you add a rel=canonical to the page below
https://www.example.com/red-boots.html?page=old
that looks like this:
<link rel="canonical" href="https://www.example.com/red-boots.html" />
the search engines will stop indexing the page
https://www.example.com/red-boots.html?page=old
and will start indexing the canonical page:
https://www.example.com/red-boots.html
You can point rel=canonical to the very same page (a self-referencing canonical); however, if you specify a different page, the search engines will index the canonical page instead. Remember that the rel=canonical parameter is only a hint for search engines and is not always honored.
You can also block a page from appearing in Google by adding the robots meta tag to it. Adding the element below to the page’s head section
<meta name="robots" content="noindex,follow" />
will block it from showing up in Google. It is worth noting that, in the long run, Google treats noindex, follow as noindex, nofollow, so the links on such a page eventually stop being followed.
Google relies heavily on your website’s texts. If your content is copied from other pages (duplicate content) and/or of low quality, chances are that the page won’t be indexed by Google or rank high. It is true that there are many counterexamples, with duplicate content showing up in Google. These results, however, don’t hold top positions for long and are likely to see significant drops with the next algorithm update. Making sure your content is high quality increases your chances of better rankings.
Pages with little or no unique content are crawled by search engine bots less frequently and therefore reach much lower positions in search results.
Another reason your site may be missing from search results is multiple redirects. If a redirect is set up properly, search engines will index the destination URL. However, if there are many hops in a redirect chain (more than 5) or if there’s a loop, the page might not be indexed and won’t show up in Google at all.
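To see how many hops a URL goes through, you can trace the chain yourself. A sketch in Python with the requests library (the URL is an example):

import requests

# Follow redirects and print every hop in the chain
response = requests.get("https://www.example.com/old-page.html", allow_redirects=True)
for hop in response.history:
    print(hop.status_code, hop.url)  # each intermediate redirect
print(response.status_code, response.url)  # the final destination

If the chain contains a loop, requests raises a TooManyRedirects exception, which is a strong hint that search engines will struggle with the page too.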
Apart from text, search engines crawl other resources as well, such as CSS stylesheets, JavaScript files and images. If search engine bots are blocked from crawling these files, they might have a problem fully understanding the site’s content. As with the cases mentioned earlier, it is better to let the bots access these resources too: don’t block them in the robots.txt file or with HTTP headers.
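For example, a robots.txt like the one below (a hypothetical anti-pattern) hides all stylesheets and scripts from the bots and should be avoided:

User-agent: *
Disallow: /css/
Disallow: /js/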
Frequent server errors have a negative effect on the crawl rate as well. If search engine bots often receive a 5xx response status, the chances of the page getting indexed shrink.
Google penalties may be one more reason your site isn’t showing up in the search results. The status of manual penalties set by Google can be verified in Google Search Console.
Recognizing an algorithmic penalty is a subject for a separate post. You can always check whether your domain name appears in the top positions in Google. Another helpful tool is the site: search operator. If a search for site:yourdomainname.com returns fewer results than the number of pages you have actually published, you may suspect an algorithmic penalty. An unnatural backlink profile is a very common cause of this type of penalty.
Search engines can’t access web pages secured with a password. Setting a password is a good way to block access to pages on a staging or test server. However, if you want your pages indexed, don’t hide them behind a password.
An estimated 5.7 million blog posts are published every day, so the chances that an article on the same topic as yours already exists are very high. It is good to know your competition: search for the keywords related to the content you aim to create and see why the top-ranking articles actually rank so high. Most commonly, these are long, engaging posts with relevant images and a clear structure. If you want to rank high, you need to create quality content and earn excellent backlinks.
The situation is similar to the one in sports - only the most hardworking can achieve the best results.
Slow-loading pages have much lower conversion rates and are less likely to get indexed properly. A slow server, especially one generating a lot of errors (see the section on server errors above), will make search engines ignore such pages.
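As a rough first check, you can measure the server response time. A sketch in Python with the requests library (the URL is an example):

import requests

response = requests.get("https://www.example.com/")
# elapsed measures the time from sending the request until the response headers arrive
print(response.elapsed.total_seconds(), "seconds")

This only measures server response, not full page rendering; tools like Google PageSpeed Insights give a more complete picture.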
There may be many reasons why Google hasn’t indexed your website. Checking all of them manually is not only difficult (e.g. HTTP headers), but also time-consuming. If some of your pages are not indexed, I suggest using Pulno to check the issues instantly and automatically.
Jacek Wieczorek is the co-founder of Pulno. Since 2006, he has been optimizing and managing websites that generate traffic counted in hundreds of thousands of daily visits.
24-02-2020