Patrick Stox is Product Advisor, Technical SEO, and Brand Ambassador at lasignoralaura.com. He's an organizer for the Raleigh SEO Meetup, Raleigh SEO Conference, Beer & SEO Meetup, and Findability Conference, and also a moderator on /r/TechSEO.

How to Remove a Website From Google Search Results




There are many ways to remove URLs from Google, but there's no one-size-fits-all approach. It all depends on your circumstances.

That's an important point to understand. Not only can using the wrong method leave pages in the index when you intended to remove them, but it can also have a negative effect on SEO.

To help you quickly decide which removal method is best for you, we made a flowchart so you can skip to the relevant section of the article.

[Flowchart: how to choose the right removal option]

In this post, you'll learn:

How to check if a URL is indexed
Five ways to remove URLs from search results
How to prioritize removals
Removals done incorrectly
How to remove images from Google

What I typically see SEOs do to check whether content is indexed is use a site: search in Google (e.g., site:https://lasignoralaura.com). While site: searches can be useful for identifying the pages or sections of a website that might be problematic if they show in search results, you have to be careful, because they aren't normal queries and won't actually tell you if a page is indexed. They may show pages that are known to Google, but that doesn't mean those pages are eligible to show in regular search results without the site: operator.

For example, site: searches can still show pages that redirect or are canonicalized to another page. When you ask for a specific site, Google may show a page from that domain with the content, title, and description from another domain. Take, for instance, moz.com, which used to be seomoz.org. Any regular user queries that lead to pages on moz.com will show moz.com in the SERPs, while site:seomoz.org will show seomoz.org in the search results, as shown below.

[Screenshot: site:seomoz.org search results]

The reason this is an important distinction is that it can lead SEOs to make mistakes, such as actively blocking or removing URLs from the index for the old domain, which prevents consolidation of signals like PageRank. I've seen many cases with domain migrations where people think they made a mistake during the migration because these pages still show for site:old-domain.com searches, and they end up actively harming their website while trying to "fix" the problem.

The better way to check indexation is to use the Index Coverage report in Google Search Console, or the URL Inspection tool for an individual URL. These tools tell you whether a page is indexed and provide additional information on how Google is treating the page. If you don't have access to these, just search Google for the full URL of your page.

[Screenshot: URL Inspection tool in Google Search Console]

In lasignoralaura.com, if you find the page in our "Top pages" report or ranking for organic keywords, that usually means we saw it ranking for regular search queries, which is a good indication that the page was indexed. Note that the pages were indexed when we saw them, but that may have changed. Check the date we last saw the page for a query.

[Screenshot: "Top pages" report in lasignoralaura.com]

If there is a problem with a particular URL and it needs removing from the index, follow the flowchart at the start of the article to find the correct removal option, then jump to the appropriate section below.


Removal option 1: Delete the page

If you remove the page and serve either a 404 (not found) or 410 (gone) status code, the page will be removed from the index shortly after it is re-crawled. Until it is removed, the page may still show in search results. And even if the page itself is no longer available, a cached version of the page may be temporarily available.
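As a minimal sketch, here's what those responses look like on the wire; the status line is all a crawler needs to see:

HTTP/1.1 410 Gone

HTTP/1.1 404 Not Found

A 410 explicitly signals that the removal is permanent and is sometimes reported to be processed slightly faster than a 404.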

When you might need a different option:

I need more immediate removal. See the URL removal tool section.
I need to consolidate signals like links. See the canonicalization section.
I need the page available for users. See if the noindex or restricting access sections fit your situation.

Removal option 2: Noindex

A noindex meta robots tag or x‑robots header response will tell search engines to remove a page from the index. The meta robots tag works for HTML pages, while the x‑robots header response works for pages and additional file types like PDFs. For these tags to be seen, a search engine needs to be able to crawl the pages, so make sure they aren't blocked in robots.txt. Also, keep in mind that removing pages from the index may prevent the consolidation of link and other signals.

Example of a meta robots noindex tag:

<meta name="robots" content="noindex">

Example of an x‑robots noindex tag in the header response:

HTTP/1.1 200 OK
X-Robots-Tag: noindex
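If you want to apply this header to file types like PDFs, one common way to do it, as a sketch assuming an Apache server with mod_headers enabled, is:

<FilesMatch "\.pdf$">
Header set X-Robots-Tag "noindex"
</FilesMatch>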

When you might need a different option:

I don't want users to access these pages. See the restricting access section.
I need to consolidate signals like links. See the canonicalization section.

Removal option 3: Restricting access

If you want the page to be accessible to some users but not search engines, then what you probably want is one of these three options:

A login system of some kind
HTTP authentication (where a password is required for access)
IP whitelisting (which only allows specific IP addresses to access the pages)

This type of setup is best for things like internal networks, members-only content, or staging, test, or development sites. It allows a group of users to access the pages, but search engines will not be able to access them and will not index the pages.
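As a minimal sketch of the HTTP authentication option, a protected URL would answer unauthenticated requests with a challenge like this (the realm name here is arbitrary):

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Staging"

Search engine crawlers don't submit credentials, so everything behind the login stays uncrawled and unindexed.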

When you might need a different option:

I need more immediate removal. See the URL removal tool section. In this specific case, you may want more immediate removal if the content you are trying to hide has been cached and you need to prevent users from seeing that content.

Removal option 4: URL removed Tool

The name of this tool from Google is slightly misleading, as the way it works is that it will temporarily hide the content. Google will still see and crawl this content, but the pages won't appear for users. This temporary effect lasts for six months in Google, while Bing has a similar tool that lasts for three months. These tools should be used in the most extreme cases for things like security issues, data leaks, personally identifiable information (PII), etc. For Google, use the Removals Tool, and for Bing, see how to block URLs.

You still need to apply another method along with using the removal tool in order to actually have the pages removed for a longer period (noindex or delete) or to prevent users from accessing the content if they still have the links (delete or restrict access). The tool just gives you a faster way of hiding the pages while the removal has time to process. The request can take up to a day to process.

Removal option 5: Canonicalization

When you have multiple versions of a page and want to consolidate signals like links to a single version, what you want to do is some form of canonicalization. This is mostly to prevent duplicate content while consolidating multiple versions of a page to a single indexed URL.

You have several canonicalization options, including a canonical tag (which tells search engines your preferred version of a page) and a redirect (which sends users and search engines to the preferred version).
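For example, a canonical tag sits in the <head> of the duplicate page and points at the preferred version (the URL below is a hypothetical placeholder):

<link rel="canonical" href="https://example.com/preferred-page/">

A 301 redirect to the preferred URL achieves a similar consolidation when the duplicate version doesn't need to stay accessible.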


How to prioritize removals

If you have multiple pages to remove from Google's index, they should be prioritized accordingly.

Highest priority: These pages are usually security-related or related to confidential data. This includes content that contains personal data (PII), customer data, or proprietary information.

Medium priority: This usually involves content intended for a specific group of users: company intranets or employee portals, members-only content, and staging, test, or development environments.

Low priority: These pages typically involve duplicate content of some kind. Some examples would include pages served from multiple URLs, URLs with parameters, and, again, staging, test, or development environments.


Removals done incorrectly

I want to cover a few of the ways I usually see removals done incorrectly, and what happens in each scenario, to help people understand why they don't work.

Noindex in robots.txt

While Google used to unofficially support noindex in robots.txt, it was never an official standard, and they have now formally removed support. Many of the sites that were doing this were doing it incorrectly and harming themselves.
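For reference, the now-unsupported pattern looked like this (the path is a hypothetical example); Google ignores it entirely today, so don't rely on it:

User-agent: *
Noindex: /page/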

Blocking from crawling in robots.txt

Crawling is not the same thing as indexing. Even if Google is blocked from crawling pages, if there are any internal or external links to a page, they can still index it. Google won't know what is on the page because they won't crawl it, but they know the page exists and will even write a title to show in search results based on signals like the anchor text of links to the page.
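For example, a rule like this stops Google from crawling a directory, but it does nothing to remove URLs under it that are already indexed or that other sites link to (the path is hypothetical):

User-agent: *
Disallow: /private/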

Nofollow

This often gets confused with noindex, and some people will use it at a page level expecting the page not to be indexed. Nofollow is a hint, and while it originally stopped links on the page and individual links with the nofollow attribute from being crawled, that is no longer the case. Google can now crawl these links if they want to. Nofollow was also used on individual links to try to stop Google from crawling through to specific pages, and for PageRank sculpting. Again, this no longer works, because nofollow is a hint. In the past, if the page had another link to it, then Google could still find it from that alternate crawl path.
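For reference, here's what page-level and link-level nofollow look like; neither one prevents the page (or the linked page) from being indexed (example.com is a placeholder):

<meta name="robots" content="nofollow">
<a href="https://example.com/page/" rel="nofollow">anchor text</a>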

Note that you can find nofollowed pages in bulk using this filter in the Page Explorer in lasignoralaura.com's Site Audit.

[Screenshot: Page Explorer filter for nofollowed pages in Site Audit]

As it rarely makes sense to nofollow all links on a page, the number of results should be zero or close to zero. If there are matching results, I urge you to check whether the nofollow directive was accidentally added in place of noindex, and to choose a more appropriate method of removal if need be.

You can also find individual links marked nofollow using this filter in Link Explorer.

[Screenshot: nofollow link filter in Link Explorer]

Noindex and canonical to another URL

These signals are conflicting. Noindex says to remove the page from the index, and canonical says that another page is the version that should be indexed. This may actually work for consolidation, as Google will typically choose to ignore the noindex and instead use the canonical as the main signal. However, this isn't absolute behavior. There's an algorithm involved, and there's a risk that the noindex tag might be the signal counted. If that's the case, then pages won't consolidate properly.
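The conflicting combination described above looks like this in the page's <head> (the URL is hypothetical):

<meta name="robots" content="noindex">
<link rel="canonical" href="https://example.com/preferred-page/">

Because Google may honor either signal, the outcome is unpredictable; pick one or the other.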

Note that you can find noindexed pages with non-self-referential canonicals using this set of filters in the Page Explorer in Site Audit:

[Screenshot: Page Explorer filters for noindexed pages with non-self-referential canonicals]

Noindex, wait for Google to crawl, then block from crawling

There are a couple of ways this usually happens:

Pages are already blocked but are indexed. People add noindex and unblock so that Google can crawl and see the noindex, then block the pages from crawling again.
People add noindex tags for the pages they want removed, and after Google has crawled and processed the noindex tag, they block the pages from crawling.

Either way, the final state is blocked from crawling. If you remember, earlier we talked about how crawling is not the same as indexing. Even though these pages are blocked, they can still end up in the index.
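The problematic end state is a robots.txt block covering pages that carry a noindex tag, something like this (the path is hypothetical); Google can no longer crawl the pages, so it never sees, or re-sees, the noindex:

User-agent: *
Disallow: /members/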


What if the content is about you but not on a website you own?

If you're in the EU, you can have content removed that includes information about you, thanks to a court ruling on the right to be forgotten. You can request to have personal information removed using the EU Privacy Removal form.


How to remove images from Google

To remove images from Google, the easiest way is with robots.txt. While the unofficial support for removing pages from robots.txt was dropped, as we mentioned earlier, simply disallowing the crawling of images is the right way to remove them.

For a single image:

User-agent: Googlebot-Image
Disallow: /images/dogs.jpg

For all images:

User-agent: Googlebot-Image
Disallow: /

Final thoughts

How you remove URLs is fairly situational. We've talked about several options, but if you're still confused about which is right for you, refer back to the flowchart at the start.


You can also go through the legal troubleshooter provided by Google for content removal.