PageRank Is The Primary Google Search Ranking Factor

Yes, it is time for another controversial SEO post. Sit back and enjoy.

Every single time I write a post mentioning PageRank, I get comments about PageRank not being important for ranking.

These comments very rarely differentiate between toolbar PageRank and the PageRank, of whatever kind, that Google stores on its servers and updates extremely frequently for every page. I know from first-hand experience that toolbar PageRank has very little to do with rankings, and is manually manipulated based on Google’s commercial goals.

PageRank By Any Other Name…

The Ranking Factors article at SEOmoz in many ways skirts around the issue, referring to toolbar PageRank and then ignoring the concept of real PageRank by splitting it into multiple related items:

  • Link Popularity within the Site’s Internal Link Structure
  • Global Link Popularity of Site
  • Topical Relevance of Inbound Links to Site
  • Link Popularity of Site in Topical Community
  • Global Link Popularity of Linking Site
  • Topical Relationship of Linking Site
  • Internal Link Popularity of Linking Page within Host Site/Domain

The only direct question specific to PageRank was:

  • PageRank (as measured by the GG Toolbar) of Linking Page

Aaron Wall, in answering this question, actually gave a response hinting at the real importance of PageRank:

The toolbar is perpetually outdated, but Google uses PageRank values to help set crawling priorities and to determine if a document should go in the regular or supplemental index.

Some Simple Questions

  • Can a page rank without a Title tag?
  • Can a page rank without any internal linking?
  • Can a page rank even on a new domain?
  • Can a page rank without direct external links?

Ultimately with almost all the ranking factors, it is a balancing act, but with PageRank or however you wish to describe “Google Juice”, it becomes a little more fundamental.

No PageRank, No Google Juice = No Index

I realise that if you take a purely theoretical stance, if you created a 1000 page site full of original content and then pointed Google to the sitemap for that site, Google might index the whole site, and if you removed that link, some of the pages might remain indexed for a short or long period of time.
I haven’t done the test, but in theory a random surfer could land on one of the isolated pages, if Google chose to keep the unconnected pages in the index.
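To make that concrete, here is a toy power-iteration sketch of the classic random-surfer PageRank model. The four-page graph, the page names and the 0.85 damping factor are my own illustrative assumptions, not anything from Google: a page with no inbound links at all still ends up holding the tiny teleportation share of juice, which is exactly the slice a random surfer could land on.

```python
# Toy PageRank power iteration (textbook model, not Google's implementation).
# An isolated page with no inbound links only ever holds roughly the
# teleportation share (1 - d) / N of the total juice.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
            else:  # dangling page: spread its juice over every page
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# "home" links to two posts, both link back; "orphan" has no inbound links.
ranks = pagerank({
    "home": ["post-a", "post-b"],
    "post-a": ["home"],
    "post-b": ["home"],
    "orphan": [],
})
```

Run it and the orphan page keeps only the teleportation crumbs while the interlinked pages accumulate real juice; whether Google would bother keeping such a page in the index is exactly the open question above.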

PageRank Flow & Real World Indexing

I needed a real world example to demonstrate how important juice flow around a website or blog is, and I decided that Michael Martinez was effectively asking for this by saying:

I do absolutely nothing to make SEO Theory “SEO friendly”. It is better indexed in Google than most SEO blogs.

Take that for what it’s worth.

I am always up for a challenge, especially when Michael went on to say

My complaints about the poor quality of Google’s search results stem from Google’s willful, deliberate segregation of the Web into two categories: Preferred Pages (Main Web Index) and Supplemental Pages. Preferred Pages are always shown first in search results regardless of how much more relevant the Supplemental Pages may be to queries.

Actually, Google seems to have 3 types of pages:

  1. Main Index
  2. Supplemental – apparently being phased out, but it could all be FUD; on that point I agree with Michael
  3. Not Indexed

Michael forgot about the pages that are receiving so little juice, Google doesn’t even bother indexing them, even on sites that are “better indexed than most SEO blogs”.

It is possible that Michael is doing some kind of indexing test, or he could have selectively decided that he doesn’t want his old content in Google’s index.
Thus I am not going to link directly to the following pages, which would damage his test results.

That being said, Michael did ask to be quoted on it, and to quote him I am sure he would want the person doing the quoting to provide good, if not conclusive evidence for or against his stance. I am not going to claim conclusive evidence, but at least I have spent a little time on this reply.

Michael links to his date based archive pages from every page in his sidebar, thus they should be receiving a fair amount of juice. However, that juice doesn’t flow very deeply, and he only has 5 posts on each page of his archives.

If you go just 3 pages deep, Michael starts to have indexing problems.

Every article listed on that page is not in Google’s index… at all!
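As a rough sketch of why depth matters (the 20% pass-through figure is an invented illustration, not a measured value): if each archive page keeps most of its juice for the 5 posts it lists and hands only a small fraction on to the next page of the archive, the juice reaching deeper pages shrinks geometrically.

```python
# Toy model with made-up numbers: juice arriving at an archive page
# `depth` clicks past the sidebar link, when each page passes only a
# fixed fraction of its juice to the "older posts" page behind it.
def juice_at_depth(entry_juice, pass_through, depth):
    return entry_juice * pass_through ** depth

for depth in range(4):
    print(f"archive page {depth + 1}: {juice_at_depth(1.0, 0.2, depth):.4f}")
```

With a 20% pass-through, the page three clicks deep receives under 1% of the juice that entered the archive, which fits the pattern of deep archive pages dropping out of the index.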

[Screenshots: “Not Indexed By Google” search results for the SEO Theory archive pages 5 through 1]

It seems the content on Michael’s SEO Theory blog isn’t as well indexed currently as you might expect, but as I mentioned earlier, that might be due to experimentation.

I Have Pages Not Indexed As Well

I decided a while back it would be hard to write a post like this without having some pages of my own to point out, so I did a number of things.

  • I didn’t make extensive structure changes to improve things based on my WordPress SEO articles
  • I switched off translation plugins
  • I don’t include unique article descriptions
  • When I upgraded to WP2.3, I didn’t include Custom Query String, so my archives are not as flat as they used to be – I should note there are a few versions of CQS now available for WP2.3+ including Custom Query String Reloaded
  • I have been using underscores with my tag_pages rather-than-dashes

I had a tough choice back in October, after being hit with a sizeable fake Toolbar PageRank penalty (currently -3) – continue making changes to my site structure to improve search engine performance, or keep the site relatively unchanged.

It is hard to say whether the penalties are or were material unless you bite the bullet and don’t make the changes required by Google.

The only change I decided to make was to not include CQS when I upgraded to WP2.3+ – I decided that this would allow me to eventually provide some examples of pages falling out of the index, and then I would be able to demonstrate how I improve site structure to fix the problem.

With the changes Google made to the reporting of supplemental results, or if you believe them removing supplemental results altogether, it did take a little while for things to settle down.

Google Spider Indexing

I was waiting for a little deeper indexing activity to be visible, and then to wait a week or so for that activity to show in results. I did point out a few months ago that Google Webmaster tools provides these indexing charts, but the scales are still broken.

[Screenshot: Andy Beard content not in Google’s index]

The big difference is that I had to go back 7 pages in my January archives from last year to find a page that was no longer in the index, and my date based indexes are not on my sidebar on every page of this domain.

Related Links Are Transitory

Related links certainly help pass juice to older related content, but eventually, even if you list 10 related pages and use very specific control of related pages with a plugin such as Simple Tags, the related posts become superseded.
I will probably end up tagging this post seo, wordpress, linking, linking structures, pagerank, ranking factors.

I have used most of those tags in the past, thus it is most likely that I will get 10 related posts, but also that some previously related posts will become displaced on the list, and that change will not just happen on this page, but on all pages on this domain that are related.

Deep Linking to Older Content

Deep linking to older core content always brings a little fresh life back to them, and gives them a fresh injection of Google juice. Once you get to 500+ pages of content, it becomes harder and harder to give life back to all of them, and thus only what you class as “pillar” content gets a much needed burst of life.
There is a constant ebb and flow, 2 steps forward, one step back.

Temporal Factors

Maybe there are temporal factors taken into account by search engines, and some kind of temporary PageRank assigned to new content.
What I do know is that if content is buried deep in your archives, so deep that it doesn’t receive any juice and isn’t indexed, then a link from that page is totally worthless.
An old link on a TBPR PR10 domain that is buried deep in the archives might still have some value, whereas being 30 pages deep on a blog that receives very little link love, or maybe an archived forum post, isn’t going to be worth much, if anything.

Google may remember old links that have lost juice for a period of time after they have been removed. Donna has spent some time looking into this.

To Be A Contender, You Have To Be In The Game

If your pages aren’t in Google’s index, they can’t rank for anything, even long tail queries.

To be in Google’s index, pages really have to have a certain undefined amount of juice, no matter what other factors you gain merit for.

Thus PageRank is the primary Google Search Ranking Factor, because it is the only factor you 100% have to fulfil to have a chance for your pages to rank in Google’s search results.

To give you a good parting analogy: all plants need water – different plants thrive with different amounts of water, and you can give a plant too much water. I don’t know if you can have too much Google juice, but you might have too much over a short period of time… a downpour which washes away the soil.



    • says

      This has nothing really to do with images, and my images always have been full path.

      I also don’t block hotlinking as there are so many online sites that might use images legitimately such as Feedreaders, and Stumbleupon, that adding limitations is not very viable with cheap bandwidth.

  1. says

    Andy: “If you go just 3 pages deep, Michael starts to have indexing problems.

    Every article listed on that page is not in Google’s index… at all!”

    Michael: Wrong.
    The first article on that archive page is indeed indexed by Google.

    Google finds and indexes the articles (which are saved as individual posts). It doesn’t matter if date archive pages appear in Google’s index or not as long as the articles make it into the index and can be found.

    If you want to find the ‘best kept secrets in SEO’ post on SEO Theory, that’s not hard to do, either.

    As for whether Google will index and list a site without any inbound links, yes, I’m running an experiment. Yes, the sites are still indexed. No, they have not been indexed for long. No, I don’t know if Google will drop them if they don’t get any inbound links. Yes, they will eventually have links pointing to them.

    The point of the experiment is not to see how long an unlinked site will stay in Google’s index. The point is to see if you really need links to get into Google’s index.

    You don’t.

    But once you get there, it’s up to you to do something to make it worth Google’s trouble to keep you in the index (in my opinion).

    You’re welcome.

    • says

      I was very specific with my title, but I wouldn’t be surprised if it was hard to get a 1000 page site without any kind of juice to have all 1000 pages indexed by either MSN or Yahoo. In fact, even if you have a lot of juice it is hard to have 100% of your site indexed by them.

  2. says

    True, you need to have some PR (more than 0) to be indexed, but does that make it the primary ranking factor? I think it would make more sense to call it the primary indexing factor. Of course you need it to be ranked, but that doesn’t mean that it affects rank more than any other factor, and that’s the way most people would read the statement “PageRank is the primary Google search ranking factor”.

    Similarly, couldn’t one argue that having a PR>0 is no more important than having something — a word, an image file — to index?

    • says

      Bob, you have to have a piece of content, though a picture on a page can rank, just a few words, a flash page… anything can rank, or at least be in the index, if it has some juice.

      Sure it is a little bit of a chicken and egg thing, but if you have a 750 word article and you want it to appear in the Google search results, the primary factor is that it receives enough juice to get in the index in the first place.

      Other factors govern what it will rank for and how high in the results. PageRank can also affect that, at least possibly, but it is the only thing that is 100% required to appear in results, providing you have a document that can be addressed using some kind of URI.

  3. says

    Dealing with supplemental is a daily battle for me. I think I spend more time re-writing articles for distribution, creating squidoo lenses, social bookmarking etc (for deep linking) than I do creating content.

    It’s not how I want it to be but I feel Google gives us no choice. A page can be on the 1st page one day and disappeared completely the next.

    I have also removed date archives as these do get PR and I feel they are taking the juice from other pages. I have no idea if that has any effect.

    • says

      April, pages can disappear from the SERPs even for sites with lots of juice, and every site benefits from more links.
      I think you would do well to include a lot more pictures in the blog you link to, even if they are only from some kind of DIY affiliate program, though take advantage of public domain pictures and Creative Commons images you can use commercially with attribution.

      I actually like to use date archives, and I allow them to get indexed on any blog or “WordPress as a CMS” site where the content can be affected by when it was written. As long as they don’t have lots of external blogroll links, they are more of a conduit of link equity.

  4. says

    Remember my crazy experiment when I banned Google from my website? Well, it lasted only a week. The interesting part was that when I allowed googlebot back on my (old) blog two weeks later, the pages started to appear in the index magically within an hour or so. The pages that had the most incoming links appeared first. Too bad I did not document the process well…. But then it’s your job. ;)

    • says


      I’m going to go a little off topic from this post for a minute because I found your experiment interesting. I have a few questions I wouldn’t mind your thoughts on.

      1. Was your site effectively de-indexed within a week’s time?
      2. Do you have an analytics package that can still show say a week prior, during and after?
      3. Would you say that rankings returned to normal, improved or were lowered with your experiment?

      I’d also be curious to know the group’s thoughts on trying this in hopes of having some of the older pages on a site re-indexed and refreshed. Kind of a new leg up on life…

      John Jones

      – 10 minutes of SEO, SEM & Internet Marketing
      Posted 06 Feb 2008 at 4:21 pm

  5. says

    I think I get what you’re trying to say Andy, but my own experience doesn’t match up with your conclusions. I’ve got a PR 1 site (admittedly toolbar PR) that has just under 900 pages indexed which works out to nearly 100% of the pages. I’ve done nothing but on-page SEO. It would seem that if PR is the primary factor, that the site wouldn’t be doing so well.

    However, I do consider PR to be an important factor and I generally disagree with the folks that write it’s not worth paying attention to. I figure SEOs should look at any data that Google provides and decide what to do with that information on a case-by-case basis rather than dismiss it outright.

    • says

      Marios, I have sites like that as well and have experimented with all kinds of sites.

      It doesn’t take a lot of juice to get pages in the index, but it does take a good linking structure if you don’t have a lot of juice.

      A typical good linking structure for low juice sites is to force a spider from the home page to go only to the sitemap, and then from the individual pages link only to the home page – what I have discussed before as a “spider circle”.

      The technique doesn’t require using nofollow, but using nofollow does allow you to add additional human navigation that might reduce the amount of juice that flows to the sitemap.
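A minimal sketch of that “spider circle”, using a generic textbook PageRank power iteration (the page names, the 10-post size and the damping factor are illustrative assumptions, not a claim about Google’s implementation):

```python
# "Spider circle": home links only to the sitemap, the sitemap links to
# every post, and every post links only back to home, so juice cycles
# through the site instead of leaking into extra navigation.
def pagerank(links, d=0.85, iters=60):
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in links}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

posts = [f"post-{i}" for i in range(10)]
circle = {"home": ["sitemap"], "sitemap": posts}
circle.update({p: ["home"] for p in posts})
ranks = pagerank(circle)
```

Every post ends up with an identical slice of the sitemap’s juice, and the home page accumulates the most, which is the point of the structure: the little juice a low-juice site has is spread evenly to every page rather than pooling in navigation.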

  6. says

    This post was like listening to myself talk.

    Back in fall 2006, I remember not even Rand Fishkin believed in the importance of PageRank. But by 2007, the word PageRank worked its way back into his vocabulary. Dan Thies understands the role of PageRank in SEO; Jill Whalen is also starting to “get it” after listening to Dave Crow – the chief of Google’s crawl team – talk “passionately” about PageRank. Almost every Google patent I’ve read mentions PageRank. Some come right out and say on-page/anchor text analysis is more CPU intensive than link analysis. As Mike Grehan mentioned somewhere, the “abundance problem” (where millions of pages are relevant for a term) forces Google to look not at what is relevant (millions of pages are) but at which pages are authoritative.

    It’s also interesting to read Vanessa Fox write this on SEL:

    “Links from authoritative sources are strong reputation signals for factors such as PageRank that determine how highly a site is ranked. Anchor text from links play a big part in what search engines think a site is about.”

    Clearly, Google’s ranking (sorting/filtering) algorithms are more complicated than ever now; but before you can worry about search position, your pages have to “show up” – and as Woody Allen(?) said, “90% of life is just showing up”.

    • says

      PageRank or “Google Juice” can actually be spread really thin; you don’t have to buy a first row seat, popcorn, candy & soda to watch a movie, and those additional expenses don’t necessarily make the movie any better.

      However if you are only giving 5% of the juice to the pages on the next tier down, eventually you get pages that don’t have a chance of being indexed.

      SEOs seem to get polarized on this stuff

  7. Hugo Guzman says

    Seems like the title of the article should be “Inbound link(s) are the primary Google search ranking factor”.

    Regardless of toolbar values, or the algorithm that determines internal non-public PageRank, the true determining factor of whether a page is indexed or not is the makeup of its inbound link portfolio.

    So while I understand what you’re getting at with this well-written and researched post, I think that you’re trying to dress up something that should really be a given.

    If a page doesn’t have enough inbound link “juice”, from either internal or external sources, then said page will not stay in Google’s index.

    • says

      Hugo you are looking at things a little backwards.

      Does Michael’s domain have lots of incoming links from authority sites? Yes, he gets links from Search Engine Land all the time.

      Does my domain have lots of incoming links from authority sites? I get lots of links – far more than any of my niche sites, yet many of those have more pages indexed than this one.

      The juice has to get to the pages from where it comes in, so you could then skirt around this and say it is a combination of external links and internal linking structure.
      But then the links also have to be on pages of the other domain that have juice, and maybe they will not be counted if the other domain isn’t in the same topical area.

      Whether a page is initially considered for indexing comes down to one factor, does it have enough juice to be considered – has it passed its entrance exam?

      You can pour a whole bucket of champagne into the top glass in a tower of champagne glasses, and many of the glasses will remain empty.

      You are right that this should be a given and I did spend a lot of time trying to explain it as a fundamental concept.
      And it is still being disputed and I knew it would be

      Go back to any article on Sphinn that discusses WordPress SEO, and see how many cover anything to do with crawl depth and maintaining a flat linking structure.

  8. massa says

    you put a lot of thought and effort into this post and it shows.
    Looks like I have some training to do.

    • says

      As I suggested with a link in the article, that might be FUD.

      All they seem to have done is break /* queries to the extent they can’t be used for anything useful, as domains that you know are highly optimized for crawl depth are reporting worse results than their actual indexed number of pages.

    • says

      Just another tip

      If you are looking to rank pages for Real Estate in Poznan, doing it on an SEO blog where the owner speaks fluent Polish in total disregard for his comment policy isn’t a good idea.

      ul. Sienkiewicza 6a/3
      74-101 Gryfino
      Siedziba w Poznaniu:
      os. Batorego 38f/78
      60-687 Poznań
      NIP : 858-168-81-42
      REGON: 320152340

      • says

        I can’t stop laughing at the exchange above. Andy they should just hire you as a consultant. If they are a true real estate firm that is.

  9. Hugo Guzman says

    I have a lot of respect for you, Andy, but it is your post that has things backwards. You’re using a Google construct (“PageRank”) as the basis for being indexed and ranking for particular search phrases.

    But “PageRank” is based on the context of the inbound links a particular page has. As a matter of fact, that’s Google’s own definition.

    Again, I think that everything in your post is spot on, except for the sensationalist title and reference to “PageRank”.

    I’ll put it to you this way, if you wanted to write a similar post referring to Yahoo and/or MSN, would you still title it “PageRank is the primary Yahoo/MSN search ranking factor”?

    Of course not. You would name the phenomenon by its proper terminology: “inbound links”.

    There’s no such thing as “PageRank” on a purely fundamental level. There are only links (aka “citations”).

    That’s what Google’s groundbreaking algo is based on.

    • says

      I think I went for a factual title based upon the content of the article I was writing. It might be looked on as sensationalist by people who want to merge the concept of PageRank into 10+ other factors and then marginalise the idea that the juice really has to reach all your pages for them to be in the index.

      Links from dead pages don’t give any benefit, so then you have to start the qualifying process down all the other ranking factors, which just confuses the hell out of people, when ultimately what you end up with is PageRank, albeit with possibly a little extra filtering or slightly different weighting.

      There are very few blogs even in the SEO space that flow juice to historical content well, though Search Engine Land is actually doing it quite well, with its main archive and category archives as flat as a pancake.

      Even their posts from 2006 are in the index, and that is not just because SEL has so much more juice to throw around, or gained lots of links to those articles.
      Most of the pages that link to those old posts on other domains are probably no longer in the index.

      All the linking factors related to a page actually ending up in the index effectively equate to PageRank, or “Google Juice”

      Throwing in some modified version of PageRank for other engines just complicates things, which is why I included Google in the title, and made it specific to PageRank and not some variant like Trustrank.

  10. says

    Andy this is a great post. Not only is the article itself thorough but the comments by other Internet Marketers really make this post one of those pages that you should ensure remains a “pillar”.

    I don’t have anything to really contribute to your well thought out article except that DoshDosh sure knew how to pick the right part of your post for the Sphinn Submission.

    On another note; your analogy made me chuckle a little. I’m sure the gardening concept has been used by many people and this certainly won’t be the last time it is used but it did make me reflect on a post long ago by a friend. I’ve posted the url but not linked to the post.

    John Jones

    – 10 minutes of SEO, SEM & Internet Marketing

    • says

      I just submitted your friends post to Stumbleupon as I thought it was excellent. I have seen similar analogies, but that one was particularly well done.
      Links dropped in my comments are quite welcome when they add to the discussion, and they end up gaining a little juice in just the same way as the links to comment authors.

      I must admit I threw in the analogy at the end, and it might not be perfect but it does give something else to think on.

  11. says

    Hugo, Google is not a person; it’s a collection of algorithms. In every algorithm there are things called variables.

    if (X >= 12.3) { call_to_function(X + y); }

    X and y are variables. (Yeah, I know you don’t need me to explain this stuff to you)

    PageRank is a variable. “Inbound links” is a CONCEPT. Big difference. The fact that Google calculates and recalculates values associated with this variable for gazillions of pages on a daily basis should make you stop and think WHY? To show pretty green dots in your toolbar?

    Lots of inbound links from authority sites don’t matter if that authority site blatantly sells links and all those links’ PageRank is devalued. Why do you think there’s less PageRank in the SEO space now? Vanessa dropped from TBPR 7 to 6, and she wasn’t writing any paid reviews.

    Why do you think real estate sites suddenly tanked last year? The links pointing at their sites stayed the same. But they stopped passing PageRank because those guys played silly games with their state pages.

    You should not chase PageRank. But many SEOs these days are like NASCAR drivers who don’t know what the hell is under their car’s hood and are clueless about how to fix a flat tire. All they know how to do is “turn left” and “drive fast” (“quality content” and “quality links” ring any bells?).

    • says

      I think the main reason Vanessa lost PR7 was because she lost a blogroll link from the Google Webmaster blog.

      That being said this isn’t about toolbar representations of pagerank, even though the toolbar occasionally gives indications on how this is working for a particular snapshot in time.

  12. Hugo Guzman says


    Thanks for chiming in! I also have a great deal of respect for you, but again, I think that you’re dealing with semantics.

    “PageRank” is a variable in Google algorithm (I referenced that in my last comment). But the very term “PageRank” is simply a metaphor for the score assigned to a page based on inbound linking factors.

    I guess the only way that I can explain this is by pointing out that the term “PageRank” is short for “Page Ranking” and is a direct offshoot of the term “citation ranking”.

    And if you understand how citation ranking works in the academic community (Thesis A has a higher “citation ranking” than Thesis B) then you realize that the words “link” and “citation” are synonymous.

    PageRank is a scoring based on links. Not the other way around.

    Suffice it to say, I think that you, Andy, myself, and others understand how Google indexes and ranks pages. My original point is that throwing in the word “PageRank” simply serves to confuse less experienced SEO folks who still haven’t completely deciphered the word’s meaning and context.

  13. says

    “PageRank” is short for “Page Ranking” and is a direct offshoot of the term “citation ranking”.

    Hugo, “Page” in PageRank refers to Larry Page.

    That said, I agree that many people are confused about PageRank. “How can I get more PageRank?” “Will a bigger PageRank improve my SERP rankings?”

    Those questions tend to come from people who are only interested in learning how to manipulate search results. TBPR is not all that useful for doing that. So, if all you care about is ranking higher, you may conclude that PageRank is meaningless.

    However, it’s as meaningless as gravity from Google’s point of view.

  14. says

    I used to think that ideally you would want all of your pages indexed, minus pages like privacy policy and those types.

    But I guess as you get more and more pages, having them ALL indexed just won’t happen. The important thing to take away from this is like you said Andy,

    “Once you get to 500+ pages of content, it becomes harder and harder to give life back to all of them, and thus only what you class as “pillar” content gets a much needed burst of life. There is a constant ebb and flow, 2 steps forward, one step back.”

    • says

      Drew if you divert juice into various archive pages, and keep them flat as if they are sitemaps, and also have an HTML sitemap, you can keep far more pages in the primary index.

      If you look on PageRank as a minor factor for ranking once a page is included in the index, so much the better, that means you can spread it a little thinner to ensure more pages are indexed.
      Due to the random surfer, you have a little more juice to play with as well if you have more pages in the index.
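That last point can be sketched in one line, using the classic un-normalised PageRank formulation in which every page contributes a baseline of (1 − d) juice via the random surfer’s teleportation (purely illustrative, not a statement about Google’s internals):

```python
# In the un-normalised textbook formulation, each indexed page adds a
# baseline (1 - d) of juice via teleportation, so the total juice
# circulating on a site grows with its indexed page count.
def total_baseline_juice(indexed_pages, d=0.85):
    return indexed_pages * (1 - d)

# A site with 1000 indexed pages has twice the baseline juice of one with 500.
print(total_baseline_juice(1000), total_baseline_juice(500))
```

Which is why getting more pages into the index is not just an end in itself: each extra indexed page gives you a little more juice to route through your linking structure.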

  15. Yicrosoft Directory says


    You are very brave to make this post. Although not everyone agrees with what you say, you still say it. With that being said, I do agree with everyone else when they say that PageRank is overrated. But, despite what everyone says, they almost always take a peek at the PageRank of a site when determining whether or not they can deem it ‘Googleworthy’.

    Great post, keep up the good work.

  16. says

    Imagine if they had called it BrinRank instead of PageRank? A lot of the confusion would have been avoided, as nobody has a web-brin.

    Good article Andy, I liked it a lot. It seems too often that the throwaway line used today is, “Don’t worry about PageRank, it doesn’t matter”, without much regard to how crazy that statement is. As proof, most will offer just about any search result where a PR2 page is outranking a PR6 page. What’s being missed in that argument is all the pages with no PageRank which are being crawled once in a blue moon and returned in the results even less.

    Sure PageRank isn’t the deciding factor in page ranking, but it is an essential requirement for the page to be considered for ranking in the first place, and it’s not to be forgotten. It’s the oxygen that lets a page live, often taken for granted, but cannot be ignored. Sure Tom Brady is a better quarterback than me because he can throw the ball better, but take away his oxygen and I’d be much more effective. In the same way a better on-page optimized site will perform better than one with no on-page considerations and tons of PageRank, but that page will not exist in the index without at least some viable PageRank.

    In the end, even the “PageRank doesn’t matter” crowd are still benefiting from its existence, because at least to some extent Google’s primary directive still works somewhat. Good content does draw links and popularity; it’s just more clouded today due to newness factors, link acquisition velocity and acceleration, and the many other signals used to rank pages. However it all starts with some sort of PageRank, and that cannot be denied.

    To the detractors: just start publishing your new pages without any links to them from your own site, or any other for that matter, and we’ll all see how much Google likes them. It could be the most definitive content in the world and no one will ever see it in the search engines.

    And finally, the other metric that doesn’t get discussed enough is the allocation of Google’s resources, which is primarily based on PageRank, as Andy has touched on. This again plays to a page needing PageRank to even be able to compete for a ranking position. Crawling speed, indexing speed, updating frequency, and even the statistics that Google displays for a site in webmaster tools are determined by PageRank and not relevance or quality. Just try to restructure the internal URL structure of a site with 301s when it has little PageRank; it will take months and months. Do it on a well linked site and the effect can be seen within days.

    Just like oxygen, with PageRank you don’t think about it much when you have it, but remove it and it becomes an immediate concern above all other things.

  17. says

    I liked how you point out deep linking to old content. I think a lot of times we worry so hard about making sure our new content is seen that we forget the great resource for traffic we have already created: our old content. Linking back to content keeps it alive.

    • work at home ideas says

      I think to avoid this problem, try to have just a few categories which represent your key business goals or the programs/services you promote. Then install a WP plugin for related posts. That way, all your old post URLs will be included in new posts.

      Peter Lee

  18. Hugo Guzman says

    @halfdeck – Your failure to recognize double meanings aside, I’m glad to see that you seem to finally be agreeing with my initial point.

    The article title should have switched out “PageRank” for “inbound links” or maybe even “citations”.

    Citations (links) get pages indexed and ranked. Nothing more, nothing less.

  19. says

    I have to admit that even though I believe Google PageRank is overrated, I still can’t help looking at that tiny white and green box every time I open a new page ^^

    I guess I’ve already gotten used to it ^^ but mind you, I’ve Stumbled a lot of sites that have low PR or no PR at all, yet the content of these sites is really good ^^ even better than some of those high-PR sites.

    • Hugo Guzman says

      This is exactly why Andy’s article was misleading.

      In the context of the “web” all links from indexed pages pass “PageRank”.

      However, if page A gets a link from page B, but page B is not indexed, then obviously page A will not receive any benefit from that link. But that’s not because the link “isn’t passing PageRank” as you’re presupposing. It’s because, by definition, that page is not part of the “web”. It is not part of Page and Brin’s “citation ranking” equation. It has no citations of its own.

      So let’s say that we reword my statement “links get pages indexed and ranked” and change it to “links from indexed pages get pages indexed and ranked”. We now have an accurate description of what gets a page indexed in a search engine without using the sensationalist word “PageRank”.

      Mind you, HalfDeck, many people will read your prior comment and think that you’re referring to toolbar PageRank (i.e. the little green bar). And as we know, Google can and does block certain pages from passing toolbar PageRank (to thwart link sellers and buyers) but you and I are discussing the passage of “real” PageRank (i.e. the ability to get a page indexed, etc.)

      The sooner that we stop saying PageRank and start referring to this phenomenon using non-branded terminology, the quicker that we can help educate the entire SEO community about what it takes to index and rank web pages.

      That is what we’re trying to do, right? ; )

  20. says

    I try to get the internal linking right by having search engines not follow any pages except actual post pages using the WordPress Robots Meta plugin.

    I get every page listed one page away from every page by having an individual post archive page using the SRG Clean Archives plugin to get a link to every post.

    And finally I use the Related posts plugin to make sure that people as well have a better experience by getting a list of articles that they would probably be interested in seeing.

  21. Quality Link Directory says

    I do think that there really is a correlation between PR and the SERPs. They both have the same backbone: quality link building.

    You can rank on your keywords while working on your PR as well, although there are interesting debates on forums about this topic.

    Based on my experience, it’s how you build your backlinks that will have the final say.

    Nice article Andy! Kudos ;)

  22. says

    Will someone please tell me how, and if, they are calculating the “actual PR”!? I understand that the TBPR is faked by Google. Well, how in the heck can anyone calculate REAL PageRank?

    It should be possible to nail down but would require lots of testing and analysis. Has anyone done this yet or attempted to do this?
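    For what it’s worth, the only version of PageRank anyone outside Google can compute is the original published algorithm, run over whatever slice of the link graph you can crawl yourself. Below is a minimal sketch of that power iteration; the four-page graph is made up for illustration, and Google’s live score is far more complex (dampened links, trust signals, and so on), so this shows the principle rather than recovering Google’s internal numbers.

    ```python
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links out to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            # Every page gets the (1 - d) / n "teleport" baseline.
            new_rank = {p: (1 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    # Dangling page: spread its rank evenly everywhere.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Hypothetical four-page site: "orphan" links out but nothing links to it.
    graph = {
        "home": ["about", "post"],
        "about": ["home"],
        "post": ["home", "about"],
        "orphan": ["home"],
    }
    ranks = pagerank(graph)
    # "orphan" only ever receives the teleport baseline, which is exactly
    # the fate of an unlinked page that Andy's post describes.
    ```

    Running this, the well-linked home page accumulates the most rank while the unlinked page sits at the floor, which is the whole argument of the post in miniature.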

  23. says

    “It’s because, by definition, that page is not part of the “web”.”

    Hugo, you’re getting warmer.

    Search Engine Guide, Search Engine Watch, and Cre8asiteForums are all part of the web, but their paid links are not passing PageRank, and some of their straight links are even passing less PageRank, according to Matt Cutts, who noted that there’s less PageRank circulating now in the SEO space.

    In other words, even if a page is part of the web, a link can pass less PageRank than it “should” or not at all.

  24. says

    Interesting. At first when I read the title I was like what what what… Then I realized that you meant what I had been thinking for a while – but you just used the word pagerank. Thanks for this great post – it cleared up a lot of things.

  25. says

    I’m not sure I understand your methods here. Why does the fact that the results are empty when entering the url as a search query mean that the page isn’t indexed by Google?

    Just doing a quick test on my own small blog, I found a number of articles where a search for the url came up with nothing, but the post was still indexed (as I would term it). This includes using the site: syntax, and when searching for keywords contained within the post.

    What is special about searching for the url itself…?

  26. says

    Simon, I have a follow-up post coming, but due to the demands of having to provide almost encyclopedic evidence for anything regarding SEO, unless I want to be called a sham, it is going to take a while.

    You will however find in most cases that the posts which can’t be found on a URL search are the ones buried down very deeply in your site architecture.

    If you prefer splogs and scrapers ranking better for your own content, it might not be a problem.

  27. says

    I think it is silly when people say PageRank doesn’t matter. Even the toolbar PageRank, while novel, still provides information to a site visitor, at a glance, about a page’s ‘credibility’ (not sure if that is the right word).

    For example: if I go to a business site that doesn’t have any PageRank (n/a out of 10), I will be very wary of purchasing something from that site. For that alone, even the toolbar PageRank matters, to me.

    Real PageRank matters too, a lot, for the reasons you mentioned above.

    PageRank is Google’s greatness. It’s a big part of the equation as to which page shows up first in the SERPs too (directly or indirectly)… of course it matters.

    Good article.

  28. says

    Don’t ever underestimate inbound links. Apple ranks #3 for “computer” on Google, yet the word “computer” doesn’t even appear on their index page.

  29. Greg Reynolds says


    Put up a related post about Google indexing problems on my WordPress blog.

    Lots of examples of Google screwing up:

    1 – De-indexing older posts (170 of them that each had Google top 10 rankings)

    2 – Not indexing posts (60 of them out of 485)

    3 – Indexing new posts in completely stupid fashion (4 in a row to the index page with absurd descriptions that make no sense)

    4 – Taking so long to index posts that mine end up in Google’s omitted results because somebody’s already copied them.

    Google’s indexing has gotten so pathetic that you just have to laugh about it.

    And the really funny thing is that the whole blog was done as a test of how to get top Google rankings without any page rank whatsoever…lol!

    Using just WordPress and plain vanilla, white hat, organic SEO took a site that had zero page rank (and an Alexa ranking in the 6,000,000 range) from 40 visitors a day to 50,000 visitors a day in just five months time.

    Of course, then Google had their indexing meltdown and can’t seem to get anything right anymore.

    Maybe I should have called the post “Top 10 Ways Google Is Like Britney Spears” instead…


  30. says

    Interesting, Andy. PageRank is a number. PageRank, unto itself, is not relevance-specific. That is to say, a page does not have more PageRank for “something” than it does for “something else”. Therefore, PageRank cannot be the primary ranking factor. Relevance is.

    I understand that in order to be ranked, you need to be indexed. Not in the index, can’t be ranked.

    For me it’s kind of like saying (uh-oh… cooking analogy) that the primary factor in baking a cake is electricity or natural gas, because without it the oven won’t work. :)


    • says

      True, but there are many other analogies

      To win a gold Olympic medal, you have to make it to the finals.

      There are SEOs who seem to suggest that to bake a cake you don’t need any power source.

      How many decks of cards can you shuffle at once? Googlebot might have very big hands, but there are limitations to how efficiently data can be sorted.

  31. says

    LOL Andy! I won’t argue that there are indeed many SEOs who believe a power source isn’t necessary for baking. For some reason, I tend to avoid those restaurants. :)

    In my eyes, any known, unknown, or speculated ranking factor serves to do one thing: influence relevance.

    While Googlebot can indeed shuffle many cards at once, it cannot shuffle them all at once, only the cards “it” deems most relevant, as determined by all the factors that influence relevance.

    Which factors have the most influence on relevance will continue to be the subject of endless debate and testing.


