Blogcatalog – Does Onclick Pass PageRank?

Note: The old code examples from this post have been removed – sorry, blame WordPress and its poor code support

When Blogcatalog was relaunched, the links from the directory were nofollow.

 

In my initial review I suggested that removing the nofollow from the links would be a popular decision, and that a different method of tracking clicks could be used – Blogcatalog have to track clicks because clicks are part of their blog rating system, and tracking might also help with advertising sales for premium positioning.

Most blog directories do not provide direct linking.

For better or for worse, Blogcatalog decided to use javascript and “onclick” for their tracking. There is a lot of confusion in the SEO community as to whether “onclick” passes PageRank and other ranking factors, and Google honestly have done very little to put this confusion to rest.

Here is an example of the redirection code being used:

<a onclick="return o('andy-beard-niche-marketing');" href="http://andybeard.eu">http://andybeard.eu</a>
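The body of o() isn’t shown in the post, so here is a hedged sketch of what a click-tracking handler like this typically does – the /track endpoint and the beacon approach are my assumptions, not Blogcatalog’s actual code:

```javascript
// Hypothetical sketch only – Blogcatalog's real o() is not reproduced in
// this post. A typical tracking handler fires a logging hit and then
// returns true, so the browser follows the ordinary href – the same URL
// that search engine bots see in the markup.
function o(slug) {
  if (typeof Image !== "undefined") {
    // Beacon-style request to a hypothetical tracking endpoint.
    new Image().src = "/track?blog=" + encodeURIComponent(slug);
  }
  return true; // allow the default navigation to the href
}
```

The key detail is the return value: returning true leaves the link behaving exactly like a plain hyperlink.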

This article will prove that onclick can pass PageRank – it will not prove that it always passes PageRank.

Some Background On Onclick and PageRank

Sometimes people mistakenly point to an article by Matt Cutts, written in 2005, which discussed using javascript.

Matt’s article on “sneaky javascript” was about javascript redirects being used for cloaking, typically with “onload”.

In the comments he was specifically asked about onclick for tracking, and didn’t respond.

As an example, there was a comment on Matt’s blog by “Brian”, and based on the tone it was written in, I would suspect that was Brian White, who works with Matt.

How are links you describe sneaky? I think Matt is referring to cases where a spammer wants one thing indexed, Google ranks the page based the spammy text (above in the example), and the user gets something much different. Google is reserving the right to rank against what the user sees and determine relevancy for themselves.

You see, Matt was referring to something totally different.

Patrick Cornwell in the comments said:-

I second PhilC’s concern, but on the subject of clicked links: I have a couple of sites where link clicks are tracked ‘onClick’ but the eventual URL is exactly as displayed in the HREF. To me that would concur that nothing sneaky is going on, but again it makes me slightly nervous reading posts like this that I’m about to fall out of the index!

PhilC’s concern was actually due to using redirects for frames and that was the only question Matt answered in the comments.

Matt said:-

PhilC, we’re aware that sites have code to show frames in the correct way. I believe that such sites will not have any cause for concern–this is a common idiom.

Matt didn’t answer the onclick question. Maybe there wasn’t a way he could give a clear answer… one of those “it depends” situations.

Washington Post Blogrolls

In 2006 there was a huge debate surrounding paid links on the Washington Post and their use of “onclick”.

The clearest explanation of, or hint at, what was happening, and how the links were interpreted by Google, was on the Search Engine Watch forums a year ago.

That was dealing with links on the Washington Post where it was fairly obvious to a human that the links were sponsored.

Nowhere within that thread does Brian White or Matt Cutts state that every link that uses onclick automatically passes no juice.

Brian White said:-

These links will not count for PageRank value. For instance, gadgets-weblog.com is not receiving PageRank from washingtonpost.com. Neither will the links count from washingtonpost.com to finance-weblog.com, for that matter.
Matt has alluded to this many times in the past, and I came in to reiterate the point.

Matt Cutts said:-

Yup, we certainly noticed these a while ago. dyn4mik3, it may look like a clean link, but the fact is that the onclick behavior invokes a new page and different behavior from a typical hyperlink, and that’s visible to anyone viewing/analyzing the source code.

They do state that those specific links pass no juice.

Google rarely state everything, even if it would clear things up a little.

Google Use Onclick With Google Analytics

Patrick Altoft has been writing some great tutorials on how to use Google Analytics Click Tracking. Google use “onclick” for their click tracking.

Surely Google wouldn’t have an algorithm which allowed them to use onclick for tracking but prevented a 3rd party using their own custom solution?
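For reference, the Analytics click-tracking pattern of the time (the urchin.js era) looked roughly like this – a hedged sketch, since the post doesn’t reproduce Patrick’s tutorial code; trackedClick is a stand-in name of my own:

```javascript
// Rough sketch of urchin.js-era Google Analytics click tracking (names
// here are illustrative, not Patrick's actual tutorial code). The HTML
// would be something like:
//   <a href="http://example.com/files/map.pdf"
//      onclick="return trackedClick('/downloads/map');">Download</a>
function trackedClick(label) {
  // urchinTracker is the logging call provided by Google's urchin.js;
  // guard it so the link still works when the script hasn't loaded.
  if (typeof urchinTracker === "function") {
    urchinTracker(label);
  }
  return true; // returning true lets the browser follow the href as normal
}
```

Structurally this is the same shape as Blogcatalog’s link: a clean href, plus an onclick that logs and returns true.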

Alternative To Nofollow

If onclick is looked on as an automatic indication that a link is paid for and shouldn’t be counted, it would have been mentioned as a way of disclosing paid links by now.

For many years, even before nofollow, you have been able to use code such as this for dynamic linking that is not counted by search engines.
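The removed example was along these lines – a reconstruction of the common pattern of the era, not this post’s original code; the go name and URL are placeholders:

```javascript
// Reconstructed example (the post's original snippet was removed): the
// destination lives only inside javascript, so crawlers see no followable
// href at all. The HTML would be:
//   <a href="javascript:go('http://example.com/')">Example</a>
function go(url) {
  if (typeof window !== "undefined") {
    window.location.href = url; // navigate in a real browser
  }
  return url; // returned purely for illustration
}
```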

 

You might also see this suggested in many places
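Another commonly suggested variant, again reconstructed since the original snippet is gone – openLink and the URL are placeholders:

```javascript
// Reconstructed example: href="#" with the real destination hidden inside
// the onclick handler. Returning false stops the browser following "#".
//   <a href="#" onclick="return openLink('http://example.com/');">Example</a>
function openLink(url) {
  if (typeof window !== "undefined") {
    window.open(url); // open the real destination via script
  }
  return false; // cancel the default "#" navigation
}
```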

 

These are purely javascript, thus not going to pass any juice.

3rd Party Testing Of Onclick

After the Webmaster World discussion, there was some testing done, such as on SEO Revolution:

“I got curious, so I went through all the test domains and looked for a pattern, and sure enough, JavaScript onclicks do not pass PageRank, or the PageRank they do pass is so small that it is not seen. It was verified that Google does follow the links and index the pages. However, from a pure PageRank perspective, no “juice” is being passed.”

BlogCatalog – Blog Catalog Doesn’t Pass PageRank?

There is currently a story on Sphinn claiming that it is a fact that Blogcatalog doesn’t pass PageRank.

As I have explained in the comments there, it is very hard to determine whether specific links pass PageRank, because Google in their wisdom have determined not to tell webmasters all of the links that are counted for ranking purposes.

As an example, at the time I responded whilst checking my own links, Google was not showing links from SearchEngineLand, Problogger, Techcrunch, John Battelle, Lorelle on WordPress, Blog Herald and many many others.

In fact, to be honest, most of my best links were notable in their absence.

But I didn’t give up my search for facts, rather than settling for interpretations of dated, inconclusive forum comments.

I eventually found a link from BlogCatalog that is being counted:

Blogcatalog PageRank

That shows a link to pontotriplo.org/quickpicks/ very clearly

I also took a longer screenshot that shows a number of other blog directories such as MyBlogLog also listed.

Longer screenshot

Google’s Lack of Information & Tools

Your average mom & pop webmaster isn’t going to know whether one implementation of a tracking script or another is allowed by Google’s webmaster guidelines.

Sneaky redirects is about cloaking, not click tracking. It is about indexing one page full of junk, but showing a user another.

Here is what the webmaster guidelines currently say:

Sneaky Javascript redirects

When Googlebot indexes a page containing Javascript, it will index that page but it cannot follow or index any links hidden in the Javascript itself. Use of Javascript is an entirely legitimate web practice. However, use of Javascript with the intent to deceive search engines is not. For instance, placing different text in Javascript than in a noscript tag violates our webmaster guidelines because it displays different content for users (who see the Javascript-based text) than for search engines (which see the noscript-based text). Along those lines, it violates the webmaster guidelines to embed a link in Javascript that redirects the user to a different page with the intent to show the user a different page than the search engine sees. When a redirect link is embedded in Javascript, the search engine indexes the original page rather than following the link, whereas users are taken to the redirect target. Like cloaking, this practice is deceptive because it displays different content to users and to Googlebot, and can take a visitor somewhere other than where they intended to go.

Note that placement of links within Javascript is alone not deceptive. When examining Javascript on your site to ensure your site adheres to our guidelines, consider the intent.

Keep in mind that since search engines generally can’t access the contents of Javascript, legitimate links within Javascript will likely be inaccessible to them (as well as to visitors without Javascript-enabled browsers). You might instead keep links outside of Javascript or replicate them in a noscript tag.
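As a sketch of that last suggestion – my example, not part of the guidelines; the URL and the helper name are hypothetical:

```javascript
// If a link is only ever written out by javascript, mirror it in <noscript>
// so crawlers and script-less visitors still get a plain link. This helper
// just builds the fallback markup as a string, for illustration.
function noscriptFallback(url, text) {
  return '<noscript><a href="' + url + '">' + text + "</a></noscript>";
}
// Page markup would pair the script-written link with its fallback:
//   <script>document.write('<a href="http://example.com/">Example</a>')</script>
//   <noscript><a href="http://example.com/">Example</a></noscript>
```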

In Blogcatalog’s case, they are tracking clicks, which feed into rankings that are useful.
A user is sent to the same page as they would be sent to without javascript.

It took a lot of time to determine that the links were being counted, though it may only be a minimal amount.

Maybe giving my profile on Blogcatalog a link will give it a boost and make it more visible, though the page does have a lot of links going to other people’s profiles.

Other Alternatives

It would be possible to just use a PHP redirect

 

But then people would be claiming that they are using sneaky PHP redirects rather than clean links with a javascript onclick.
It is my belief that the javascript option offers a cleaner link to most visitors, giving them a clear indication of where they will end up when they click the link.

This was one reason why Alexa redirects to try to game Alexa were so bad, and why MyBlogLog banned their use. They are just ugly and confuse people.

Debunking a Few Other Points Raised

  1. The profiles on Blogcatalog are not being paid for. It is possible to have a profile listing made more prominent, but in no way is anyone paying for a listing of their blog, and there is fairly high editorial integrity over the sites that are admitted, though a few have slipped through from previous owners. It does take time, and the community does help ensure that rogue sites are removed.
  2. There is no requirement to link to Blogcatalog – they now provide a way to authenticate a blog using a meta tag, and the widgets. Not every blog platform supports this.
    You can also authenticate using a widget which is purely javascript
    You can authenticate using a small badge or a link, and I haven’t read anywhere that those links have to be followed and I have never seen a requirement to have them on every page.
  3. Nofollowed links from content snippets – due to the cyclic nature of ranking calculations, it is doubtful that having more external links on the page to your content would actually be a huge benefit, but that is very hard to determine without some experimentation. It might help newer content to rank slightly better due to the temporary anchor text, but it does take search engines some time to take that into account.
    The links currently use a redirect, in this case a 302 redirect – it doesn’t really matter because they are nofollowed. It seems to be fairly standard industry practice for blog directories to nofollow links to the blogs they list, because they provide user generated content.

Overall, I think the primary objective of Blogcatalog should be to structure their SEO efforts around getting as many of their pages indexed in the search engines as possible, and providing a useful user experience.

How many pages a site has in the SERPs changes a lot, as does how it is being reported.

Currently Google is showing 729,000 pages indexed in total, and 179,000 in the primary index. It was something like 211,000 in the primary index a couple of days ago, which shows how these numbers jump around a little.

Google may well be discounting the value of many of the links from Blogcatalog, just as they may discount the majority of links any site receives, and only retain a percentage.

They are more likely to discount links from deeper pages, thus unless Blogcatalog created a purely flat profile, some of those deeper listed sites do get less juice just like old blog posts on poorly optimized blogs.

Disclosure: I do some minor consulting with Blogcatalog and will potentially gain financially if they are ever “flipped”, but I try not to let that affect my judgement whilst talking about them or their many competitors, many of whom I have helped for free in the past and with whom I maintain good relations.


Comments

  1. says

    Nofollow is sure shorter and a more simple way (for me) to do this rather than using onclick. Especially for the newbie blogger or those who really don’t know more stuff on web development.

  2. says

    Wow, I just saw a posting where a user said they got 500 surprise visitors to their webpage as a result of YOU stumbling upon it. PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE PLEASE STUMBLE MY blog.

    It’s my own cartoons, so you might even be entertained!

    Keep up the good work.
    Johnny

    PS: If you like the cartoons, I could provide you a “widget” to display them in some obsure corner of your site…for FREE!

  3. says

    I think the noscript tag would be a nice addition in making sure a vanilla html link was there for reference purposes or for those with script disabled. (We all know the power of the noscript tag from the recent SEJ outing of that payday loan hitcounter site).

    For me, the best implementation would be some kind of 301 implementation that would be seen by all.

    Something like click.php?url=url which when followed returned

    header('HTTP/1.1 301 Moved Permanently');
    header('Location: ' . $url);

    Although such an approach would need a little additional work to check if it was a verified url at the backend.

    Provided it’s not a 302, then personally, I don’t see any problem.

    In terms of their existing set up, my gut feeling is that the onclick event is irrelevant and that PR is passed simply because it’s contained within a straight a href container. It’s not produced via document.write (which would be an issue as the bots wouldn’t see it).

    Anyhow, this can be tested can it not? Just gotta stick up a secret page or domain and point a page from BC to the page that nobody knows about, and voila, end of hypothesis. :D

    • says

      A test would only prove some, just like the example screenshot is proof that some do pass juice.

      I also found some blogs whilst testing that didn’t have any backlinks being reported at all, from anywhere!

      A redirect isn’t obvious for users, and is ugly just like the Alexa redirects – there are a few other options but the current one in use is the cleanest.

      If you use other methods, you are hiding the fact you are redirecting.

      There really should be a published best practice or clearer indication.

  4. says

    I recommend that you read this.

    It may look like a clean link, but it does not pass PR.

    http://forums.searchenginewatch.com/showthread.php?p=90316

    It is a discussion where Matt Cutts and Brian White both from Google mention that OnClick Java does not pass PageRank.

    If you look closely you will see that it is very similar to what BC uses.

    PS: Andy I don’t like you. Everyone else may think that you are all that and a bag of chips, but I think you are nothing more than a BC Guppy with a brown nose.

    • says

      Rose

      In my article there is a whole section extensively quoting from that page on SEW, and also from other references.

      There is a good chance that if I have extensively quoted from it, then I also read it.

      But then I am also showing proof that as far as is possible with the tools Google is providing currently, that links with onclick can be counted by Google.

      I didn’t make up that SERP, it shows a link coming from a blog profile page on Blogcatalog.

      I don’t know why Google decide to show that link, any more than I don’t know why Google choose not to show me links from so many other sites.

      The Washington Post situation was totally different; they were selling off-topic links, which was obvious on human inspection.

      The links from Blogcatalog are provided free, and lead from a blog profile that is within a category, and point to the related blog.

    • says

      The problem here isn’t that Andy didn’t see this thread, but that you read into it what you hoped to find there. Neither Brian nor Matt comes out and says what you claim they do, and interestingly, when Danny questions the point directly, neither chooses to respond.

      And that develops as a pattern. Every time anyone asks a question that could only be answered by settling this issue, Google’s representatives remain silent.

      The simple fact is that this does not close your case as you hope, and even the least rational reflection will bring you to that understanding. Does it prove you are wrong? No. It doesn’t do that either, but a lot of other information leads to the inevitable conclusion that either your view is incorrect to some extent or that Google is becoming fractured and schizoid.

  5. says

    The link command being all squiffy doesn’t help matters either; add a ‘just because it shows as a backlink doesn’t necessarily mean that it passes pagerank’ theory into the mix and, well, suffice to say we could all end up in circuitous, go-nowhere-very-fast cul-de-sacs.

    I get why BC want to track such things sure, info is power etc. Maybe they aren’t interested in users who surf Javascript disabled, who knows; a detractor might say that this was a way of restricting link juice but…heck, that would be so pre-2007 it just wouldn’t be funny. PR leakage I’m sure is the least of a site like BC’s worries. We can all name a good few domains that link out like crazy yet rank for many things. Where is the PR leakage theory there, I ask?

    I’m sure too that those search dudes play around in the space and plant all manner of anti seo landmines all over the shop simply to cause confusion and uncertainty. They’d be silly not to. Why they still give us guys tools to deconstruct them is very odd and yes, makes you think twice about what we are being shown anyways!

    The onclick example in that SEW thread is slightly different to this too, as the BC onclick event example calls a function and not a full url. The example cited at SEW used two separate full urls: the one in the href and the one in the onclick call – a marginal difference, but a difference nonetheless.

    I’m surprised that MC and BW even commented in that thread as really, there shouldn’t be any problem with a company tracking what their users do. Where was the harm in the Washington Post doing what they did? Who got killed, even? Why should a little bit of tracking affect the ultimate destination? As long as the user ends up at what they see in the status bar then it really shouldn’t matter.

    That said, as the onclick event would only work for users with JS enabled, then perhaps it would be better for them to simply document.write the output of the onclick aspect of the code; that way, the bot wouldn’t see it anyway, so it becomes a moot point.

    Ultimately the intent is to give users a link and track user behaviour. The SE’s shouldn’t get all bent out of shape over that.

    • says

      The harm, and the thing that Brian seems to have been hitting on, was that these links were in a paid “blogroll”. Even if one believes that onclick tracking automatically means no passing, and even if there were no onclick tracking in these links, Google would have sought to block PR passing from them.

      It’s okay to sell links, but only for those on Google’s short list, and it would seem that, at least a year ago, the Post wasn’t on that list.

      • says

        Absolutely Dane, I too alluded to this in my Sphinn comment.

    Sometimes those guys have a habit of creating FUD. It isn’t the first time something will be left open to interpretation and won’t be the last either!

  6. says

    Hi Andy

    Great post. I find the whole area so confusing

    It’s also annoying the way Google so consistently choose not to answer questions clearly! They know what issues are being had, why don’t they clarify this for people more often. The amount of times in the past 2 weeks I have seen people asking questions and getting vague answers is high. Too high.

    Overall they’re a great company but the great stone wall of silence makes it difficult for the lay man to get by.

  7. says

    Interesting stuff; I found your blog a while back from John Chow I think and I just thought of it today and thought I’d stop by. I’ve honestly never used onclick in my entire life :-) I do follow on my blog, however. I’m not huge on code, I deal with it because I have to I guess…I think I just need an assistant.

    • says

      I have to say Matt’s statement in that thread is open to interpretation. I don’t have much faith in what the link: command returns either though. Still, the entire debate brings up an interesting question – easily provable by running a test:

      1. Install a sitewide onclick link pointing at a new page.
      2. Track crawler activity and indexing.

      Sitewide links will direct enough PageRank to a new page to get it indexed in a few days – assuming onclick doesn’t block link juice / bot crawl.

      Anyone want to bet money on what happens? :)

      • says

        Halfdeck, think about it, a site wide link from my blog probably carries as much juice as a single link from John Battelle (PR8)

        I found blogs on Blogcatalog that had been established for a while, at least a few weeks, and Google’s link command wasn’t showing up any links at all.

        This is getting into Vanessa Fox “SEO is Art” territory.

        I linked in the article to test results stating that Google does crawl the links, so it is not the same as “nofollow”

        JavaScript onclicks do not pass PageRank, or the PageRank they do pass is so small that it is not seen. It was verified that Google does follow the links and index the pages. However, from a pure PageRank perspective, no “juice” is being passed.

        I know Rose read that too, but without a lot more details, it is impossible to say whether those tests were conclusive.

        I am intending to do some additional testing to see if I can Googlebomb a page to rank for something weird, but that won’t be conclusive because the onclick might modify the juice given.

        The 2 best terms to describe SEO are YMMV and FWIW

        • says

          “a site wide link from my blog probably carries as much juice as a single link from John Battelle (PR8)”

          Yeah, it should be more than enough to get a page indexed.

          “It was verified that Google does follow the links and index the pages. However, from a pure PageRank perspective, no “juice” is being passed.”

          Ok, now this makes me wonder: does Google index URLs then run PageRank calculations, dropping URLs with very low PageRank after indexing? Or does Google “guess” the PageRanks of newly discovered URLs and prevent indexing of URLs that probably have very low PageRank?

          Guessing the PageRanks of every newly discovered URL would be more involved, I imagine, than just dropping low PageRank URLs from the index during the PageRank iteration phase.

          So, his statement doesn’t make sense if I assume that Google guestimates the PageRanks of a newly found URL and index or not index depending on that PageRank because:

          If no PageRank is being passed, a URL shouldn’t be indexed. If very little PageRank is being passed (but enough to force indexing), then the URL should be in the supplemental results.

          Therefore, if he set up his test correctly and his page got indexed, then his claim that no juice is being passed is false – assuming Googlebot is proactive about not crawling or indexing URLs with very low PageRank.

          If instead I assume Google will crawl first and drop URLs later, then I’d ask: “did the page stay indexed? Or was it dropped from the index a month or two later?”

          Also keep in mind a page that Google includes in its main index will never have zero PageRank; the minimum PageRank of a page is (1-d) or .15 if we use the original PageRank formula with d = .85. So “or the PageRank they do pass is so small that it is not seen.” isn’t convincing. Was the PageRank passed or was it “created” by the page itself?

          Finally, his post was written back in Aug 2006 when even on WMW the line between indexing/crawling and PageRank was somewhat unclear. He also doesn’t say how he measured PageRank (my guess is the toolbar).

          We all love to hear a simple “yes/no” answer to things like this, but Google is a program, and when we ask questions about a program with thousands of lines of code, the answers can sometimes be pretty complicated.

        • says

          Halfdeck, there has been a response from Matt suggesting that your link to his article about links in the webmaster console was correct.
          http://sphinn.com/story/5310#c7990

          That also doesn’t make sense as I have responded.

          Webmaster console is generally a lot more up to date
          Webmaster console includes nofollow links

          No answer on the onclick though

  8. says

    Dan Crow mentioned that Google were working on pseudo-PageRank.

    That should cause further brain swelling for you Half ;-)

    Excuse my dumbness here, but if Andy you are saying that an anchor with an onclick event doesn’t pass pagerank I’m ever so slightly confused:

    That being the case then Urchin events added to a link would inhibit the flow of PR. I’ve never tested this, but I’d be mighty shocked if Google advised people to use their tracking software without mentioning that doing so may actually affect the juice of that link. Maybe I haven’t read enough of the posts and comments, but this couldn’t be a blanket change in behaviour applied to ANY anchor containing an onclick attribute?

    Rgds
    Richard

    • says

      Richard, whichever code you wrote with a link to this site didn’t work.

      You need to use square brackets for the code here

      The links are interesting reading if you haven’t seen them before, and you should probably read the 2 threads on Sphinn as well, both are on the front page currently.

      One of the sites linked though to was a useful tutorial on Urchin.

      I am saying it should still pass pagerank; Rose is saying it doesn’t, based purely on 2 threads.

      Matt Cutts is also now saying the link: command doesn’t effectively mean anything at all as far as which links pass value.

      Maybe every site should just use nofollow on every single link, start a national nofollow day

  9. says

    Sorry – it was simply the code you published at the head of your post.

    We had a guy over on Google Groups recently proposing something similar – nofollowing every link on your site in case Google penalises you… *sigh*

    So just that I’m clear about what you’re saying Andy – my reading of the above gives the impression that your testing of an anchor with an onclick attribute set passes little or no juice?

    I see Half chiming in with some interesting thoughts on whether a page would be indexed without pagerank (and he’s one guy who knows his PR).

    • says

      Here is what I wrote in one of the threads on Sphinn

      Whilst I might be able to set up an effective example to try to prove whether a weighting is given for anchor text with an onclick component, using some mumbo jumbo phrase such as your recent meta keywords experiment, that still wouldn’t prove anything because Google might decide to apply a penalty for the onclick at a later date.

      Whatever test you set up for this, it is never going to be conclusive.

      I have always maintained that Google could possibly be doing some kind of 2 pass system for link allocation

      1. Decide which links they should use
      2. Decide which weighting to give them

      People suggest that sitewide links in the sidebar have less value than links in content, and the same is possibly true of comment links if detected.

      At what stage in the calculations is that being discounted?
      How does the juice deducted get allocated?

      If all you have is sidebar links on your front page, then it doesn’t matter what percentage they are allocated, it will all be in proportion 0.15:0.15 is the same as 1:1

      Matt is effectively saying that just because links show up, that doesn’t mean they have value, thus even the links I have found to sites from Blogcatalog could be totally meaningless.

      That effectively makes the link: command about as useful as “Are you feeling lucky”

  10. says

    Rose, you don’t win any brownie points for bringing up off-topic issues like MyBlogLog. Bottom line: we have no conclusive evidence that onclick links do or do not pass juice. You’re jumping to the conclusion that it doesn’t. Making personal attacks only weakens your position.

    • says

      Halfdeck, that was actually Rose repeating one of my comments on Sphinn, which is a throwback to a previous attempt by Rose to discredit Blogcatalog, around the same time she was banned from the service.

      It is actually relevant to this conversation, because Rose seems to think the only directory which might have bad signals is Blogcatalog, whereas I would personally be more concerned with the links on MBL if I was worried about link credit.

  11. says

    Halfdeck, you said I did not win any brownie points for bringing up off-topic issues like MyBlogLog. Well, as Andy said, it was him that brought it up, and now you say oh nevermind. Well, why is it ok for Andy to bring up off-topic issues?

    Got to love the SEO circle… I can’t call Andy here a Guppy yet users can call me a dumb a**.

    I can’t mention BC but Andy can.

    Andy if you think this all has to do with my being removed for pointing out their lack of terms of service and that they were a clone- you better think again. The funny thing is NOW Daniel is making changes.

    Quoting Danny ” Thank you for bringing it to our attention, don’t be surprised if you see it implemented soon.”

    You just don’t get it and you never will Andy.

    Here let me dumb it down.

    It was Mr Doubts who first found this June 7, 2007.

    He reported it here.

    http://moredoubts.wordpress.com/2007/06/07/blogcatalog-is-not-what-it-shows/

    BlogCatalog should have listened then, but no they were to stubborn to.

    It took a post of 58 Comments to wake them up.

    Credit goes to Mr. Doubts though.

  12. says

    Dane no I have not. However, if you are referring to the links on BloggerTalk they have been fixed. Funny- we take suggestions and make changes.

    Andy- why did you delete my comment?

  13. says

    Quoting Danny ” Thank you for bringing it to our attention, don’t be surprised if you see it implemented soon.”

    He was replying to Sebastian.

    It was Mr Doubts who first found this June 7, 2007.

    http://moredoubts.wordpress.com/2007/06/07/blogcatalog-is-not-what-it-shows/

    BlogCatalog should have listened then, but no, they were too stubborn to.

    It took a post of 58 Comments on Sphinn to wake them up.

    The good thing is I hope they now change things around to be fair.

    Credit goes to Mr. Doubts for first finding this though.

    Andy this was not personal. It had already been pointed out back in June and I was only stating that it should be fixed.

    Halfdeck, why is it ok for Andy here to bring it up yet not me? I was only quoting what Andy said to Shawn. That certainly did not WIN brownie points in the debate.


  15. RoseDesRochers says

    BlogCatalog should have listened then, but no, they were too stubborn to.

    It took a post of 58 Comments on Sphinn to wake them up.

    The good thing is I hope they now change things around to be fair.

  16. says

    “Well as Andy said it was him that brought it up”

    Yeah, I know.

    “and now you say oh nevermind. Well why is it ok for Andy to bring up off topic issues?”

    Andy explained why he didn’t think the issue was off-topic, so I let it go.

    “I can’t call Andy here a Guppy yet users can call me a dumb a**.”

    Look, Rose, I know it's not fair. The only justification for Greg Boser’s behavior is “she started it”, which doesn’t fly. Like I posted over on Sphinn, his idea is dumb. I don’t think anyone else in that thread objected as strongly and bluntly as I did to Greg’s post. But you did cast the first stone, and what happened next isn’t all that unexpected.

    It’s Friday night – make yourself a drink and relax (I’m way ahead of you).

  17. says

    [Andy, I hit a spam filter so I'm resubmitting this again, in case my other comment gets lost in the spam box]

    “Well as Andy said it was him that brought it up”

    Yeah, I know.

    “and now you say oh nevermind. Well why is it ok for Andy to bring up off topic issues?”

    Andy explained why he didn’t think the issue was off-topic, so I let it go.

    “I can’t call Andy here a Guppy yet users can call me a …”

    Rose, I don’t think anyone else objected as bluntly as I did to Greg’s post. But you did cast the first stone, so what happened next shouldn’t have surprised you, even if it was completely uncalled for and unjustifiable.

    It’s Friday – make yourself a drink and relax (I’m way ahead of you).

  18. says

    Hey Andy, are you blocking coComment on your blog somehow? I don’t get the tracking tool, and it’s not picking up the comments here after I commented.

  19. says

    Just wanted to update everyone: We have changed our click tracking this evening to a ‘cleaner’ version. While there was no real evidence that our previous links weren’t passing PR, and a lot of the same claims can be made about the new links, we figured it couldn’t hurt to take the safer route.

    A big thanks to Sebastian on Sphinn for digging up a solution.
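    [Editor's note: the "cleaner" tracking code itself isn't shown in the comment, but the general crawler-friendly pattern it alludes to can be sketched like this. The `trackClick`/`trackingUrl` function names and the `/track` endpoint are hypothetical, not BlogCatalog's actual implementation. The key idea: keep the real destination in the plain `href`, fire a tracking beacon in `onclick`, and return `true` so the browser (and any spider that ignores JavaScript) simply follows the direct link.]

    ```javascript
    // Hypothetical sketch of crawler-friendly onclick click tracking.
    // Markup stays a plain direct link, e.g.:
    //   <a href="http://andybeard.eu"
    //      onclick="return trackClick('andy-beard-niche-marketing');">...</a>

    // Build the tracking URL for a given blog identifier.
    function trackingUrl(blogId) {
      return '/track?blog=' + encodeURIComponent(blogId);
    }

    // Record the click with a fire-and-forget image beacon, then
    // return true so the browser follows the real href as normal.
    function trackClick(blogId) {
      new Image().src = trackingUrl(blogId);
      return true;
    }
    ```

    Because the `href` contains the real destination rather than a redirect script, anything that doesn't execute JavaScript still sees an ordinary, direct link.
    
    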

  20. says

    “Do not assume just because you see a backlink that it’s carrying weight.”

    When he says this he is speaking about the webmaster tool, not the link: operator. When asked if this same advice applies to the link: operator he does not reply.

    So while it is *possible* that this could relate to the link: operator, such a conclusion cannot be drawn from this data.

    Since the webmaster tool includes nofollowed links where the link: operator does not, we do know there are differences, and it is not unreasonable to consider that this might be one of them.

  21. says

    Half, I think almost all TLA links are in that format, or affiliate links ;)

    It is not just SEJ, and there is no proof that money changed hands

    Maybe Pat Gavin buys more drinks at SES than Matt Cutts

  22. says

    “and there is no proof that money changed hands”

    Nope. Still, it would be too easy if I could check the effectiveness of a paid link by running a link: command.

    For example, link:customermagnetism.com brings up (ok, no question these are paid links):

    seroundtable.com
    unofficialseoblog.com
    v7n.com
    seo-scoop.com
    arcadeshack.com (arcade site linking to an SEO site? yeah.. doesn’t pass the smell test)
    seo4fun.com (yep, I’m selling them a link too)
    videogamesblogger.com
    thebostonchannel.com

    Do those paid links count? According to the link: command, they do … if you assume that all URLs returned by the link: command pass juice.

    I’m not saying they don’t pass juice. My point is that if the link: command gives away that information, it’s revealing way too much info; I don’t believe Google is dumb enough to let SEOs know which of their backlinks are actually counting.

  23. says

    I do agree with that to a certain extent.

    The claim of the initial post was

    “BlogCatalog – Blog Catalog Doesn’t Pass Pagerank”

    You would hope that of my 28,600 links (though Yahoo has reported up to 40,000 in the past), which Google reports as only 863 and then only lets me see 189 of, there would be some value to them.

    Maybe it shows the links that would have been counted if they hadn’t been detected as paid (FUD).

    And it could well be that links using onclick are being counted or lumped in with paid links, but there is no proof, and the statements that Google have made are as always too ambiguous to act as proof.

  24. says

    I certainly didn’t ask Dane to write anything, he must have decided on his own to use the power of the masses to bring in some more data.

    I didn’t do that for my original research because I wouldn’t want to be a scaremonger. There would be too many people on Blogcatalog who wouldn’t understand the reason why they were doing the search.

    Dane’s further research, if anything, corroborates this post. A large number of people are finding links.

    I can only see 2.5% of my own links using the link command, many from extremely notable blogs.

  25. says

    As evidenced by the number of times I’ve had to assure people that there is nothing “wrong” if their search does not return a result.

    And it is my research, not Andy’s. My methods are mine.

  26. says

    Here’s a confirmation from Matt Cutts, Dane:

    “That’s just not true. Or rather, if most SEOs take it that way, then most SEOs are wrong. :) Halfdeck actually bothered to hunt down where I’ve said it before. I’m happy just to re-state: if you assume links for a link: query automatically carry weight, then you’ve made a faulty assumption.”

  27. says

    Actually, I just read it at the sphinn discussion.

    Am I entirely alone, or does the fact that he took the time to make that post and STILL did not address the topic of the discussion say something?

Trackbacks