Categories
Blogs

Well Done Blacknight

Cork was the setting for the 2009 Irish Blog Awards, and it was a fantastic weekend.

On the night the Best Blog of a Business award was won by Blacknight. Well done to all the folk at Blacknight.

Unfortunately Michele couldn’t make it to the awards, but George was there to pick up their prize.

Categories
Blogs

Well Done to The Best Blog of a Business Shortlist

The shortlist for the Best Blog of a Business category has been announced.

Well done to all, and I hope to meet with everyone over the next weekend down in Cork. I’ll be about from Friday until Sunday, so if you’d like to meet up ping me one way or another.

Best of luck to all finalists.

Categories
Blogs

Best Blog of a Business – Irish Blog Awards 2009

RedCardinal.ie (i.e. me) is sponsoring the Best Blog of a Business category in the forthcoming Irish Blog Awards 2009. I’d like to say congratulations to all those who are shortlisted for this category, and wish you all the very best of luck on the night. I’m looking forward to meeting everyone down in Cork.

Shortlist for Best Blog of a Business

I know quite a few of the names on the shortlist, but many more are new to me. If you’re in Cork then please come and say hello. I’m hoping to spend the weekend hanging out with all the great Cork folk. Lastly well done, and a big thank you to Damien for all his work again this year.

Good luck again to all!

Categories
Blogs

I have a favour to ask please

I’ve had serious issues since upgrading to WordPress 2.3.3.

David Behan has been really cool, letting me know about the issues initially and updating me on their status. Apparently Netvibes still can’t parse my feed 🙁

Bottom line: my feed is still shagged. And my Feedburner numbers never recovered from the 50% plunge.

So, my little request: if you have issues with my feed, please ping me with what you’re seeing to help me debug this problem. I can honestly say you won’t appreciate your feed numbers until you lose half of your readership in one foul WP upgrade.

Thanks especially to David, but to anyone else who helps out.
Richard

Categories
Blogs

How to lose half your subscribers in 15 minutes…

Just upgrade to the latest WordPress install.

*sigh*

Donncha kindly informed me I had a bug in my Subscribe to Comments plugin, and also gently nudged me to upgrade to the latest WP install.

The install went nice and easy. Plugins, however, didn’t play fair. At all… In particular the Google Sitemap plugin caused lots of issues. My Feedburner plugin also went a bit haywire.

Long story short – I lost over half of my feed subscribers according to Feedburner 🙁

Sorry to everyone for all the utter crap that was going out in my feed – this upgrade has been a bloody nightmare.

Categories
Blogs SEO Social Media

Can Excessive Outbound Linking Hurt Your Rankings?

I’ve said it before – links form the fabric of the Internet. On that occasion I was making a point about how silly it was to impose arbitrary rules on how people can and cannot link to your site. Today’s post is more about how links on your site can help or harm you.

And as a case in point, the site I’ll be looking at may be a perfect case study in excessive linking to external sites. Let me introduce…

The cute whore

There’s a saying, I believe originating in Cork, which refers to certain individuals as ‘cute whores’. I’m not sure if it can be taken as derogatory, but when I use it I do so with respect for the individual I’m referring to. That cute whore in question is just about to go on his holidays, so this post may not be much use to him for a short while. But some use I do hope it will be.

The great SEO freebie give-away

Ok – I hold my hands up and say that the last few guys waiting for their free search engine optimisation reviews really have been waiting. So I’m trying to work my way through the last few sites now, starting with Pat Phelan’s Roam4Free. But bear with me while I digress for an instant.

The purpose of business blogging (IMO)

In my view business blogging should have just one ultimate goal – to become an authority in your chosen field or niche. If as a business blogger you achieve that goal I am quite confident that business success will follow. I am thoroughly convinced of this.

In fact this belief is fodder for a post that I have been threatening for a long time, and a subject that I have discussed with an increasing number of people over the past few months. (I really should write the post.)

The reason for this interlude? Pat is a business blogger through and through.

Back to the SEO advice

From what I can determine Pat really is a cute whore. He’s a doer first, and a talker after. You can’t but admire his achievements, and look forward to some of the new ideas he has up his sleeve (he’s been kind enough to share a titbit or two with me from time to time).

Roam4Free

But Pat has also been blogging significantly over the past year or so. In fact, I think he has probably become somewhat of an authority on his chosen niche – telecoms, in particular VOIP telephony.

Back to the post topic please…

So what about excessive outbound linking?

Well in the case of the Roam4Free blog the homepage (as of 9am August 10) had 18 internal and 66 external links. So Pat is really linking out from his posts. Or so it would seem…

What’s really happening here is that Pat uses Technorati and Flickr (amongst other web2.0 bits & bobs). So on every post Pat assigns some Technorati tags, and he hosts his images up on Flickr. I remember once reading one of Pat’s posts in Google Reader. He had an image of Roam4Free in the post and I thought I’d give it a visit to see if he had launched anything new. It was a link, but it brought me over to Pat’s Flickr stream. Hello back button and crappy user experience (IMO).

Don’t get me wrong – I’m not saying that Flickr is bad, but I think that people like images, and often click on images, so it’s probably better to link intuitively rather than out to Flickr.
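Here’s a rough sketch of what I mean – the image file and post URL below are invented for illustration:

<!-- links out to Flickr: the click (and the PageRank) leaves the site -->
<a href="http://www.flickr.com/photos/example/123456/"><img src="roam4free.jpg" alt="Roam4Free"></a>

<!-- links where the reader expects to go: the relevant post on your own site -->
<a href="http://blog.roam4free.ie/some-post/"><img src="roam4free.jpg" alt="Roam4Free"></a>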

The real problem though is that Pat’s pages are littered with links to Flickr and Technorati, and these links are just spewing PageRank where it could be far better used internally on Pat’s site.

Suggestion #1

Add rel="nofollow" to all the outbound Technorati and Flickr links. If you do want to push some PR to those pages then perhaps do so selectively, e.g. have a link to your Flickr stream from the homepage, or build a page listing (and linking to) your Technorati tags.
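For example, a nofollowed Technorati tag link would look something like this (the tag is just for illustration):

<a href="http://technorati.com/tag/voip" rel="nofollow">voip</a>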

I have to be honest and say that I’ve never been a huge fan of tagging, although I know it can have value. In this case those Technorati tags are just sucking the life out of Pat’s blog and really have to go. The same advice goes for the add-to-feed-reader links in the bottom navigation column. NOFOLLOW those fellas as well.

Suggestion #2

I looked for a robots.txt file and got this:

Error 404 – Not Found

Search bar and other tools go here! If you’re reading this, it needs to be implemented, remind me!

Well Pat, you also need to implement a robots.txt file to block out some of the content that you don’t want wasting the Search Engines’ time. For instance, I can see literally hundreds of pages with URLs like index.php?tag=[tag]. At first I couldn’t figure out where they were coming from. Then I saw that Pat is using two tagging methods within his posts – one to Technorati, the other within his site.

Well, after NOFOLLOWING the Technorati links I think Pat should block access to the internal tag pages. I’m pretty sure that they will produce at least some dupe content, and I think Pat would do better to focus on his posts and categories. (TBH I would drop one or other of the tagging techniques.)

Here’s the start of what I think should go into Pat’s robots.txt:

User-agent: *
Disallow: /index.php?tag=
Disallow: */trackback

The other benefit of blocking that content is that you won’t be wasting PR on non-performing content. Currently it appears that Pat has almost 2.5k pages in the supplemental index. Most of these are comment feeds and the aforementioned tag pages. But there are quite a lot of post pages in there also. Retaining more PR internally on the site by removing the leakage (#1 above) and removing superfluous content should bring more of those pages back into the primary index.

(Personally I would NOFOLLOW my comment feeds as well, but advising that is sure to start an argument.)

Suggestion #3

Another reason for many of the post pages to go supplemental may be because of WordPress’ inherent pagination issues. These can be solved using Jamie Sirovich’s excellent PagerFix plugin.

The pagination issue will be especially important for Pat’s blog as he tends to be a serial poster, making multiple posts on any given good day. More posts = higher level of pagination.

In case you’re not aware of the pagination problem – the basic gist is that when you first publish a post it appears on your homepage. Then over time it rotates down to page 2. And then further again. Each time the post moves to an older page it adds one more click to the path from the homepage. Each extra click means less PR to the page. If you use WordPress’ default pager that is. PagerFix does just what the name suggests – fixes the default WordPress pager.
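To illustrate (markup simplified, and this is my understanding of PagerFix’s approach): the default pager only links one page deeper, so page N sits N clicks from the homepage, while a fixed pager links out to every archive page:

<!-- WordPress' default pager: page N ends up N clicks from the homepage -->
<a href="/page/2/">&laquo; Older Entries</a>

<!-- a fixed pager: every archive page is one click away -->
<a href="/page/1/">1</a> <a href="/page/2/">2</a> <a href="/page/3/">3</a> …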

Any further thoughts?

The only other thing I would suggest to Pat is possibly to link a little more to his own properties from within his posts. Pat has links to ‘Our Brands’ in the sidebar, but my experience is that links within the body content carry more weight, so don’t be afraid, Pat, to plug yourself more often 😀

I think that Pat might also be well advised to upgrade WordPress from 2.0.4 as there may be some security issues with that install.

Hope some of that will help you out Pat, and I look forward to the next toy you’ll be releasing soon.

Categories
Blogs Security

Technorati Wiki

Very light posting from me…

Here’s a quickie – check out Technorati’s developer wiki. Let’s just say it’s been moderately spammed (to death)….

Technorati Developers Wiki

Categories
Blogs Search Engine Optimisation

Spiderability is the First Step to Search Engine Nirvana

I have a post brewing about taking things for granted. We’re all guilty of it at some time or other.

The most needy review to date

I’ve been (slowly) working my way through some blog SEO posts to help those folk who took up my offer of some free consulting. Today I’m looking at a site that really will benefit from some basic SEO tips. In fact, I think this is definitely the site most needy of help I’ve looked at to date (although the SEO for Blogspot folk are fighting hard for that particular crown).

Basic SEO starts with letting the bots in

Search engines rely on bots (or ‘spiders’) to crawl the Internet and collect the information you publish on your websites. These bots are basically slimmed-down web browsers whose modus operandi is extremely simple – crawl content, find links, save content, follow links, crawl content, find links… That’s their one and only job. Ok, they do a few other things while they’re at it, but that gives the basic gist of what the bots do.

Bots don’t like Javascript

So we’ve learned that search engine bots are slimmed-down web browsers. One of the things they don’t do is Javascript. So here’s the first rule of spiderability:

1. Don’t use Javascript to create navigational components.

Use good old plain HTML. That’s what it’s for so make use of it.
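A quick illustration (markup invented for the example):

<!-- bot-friendly: a plain HTML link the spiders can follow -->
<a href="/archives/">Archives</a>

<!-- bot-hostile: spiders don't execute Javascript, so this link is invisible to them -->
<span onclick="window.location='/archives/'">Archives</span>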

META refresh gives a strong spam signal

A few years back spammers started to use META refreshes to spam the search engines. A META refresh is a small piece of code inserted into the <head> element of an HTML document. It basically tells your browser to go to a new location after a predetermined number of seconds. Spammers used this because the bots didn’t actually enforce the rule when they found it – they simply crawled the content on the page and returned that to the search engine for indexing. The spammer could place nice search-engine-friendly text on the initial page, which would rank nicely, and as soon as a human visitor came along they would be instantly redirected to the spammer’s real page. The search engines didn’t like this. So the second rule is very simple:

2. Don’t use META refresh when the proper and upstanding thing to do is issue the correct header response.

Every response includes HTTP headers describing the page being returned. Among them is an HTTP status code that tells the browser what to do with the page. The well-known 301 redirect is simply a code passed in the header response telling the browser (or any accessing agent) that the requested resource has moved permanently, and to go to the new location. It is fairly trivial to send header responses – on Apache-based systems redirects are relatively easy to set up using the .htaccess file with mod_rewrite.
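A minimal sketch of doing it properly in .htaccess – the paths are invented for illustration, and this uses the simpler mod_alias directive rather than mod_rewrite:

# send a proper 301 header instead of a META refresh
Redirect 301 /old-page.html http://www.example.com/new-page.html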

Some Irish language lessons

So perhaps it would be appropriate to introduce the site that I’m looking at today: siopaeile.com. ‘Siopa eile’ is Irish for ‘another shop’ and the blog in question is the brainchild of Paul O Mahony.

Siopa Eile Shopping Blog

I think the most interesting thing (from my point of view) about siopaeile.com is that it is not indexed in Google. Siopaeile.com is therefore getting zero traffic from Google. Given that Google is often the #1 referrer for many websites, Paul is really starting with a blank page. So here’s the advice I would give to Paul in order to better optimise his site.

The SEO tips

The siopaeile.com blog can be found in a subdirectory called ‘blog’. Currently the root index page contains a nasty META refresh into that directory. Here’s what the root page returns:

<html>
<meta HTTP-EQUIV="REFRESH" content="0; url=http://siopaeile.com/blog/">
<p>
Please wait as you get redirected to <a href="http://siop[… ]">http://siopaeile.com/blog</a>.
</p>
</html>

[NB – edits my own]

Now I would say that as a matter of urgency this needs to change:

Tip #1:

If nothing will be published in the root directory then move the entire blog up a level into the root. In general deeper means less important, so the closer to the root, the more important the content appears to the search engines. I would normally say that all pages should be redirected after the move, but in this case nothing is indexed. Any inbound links should be redirected to their new homes though.

Moving the blog into the root may require some jiggery-pokery in WordPress, but I think it would be well worth it.

If it is not possible to move the blog into the root directory then I would suggest removing the META refresh and adding a 301 redirect into the .htaccess file.

The .htaccess file can be found in the root directory. You can use an FTP program (such as the free FileZilla) to grab this file and re-upload once you’ve finished editing. Here’s the code that needs to be included:

<IfModule mod_rewrite.c>
Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} !^siopaeile\.com$ [NC]
RewriteRule ^(.*)$ http://siopaeile.com/$1 [R=301,L]
# that redirects www.siopaeile.com/ to siopaeile.com/
# and www.siopaeile.com/blog/ to siopaeile.com/blog/
# but wouldn't redirect www.siopaeile.com/ straight to siopaeile.com/blog/
# so fall through to redirect the root to /blog/
RewriteRule ^$ http://siopaeile.com/blog/ [R=301,L]
# that redirects the root / to siopaeile.com/blog/
</IfModule>

BIG FAT WARNING: I know enough to get around .htaccess and mod_rewrite, but I always tell folk to test the code very well. For me personally mod_rewrite is one of the most difficult aspects of my job, and very often I have to experiment to get the code right. Get it wrong and your server is likely going to bang out 500 errors to beat the band.

So why no indexing in Google?

I have to be honest here. I first started writing this post some time ago. I even emailed Paul to ask him about the META refresh (he probably thinks I’m either quite mad or incredibly useless for taking this long to actually write about his site…).

When I first looked at his backlinks in Yahoo! there seemed to be none (but take it from me – never, ever put your entire faith in Yahoo!’s SiteExplorer tool). That would normally explain the issue – Google won’t index a site unless it finds at least one external link to that site. That external link must be FOLLOWed (i.e. without a rel="nofollow" attribute), and it may be the case that unless the link originates on a semi-trusted site it will be ignored.

Well Yahoo! is reporting quite a few links now. The few links that I checked were from Paul commenting on other people’s blogs. Commenting and interacting with others is a great way to get attention and traffic. But you will not get any Search Engine benefits if the link you acquire from other sites is NOFOLLOWed. Unfortunately for Paul this was the case on the pages I checked.

Tip #2:

Paul needs good text-rich anchored FOLLOWed links (like shopping blog), preferably from on-theme websites (unlike mine).

Use the tools Google gives you

Google has been far and away the most progressive Search Engine in terms of informing webmasters about their sites’ status. The Webmaster Console can give valuable data to a webmaster, enabling you to diagnose all sorts of issues. In Paul’s case the console will likely not yield much information (Google appears to be completely oblivious to his site). The console may give up one useful piece of info in instances where your site is not appearing in Google – penalty notification.

If your site is under any penalty you will be notified within the console. That’s pretty cool because if you inadvertently broke the guidelines (and got caught 😀) this tool not only informs you, but it also allows you to file a re-inclusion request after you’ve fixed up the offending material.

Tip #3:

Make use of the Google Webmaster Console to appraise your site’s condition and diagnose any issues with crawlability and HTTP errors.

Just a quick note: I’m not suggesting that Paul’s site is under a penalty. My gut tells me it’s a spiderability and link issue.

So any other tips?

Well I would strongly suggest reading some of my previous posts in this ‘series’. Many contain tips that can be followed on any site:

  1. Getting your site out of the supplemental index (Krishna De)
  2. Page titles and SEO (First Partners)
  3. SEO for Blogspot (Blogger) sites (multiple sites)
  4. SEO for photoblogs (McAwilliams)
  5. Corporate Blogging SEO (Bubble Brothers)
  6. PageRank Flow, Comment Feeds in Supplementals, NoFollow & robots.txt (BifSniff)

Apart from the above, there are many other tweaks that Paul can make. I would certainly include post titles (they seem to be missing from the homepage) along with links straight through to the actual posts. I would also consider NOFOLLOWing all the links to the social media sites and Technorati tags. I noticed a few other actionable items, but the first and foremost priority is letting the search engine bots into the site and getting the pages properly indexed. Hope that helps Paul.

To the others who are still on my list

I have to admit when I made my offer I sort of knew it was a little risky. I thought I’d spend a short amount of time on each site and zip through the reviews. I’m finding that I’m actually spending multiple hours on each site, and being a mere one-man show means that I often have to drop a review mid-sentence to work with clients (and I haven’t been short of work, thankfully). But I do promise that everyone will get a review. So here are the blogging heavy-hitters that still await their reviews:

  1. http://www.argolon.com/
  2. http://blog.roam4free.ie/
  3. http://www.mneylon.com/blog
  4. http://www.headrambles.com
  5. http://www.mediangler.com

Slow but steady progress (emphasis on the slow – sorry guys)

Categories
Blogs Search Engine Optimisation Search Engines

WordPress Mobile Plugin with WURFL Killed my Rankings

A couple of days ago my site absolutely fell out of the SERPs. I really couldn’t tell what was causing the 404 errors that Webmaster Console was reporting.

Further Digging

This was really beginning to hurt me so I decided to grab my raw access logs and look to see what was going on:

66.249.65.97 – – [23/Jun/2007:02:59:52 -0400] “GET / HTTP/1.1” 302 5 “-” “Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)”

66.249.65.97 – – [23/Jun/2007:02:59:55 -0400] “GET /wp-mobile.php HTTP/1.1” 404 20530 “-” “Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)”

What’s happening there is Googlebot requesting my homepage, getting a 302 redirect to /wp-mobile.php, and then a 404 Not Found for that file. In my stupidity I didn’t copy across the file in question as per the installation instructions (although I’m not sure why the plugin doesn’t simply redirect to the plugin folder?).

So you can see how Google was getting those 404 errors. But my stupidity aside, there is a very nasty flaw in Ruadhan O’Donoghue’s plugin: mobile content is served to search engine robots.

If you serve excerpts for each post on your homepage then you really want the Search Engine bots to see that content. Granted, my own cock-up added to my issues by serving 404s to the bots, but I think the plugin will need some modification to ensure that regular web-crawlers aren’t getting the minimal content that mobile devices get. For actual post pages this isn’t really an issue, but for the homepage this plugin could really affect your rankings – I for one need to ensure that my homepage is served correctly to the bots.

Here are a few requests from my log:

74.6.69.105 – – [23/Jun/2007:03:05:04 -0400] “GET /statistics/02-03-2007/social-media-marketing/ HTTP/1.0” 302 0 “-” “Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)”
74.6.69.105 – – [23/Jun/2007:03:05:24 -0400] “GET /wp-mobile.php HTTP/1.0” 404 20492 “-” “Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)”

72.30.216.101 – – [23/Jun/2007:03:10:34 -0400] “GET /contact HTTP/1.0” 302 0 “-” “Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)”
72.30.216.101 – – [23/Jun/2007:03:10:35 -0400] “GET /wp-mobile.php HTTP/1.0” 404 20495 “-” “Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)”

66.249.65.97 – – [23/Jun/2007:02:59:52 -0400] “GET / HTTP/1.1” 302 5 “-” “Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)”
66.249.65.97 – – [23/Jun/2007:02:59:55 -0400] “GET /wp-mobile.php HTTP/1.1” 404 20530 “-” “Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)”

It appears that MSNbot isn’t affected by this, but both Googlebot and Yahoo! Slurp are served up the mobile equivalent of your blog.

The plugin takes over the rendering for those user agents because they accept mobile content. But the fly in the ointment is that the content served up by Ruadhan’s plugin is extremely pared down: the homepage simply includes links to your last 10 posts. I’d say this could spell the kiss of death for your search engine rankings (even if you do manage to copy the files across *doh*).
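One possible workaround, sketched in PHP – the user-agent check and the device_accepts_mobile_content() helper here are my own inventions for illustration, not part of Ruadhan’s plugin:

<?php
// Hypothetical guard: skip the mobile rendering path for known
// search engine crawlers, even if their headers suggest they
// accept mobile content.
function is_search_engine_bot() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    // Matches the crawlers seen in the access logs above.
    return (bool) preg_match('/Googlebot|Slurp|msnbot/i', $ua);
}

// device_accepts_mobile_content() is a hypothetical stand-in for the
// plugin's own WURFL-based device detection.
if (!is_search_engine_bot() && device_accepts_mobile_content()) {
    // hand the request off to the mobile template
}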

I’ve left a comment on the plugin page over on the .mobi blog, and trackbacked to Michele’s post where I first saw this plugin. Hopefully Ruadhan can come up with a workaround for this issue, as I’m quite sure a mobile plugin will be very useful given that mobile devices are going to appear more and more in your logs going forward.

Categories
Blogs WebDev

Stop WordPress Overwriting Custom .htaccess Rules

For as long as I care to remember I’ve been having issues with my WordPress .htaccess file.

The .htaccess file is a small Apache config file that lets you do all sorts of funky things with requests made to your server. It’s also one of SEO’s best tools. I have a lot of custom 301 redirects set up, including a redirect which makes my site available only via the www subdomain.
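That www-only redirect looks something like this (a sketch, assuming the rules live in the root .htaccess of redcardinal.ie):

RewriteEngine On
# force the www subdomain: redcardinal.ie/foo -> www.redcardinal.ie/foo
RewriteCond %{HTTP_HOST} ^redcardinal\.ie$ [NC]
RewriteRule ^(.*)$ http://www.redcardinal.ie/$1 [R=301,L]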

WordPress Permalinks

Well WordPress has a habit of rewriting the .htaccess file to allow some of the SEO-friendly URLs you regularly see (also known as ‘permalinks’). And each time it does so I lose my rules. It’s a royal pain in the arse, and when this happened just the other day I thought I’d take the time to fix it once and for all. I had to dig through the WordPress Codex to see what was causing all the trouble. save_mod_rewrite_rules() is the culprit. That little function, and my own ignorance of how WordPress processes the .htaccess file.

The solution

As with most solutions it’s really very simple. As with most simple solutions it’s only simple if you know about it. So here it is:

The WordPress .htaccess file looks like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

Now here’s the really important bit:

Never place your own rules within the ‘WordPress’ block

The WordPress block is the bit that starts with # BEGIN WordPress and ends with # END WordPress. My mistake was to place my rules within this block (after the RewriteEngine On line). This seemed the sensible thing to do – after all, rewrite rules must come after RewriteEngine On, and my understanding was not to repeat this command.

How WordPress rewrites .htaccess files

When WordPress rewrites the .htaccess file it does so by first checking that the file is writeable, then exploding the file using the # BEGIN and # END strings. Anything outside of those markers should be left intact after WP rewrites the file.
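Here’s a simplified sketch of that marker logic in PHP (the real implementation is WordPress’ insert_with_markers(), called from save_mod_rewrite_rules(); $new_rules stands in for whatever permalink rules WP wants to write, and error handling is omitted):

<?php
$new_rules = "RewriteEngine On\nRewriteRule . /index.php [L]"; // placeholder rules

$contents = file_get_contents('.htaccess');
// split on the markers; anything before or after them is preserved
list($before, $rest)  = explode('# BEGIN WordPress', $contents, 2);
list($inside, $after) = explode('# END WordPress', $rest, 2);

// only the text between the markers gets replaced
file_put_contents('.htaccess',
    $before . "# BEGIN WordPress\n" . $new_rules . "\n# END WordPress" . $after);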

In my case I had to add a new block with a second RewriteEngine On so that Apache wouldn’t break (although I don’t think this is strictly the correct way to write the file). Here’s what my new revised .htaccess file looks like:

<IfModule mod_rewrite.c>
RewriteEngine On
[... ]Funky custom rules go here[ ...]
</IfModule>
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

Perhaps the WordPress folk could add an additional comment into the .htaccess file that explains this better?

Well there you have it – how to stop WordPress overwriting your custom .htaccess file rules.