Categories
Keywords Search Engine Optimisation Search Engines SEM Statistics

Overture Keyword Tool Dead, but Wordtracker Saves The Day (and now Trellian jumps on board)

Keyword research is the most inexact element of Search Marketing. The lack of any one clear source of accurate keyword data is perhaps the biggest problem any search marketer will face.

Overture, the free keyword tool from Yahoo!, is either dead or dying

Most people will be familiar with Overture’s keyword tool, which has been returning free keyword estimates for many years. Now the old adage that you get what you pay for still holds true, and most professionals have long since moved to paid services.

But apparently the Overture tool isn’t too well cared for by its owners, as Loren Baker at Search Engine Journal finds out from John Slade, Global Product Management with Yahoo Search Marketing:

First, I’d like to clarify that Yahoo! Search Marketing’s public keyword research tool (formerly known as the Overture’s Keyword Selector Tool- KST) continues to exist today… the responsiveness of this free tool is diminished due to the sheer volume of hits it receives each day, therefore browsers may time out and error pages may appear…

The same Yahoo! representative is then quoted as saying:

… the public tool continues to be available but my advice to our advertisers is to use the protected keyword research tool.

which basically confirms something we all knew a long time ago – the Overture keyword tool pretty much sucks.

One man’s risk is another man’s opportunity

And lo and behold, hot on the heels of the rumours about the demise of Overture’s tool comes an announcement via Aaron Wall that Wordtracker has introduced a new free version of its tool that returns up to 100 keywords.

The Wordtracker tool can be found at http://freekeywords.wordtracker.com/.

Want a little bit more?

I use a number of keyword tools, both free and paid. To be honest there’s no fail-safe method to generate 100% accurate keyword lists, but by combining tools you can come up with pretty decent ones.
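For what it’s worth, here’s a rough sketch in Python (purely illustrative – the file names and column headings are made up) of the kind of merge I do when combining keyword exports from different tools:

import csv
from collections import defaultdict

# Hypothetical CSV exports from two keyword tools, each assumed to have
# "keyword" and "searches" columns - adjust to whatever your tools emit.
EXPORTS = ["wordtracker_export.csv", "nichebot_export.csv"]

def merged_keyword_list(files):
    totals = defaultdict(int)
    for path in files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                keyword = row["keyword"].strip().lower()
                totals[keyword] += int(row["searches"])
    # Sort by combined estimate, highest first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for keyword, estimate in merged_keyword_list(EXPORTS):
        print(f"{estimate:>8}  {keyword}")

Nothing clever – it just de-duplicates the keywords and adds up the estimates so the combined list is easier to eyeball.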

As I wrote previously, I’m using NicheBot (that’s an affiliate link) which has some great features for generating and processing keyword lists. Recently I’ve also added another professional tool to the armoury which I’m really liking a lot (I’ll post about that later).

If anyone knows of other tools that fly a little below the radar I’d love to hear about them.

[UPDATE – Now Trellian is offering a free keyword tool – http://www.keyworddiscovery.com/search.html]

Categories
Google Keywords Search Engines SEO

Say Goodbye to ‘Google Bombing’ & Hello to ‘Take Care with Your Anchor Text’

Google has announced a change to their algorithm that minimises the effect of ‘Google Bombing’. Results for bombed search phrases are now showing references to ‘Google Bombing’ rather than pointing at well known websites.

The most famous ‘Google Bomb’ was for ‘miserable failure’, which previously pointed at the White House bio page of George Bush. Searches on that query now produce results pointing at references to ‘Google Bombing’.

What was Google Bombing?

Google Bombing was the practice of extremely heavy and concerted linking campaigns using a particular anchor phrase. In the case of George Bush, links were created using the anchor text ‘miserable failure’:

<a href="http://www.whitehouse.gov/president/biography.html">miserable failure</a>

Previously Google apparently considered both the anchor text and volume of links as a proxy for authority for a particular search phrase. This no longer happens.

But could this change affect your rankings?

Although we cannot be sure whether these algorithm changes have a threshold before they kick in (I’m sure it’s not so simple), it is worth considering the implications for regular link building efforts. The change seems to filter results – I couldn’t find Mr. Bush’s bio ranked at all for the phrase ‘miserable failure’.

Now Google is known for employing the brightest minds on the planet, and it’s very likely that the changes will not affect any sites other than those that previously ranked well after ‘Google Bombing’ campaigns. That said, Google has been known to make the odd mistake here and there.

If your back link profile is heavily skewed toward one or two anchor text phrases you could see your rankings affected by these changes. I imagine sites that target unrelated or semi-unrelated search phrases would be more at risk.

Varying your anchor text has always been a prerequisite…

Professional link builders and SEOs should already know to create a varied anchor text profile for their clients. But in cases where a large proportion of a backlink profile targets an anchor phrase for which the site is not well trusted, you might fall foul of this tweak.
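For illustration only, here’s a quick Python sketch (the anchor texts are made up) of the sort of sanity check a link builder might run over a backlink report to see how concentrated an anchor text profile is:

from collections import Counter

# Hypothetical anchor texts pulled from a backlink report
anchors = [
    "miserable failure", "miserable failure", "miserable failure",
    "whitehouse.gov", "George Bush biography", "miserable failure",
]

counts = Counter(a.lower() for a in anchors)
total = sum(counts.values())

for phrase, n in counts.most_common():
    share = n / total * 100
    flag = "  <-- heavily skewed?" if share > 50 else ""
    print(f"{share:5.1f}%  {phrase}{flag}")

The 50% threshold is plucked out of the air – the point is simply to spot a profile dominated by one or two phrases before Google does.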

It might be timely to watch some of the highly optimised niches to see if any other sites are affected by this.

Categories
Marketing Search Engines Standards Technology Usability

Apple iPhone the Tipping-Point for Mobile Internet?

Rightyo, every man and his dog is reporting on the much-rumoured Apple iPhone.

Judging by the photos Apple will release an ultra-sleek device (no surprise). And of course, Apple are renowned for nailing fantastic technology interfaces. Keeping it simple with absolutely functional interfaces is the cornerstone of Apple’s design.

But could this herald the long-postponed ‘Mobile Internet Era’?

Read/Write Web is talking up the UI, which, without a stylus, should be interesting. (When I heard about pinching gestures I thought about this great video of a multi-touch interface.)

The out-of-the-box partnership with both Google and Yahoo! signals a serious intent to target the Internet user – the free push email from Yahoo! must surely be of concern to Blackberry? And the iPhone will also include some pretty comprehensive connectivity options to keep you on-line.

But it’s Apple’s ability to take a product and push it into the mainstream, both physically and mentally, that offers the greatest chance that the iPhone will be the tipping-point for mobile Internet. If Apple manages to achieve the same success with the iPhone as with the iPod, I think Mobile Internet will finally become a mainstream reality. (Notice the slide on Read/Write showing 10m units and 1% market share in 2008. Bear in mind the iPod has an 80%+ share…)

When mobile devices become the prevalent access points for the Internet (there are a lot more mobile phones than computers) we are going to see a sea-change in search (hello localisation), and websites will need to get clever about publishing content for mobile (hello XHTML and device-optimised content).

Now, I just wonder how Irish businesses/websites will be positioned for the Mobile Internet?

Not unrelated, but if you want to eliminate your mobile roaming charges try Roam4Free, an international SIM that lets you roam for free.

Categories
Google Search Engines SEM

Link Sellers, Duplicate Content & AdSense Guidelines – Google Pronouncements

In what looks like a charm offensive Google seems to be reaching out to webmasters.

Selling links can hurt your website’s health

Last Friday a Googler by the name of Stephanie, who is based in Dublin no less, made a posting on the Official Google Webmaster Central Blog concerning the official Google stance on paid links, and in particular link sales. (Strangely, the author’s name has since been changed – I recall there being a full name on the post initially, and it was French I believe?)

Perhaps of particular note:

We have more people working on Google’s link-weighting for quality control and to correct issues we find. So nowadays, undermining the PageRank algorithm is likely to result in the loss of the ability of link-selling sites to pass on reputation via links to other sites.

There have been rumours that Google has a team of covert link buyers who identify link-selling sites (I’m not sure this isn’t fairly obvious though?).

So if you do sell links you might be at risk of being blacklisted within the PageRank algorithm.

[As an aside, I can’t wait for WP2.1 and the auto-save feature – I wrote a post about this on Friday only to see it disappear in front of my eyes when FF made an uncharacteristic history-1 manoeuvre :(]

Adam Lasnik on Duplicate Content

An issue that seems to pop up again and again is duplicate content. Canonical URLs have been referenced a number of times by the likes of Matt Cutts, and now Adam Lasnik has written an official post on the Webmaster Blog:

We recognize that there are many nuances and a bit of confusion on the topic, so we’d like to help set the record straight.

Adam discusses what exactly duplicate content is and isn’t, and then offers some advice on how to avoid the issues that are usually associated with the problem.

The usual remedies are advised along with one or two nerve-soothers:

  1. use robots.txt to block access to duplicate content;
  2. use proper 301 redirects;
  3. ensure internal linking is consistent;
  4. use ccTLDs for country-specific content;
  5. advice for syndicated content;
  6. set the preferred domain in the Webmaster Console;
  7. keep boilerplate content to a minimum;
  8. avoid thin-content pages;
  9. CMS issues;
  10. scraper sites.

It’s a good read to get the official Google line on duplicate content.
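To make the 301/consistent-linking advice a little more concrete, here’s a minimal Python sketch – not from Adam’s post, and the host names, rules and session parameter are all invented – of normalising URLs to a single canonical form before using them in internal links or a redirect map:

from urllib.parse import urlsplit, urlunsplit

# Hypothetical canonical form: one host, no "index.html", no session
# parameters. The rules below are made up - adjust them to your own site.
CANONICAL_HOST = "www.example.ie"

def canonical_url(url):
    scheme, host, path, _query, _fragment = urlsplit(url)
    if host in ("example.ie", "www.example.ie"):
        host = CANONICAL_HOST
    if path.endswith("/index.html"):
        path = path[: -len("index.html")]
    # Dropping the query string removes session-ID style duplicate URLs
    return urlunsplit((scheme, host, path, "", ""))

print(canonical_url("http://example.ie/blog/index.html?sid=abc123"))
# -> http://www.example.ie/blog/

Every internal link (and every 301 target) then points at one version of each page rather than three or four near-duplicates.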

Using images in the vicinity of your AdSense blocks

This actually seems to be a policy shift by Google. The use of images close to ad blocks had been found to increase the CTR on publishing sites. Although publishers previously had to clearly separate ad and image blocks, it appears that Google no longer wants to see images near to ad blocks:

We ask that publishers not line up images and ads in a way that suggests a relationship between the images and the ads.

The posting also gives some visual examples of what’s now outside the guidelines, and one of the examples even includes a clear border between the images and the ad block.

The actual guideline seems somewhat fuzzy to me, and it is not clear just what is and isn’t viable within the TOS. The examples given and the wording of the post require some implicit assumptions – it appears that having four images aligned with a four-ad block is unacceptable, but what about three images?

More debate to come methinks…

Categories
Google Search Engines WebDev

Google now Selling Domains

Well, everyone has known for quite some time that Google has been using WHOIS info. Since Google became a registrar some time back, many theories have sprung up about its use of WHOIS data, and more recently about whether Google could see through private registrations.

Well today Google announced some new features of Google Apps.

You can now register your domain name as part of the service. The cost is $10 per year with free private registration. Regardless of your feelings about privacy and Google seeing your WHOIS info, that price is still cheaper than GoDaddy, which charges $4.95 per year for privacy on top of $8.95 for a .com registration – $13.90 all-in against Google’s $10.

Funnily enough, the service is actually provided through GoDaddy.

Categories
Blogs Browsers Google Marketing RSS Search Engine Optimisation Search Engines Standards Technology WebDev

Really Simple Guide to RSS

After missing Sinn Fein’s RSS feed for my eGovernment Study I thought it might be a good idea to take a look at RSS – what it is and how to use it.

What is RSS?

Really Simple Syndication is a format for publishing web pages and other content.

In essence RSS is very similar to the content you would find on any website, with a few differences. RSS does not include any styling information that would give the ‘page’ a custom design or layout. Imagine reading this page without the header up top, the sidebar on the right, or anything else that is superfluous to viewing this story – that’s roughly what a feed contains.

An RSS ‘feed’ can also contain more than one ‘page’ in a single file. That’s the real beauty of RSS – you can look at many stories or pages from a website without leaving the RSS ‘page’ or feed.

But perhaps the biggest difference between RSS and a regular web page is the ability to aggregate or combine multiple RSS ‘feeds’ (published RSS files are often referred to as ‘feeds’) in your ‘reader’. A ‘reader’ is a program used to read and display the ‘feeds’ or RSS pages. Here’s what mine looks like:

[Image: Google Reader showing my aggregated feeds]

I read the feeds from over 100 websites just about every day. Now if I were to visit all those sites it might take me 3 or 4 hours, but my reader shows me the feeds from all those sites on one page. I can view the website name, the title and a snippet of each item. When I click on a story title I can read the content of that ‘page’:

[Image: Google Reader with a story open]

Using my reader to aggregate these feeds I can keep track of many, many blogs and websites.
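If you’re curious what a reader actually does behind the scenes, here’s a bare-bones Python sketch (the feed URLs are made up, and it assumes plain RSS 2.0 feeds without namespaces) that fetches a couple of feeds and prints the latest items from each:

import urllib.request
import xml.etree.ElementTree as ET

# Two hypothetical feed URLs - swap in the feeds you actually read
FEEDS = [
    "http://www.example.ie/feed/",
    "http://blog.example.com/rss.xml",
]

for url in FEEDS:
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    channel = tree.find("channel")
    site = channel.findtext("title", default=url)
    for item in channel.findall("item")[:5]:
        title = item.findtext("title", default="(no title)")
        link = item.findtext("link", default="")
        print(f"{site}: {title} -> {link}")

A real reader does far more (Atom support, caching, read/unread state), but the aggregation idea is exactly this: pull many feeds and show them in one place.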

RSS Readers

I use Google Reader. It’s free and rather than sit on my computer it sits on the Internet so I can access my feeds from any computer with Internet access.

The main web browsers and email clients now incorporate RSS features also. Firefox, Internet Explorer, Safari and Opera allow you to track and read feeds right in your browser.

So how can you tell if a site publishes a feed?

When you visit a website you might see the following icon appear in your address bar:

[Image: feed icon shown in the browser address bar]

That icon has been adopted by all the major browsers for the purpose of depicting RSS feeds. It is available for download at Feed Icons. Older feed icons might look like this:

[Image: older RSS, XML and feed icons]

You can see that orange is the predominant colour used to depict RSS.

Making your feed icon appear in the address bar

Since most of the major browsers now support RSS it is a good idea to notify the browser that you have a feed so that the RSS icon appears in the address bar. To make your feed visible to user agents you should include link elements similar to the following in the head section of your page:

<link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="http://www.site.tld/path/to/rss2.0/feed/" />
<link rel="alternate" type="text/xml" title="RSS .92" href="http://www.site.tld/path/to/rss0.92/feed/" />
<link rel="alternate" type="application/atom+xml" title="Atom 0.3" href="http://www.site.tld/path/to/atom0.3/feed/" />

This auto discovery technique is also used by most readers and blog aggregators so it is a good idea to include it.
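Just to illustrate what a reader or aggregator does with those link elements, here’s a rough Python sketch (the page URL is made up) that fetches a page and lists any feeds it advertises:

import urllib.request
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml", "text/xml"}

class FeedLinkFinder(HTMLParser):
    # Collects link rel="alternate" elements that advertise a feed
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = {name: (value or "") for name, value in attrs}
        if (tag == "link"
                and a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in FEED_TYPES):
            self.feeds.append((a.get("title", ""), a.get("href", "")))

# Hypothetical page to check - point this at any site you like
with urllib.request.urlopen("http://www.example.ie/") as response:
    page = response.read().decode("utf-8", errors="replace")

finder = FeedLinkFinder()
finder.feed(page)
for title, href in finder.feeds:
    print(title, href)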

RSS features and uses

RSS can be used for many purposes. E-commerce stores can publish their products via RSS. Employment sites often offer customised search feeds so users can keep tabs on particular job-type vacancies. Many large sites offer multiple feeds so you can track only the information of interest to you.

Search engines and RSS

Search engines love RSS. They just devour feeds because they are very machine-readable. Feeds also contain something search engines love: TEXT. And lots of it.

Very often my feed will rank well for specific search phrases and my site might have 2 or 3 pages ranking on the first SERP (Search Engine Result Page) – the post, my homepage and my feed. When multiple results from my site appear on a results page the probability of receiving a referral increases dramatically.

So does RSS matter?

RSS is here. It has not reached the tipping-point just yet, but the integration of RSS into the major browsers during 2006 means that RSS should become more and more mainstream over time.

And just as I finish this what appears in my reader?

the latest research done by Japan.Internet.com and goo Research shows that RSS’s bringing more accesses to the sites.

Q1: Do you visit more sites due to RSS feeds?
– More, 34.6%
– Hasn’t changed, 59.5%
– Less, 5.8%

Q2: Do you visit sites you read on RSS feeds?
– Always, 23.5%
– Sometimes, 58.1%

From Multilingual-Search.

Perfect :mrgreen:

Categories
Search Engines Security Statistics

How Safe are Search Results?

Via SearchMob

Fascinating report from McAfee SiteAdvisor on the possible dangers of clicking on search results served by the top search engines.

Possibly the oddest finding for me was this:

8% of sponsored results are rated red or yellow – almost three times the percentage of red and yellow sites found in organic results. Notably, scam sites are found at a much greater frequency in sponsored results.

I would have thought that the major Search Engines would be far more vigilant about their sponsored listings?

Categories
Google Search Engines WebDev

Google Search for US Patents

Google has announced the launch of Google Patent Search.

You can now search the US patent corpus using the interface used for Google Book Search (worth a look itself just to see the progress of web-based UIs).

I’m sure Bill Slawski over at SEO by the Sea will be interested in this.

Categories
Search Engines SEO Statistics

Irish Property Sites Keeping an Eye on Each Other?

You may or may not know of a service called Alexa. Alexa is Amazon.com’s search engine and Internet statistics service.

Alexa collects statistics on general Internet usage from the browsing habits of millions of Alexa toolbar users. This data is then presented on the Alexa website.

Can you trust Alexa?

There are one or two problems though – Alexa is too easy to game, and the toolbar users are often extremely biased both geographically and technically (heavily US-based tech users).

But, without prejudice to these issues, Alexa does have some nice features. For instance you can compare traffic data from a number of sites over long periods of time.

Strange what you run into

Recently while searching for some info on a particular Irish Property site I noticed a SERP entry for www.alexaholic.com:

[Image: Alexaholic.com traffic comparison]

This page had been set up by a visitor to alexaholic.com who entered the following five Irish property websites:

  1. www.myhome.ie (blue line)
  2. www.daft.ie (red line)
  3. www.funda.ie (green line)
  4. www.sellityourself.ie (brown line)
  5. www.privateseller.ie (cyan line)

So what? I hear you ask

Of course just about anybody could have gone over to alexaholic.com and set up this comparison (the site is public and free). But what really caught my eye was the trends of some of the sites.

Now before I go on let me explain that I studied economics at TCD for 4 years, and had a healthy (or perhaps unhealthy) interest in the stock markets. In particular I had an interest in charting and technical analysis.

Trends

Notwithstanding the Alexa bias, the following trends are quite interesting:

  • November and December are bad months for the property websites;
  • Until Q4 2005 both myhome.ie and daft.ie were joint leaders for website reach;
  • December 2005 saw myhome.ie visits plunge, while Daft.ie failed to set a new low for the year;
  • Since 2005 the paths have diverged significantly for the two large property sites;
  • Daft.ie’s Internet reach appears to have become more volatile but a clear uptrend is in place (higher highs, higher lows);
  • Myhome.ie conversely has entered a very clearly defined downtrend – its reach is falling.

The €50m website

Myhome.ie was bought by the Irish Times last July for a reported €50m. Although no accuracy can be attributed to the Alexa data, over large sample sizes the trends reported may be representative of the actual figures.

If so the Irish Times may have quite a job on their hands catching up with Daft.ie.

Funda

Toward the recent end of the chart you might notice a blip in the green line. This is the data for Funda.ie, which launched with the Dublin Coastal Development back in September.

The chart shows the uphill struggle Funda will have to compete with the big boys of Irish on-line property.

A pinch of salt

Of course you can’t trust Alexa data. But the charts still give some food for thought and give some indication of the competitive environment facing websites in the Irish property niche.

Categories
Blogs Google Search Engines

What’s Up with Google and Thinkhouse PR?

Is someone in Google de-indexing pages which are less than flattering in their reference to Thinkhouse PR?

Going back a couple of months now, but you may recall a small fuss over Damien Mulley’s Thinkhouse PR post disappearing from Google’s SERPs.

Although the page was re-indexed, Google were never forthcoming about what caused the issue to occur:

Hey, I like a good conspiracy as much as the next guy (big X-Files fan… well, of the early years at least), but I must respectfully note that there’s no nefarious banning that’s gone on here.

While it may be seen as unfortunate timing, some pages of mulley.net are currently not shown in our search results due purely to algorithmic factors… nothing manual or otherwise intentional about it.

It’s quite possible that this may change as we continue to update our algorithms regularly.

Regards,
Adam, on behalf of the Search Quality Team at Google.

P.S. — Ironically, with the online attention you’ve received about this issue, your pages may automatically end getting crawled more frequently or deeply, resulting in more of your pages being shown in our search results… so I humbly recommend a bit of patience.
Source: Adam Lasnik comment on www.mulley.net

Tinfoil hats ready

Now call me paranoid, and perhaps on that occasion it was just coincidence, but I find it very curious that my post about moviestar.ie was crawled, indexed and ranked #3 for a Google search on ‘moviestar.ie’, but very shortly after was completely de-indexed from Google (site:, cache:).

In case you’re wondering if I’ve gone completely batty, moviestar.ie are a client of Thinkhouse PR.

Just too many coincidences?

My site is actively crawled and indexed by Google. Every page is indexed. Except one.

I am purely white-hat. My site complies with all Google guidelines. There are no bad links either into or out of the absent page in question.

Some questions for Google

Perhaps I’m just a crackpot… but I’m not buying this as a coincidence.

I would really like to know the following:

  1. Do any Dublin-based Googlers have the ability to remove pages from the index?
  2. Under what circumstances would an indexed page be de-indexed?
  3. Does Google have any relationship with Thinkhouse PR?

I will be posting this over at the Google Webmasters Group in the hope that Adam Lasnik might answer some of my questions.