Cross Browser Testing Just Got Easier – Adobe Browserlab

Adobe have released Browserlab, a great tool that allows you to test web pages across a number of browser/platform combinations. When you enter a URL, Browserlab grabs the page in the required browser/platform combo and shows you how the page looks inside your current browser.

You can check out the tool here:

Browsers JavaScript Search Engine Optimisation

Using Text Replacement with Flash – Dangerous?

If you use Flash replacement techniques could Google misinterpret your pages and apply a penalty? Flash replacement techniques are quite a hot topic of late after reported comments attributed to Google about the implications of certain Flash replacement techniques.

What is Flash Replacement?

Flash is undoubtedly a far more aesthetically pleasing medium than plain text rendered in the browser. Although recent browsers from both Apple and Microsoft have introduced anti-aliased text rendering, most Internet users still see non-anti-aliased text. And this is where Flash can appreciably improve the user experience.

Flash replacement involves substituting plain text output with Flash-based textual content which uses anti-aliased fonts. (Anti-aliasing, for anyone unfamiliar with the phrase, basically means removing jagged edges from text.)

There are a number of techniques available for Flash replacement, sIFR (Scalable Inman Flash Replacement) and SWFObject being perhaps the best known.


sIFR uses JavaScript to detect and read the text content of any particular DOM element (any piece of text on a web page, for example) and send that text to a small Flash module, which returns the content in Flash format.

The process is seamless and the user gets to view your headings and selected text in a nice anti-aliased Flash font.
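The mechanics of that first step can be sketched in plain JavaScript. To be clear, this is an illustrative sketch rather than sIFR's actual source: the function names and the FlashVars parameter names (`txt`, `textcolor`) are my own assumptions.

```javascript
// Illustrative sketch of the extraction step sIFR performs -- not
// sIFR's actual code. Function and parameter names are invented.

// Recursively collect the text content of a DOM node, so the Flash
// movie can re-render exactly the words the page already contains.
function extractText(node) {
  if (node.nodeType === 3) {          // text node
    return node.nodeValue;
  }
  var text = "";
  for (var i = 0; i < (node.childNodes || []).length; i++) {
    text += extractText(node.childNodes[i]);
  }
  return text;
}

// Build the FlashVars string handed to the small Flash movie that
// renders the text with an embedded, anti-aliased font.
function buildFlashVars(text, color) {
  return "txt=" + encodeURIComponent(text) +
         "&textcolor=" + encodeURIComponent(color);
}
```

In the browser this would be driven by a traversal that swaps the original element for an object/embed tag pointing at the Flash renderer, with the FlashVars string carrying the extracted text.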


On the other hand, SWFObject simply replaces any text node with a pre-compiled Flash movie. The text contained within the text node is superfluous and has no direct relationship with the Flash rendered. It is this technique that I came across recently when checking a site for an enquirer.
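For anyone who hasn't used it, the classic SWFObject 1.x (and FlashObject) call looked roughly like this. The file name, dimensions and element id below are placeholders, not taken from the site in question:

```html
<div id="content">
  <p>Plain HTML fallback: this is what visitors without Flash
     (and search engine spiders) receive.</p>
</div>
<script type="text/javascript">
  // SWFObject(swf, id, width, height, flashVersion, bgcolor)
  var so = new SWFObject("header.swf", "header", "600", "120", "8", "#ffffff");
  so.write("content"); // replaces the div's contents when Flash is detected
</script>
```

The text inside the div is whatever the author chooses to put there; nothing ties it to the Flash content, which is exactly why the technique can drift towards cloaking.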

The connection with SEO Ethics

Yesterday I wrote about SEO Ethics and how I feel that companies who promote competing websites are far more likely to cross into what in my opinion is unethical territory.

I came across the specific situation discussed in that post from an enquiry made on this site. I was asked to perform a quick analysis of a site (which will remain nameless). The site in question made heavy use of Flash. The site also used FlashObject (a prior incarnation of SWFObject). Here’s what I found on the site:

  1. The website was built in both plain HTML and Flash, and used FlashObject to replace large chunks (virtually all) of the text content with Flash movies;
  2. With JavaScript disabled the homepage rendered correctly in plain HTML and all content appeared to be both accessible and usable;
  3. However, on inner pages it quickly became apparent that large text nodes were rendered with visibility:hidden, and these pages were both unusable and displayed quite different content to users with and without Flash.

Here’s the exact CSS class applied to the main content:

.content {
	visibility: hidden;
}

So what’s wrong with that?

When I first came across this implementation I immediately emailed the enquirer back and asked that they contact the developer (a large, well-known Irish web company). After a few days I contacted the enquirer again to find out how the developer had responded. I’m not going to quote the response verbatim, but you’ll have to trust me that the following accurately reflects it:

Following up on your concerns that your website has hidden text, please be assured that your website is fully accessible to the Search Engines.

If you turn off JavaScript in your browser, the secondary pages of your website are returned.

The search engines Spiders view the html code of your website.

All areas of your site that use Flash do so with “Flash Replacement Text”, which is 100% search engine friendly.

I would also like to show you how you can see all of the pages that Google has indexed. Type into the Google search bar and you will see that every page of your website is indexed.

I hope that this helps to reassure you that your website is search engine friendly.

I want to deal with some of the items mentioned above to clarify exactly what the Search Engines are seeing, and what the official views are on certain implementations being used.

‘your website is fully accessible to the Search Engines’

This is indeed true. There is no bar on Search Engines accessing and crawling the pages in question. It is also true that the search engines (and in this particular case I’m referring to Google, which represents c.90% of Irish search traffic) have been known to check your CSS files to look for anything untoward.

visibility:hidden is a very strong signal of spam. That property is used to hide content within the browser view. Here are the Google guidelines on hidden text:

Hiding text or links in your content can cause your site to be perceived as untrustworthy since it presents information to search engines differently than to visitors. Text (such as excessive keywords) can be hidden in several ways, including:

  • Using white text on a white background
  • Including text behind an image
  • Using CSS to hide text
  • Setting the font size to 0

[…]
If your site is perceived to contain hidden text and links that are deceptive in intent, your site may be removed from the Google index, and will not appear in search results pages. When evaluating your site to see if it includes hidden text or links, look for anything that’s not easily viewable by visitors of your site. Are any text or links there solely for search engines rather than visitors?

In my opinion having text hidden in the version served to Google constitutes hidden text as defined in the guidelines and opens the offending site to the possibility of penalty or ban.

‘”Flash Replacement Text” put in place, which is 100% search engine friendly’

This is where the distinctions blur and opinions diverge. There is a lot of discussion on this topic over on Google Groups at the moment (see here and here).

Berghausen (a Google employee) has stated:

The goal of our guidelines against hidden text and cloaking are to ensure that a user gets the same information as the Googlebot. However, our definition of webspam is dependent on the webmaster’s intent. For example, common sense tells us that not all hidden text means webspam–e.g. hidden DIV tags for drop-down menus are probably not webspam, whereas hidden DIVs stuffed full of unrelated keywords are more likely to indicate webspam.

I bring this up because, although your method is hiding text behind a very pretty Flash animation, you are still presenting the same content to both the user and the search engine, and offering it through different media.

On the face of it it would appear that Flash replacement shouldn’t be an issue. On the face of it…

Google’s Dan Crow (head of Crawl) recently attended a SEMNE group event on the subject of ‘Getting Into Google’. Apparently he was very frank on a number of issues, one of which was Flash replacement. SherwoodSEO attended the event and reported the following:

  • sIFR (scalable Inman Flash Replacement) – sIFR is a JavaScript that allows web designer to customize the headlines displayed on their pages. Headline text rendered in HTML can look blocky and unrefined – sIFR paints-over that HTML with a Flash-based equivalent. This gives the headline a smooth, refined look, while still preserving the indexable text that Google needs to process the page. Dan said that sIFR was OK, as long as it was used in moderation. He said that extensive use of sIFR could contribute negative points to your website’s overall score. Yes, that’s a bit vague, but “vague” is not as bad as…
  • SWFObject – SWFObject is a more elaborate JavaScript designed to swap-out an entire section of Flash with its HTML equivalent. Think of the Flash section of a webpage as being painted on a window shade. SWFObject decides if you have Flash installed (i.e. you are a web surfer) or not (i.e. you are a search engine.) If you don’t have Flash, the window shade rolls-up, and an HTML text equivalent is displayed on-screen. Dan pulled no punches on SWFObject: he characterized it as “dangerous.” He said that Google takes great pains to avoid penalizing sites that use technical tricks for legitimate reasons, but this was one trick that he could not guarantee as being immune from being penalized.

Now when the head of Google Crawl says that a particular technique is “dangerous” and cannot be guaranteed “as being immune from being penalized”, I sit up and take note. Dan Crow is in charge of Google’s entire fleet of Googlebots. In my opinion his comments carry considerable weight.

If using SWFObject has been classified as “dangerous”, what might happen when you use this implementation AND apply visibility:hidden to the text replaced by the Flash? Well, in my opinion this implementation won’t improve your standing with Google.

‘Google has indexed every page of your website’

Google bans sites every day. I regularly contribute over on Google’s Webmaster Help Group and see cases of banned sites every other day. Often threads are started by webmasters whose sites have performed well for months or years and then suddenly, without any change to the site, are dropped from the index.

My point is that indexation does not guarantee that your page hasn’t broken the guidelines. A penalty can be applied at any time. And when it is, it hurts.

My overall thoughts on this?

I spent quite some time both analysing and researching the issues at hand (time that could and should have been applied elsewhere). Given that the developer of the site also happens to be the supplier of SEM services referred to in my SEO Ethics post, I can’t say with any certainty that their responses to this situation were genuine. If they were, then they display ignorance or incompetence at best. If not, then I think their ethics must be called into question.

Browsers Google Search Engine Optimisation

When User Agent Sniffing Goes Horribly Wrong

Ok, this serves as a good example of how not to do UA sniffing.

User Agent sniffing is the process of discovering the User Agent (the browser, in most cases) of the client visiting your page. Historically, designers and developers used UA sniffing to determine which hacks they would implement to ensure consistency across browsers and platforms.
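At its simplest, UA sniffing is nothing more than substring checks against the User Agent string. The sketch below is deliberately minimal and illustrative; the branches shown are my own examples, not an exhaustive or robust detection routine:

```javascript
// Minimal UA sniffing of the kind described above. Real-world sniffers
// were far more convoluted -- and far more fragile -- than this.
function sniffBrowser(ua) {
  if (ua.indexOf("Googlebot") !== -1) return "googlebot"; // Google's spider
  if (ua.indexOf("MSIE") !== -1)      return "ie";
  if (ua.indexOf("Firefox") !== -1)   return "firefox";
  if (ua.indexOf("Lynx") !== -1)      return "lynx";      // text browser
  return "unknown";
}

// In the browser you would feed it the live UA string:
// var browser = sniffBrowser(navigator.userAgent);
```

Serving different content depending on the result of a check like this is precisely what turns harmless browser detection into cloaking.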

UA sniffing is also used for many so-called ‘black-hat’ SEO techniques. You sniff for Google’s UA, which is unique, or the IP addresses of Google’s spiders and serve up a ‘special’ version of your page for the GoogleBot. In search engine terms this is known as ‘cloaking’, and is possibly the worst offence you can commit. (If you get caught :mrgreen:)

When User Agent Sniffing Goes Wrong

In Blind People Can’t Eat Chocolate I alluded to some issues with the new Lily O’Briens chocolate website.

Well, screen readers weren’t the only user agents that had difficulties with the new website:

Lily O'Briens Cloaking

That’s what the server was returning to the GoogleBot UA. Absolutely one of the worst cases of arsing up UA sniffing I have ever seen.

Thankfully they seem to have fixed the issue. (Hello Magico :grin:)

Can A Short-lived Mistake Cost Dearly?

I’m not sure how long the site was returning that response to GoogleBot. It was a site-wide result, so if it does get picked up the damage will likely be site-wide. Michele already alluded to the changed page URLs, but if they were very unlucky, and GoogleBot tried to crawl the site while the server was returning that crappy message, they may find all of their pages quickly entering supplemental hell.

I hope they weren’t this unlucky – getting out of the supps can be a nightmare of horrible proportions.

Oh, and a hat-tip to eagle-eyed David Doran 😀

Browsers CSS JavaScript

Blind People Can’t Eat Chocolate

Well not Lily O’Briens chocolate anyway:

Lily O'Briens Lynx View

That’s the view you get if you visit their site in Lynx.

But as if that’s not bad enough, there is an even worse issue with this site which will seriously affect Lily O’Briens at the business level.

Being at SES London, and meeting guys who are literally making millions from SEO (some of these guys are billing €1k per hour), is teaching me that I shouldn’t be just giving away my knowledge. So in this case I’m going to keep my powder dry for the moment.

Blogs Browsers Google Marketing RSS Search Engine Optimisation Search Engines Standards Technology WebDev

Really Simple Guide to RSS

After missing Sinn Fein’s RSS feed in my eGovernment Study, I thought it might be a good idea to take a look at RSS – what it is and how to use it.

What is RSS?

Really Simple Syndication is a format for publishing web pages and other content.

In essence, RSS is very similar to the content you would find on any website, with a few differences. RSS does not include any styling information that would give the ‘page’ a custom design or layout. Imagine reading this page without the header up top, the sidebar on the right, or anything else that is superfluous to viewing this story.

An RSS ‘feed’ can also contain more than one ‘page’ in a single file. That’s the real beauty of RSS – you can look at many stories or pages from a website without leaving the RSS ‘page’ or feed.
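By way of illustration, a minimal RSS 2.0 feed carrying two such ‘pages’ looks like this (the titles and URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://www.example.com/</link>
    <description>Placeholder feed; titles and URLs are illustrative.</description>
    <item>
      <title>First story</title>
      <link>http://www.example.com/first-story</link>
      <description>A snippet of the first story.</description>
    </item>
    <item>
      <title>Second story</title>
      <link>http://www.example.com/second-story</link>
      <description>A snippet of the second story.</description>
    </item>
  </channel>
</rss>
```

Each item element is one ‘page’; a reader simply lists the items from every feed you subscribe to.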

But perhaps the biggest difference between RSS and a regular web page is the ability to aggregate or combine multiple RSS ‘feeds’ (published RSS files are often referred to as ‘feeds’) in your ‘reader’. A ‘reader’ is a program used to read and display the ‘feeds’ or RSS pages. Here’s what mine looks like:

Really Simple Guide to RSS - Google Reader

I read the feeds from over 100 websites most days. If I were to visit all those sites it might take me 3 or 4 hours, but my reader shows me the feeds from all those sites on one page. I can view the website name, the title and a snippet of each item. When I click on a story title I can read the content of that ‘page’:

Google Reader open story

Using my reader to aggregate these feeds, I can keep track of many, many blogs and websites.

RSS Readers

I use Google Reader. It’s free and, rather than sitting on my computer, it sits on the Internet, so I can access my feeds from any computer with Internet access.

The main web browsers and email clients now incorporate RSS features also. Firefox, Internet Explorer, Safari and Opera allow you to track and read feeds right in your browser.

So how can you tell if a site publishes a feed?

When you visit a website you might see the following icon appear in your address bar:

RSS auto discovery through META

That icon has been adopted by all the major browsers for the purpose of depicting RSS feeds. It is available for download at Feed Icons. Older feed icons might look like this:

RSS icon XML icon Feed icon

You can see that orange is the predominant colour used to depict RSS.

Making your feed icon appear in the address bar

Since most of the major browsers now support RSS it is a good idea to notify the browser that you have a feed so that the RSS icon appears in the address bar. To make your feed visible to agents you should include link markup similar to the following in the head section of your page:

<link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="" />
<link rel="alternate" type="text/xml" title="RSS .92" href="" />
<link rel="alternate" type="application/atom+xml" title="Atom 0.3" href="" />

This auto discovery technique is also used by most readers and blog aggregators so it is a good idea to include it.

RSS features and uses

RSS can be used for many purposes. E-commerce stores can publish their products via RSS. Employment sites often offer customised search feeds so users can keep tabs on particular job-type vacancies. Many large sites offer multiple feeds so you can track only the information of interest to you.

Search engines and RSS

Search engines love RSS. They devour feeds because feeds are very machine-readable. Feeds also contain something search engines love: TEXT. And lots of it.

Very often my feed will rank well for specific search phrases and my site might have 2 or 3 pages ranking on the first SERP (Search Engine Result Page) – the post, my homepage and my feed. When multiple results from my site appear on a results page, the probability of receiving a referral increases dramatically.

So does RSS matter?

RSS is here. It has not reached the tipping-point just yet, but the integration of RSS into the major browsers during 2006 means that RSS should become more and more mainstream over time.

And just as I finish this what appears in my reader?

the latest research done by and goo Research shows that RSS’s bringing more accesses to the sites.

Q1: Do you visit more sites due to RSS feeds?
– More, 34.6%
– Hasn’t changed, 59.5%
– Less, 5.8%

Q2: Do you visit sites you read on RSS feeds?
– Always, 23.5%
– Sometimes, 58.1%

From Multilingual-Search.

Perfect :mrgreen:

Browsers CSS JavaScript Standards Usability WebDev

eGovernment Accessibility Analysis

  1. Summary
  2. Download Report (.pdf)
  3. Introduction
  4. eGovernment
  5. National Disability Authority
  6. Accessibility
  7. New Internet Technologies
  8. Detailed Results
  9. Is the eGovernment interface accessible?
  10. Is it all Bad News?
  11. Lynx Browser Results
  12. Notes
  13. Errors, Omissions & Corrections


The websites of a number of Government Departments, Agencies and Political Parties were tested for accessibility and coding standards. The sites were also checked for contemporary web technologies such as RSS.

Results Overview:

Government Department websites tested: 16
Valid CSS, (X)HTML & passing WCAG 1.0 Level A: 4 (25%)
Sites passing WCAG 1.0 Level A: 12 (75%)
Sites utilising RSS: 4 (25%)

Other Public websites tested: 18
Valid CSS, (X)HTML and passing WCAG 1.0 Level A: 0 (0%)
Sites passing WCAG 1.0 Level A: 12 (67%)
Sites utilising RSS: 2 (11%)

Political Party websites tested: 7
Valid CSS, (X)HTML and passing WCAG 1.0 Level A: 0 (0%)
Sites passing WCAG 1.0 Level A: 3 (43%)
Sites utilising RSS: 3 (43%)


There is one entity that impacts daily on each of our lives. That entity is the Government.

The Irish government is the body tasked with the administration of the land of Ireland. As such the government is responsible for making the law, enforcing the law, and maintaining the welfare of the citizens. It is no surprise that the interface of citizen and government is one of the most important elements of any political system.

Technology is the new interface

The first Information Society Action Plan was published in January 1999, and by November 2001 Ireland had become the top performer in an EU benchmarking report on public service on-line delivery.

In March 2002 the Irish Government published “New Connections – A strategy to realise the potential of the Information Society”. The document set forth an action plan identifying key infrastructures that required development, one of which was eGovernment.


eGovernment refers to the use of information and communication technology (ICT) as an interface between the citizens and government of a nation. Most often the term refers to the use of the Internet as a communication platform to allow the exchange of information and the execution of processes that had previously been undertaken via direct human interaction.

Introduction of eGovernment is an EU-level policy, and part of a broader EU strategy to make Europe the most dynamic and efficient economic bloc in the world. ICT is seen as the key facilitator of this strategy:

The successes and potential of eGovernment are already clearly visible with several EU countries ranking amongst the world leaders. Electronic invoicing in Denmark saves taxpayers €150 million and businesses €50 million a year. If introduced all over the EU, annual savings could add up to over €50 billion. Disabled people in Belgium can now obtain benefits over the Internet in seconds, whereas previously this took 3 or 4 weeks. Such time savings and convenience can become widespread and benefit all citizens in Europe in many public services. (Source: COM(2006) 173 final)

ICT is also seen as an enabler and facilitator of inclusive strategies as set out by the EU.

The 2002 document makes a number of references to the availability and accessibility of government websites:

  • 3.2.1 Website standards – Guidelines and standards for all public sector websites were produced in November 1999, building on best practice in relation to design, search facilities and accessibility guidelines.
  • 7.2.7 Accessibility – Under the eEurope Action Plan, all public sector websites are required to be WAI (level 2) compliant by end-2001.

National Disability Authority

The National Disability Authority is a statutory agency tasked with policy development, research and advice on standards designed to safeguard the rights of people with disabilities.

Is the eGovernment interface accessible?

The purpose of the study is to measure the accessibility of the primary government agency websites. The websites of the main political parties were also tested as those organisations are inherently connected to the administration of a democracy through their stated goals and policies.

The following tests were conducted to ascertain a measure of web standards and accessibility:

  1. W3C CSS validation service (here);
  2. Visual inspection for W3C badges;
  3. W3C Markup Validation Service v0.7.3 (here);
  4. HiSoftware Cynthia Says Section 508 Validation service (here);
  5. HiSoftware Cynthia Says WCAG 1.0 Priority 1 Validation service (here);
  6. HiSoftware Cynthia Says WCAG 1.0 Priorities 1&2 Validation service (here);
  7. HiSoftware Cynthia Says WCAG 1.0 Priorities 1&2&3 Validation service (here);
  8. Total Validator Professional desktop HTML & Accessibility validation tool (available here);
  9. WAVE WCAG 1.0 and Section 508 visual site overlay tool (here);
  10. Usability analysis of page in text browser (Lynx);
  11. Manual inspection of the mark-up to identify ‘cut-and-paste’ coding;
  12. Visual inspection for RSS feed, search for auto-discovery of RSS feed (Firefox);
  13. Visual inspection for blog;
  14. Visual inspection for real-time chat function.

In this study the 3 automated accessibility validators were used and in some cases supplemented by manual evaluation in the Lynx text browser. Tests were limited to the homepage of each site (in some cases an inner page was tested – e.g. where splash pages were used and the home page was therefore an inner page). All tests were conducted during the period 20-31 November 2006.

While these tests cannot be guaranteed to properly ascertain the accessibility of any webpage, they do serve to highlight a number of flaws that would ordinarily render a page inaccessible via screen-reading technology.

Why search for RSS, blogs, real-time chat?

The Internet is evolving. Buzzwords such as web2.0 are common place. In my view what we are seeing is not a change but a natural progression. Today’s Internet is about interaction, multiple-way dialogue, and innovative communication channels.

This study therefore includes tests for interactive techniques and alternative distribution channels.


Homepages were checked for RSS (Really Simple Syndication) feeds. RSS is fast becoming the de-facto transport for on-line information syndication (note the recent integration of RSS into the latest browsers from both Microsoft and Mozilla). In cases where a feed was not apparent on a homepage the Press section (or similar) was also checked.

It would seem both appropriate and desirable that any entity which relies on news agencies to broadcast their message would utilise RSS.


While not appropriate for every context, blogs have been found to add transparency and openness within a political setting. Blogs also allow for meaningful dialogue between writer and audience.

Real-Time Chat

Used by the software industry for many years, real-time chat facilities allow Internet users to ‘chat’ with a support agent through a real-time messaging system.

Detailed Results

eGovernment Accessibility Study
[NOTE: Please click on the above image for a larger resolution and an alternative accessible version.]

Is the eGovernment interface accessible?

The study tested a total of 41 websites: 27 sites passed the automated WCAG 1.0 Priority 1 (A) validation tests.

Of the Government Department websites tested 12 from a total of 16 were compliant with WCAG 1.0 Priority 1 (A).

The lack of RSS feeds on 12 department websites was a particularly odd result given the relationship of Government with the public and press, and the Government’s need to shape public perception through the news channels.

The websites of the main political parties were found to be lacking in terms of contemporary Internet technologies: Only 3 of the 7 party websites included an RSS feed and none offered multiple feeds targeting different content and audiences.

4 of the 7 party websites tested failed WCAG 1.0 Priority 1 (A), and none validated for valid CSS/HTML coding standards.

Is it all Bad News?

A positive feature of this survey was the number of Government websites that aspired to a higher standard of validation than the basic WAI WCAG 1.0 Priority 1 (A).

At least 4 sites displayed WCAG Priority 2 (AA) badges on their homepages. Unfortunately only 1 actually attained that level of Accessibility.

At least 2 sites displayed or laid claim to WCAG Priority 3 (AAA), the highest level of accessibility; however, none validated to this standard.

Some websites tested stated a clear aspiration to achieve high accessibility and informed visitors of the ongoing effort toward attaining that goal.

Validation is a binary test – a site either validates or it does not. In some cases failure can be remedied with minimal effort, while in others achieving compliance with both WAI WCAG 1.0 and W3C coding standards will require a substantial undertaking.

Creating a website that complies with WCAG is perhaps the easier phase of providing an accessible website. Maintaining WCAG compliance is by far the most difficult area of website accessibility, even more so given the dynamic nature of many of the sites tested.

Web standards, such as those developed by W3C and WAI, are the foundation of the ‘Inclusive Web’. Websites which comply with these standards will ensure that the broadest spectrum of visitors can access their information and benefit from the full potential the Internet has to offer.

Lynx Browser Results

In cases where accessibility anomalies were flagged by automated evaluation tools the site in question was manually evaluated in the Lynx text-browser.

The search facility on a number of Government sites was found to cause practical accessibility issues:

1. Department of the Taoiseach:

Department of the Taoiseach homepage view in Lynx browser.

Here is the mark-up for the search feature:

<form id="basicSearch" action="search.asp" method="get">
<div class="searchTop"><label for="searchWord" accesskey="4" /></div>
<div class="searchMiddle"><input class="searchFormInput" type="text" name="searchWord" id="searchWord" size="16" value="Enter keyword" /></div>
<div class="searchBottom"><input type="image" value="submit" name="search_go" id="search_go" src="/images/search/button_search.gif" alt="Search" /></div>

This is Andy Harold’s opinion on the above code:

This is an attempt to resolve the need to have a label tag and to put some default text in the text field. But appears to be done purely to satisfy accessibility checkers than real life requirements, and may even upset some screen readers. I’d say this is poor practice. The label should have some text within it and there shouldn’t be a ‘value’ attribute in the text field.

Putting default text in comes from 10.4 (Priority 3): Until user agents handle empty controls correctly, include default, place-holding characters in edit boxes and text areas. But this became outdated almost as soon as it was written, because all the user agents used by people with sight difficulties can handle empty controls. So the use of label tags meets all needs.
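Applying that advice, a corrected version of the form might look something like the following. This is my sketch of Andy’s suggestion (visible label text, no value attribute on the text field), not code taken from the site:

```html
<form id="basicSearch" action="search.asp" method="get">
  <div class="searchTop">
    <!-- Label now contains visible text instead of being self-closed -->
    <label for="searchWord" accesskey="4">Search</label>
  </div>
  <div class="searchMiddle">
    <!-- No place-holding 'value' attribute; the label does the job -->
    <input class="searchFormInput" type="text" name="searchWord" id="searchWord" size="16" />
  </div>
  <div class="searchBottom">
    <input type="image" name="search_go" id="search_go" src="/images/search/button_search.gif" alt="Search" />
  </div>
</form>
```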

2. National Disability Authority

The mark-up powering the search facility:

<label for="query" accesskey="4">
<input name="q" id="query" title="Enter keywords to search for" value="" size="30" type="text">
<input title="Submit your search and view results" value="Search" type="submit">

Andy Harold’s opinion:

Enclosing input’s within a label is allowed by the standards so that you don’t have to supply a ‘for’ attribute as it the label implicitly refers to the enclosed input. Having the two inputs enclosed by the label, as in your example, makes this confusing. The fact that there is no text in the label tag makes this more confusing still. So although technically you can do this – ie passes automatic validation tests – it’s not the correct use of the label element and so wouldn’t be what a user agent (eg a screen reader) would be expecting and so may cause it problems. So, on that basis I wouldn’t pass it as P3 simply because it makes little sense.

Remember that the standards can’t cover every situation and so are purely there to guide you into making good decisions. In this case you could put some text in the label (and take the input elements out of it) if you really want it to be passed as P3. But if this makes the search facility too visually unappealing, just drop the label altogether. This may not make it technically ‘P3’ but more importantly it will still be accessible because of the title attribute, so it shouldn’t matter.
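Either of Andy’s two suggestions might be sketched as follows (markup adapted from the quoted example; neither is from the site itself):

```html
<!-- Option 1: give the label text and move the inputs outside it -->
<label for="query" accesskey="4">Search</label>
<input name="q" id="query" title="Enter keywords to search for" value="" size="30" type="text">
<input title="Submit your search and view results" value="Search" type="submit">

<!-- Option 2: drop the label and rely on the title attributes alone -->
<input name="q" id="query" title="Enter keywords to search for" value="" size="30" type="text">
<input title="Submit your search and view results" value="Search" type="submit">
```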

3. Pobail

Here is the Lynx view of the English version Pobail homepage:

Pobail English homepage view in Lynx browser.

And here is the underlying mark-up:

<label for="search">
<input type="text" name="qt" id="search" value="" maxlength="1991" />
<input type="submit" value="Go" class="submit" />

While the search element may pass automated validators, the form itself has little value to users of screen reading technologies. The ‘Advanced search options’ link is in another div.

[NOTE: Andy Harold is the developer of Total Validator. The tool is available as either a free Firefox plug-in or a professional desktop application.]

Study Notes:

  1. Strange use of JavaScript that degrades in Lynx but prohibits access to links in non-JavaScript-enabled browsers.
  2. The Department of Arts, Sport and Tourism displayed a WCAG 1.0 AA Badge.
  3. The Department of Arts, Sport and Tourism did not validate WCAG 1.0 AA
  4. RSS feed not included in META section and was not auto-discovered by browser. Auto-discovery allows browsers to display and bookmark RSS feeds.
  5. The Department of Enterprise, Trade and Employment displayed a WCAG 1.0 AA Badge and passed that standard.
  6. RSS feed found on inner page with no META auto-detection.
  7. Address in footer is an image – ALT=”Department Address”. This is a particularly poor implementation as the address can neither be read by screen-reading technologies nor copied from the browser.
  8. uses a splash homepage. Inner English language homepage tested.
  9. Empty LABEL (no text node) in page’s search form.
  10. Empty LABEL (no text node) in page’s search form.
  11. BASIS carried a WCAG 1.0 AA Badge and failed that standard.
  12. Framed site – each frameset was validated individually.
  13. WAVE cannot validate framed sites.
  14. In-line style attributes – no CSS file to validate.
  15. claims site is WCAG1.0 AAA compliant with timestamp. That page, which is unlikely to have been updated, failed AAA validation.
  16. RSS feed found on inner page with no META auto-detection.
  17. RSS via auto-discovery, but no mention on page.
  18. RSS feed found on inner page with no META detection.
  19. FAS Ireland carried a WCAG 1.0 AAA Badge but failed AAA validation.
  20. FAS Ireland homepage contained 6 errors when tested for WCAG 1.0 AAA.
  21. carried a WCAG 1.0 AA Badge.
  22. contained 30 errors when tested for WCAG 1.0 AA.
  23. In-line style attributes, framed site.
  24. No publicly published link was found.
  25. Resolved to the website of Clare County Development Board.
  26. uses JavaScript links to popup new pages – blocked in FF and IE7. The site was virtually unusable.
  27. Server not found error.
  28. Website did not respond for – this could cause problems for many visitors. WAVE validator was served the login page so WAVE analysis could not be performed. There were also some issues with the search form which are discussed toward the end of this document.
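Several of the notes above mention RSS auto-discovery. For reference, browsers such as Firefox 2 and IE7 detect a feed through a LINK element in the page's HEAD. Here is a small sketch that builds such an element (the title and feed path are placeholders, not taken from any of the sites tested):

```javascript
// Auto-discovery works via a <link rel="alternate"> element in the HEAD.
// This helper assembles one; the href below is a placeholder path.
function feedLink(title, href) {
  return '<link rel="alternate" type="application/rss+xml" ' +
         'title="' + title + '" href="' + href + '" />';
}

console.log(feedLink("Site news", "/feed.rss"));
```

Dropping a line like this into the HEAD is all a site needs for browsers to display and bookmark its feed.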

Page URLs

Government Departments
Foreign Affairs, Dept. of
Agriculture and Food, Dept. of
Arts, Sport and Tourism, Dept. of
Communications, Marine and Natural Resources, Dept. of
Health and Children, Dept. of
Education and Science, Dept. of
Enterprise, Trade and Employment, Dept. of
Environment, Heritage and Local Government, Dept. of
Finance, Dept. of
Defence, Dept. of
Justice, Equality and Law Reform, Dept. of
Community, Rural and Gaeltacht Affairs, Dept. of
Taoiseach, Dept. of the
Transport, Dept. of
Social and Family Affairs, Dept. of

Government Informational Portals
Business Access to State Information and Services
Public Service Information for Ireland

Other Government Websites
Office of the Revenue Commissioner
Official website of the President of Ireland
The Courts Service of Ireland
The Office of Public Works
Central Statistics Office

Political Party Websites
Fianna Fail
Fine Gael
The Labour Party
The Green Party
Progressive Democrats
Socialist Workers Party
Sinn Fein

Websites Highlighted in Society Action Plan
Revenue Online Service (ROS)
FÁS e-recruitment
Land Registry
Examination results
CAO (Central Applications Office)
Driving tests
Government Contracts
Public Service Recruitment
National Sheep Identification System (NSIS)
Farmer IT Training

National Disability Authority

Errors, Omissions & Corrections:

  1. 11-December-2006 12.01PM Since publishing the report it has been brought to my attention that the Sinn Fein website does indeed have an RSS feed. The feed is available in the side-bar. My apologies to Sinn Fein for any inconvenience caused by this omission.
Browsers Marketing Security Technology

Dublin Might be Ready for Vista, But is Microsoft?

There was razzmatazz. There was an astronaut. And amongst countless techies and a bunch of promotion girls there was Microsoft’s biggest product launch ever. Oh yes, and I was there also.

“ready for a new day”

Well perhaps Dublin was, but I’m not so sure about Microsoft.

My day out at Croke Park

Getting to Croke Park isn’t the easiest of feats. I arrived after 11am and caught the end of the opening keynote. After a few minutes standing at the back my curiosity got the better of me and I headed to the demo area on the fourth floor. This was where things started to come unstuck.

The Search room

As I am moderately interested in search I headed straight for the Search room. I found a seat (not difficult because everyone else was still upstairs) and a nice MS guy offered to show me the ropes.

The first point to note was that the demo machine seemed a bit temperamental. A few glitches appeared when tabbing through applications – the screen just went dead. My guide mentioned that the demo machines weren’t up to spec for Vista (they certainly weren’t new computers).

He was a knowledgeable and talented guy, but unfortunately he couldn’t tell me if Vista’s new search function would index my web browsing. Nor could he tell me how search behaved across a network.

I do like some features of the new search interface. For instance, if you hover over a search result the related META data appears in a pop-up.

As I was early for the actual demo I went and grabbed a soggy roll and a cup of coffee.

So much attention, so little knowledge

I returned for the search demo proper and found my way to one of the few remaining clients. The demo was of a web-based reporting application that pulled data from a whole bunch of MS products. I’m still not sure how it tied in with search to be honest.

There was one Microsoft person for every four guests in the room, and I asked the nearest rep if I could pull up the application the presenter was showing on my client. After some discussion between Microsoft people I received a response in the negative – the application was running on a server and only available to the presenter. So I carried on watching.

Why demo in Windows 2003?

Strangely, the presentation appeared to be running on a Windows 2003 machine. Now I could be wrong, and it might simply have been a theme, but I still found it odd that Microsoft would promote Vista using a Server 2003 look.

When the presentation was finished the speaker happened to walk by. I asked him if the web application was platform agnostic and he confirmed it was – it would run on Firefox and other browsers. He also gave me the URL to access the application where I sat. Pity the first couple of fellas hadn’t known that.

As the search presentation was recycling I headed away and caught about half an hour of a very animated and knowledgeable speaker on encryption and Vista’s built-in security features.

Fly me to the moon

Neil Armstrong was a very good speaker, receiving a standing ovation both on arrival and exit. He spoke extremely well and was thoroughly interesting to listen to.

I’m not sure if it’s just me (and Google hasn’t been doing me any favours recently with my tin-hat syndrome), but I felt some of his speech was debunking the debunkers. Maybe he’s just tired of all the naysayers who claim he never got any further than some desert in the US mid-west.

So was I enlightened?

I’ve got to be honest and say no. The welcome package contained two publications, one on the knowledge economy, the other an overview of the Irish case-studies profiled during the day.

I’m really quite surprised there was nothing in the pack about Vista. In fact there was nothing about any of Microsoft’s products. The two publications had a lot about benefits but absolutely no details on the products. I have to say I’m not really any the wiser apart from actually trying out the new Vista UI.

Did I miss something or was I just expecting too much? Or was Microsoft ready for today?

Browsers CSS Marketing Standards Usability WebDev

Golden Spider Awards – The Results

Last night the ‘Internet Oscars’ took place in Dublin. I’ve been trying to find out the results. The Golden Spider website is still selling tickets :mrgreen:

So I found the results over at Silicon Republic:

  1. AOL Best Financial Website
  2. Best Travel, Tourism & Hospitality Website
  3. Best News, Media & Entertainment Website
  4. Comreg Best Sports, Health and Leisure Website
  5. FÁS Best Social Networking, Community & Not For Profit Website
  6. RTÉ Best Education Website
  7. Best Marketing Campaign
  8. Department of Communications Best Web Design Agency
  9. IEDR Best Technology Innovation Award
  10. Allianz Best Retail Website
  11. Cash Collector Best Professional Services Website
  12. Irish Jobs Best e-Business Website
  13. ArgusCarHire.Com Best New Website Launched in 2006
  14. Best Broadband Application Award
  15. Arekibo Best Public Sector Website
  16. Red Best HR, Training and Recruitment Website
  17. Internet Hero 2006 Award
    Winner: Cormac Callanan
  18. Eircom 2006 Golden Spiders Grand Prix Award

Well done to all the winners.

Cathal Magee, managing director of eircom commented:

The entries this year were outstanding and testament to the strength of the Irish internet industry. These awards provide an important opportunity to recognise and showcase online excellence.

[Emphasis mine]

I have to say that my research begs to differ with you, Mr. Magee.

Browsers Google JavaScript Search Engines Security

Does Google Know Your MSN & Y! Searches?

When it comes to Search Engines, it pays to know how they tick and what tickles their fancy. Of course, the majors tend not to broadcast their techniques too loudly lest all those kindly spammers hear about it.

Patents can reveal a lot

It is important to follow the technical aspects of search engines. There is undoubtedly one person who is the authority on both today’s technology and the technology the search engines are currently building to serve us tomorrow. He is Bill Slawski of SEO by the Sea.

Patent watching

SEObytheSEA specialises in patent watching. Yesterday I saw Bill Slawski’s post about Microsoft snooping Google search history. It’s quite interesting from a number of perspectives. But first a little background on what’s going on.

Firefox search.suggest

It appears that Firefox has a little-known feature called search suggest. Search suggest is controlled via a preference and basically allows third-party access to the search history of your search bar.

So whenever you use the built-in search bar of Firefox, the search query is added to your history so that suggestions can be made based on your prior behaviour.

Now this is where it gets interesting. Apparently Firefox allows third party search plug-ins access to your history so that they too can offer suggestions based on your previous searches. But whereas you might presume that one search engine wouldn’t, or shouldn’t, have access to searches executed on another, well, you’d be wrong.

Microsoft Live sniffing around Google searches?

Apparently Microsoft Live suggested some of Bill’s previous Google queries. Bill then saw that his search history was being sent to Microsoft Live via this Firefox feature, which transports your history as a JSON-encoded file when turned on.
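To make the mechanism concrete, here is a hypothetical payload in the shape suggest services conventionally use (the terms are invented, and this is not the actual file Bill observed): element 0 echoes the typed prefix, element 1 lists the suggested completions.

```javascript
// A JSON-encoded suggest response for the typed prefix "go".
const response = '["go",["google","golden spiders","golf"]]';

// Destructure into the echoed query and the suggestion list.
const [query, suggestions] = JSON.parse(response);
console.log(query);       // "go"
console.log(suggestions); // [ 'google', 'golden spiders', 'golf' ]
```

The privacy issue is simply which queries end up in that suggestion list, and who gets to serve it.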

The Microsoft Patent

Of course SEObytheSEA is renowned for its coverage of search engine patents. Lo and behold, haven’t Microsoft a patent (published November 16) entitled ‘System and method for automatic generation of suggested inline search terms’.

Privacy Ramifications

The setting defaults to TRUE in the latest version of Firefox (2.0). (This can be changed via about:config.)

This means that if you are using the built-in search bar, a search engine can see your query history regardless of whether it executed those queries. From the SEO by the Sea post:

I performed a search in Windows Live for a term that I don’t believe I ever searched for before on a search engine. I then went to Google Suggest, and started typing in the first couple of letters of that word to see if it would suggest my Windows Live search term.

It did.

While most people understand that additional toolbars (e.g. Google Toolbar) commonly track your behaviour, it may not be apparent that your search history is made available via this relatively unknown feature of Firefox 2.0.

Of course it’s not as if the major search engines aren’t already collecting enough data on us….

[Some concerned readers might be interested in the CustomizeGoogle plug-in for Firefox.]

Browsers CSS Standards WebDev

If I Had Taken My Own Advice….

I am now the (proud?) owner of Internet Explorer 7. Had I taken my own advice I’d still be able to view the Cork City Council website. But, hey, I have automatic updates to thank for this post :mrgreen:

Ok, so the story about Cork City Council winning a 2005 Golden Spider is old news. And everyone knows that their site’s only happy in IE: Firefox 2.0

The text is a wee bit small in that shot, so let me quote the warning in the footer:

Due to browser compatibility issues at present, this site will only operate correctly by using Microsoft Internet Explorer 5.5 or greater

and then I’m offered a link to ‘upgrade?’.

Well that’s all fine and dandy. So how about the site in Internet Explorer 7: IE7

I think they need to rewrite their warning to:

Due to browser compatibility issues at present, this site will only operate correctly by using Microsoft Internet Explorer 5.5 to 6.0

*If* they are sniffing the browser, they must be doing it server-side. Actually, requesting the page with an IE6 user-agent still returns the upgrade link.
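For what it’s worth, server-side sniffing of this kind usually keys off the User-Agent header. A minimal sketch (my own guess at the logic, certainly not the council’s actual code) of a check that would accept IE 5.5 and 6 but tell IE7 to ‘upgrade’:

```javascript
// Naive check: only "MSIE 5.5" or "MSIE 6.x" tokens count as supported,
// so the newer "MSIE 7.0" string falls through to the upgrade branch.
function isSupportedBrowser(userAgent) {
  return /MSIE (5\.5|6\.)/.test(userAgent);
}

console.log(isSupportedBrowser("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)")); // true
console.log(isSupportedBrowser("Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)")); // false
```

A version whitelist like this breaks on every new release; testing for features rather than version strings avoids the problem entirely.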

And considering that IE7 is now a high priority update, they might want to try and sort this out.

Honestly, how can a site that only works in IE 5.5 – 6.0 be given an award for ‘Best Public Sector Website – Local’?

I am now relatively sure that standards and compatibility are not criteria in any part of the Golden Spiders judging process.

Does anyone know if the site was built in-house or contracted out?