Nifty little encoder/decoder utility

Found a nice website tool that can be used to encode or decode text strings as you like. I found this particularly useful for decoding a SharePoint GUID, however it could also be handy for turning encoded JavaScript URLs from complete gibberish into readable gibberish.
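As a rough illustration of what such a tool does (this is not the site's own code), Python's standard library performs the same percent-encoding and decoding:

```python
from urllib.parse import quote, unquote

# Percent-encode a string so it is safe to embed in a URL
encoded = quote("hello world & more", safe="")
print(encoded)  # hello%20world%20%26%20more

# Decode an encoded JavaScript-style URL back into readable text
decoded = unquote("javascript%3Aalert%28%27hi%27%29")
print(decoded)  # javascript:alert('hi')
```

Handy when a CMS or query string has layered several rounds of encoding on top of each other: just run `unquote` repeatedly until the string stops changing.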

The URL is:


A classic Dan Champion Website Workshop

Dug out these notes from a conference a couple of years back, led by Dan Champion…

Dan Champion Website Workshop

Wednesday 24th January 2007, Birmingham

A workshop last week with much-talked-of webbie Dan Champion, the brains behind the award-winning Clacksweb, might yet prove a landmark event. With insight, inspiration, expertise and leadership from central government being what it is – i.e. virtually non-existent (one has only to think of Local Directgov and the take-up campaign fully to appreciate this) – creating great council websites, and getting them well used, will inevitably now fall to the collective endeavours of council web managers themselves. Assuming this is indeed the case (and we strongly believe it is; where websites are concerned, even Socitm's input of late has been about as much use as the proverbial chocolate fireguard), our hope is that Wednesday's 'happening' was the start of something much bigger and longer lasting.

Over fifty web and web-related council people (with one or two others thrown in for good measure) gathered to listen, learn and contribute to discussions on a number of basic topics, to wit:

Developing an effective web strategy
Search Engine Optimisation
User interactions & transactions
Quality assurance
Information architecture for the web

All of which were, in their respective ways, highly rewarding and useful sessions. Teddie Cowell, an internet industry insider and expert in the dark art of search engine optimisation, was joined by Jack Pickard to assist Dan in the course of the day. Write-ups (Dan's, Jack's and Teddie's) can be found at the links provided at the bottom of this page but – more significantly – all presentations are available at a newly created wiki aimed at public sector web managers, which it is hoped will grow in size and stature over the course of time.

Other than the earlier point – that LA staffers are probably going to have to innovate and take the lead for themselves, abetted we trust by initiatives such as the wiki – a number of other key issues emerged which are well worth noting here, albeit in brief.

What good is IPSV? (Clue: not much at all)
Most present appeared to conclude that IPSV, now the official Government Metadata Standard, served no useful purpose and should be ignored and not implemented. As Teddie Cowell pointed out, search engines don't use government metadata. At all. IPSV is not used. So really then, what is the point? Others, including Dan, say they're not sold on formal taxonomies for websites, and definitely not centralised taxonomies like this Vocabulary. The ability to produce a site map from a taxonomy is a benefit, but a fringe benefit at best. Current thinking appears to be arriving at a much less prescriptive model of metadata in which, rather than forcing editors to select a term from a taxonomy for the area of business the content relates to, they're offered a 'finger buffet' of metadata to choose from, including schemes for geographic, demographic, subject (i.e. topic) and business tags.

True this may not result in the same tidy hierarchical view made possible by a formal taxonomy, but it does provide vastly richer possibilities in terms of interrogating and presenting content.  Dan touched on this briefly last Wednesday, but time prevented him developing it further.  More discussion, including perhaps some input from those behind IPSV would doubtless prove very useful.

One of the central selling points of LGCL, and now IPSV, was the broad view of government services relating to a given term (e.g. see everything that all government agencies and local authorities have on animal welfare), but the reality is that's both a pipe-dream and of absolutely no use to the vast majority of users. A broad view of government services that relate to, say, a single mother, under 30, recently made redundant, with children under 5, living in Birmingham would be of real value. This is only possible, though, if one gets away from thinking like librarians and stops trying neatly to categorise every single one of a council's services and information nodes based on universally understood terms.

Search Engine Optimisation – A public sector blind spot
Teddie's presentation is a must: the public sector is failing dismally here, at its undoubted cost and to the obvious astonishment and amusement of the industry. Here's an extract from his blog entry:

‘…..I did a hands in the air, out of a room of 50 public sector webmaster / web managers / communications people from all across the UK:

How many of you have an SEO strategy?

….deafening silence….

or search anywhere in your strategy?

….a lone hand goes up at the back….


1 out of 50. I nearly fainted. Given that one of their primary remits is making information easier for us to access it’s unbelievable….’

Teddie laid to rest a number of popular myths where SEO is concerned and unwittingly (but vividly) illustrated just how and why the take-up campaign was such a joke and, more importantly, why search engine work should be a MUST for any council web strategy (if CEOs of multinational corporations take a personal interest in the matter, which they do, there has to be something in it).

Much was made at the time of redirected links from Directgov and Local Directgov which, it turns out, actually work to the detriment of councils and others where search engines are concerned, although this 'problem' disappeared even as the workshop was taking place with the migration of Directgov from the government-commissioned DotP system to another CMS. (For anyone interested, this is discussed by Alan Mather, formerly of the eEnvoy's Office and the individual behind DotP, in a recent blog entry here. Alan later explained the 'redirects' were created to enable tracking of clicks and were implemented before Google became as powerful and prominent as it now is.)
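For readers unfamiliar with the technique, here is a rough sketch of what a click-tracking redirect of that sort looks like (this is illustrative only, not the DotP implementation; all URLs are made up). The published link points at the tracker, which records the click and then bounces the browser on with an HTTP 302:

```python
from urllib.parse import urlparse, parse_qs

click_log = []  # stand-in for a real analytics store

def track_and_redirect(request_url):
    """Log the intended destination, then answer with a 302 pointing at it."""
    query = parse_qs(urlparse(request_url).query)
    destination = query["dest"][0]
    click_log.append(destination)          # the click gets counted here
    return 302, {"Location": destination}  # browser follows this header

status, headers = track_and_redirect(
    "http://tracker.example/out?dest=http://example.gov/news")
print(status, headers["Location"])
# 302 http://example.gov/news
```

The SEO downside the workshop touched on falls out of the design: crawlers of the day saw links pointing at the tracker rather than at the council's own pages, so the destination sites received none of the link credit.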

All in all the day was a roaring success. It could and should provide a launchpad for web managers elsewhere to begin collaborating and – where necessary – even speaking with a collective voice.   Its first message might be to plead with the Government to talk to Google.

The wiki

This is located at:

Access details are

username: psf
password: orange

Registration is required to contribute – just follow the Login/create account link in the top menu. Browsing does not require registration.

At present the content consists of all material from the workshop, including the presentations, audits and Quick Topics (two of which we didn't have time for on the day – Microformats and Custom 404 pages), and a large collection of links broken into subject areas.

These are links to resources, articles and other reference material that has been checked and found to be authoritative or of genuine utility.

During discussions at the workshop we identified other areas that might be of interest, including web governance models, web strategies and quality assurance schemes. If you have any material or experience you can share in these areas please consider contributing to the wiki. If you haven’t used a wiki before and would like some help please let us know and we’ll put together a short guide covering the basics.

Other write-ups are here

Teddie Cowell:

Jack Pickard:

Dan Champion: