So apparently “the idea that doing linked data is really hard is a myth”, according to attendees at last week’s LinkedDataLondon event. I have to admit to wondering what they are comparing it to, then, as it seems to me anything but easy.
I am a little bit of a Linked Data skeptic, I will admit. It still has too many undertones of the Semantic Web for me ever to be entirely comfortable with it, and I’ll be honest and say I am not quite sure how the Open Data movement became so overtly associated with Linked Data. It seemed to me that it was only one option amongst many, yet it is increasingly pushed as THE solution. I guess the involvement of Tim Berners-Lee in the Government’s Open Data programme was always going to lead to things having this spin, as it is a direction he has pursued for many years.
The thing is, I am willing to be convinced – most of the cleverest web folk I know are putting their intellectual and professional weight behind Linked Data, and JISC is putting a not insignificant chunk of money into it as well, so I have to keep an open mind.
I am interested in the work the BBC have been putting into their Wildlife site in various areas, and the concept of your ‘site as your API’ is compelling. But the BBC is not your typical website, nor your typical web team, and very few of the people talking about this stuff actually have to run big, information-rich websites on a day-to-day basis and deal with all the issues that brings up. Amongst those issues is dealing with web publishers who do not even know HTML, let alone understand RDFa.
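To give a sense of what is being asked of those publishers, here is a minimal RDFa sketch (the person, URLs and vocabulary choices are purely illustrative): machine-readable statements layered onto ordinary markup via extra attributes.

```html
<!-- A hypothetical example: "about", "typeof", "property" and "rel"
     are RDFa attributes; foaf: terms come from the FOAF vocabulary. -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="#me" typeof="foaf:Person">
  <span property="foaf:name">Jane Smith</span> works at
  <a rel="foaf:workplaceHomepage" href="http://example.org/">Example Ltd</a>.
</div>
```

None of that is rocket science, but expecting an editor who struggles with a plain hyperlink to get namespaces, vocabularies and triple patterns right is another matter entirely.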
Maybe the work Drupal are doing with Linked Data will make this all easier.
So does this mean a return to the old ‘webmaster’ model of running websites, where content was pushed to a central person or team who took care of mark-up, QA and publishing? I’m not saying this is a bad thing, but in an era of job cuts I can’t see many teams being given the resources to achieve it.
A couple of other ideas came out of the discussions that made me wince a little.
The first was that it is more important for URIs to be persistent than to be readable by people (the machine vs human debate). I’m never going to be happy with this. I have spent half my career fighting to get away from dodgy, database-generated URLs that make no sense, towards a web with readable URIs that are logical (I will forever use Traintimes.Org as an example here). It seems to me that cleverer people than me have made the case that the tools exist within the HTTP/DNS world to achieve both persistence and readability. That said, if it comes to a choice, I know which side I’ll be on.
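For what it’s worth, the usual version of that “both at once” argument is that HTTP itself does the work: the persistent identifier never changes, and simply redirects to whatever readable URL currently serves the content. A sketch only, as a hypothetical Apache configuration line (the paths and domain are invented):

```apache
# The stable identifier /id/dog-licensing is the thing you publish
# and promise never to break; "seealso" issues an HTTP 303 redirect
# to the current, human-readable location. If the readable URL ever
# changes, only this one rule needs updating.
Redirect seealso /id/dog-licensing http://www.example.gov.uk/services/dog-licensing
```

Whether busy web teams will actually maintain those redirect tables for years on end is, of course, exactly the sort of practical question I keep coming back to.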
The other concept is one I need to understand better, but wince I did: getting away from the file/folder metaphor because it is too limiting for the Linked Data web of ‘things’. This might well be true, but it is a useful and widely understood way of explaining things on the web, and unless people come up with an equally understandable alternative (and not just one that works from one web scientist to another!) then that is a problem.
Maybe I am simply getting the wrong end of the stick on a consistent basis, or maybe my original prejudices about the Semantic Web are too ingrained for anything similar to stand a chance with me. I do, however, hope that I get hit by the lightning bolt soon and that it all becomes clear.
I am in the process of reading Paul Miller’s Linked Data Horizon Scan, which JISC funded, and hopefully it will start to answer some of my questions. I’ll buy Paul a beer next time I see him if it does.
Updated: 09:42 01/03/2010 with link to http://www.frankieroberto.com/weblog/1621