Google Prediction API and OER Discovery?

Last week at the Google I/O event Google released all manner of geeky toys for developers to play with, and the tech blogosphere is doing its usual job of picking them apart and focusing on the opportunities to make a dollar (or $50 million).

The one that struck me as interesting was the Google Prediction API. To be honest I have only understood about 50% of what I have read about it so far, but it seems that amongst its various capabilities is the ability to take large data sets about resource usage and generate an Amazon-like recommender system of some kind. I could be getting entirely the wrong end of the stick here, but that is what it seems to be saying, and a rough sketch of how I imagine it working is below.
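As a hedged illustration only: the announcements describe a train-then-predict workflow, where you upload a CSV of examples to Google's storage, ask the service to train on it, and then query the model with new instances. The sketch below assumes a REST endpoint and JSON payload shape along those lines; the API root, version, OAuth token, bucket object and resource IDs are all my own placeholders, not the documented API.

```python
import json
import requests

# Illustrative sketch of the Prediction API's train-then-predict workflow.
# All URLs, IDs and payload shapes here are guesses, not documented values.
API_ROOT = "https://www.googleapis.com/prediction/v1"  # placeholder version
TOKEN = "ya29.example-oauth-token"                      # placeholder credential

def train(bucket_object):
    """Ask the service to train a model on a CSV already uploaded to storage.

    The CSV would hold one usage example per row, e.g.:
    "resource_used_next","learner_level","subject","resource_just_used"
    """
    resp = requests.post(
        f"{API_ROOT}/training",
        params={"data": bucket_object},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

def predict(bucket_object, features):
    """Query the trained model: given what a learner just used, what next?"""
    body = {"data": {"input": {"csvInstance": features}}}
    resp = requests.post(
        f"{API_ROOT}/training/{bucket_object}/predict",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    model = "oer-usage-bucket/usage.csv"  # placeholder training object
    train(model)
    print(predict(model, ["undergraduate", "physics", "ocw:8.01-lecture-03"]))
```

The appeal, if I have read it right, is that the hard statistical machinery sits behind the API: you supply labelled usage examples and get classifications or recommendations back without building the model yourself.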

This reminds me of the MOSAIC work JISC funded around 'serendipity' and recommender systems built on library usage data, and it is probably of major interest to the Usage Data event we are running in July.

What I was wondering is what could be achieved if we combined usage data from the various big OER repositories (assuming this could be done in some kind of privacy-safe way) such as JorumOpen, OpenLearn, OER Commons, the OCW Consortium, MIT and so on, and used this tool to create some kind of (almost) intelligent filtering for OER resource discovery. Even before reaching for the Prediction API, combined data would enable a crude baseline along the lines sketched below.
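To make the idea concrete, here is a minimal sketch assuming the repositories could export anonymised session logs (an opaque session token plus the resource IDs used in that session). It is simple item-to-item co-occurrence counting, a stand-in for whatever the Prediction API would actually do, and the resource IDs are invented examples.

```python
from collections import defaultdict
from itertools import permutations

# Hypothetical anonymised usage export: one (session, resource) pair per row.
# Session IDs are opaque tokens; resource IDs are invented examples.
usage_rows = [
    ("s1", "openlearn:intro-astronomy"),
    ("s1", "ocw:8.282-intro-astro"),
    ("s2", "openlearn:intro-astronomy"),
    ("s2", "jorum:stargazing-basics"),
    ("s3", "ocw:8.282-intro-astro"),
    ("s3", "jorum:stargazing-basics"),
]

def build_cooccurrence(rows):
    """Count how often pairs of resources appear in the same session."""
    sessions = defaultdict(set)
    for session_id, resource_id in rows:
        sessions[session_id].add(resource_id)
    counts = defaultdict(lambda: defaultdict(int))
    for resources in sessions.values():
        for a, b in permutations(resources, 2):
            counts[a][b] += 1
    return counts

def recommend(counts, resource_id, top_n=3):
    """'Learners who used this also used...' ranked by co-occurrence."""
    neighbours = counts.get(resource_id, {})
    return sorted(neighbours, key=neighbours.get, reverse=True)[:top_n]

if __name__ == "__main__":
    counts = build_cooccurrence(usage_rows)
    print(recommend(counts, "openlearn:intro-astronomy"))
```

Even something this naive gives a "learners who used this also used..." signal without touching any personal data, which is partly why the combined-repository idea feels worth exploring.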

Now of course even the Amazon filtering throws up an awful lot of nonsense in its recommendations, but it also gets an awful lot right, and we have to start somewhere.

Is this possible? I think that, given the lightweight approach to metadata that most OERs take, there need to be experiments with more innovative ways of improving discovery, and we need to try multiple approaches to see which give the best experience.

Anyway, this is just another braindump which might be complete nonsense, but hopefully someone will tell me if it is, and if not, at least I can refer back to it later when someone actually goes ahead and does something similar 🙂