Internet of Public Service Jobs — 15th May 2016


Just the nine brilliant jobs this week – including a pretty special product role at GDS and one of those once-in-a-lifetime opportunities up in Edinburgh. The other seven aren't bad either! As usual, good luck if you are tempted!



Digital Product Manager
Advisory, Conciliation and Arbitration Service



Director of Digital Technologies
Nottingham Trent University


Digital Product Manager
Royal Academy of Arts



Head of Digital
National Library of Scotland
Edinburgh


Head of Digital
Mind
Stratford-upon-Avon


Chief Digital Officer for Social Security
The Scottish Government
Edinburgh

Internet of Public Service Jobs — 7th May 2016


A brilliant baker’s dozen of jobs this week. Government of all kinds, academia, charities and the NHS are all represented, with great roles and a pretty decent geographical spread – nice to see interesting opportunities outside of London.

Good luck if you do go for any of these!

Head of Digital
Coventry University


DCO Team Head of Digital Technology (South)
NHS England
Bristol or Plymouth



Digital Editor (PDF)
British Film Institute


Chief Digital and Information Officer
Camden, Haringey and Islington Councils


Digital and Engagement Manager
Together for Short Lives
Bristol


Digital Delivery Director
NHS Digital
Leeds


Digital Solutions Lead
The National Archives


Data Scientist
Government Statistical Service
London, Manchester, Sheffield, Newcastle, Newport, Fareham



Head of Service Design — Digital
Home Office
London or Sheffield


Transformation Lead
Scottish Government
Edinburgh


Head of digital content services for further education and skills
Jisc
Bristol, Harwell, London or Manchester

Internet of Public Service Jobs — 24th April 2016


Another amazing week with opportunities to work in brilliant teams doing important and interesting work – what more could you want? A decent spread around the UK this week as well, with vacancies in Edinburgh, Yorkshire, Cambridge and Brighton alongside the usual London-centric offerings.

Go forth and apply!

 

Programme Manager for Code Club International
Code Club


Digital Technology Manager
Institute of Physics


Digital Innovation Lead
HM Revenue and Customs


Innovation Delivery Manager
University of Surrey


Digital Product Manager
Citizens Advice


UKTI Chief Digital Officer
UK Trade and Investment


Deputy Head of Content
Government Digital Service


Programme Manager
Parliamentary Digital Service


Head of Product
mygov.scot


Service Group Manager – Digital Services
Lewisham Council


 

API Days

In my last post I at least tried to make the case for:

Publish for humans, all of the humans. But don’t forget the machines.

This time I’m going to talk a little bit about what we might get from those machines — because I’m not convinced it is always what people are expecting.

While it can be easy to compare our website to something like GOV.UK or the sites of other statistical institutes around the world, I often find it more helpful to compare it to something like the Guardian website. Functionally we are essentially a publisher of multiple story/report formats, each made up of multiple components (words, tables, charts, interactive tools, maps, images, spreadsheets — lots and lots of spreadsheets), with collaborative, multidisciplinary teams working to strict deadlines.

So when I came across a report about the use of open APIs by news organisations (primarily the Guardian, New York Times and NPR) by one of the original authors of the Cluetrain Manifesto — David Weinberger — I settled down to read and learn.

After all, the ONS Beta site is essentially a set of APIs with a user interface (albeit one where we have sweated over every button, label and interaction), and Florence, our publishing application, is the same. We have a commitment, maybe even a responsibility, to encourage the use of our (open) data, and providing open, public APIs has long been held up as a way of achieving this. We have made the underlying JSON available from day one (visible by appending /data to any URI), and documenting what is possible/available is a task fighting its way up the backlog.
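To make that concrete, here is a minimal sketch (in Python, using the requests library) of what "appending /data to any URI" looks like in practice. The page path below and the shape of the returned JSON are illustrative assumptions rather than documented guarantees.

import requests

# Minimal sketch: fetch the JSON behind an ONS Beta page by appending /data
# to its URI. The page path is a hypothetical example, not a documented endpoint.
BASE = "https://www.ons.gov.uk"
page_path = "/economy/inflationandpriceindices/bulletins/consumerpriceinflation/latest"

response = requests.get(BASE + page_path + "/data", timeout=30)
response.raise_for_status()

page = response.json()
# The exact shape of the JSON depends on the page type; listing the top-level
# keys is a quick way to see which components (sections, charts, tables, etc.)
# a given page exposes.
print(sorted(page.keys()))

Swap in any other page path from the site and the same pattern should work, assuming the /data convention holds for that page type.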

“It was a success in every dimension except the one we thought it would be.” Daniel Jacobson, former Director of Application Development at NPR

One of the stand-out findings from the report is that when they released their APIs (all within months of each other back in 2008) the big motivator was that it would ‘let a thousand flowers bloom’. Developers would see this as something on which to build. As Rufus Pollock once said:

“The best thing to do with your data will be thought of by someone else.”

The reality, however, was somewhat sobering. Despite an initial burst of development and innovation, those thousand flowers never really materialised. What it did do, though, was provide something like an outsourced R&D function — they could all see what ideas people had, even if they weren’t fully formed, and this influenced the direction of internal development.

That is important, because where the focus on APIs absolutely proved its worth was in supporting internal development. All the teams spoken to found that, where they had APIs to build on, they could react much more quickly to new development demands (the most obvious for all of them being the release of the iPad). The embodiment of ‘eating your own dog food’.

There were other wins that are interesting to us — the ease and flexibility of syndicating stories and assets improved, it became easier and quicker to experiment with and prototype new features, and it became possible to constantly improve their CMSs.

Now obviously these lessons might not transfer to us, but they are worth considering. I think there is still an expectation that if we can get the API right there will be an explosion of apps using our data.

Robert L. Read, one of the founders of 18F in the US, certainly seems to think there is still a built-in audience for Government APIs, and that a priority should be to ‘democratize the data’ first and foremost, because technologists will provide expert interfaces* to that data/service faster than Government will create the UI. Hmmm.

The more likely ‘customer’, to me, is something more enterprise in scale, looking to hook our data up to their own systems — people like Bloomberg, the Financial Times and local authorities spring to mind. This would/will be important but doesn’t really do much to support our open data agenda as such. Still, a good set of APIs, with useful documentation and solid performance, should make everybody happy — so if there is a chance for those thousand flowers to bloom we need to be ready.

*he seems to suggest that this interface could simply be an expert intermediary.

Show your workings: a digital statistical publication

Russell has written a post that touches on some of his thinking about what a ‘digital white paper’ might look like, and in doing so draws attention to Bret Victor’s tour de force of a ‘longread’ about climate change. The real brilliance of Victor’s work is that not only is it wonderfully interactive, it also fulfils that old staple of maths classes: ‘show your workings’.

Given where I work, my primary project and my recent reading & writing it probably isn’t a surprise that I have found this interesting.

One of the things I keep noodling with in my spare moments is what a truly digital statistical publication might look like. To be honest, other, better-qualified people are looking at more immediate, practical responses to that question, whereas I am really using it as something on which to hang various ideas and hunches about the future of digital publishing, to give things some kind of structure.

So the ability to expose the methodology behind a particular statistic, and make that explorable in place, might make for an interesting experiment. Our user research has identified an expectation that our statistics are methodologically sound above and beyond what is perhaps expected elsewhere, and making that visible (the methodology is always available, and on our new site much more obvious) would provide pretty radical levels of transparency.

There is almost certainly something that can be learned from ‘open science’ here, and in particular from ideas about ‘open notebooks’. The more transparent you are, the more trust you build in the results. That said, we have very important disclosure rules to consider at all times, so it isn’t as simple as providing all the underlying data to allow truly replicable ‘experiments’.

Our QMI documents, for example, already provide a great source of information, but they are far from ‘digital first’, with most of the pertinent information locked away in a PDF. The challenge would be surfacing that in an ‘of the web’ rather than ‘on the web’ sort of way.

We already do a better job than Russell’s complaint about white papers:

“tables and the diagrams you get are included for the rhetorical power of their presence rather than any explanatory work they might do”

…and every report (we actually call them Bulletins, but that is another blogpost) comes with a whole set of supporting ‘reference tables’ in Excel. But it still feels a bit disjointed, and the real power (I think) would be in presenting the combined narrative and the data seamlessly, in a way where it can be queried and explored (while still providing the data free from words for those who like their statistics straight with no mixer).

Given my role, the big thing I am always thinking about with these ideas is whether they are repeatable. I have no interest in trying to provide a system that can support a thousand unique snowflakes (or, god help us, Snowfalls), so an additional challenge would be creating something that could work across multiple outputs.

Maybe. Anyway.

Pretty much at the same time as I was writing this, Leigh Dodds wrote a complementary post that shows just how this kind of development could make things better.