Assessment evolution


Over the last few weeks there have been a few blog posts about new approaches to the Digital by Default Service Assessments, not to mention the approach our Aussie colleagues are taking.

I’ve been following this all with real interest. I’ve written before about my feelings on Service Assessments and what I perceive as some of their flaws, but to be clear, I continue to think the core idea is vital.

Assurance via independent peer review against a set of sensible standards remains a brilliant and important concept. It raises the bar (and god knows it needed raising).

Also, we are a long way from good practice being so embedded that it is second nature — there is absolutely still a need to provide assurance.

The Department of Health and the Ministry of Justice are both trialling new approaches designed to be more integrated and less adversarial (which, despite every effort, I think the assessments in their current format become). GDS are also in the alpha stage of a new approach of their own (and have been working closely with the Health and Justice teams).

Health have boiled the service standard down to three broad themes:

• How does the service meet user needs?

• Is it safe and secure?

• Can the service be quickly improved?

They then take a very ‘agile’ approach to exploring those questions with the team via a show and tell, a discussion with assessors and a retrospective — all much more collaborative, but with concrete actions at the end.

I have to be honest: I also tend to think of the ‘standards’ as something other than the list of 18. For me it is always more about:

Being user-centric
Are we really sure about the user needs for this product and do we have data to inform those needs as well? Are we prepared to keep learning about user needs and whether we are meeting them?

Being agile and iterative
Are the team, our ways of working and our technology ready and able to react quickly to user needs and iterate painlessly without compromising security?

Being inclusive
Have we made sure things are accessible in the widest sense? For example, do things work in text browsers, is the content clear and easy to follow, and does everything work as expected on different devices and older browsers? What are the fallbacks?

My feeling is that if you are thinking about these things from day one, passing the assessment is simply a side effect; the difficulty comes when you try to bolt them on later.

Which brings us to the MoJ approach — which, as I understand it, isn’t a million miles from the direction GDS themselves are moving in, or from what the Aussies operate as their ‘in-flight’ process.

‘Continuous service review’ removes the milestone element and replaces it with ongoing peer review: a critical friend empowered to challenge the team as part of their regularly scheduled meetings (sprint reviews in MoJ’s case), and to act as a facilitator to wider support if a need is identified.

I think there is a lot to like in this approach as well — it doubles down on the ‘peer’ aspect, as the reviewers are pulling double duty from their own teams. It does start to ask a lot of people though, and somewhat waters down the independence of the peer review if you are all from one organisation.

I wonder if it lessens the capacity for the reviewer to bring a fresh pair of eyes to things and ask the hard questions. Maybe.

There is definitely something between these approaches that is going to improve things, I think — a more holistic understanding of the point of the standards (rather than a tick-box exercise), plus an ongoing assessment and relationship (rather than a one-off ‘viva voce’ meeting).

Currently my role includes a lot of thinking about this stuff and you can probably expect more thinking out loud in the weeks to come as I contemplate how to take the best lessons from other teams and see if there is a way to introduce them in Defra-land.