What I'm Learning from Shutting Up

I first posted this on Medium here.

A few weeks ago I decided to try a little experiment and consciously shut up during meetings and other interactions with people at work. I wanted to interrogate some of the unconscious biases I have in an environment of people who are predominantly like me. For more background, these are worth reading: this, this and this. Also, this talk by Caroline Drucker is fantastic.

I wanted to check my urge to interrupt during any meeting (from one-to-one chats to large groups) at work and only speak when someone else had completely finished. I also wanted to observe how often other men speak or interrupt.

Before this I thought I was fairly well behaved: I didn't interrupt too often and was pretty good at listening as well. I'm a user researcher by profession, and one of the most important things we do is listen and observe. But this experiment has made me realise that I'm actually pretty shitty at these things outside of a research environment. The urge to jump in or to suggest the conclusion to someone else's point is alarmingly strong (this is embarrassing to write). The other thing I've noticed far more is the one-upmanship that goes on: men wanting to score the killer insight or have the final comment in a conversation. I knew both of these existed, but I am shocked by how much of it I have observed.

I'm still learning about this, but here are some thoughts I've had about meetings and how men can be more aware. None of this is profound new advice, but when you shut up you might learn something.

  • Shut up and listen. I mean really listen.
  • Wait. Let someone finish.
  • Don’t assume you know more than others.
  • Strive to make the workplace safe for everyone to feel like they can contribute, through whatever means suits them.

It’s only been a few weeks, but this little experiment has taught me a lot.

(The header image is taken from this brilliant Toast article ‘Women Listening To Men In Western Art History’)

Simple experience improvement

I came across this video on Twitter yesterday and I love it. Such a simple way to improve someone's experience.

'We'll A/B test that later'

The other day I tweeted this about A/B testing.

I got a few responses saying A/B testing is better than nothing, which I completely agree with: it shows that the team cares about what is best for users. Good, keep going.
It's just that in the early stages of a project we are trying to answer questions like:

  • who are our users?
  • what problems are we trying to solve?
  • how do we measure success?
  • what will people get out of this service?
  • how can we constrain our reckons and turn them into evidence-based decisions?

These are the hard things that then help us understand why people prefer option A or option B, not just that they prefer one of them. Having this deeper understanding makes things much easier later on. Doing the hard work to make it simple, etc. Saying "don't worry about that, we'll A/B test it" feels like a cop-out to me. The best thing for a team to be doing is combining contextual research, so we have that deeper understanding, with quantitative tools like A/B testing when we want data to test the hypotheses we have created.
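If it helps to make that quantitative step concrete, here's a minimal sketch of checking an A/B result once contextual research has given you a hypothesis worth testing. Everything in it is invented for illustration: the variants, the conversion counts and the choice of a simple two-proportion z-test.

    # Minimal sketch: test a hypothesis ("the simplified form converts better")
    # with hypothetical A/B data. All numbers below are made up.
    from math import sqrt
    from statistics import NormalDist

    a_conv, a_total = 480, 10_000   # control: conversions, visitors
    b_conv, b_total = 535, 10_000   # simplified form: conversions, visitors

    p_a, p_b = a_conv / a_total, b_conv / b_total
    pooled = (a_conv + b_conv) / (a_total + b_total)
    se = sqrt(pooled * (1 - pooled) * (1 / a_total + 1 / b_total))

    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

    print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p_value:.3f}")

The point isn't the statistics; it's that the test only tells you which option won, while the contextual research tells you why.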

As a cynical side note™, very often the things we were going to A/B test never actually get tested. They are thrown onto the great pile of 'oh yeah, didn't we talk about that?' actions.

I Guess We Learned Not to Do It Again

I just re-watched Burn After Reading. The final scene (spoiler alert) is J.K. Simmons's character, a high-ranking CIA official, and one of his subordinates discussing the clusterfuck of events that has unfolded in the film. The exchange is how I imagine the conversations senior people at large organisations have about failed 'Innovation' or 'Transformation' projects, once the budget has run out, the presentations have been forgotten and the team has been disbanded.

Organisations not collectively learning from and sharing their mistakes. Oh, and a lack of accountability.

Knowing when to call it

I recently turned down a project that had some of the hallmarks of things I know to avoid. 

  • Doing 'research' after most big decisions have been made
  • Poor access to the people who are in control of sign-off decisions
  • All decisions going up to the CEO for sign-off or approval
  • Marketing department doing the content
  • Doing most of the interaction design before any content has been thought of
  • A roadmap that went on for over a year

I could go on. 

They had a decent amount of money behind them and were assembling a team of designers and developers who appeared to know what they were doing. But the management team had a roadmap, and the main goal, for them and for their investors, was simply to stick to it and deliver as much as possible, as quickly as possible. So I said thanks, but no thanks. I've had my fingers burned before by working on bad projects.

Asking questions about how the team is set up, what success looks like and what the vision is now plays a pivotal part not just in accepting work, but in rejecting it as well. Seems obvious now, but it's one of those things I would have heard and accepted, yet not understood, until I had to do it myself.

I used some of the stuff from this really good post to help shape the sort of questions I ask before I take on projects these days.

What we've learned performing Product Audits

I first posted this on the GDS Data blog here.

At the Performance Platform we have completed a round of Product Audits with our users. Des Traynor at Intercom (who we got the idea from) describes them like this:

A simple way to visualise this is to plot out all your features on two axes: how many people use a feature, and how often.

Matt wrote about our first one here.

For the audits we printed off all the modules the service had and asked the team to map them on axes of importance versus frequency of use. As they were doing this we asked them to explain their thinking. Here is a picture of the one we did with the Register to Vote team.

Register to Vote Product Audit
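If you wanted to sketch the same mapping digitally rather than with printed cards on a wall, something like the following would do it. The module names and scores below are placeholders I've made up, not any team's actual placements.

    # Rough sketch of a product audit plot: importance vs frequency of use.
    # Module names and 1-10 scores are invented for illustration only.
    import matplotlib.pyplot as plt

    modules = {
        "User satisfaction": (9, 9),
        "Completion rate": (8, 8),
        "Digital take-up": (7, 8),
        "Cost per transaction": (3, 5),
        "Page load time": (2, 3),
    }

    fig, ax = plt.subplots()
    for name, (freq, importance) in modules.items():
        ax.scatter(freq, importance)
        ax.annotate(name, (freq, importance),
                    textcoords="offset points", xytext=(5, 5))

    ax.set_xlabel("How often it's used")
    ax.set_ylabel("How important it is")
    ax.set_title("Product audit: importance vs frequency")
    plt.show()

Doing it on paper with the team in the room is still the valuable bit; the plot is just a way to record the result.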

We've completed eight audits with seven different teams, responsible for the digital transformation of the following services.

  • Prison Visits Booking
  • Lasting Power of Attorney
  • Booking a Practical Driving Test
  • View Driving Licence
  • Renew Tax Disc
  • Register to Vote
  • Carer's Allowance Applications
  • Patent Renewals

We also asked participants to map modules that we have but they aren’t currently using.

Matt, our Product Owner, and I were present at all the audits, and we tried to ensure at least one other team member was there as well. Once we completed each audit we mapped its findings onto a large whiteboard next to where our team sits. Blue labels represent modules people have; red labels show ones people don't.

Our Product Audit Board

We also wrote up our interview notes and put them into a shared folder so that those who weren't present at the audit could see what was said.

As you can see, the board got pretty cluttered, but it was useful to see how our 4 KPIs fared and whether there were patterns among the other modules.

I used Keynote presentation software to create the following visualisations. Firstly, the KPIs:

All the KPIs visualised

User satisfaction and Digital Take Up were the two highest-ranked modules. Our users said they put these there because this data isn't available in other analytics tools. It's also a good litmus test for seeing whether your software releases are having their intended impact.

User Satisfaction

Completion rate

Completion Rate scored well except in one case: Lasting Power of Attorney applications. This is because there are two types of completion, which we're trying to solve, as Matt has written about here.

Completion rate

Cost Per Transaction didn't score highly as it only updates every quarter. It's also influenced by many factors, some of which are outside the remit of the digital teams we spoke with.

Cost per transaction

As the Performance Platform offers many more modules than just the 4 KPIs, we wanted to see if there were patterns elsewhere. Two that didn’t fare as well are ‘Uptime’ and ‘Page Load Time’.

Uptime

If a module is consistently placed in the bottom left, we found that either:

  • There's no user need for it. Do we remove it, as it is a distraction?
  • There's a user need for it, but it's not done well enough or contains errors. Do we prioritise working on it to improve it?

Another really interesting discovery we made was the desire to find out how long things take to do. These two pictures show where service teams put 'Time on transaction' (how long it takes a person to complete the online part of a service) and 'Processing time' (how long it takes for an application to be completed).

Time on transaction

Processing times

This is compelling because one of the biggest user needs for government services is 'how quickly can I get my thing?'. That thing can be a benefit, a registration, an application and so on. We already show 'Time on Transaction' for Application for Carer's Allowance and it's something we'd like to start rolling out for more services. Later we could add processing times, and ultimately we could combine both metrics to show how long the whole process takes, from a problem state to a resolution state. For example:

Problem state

My father died, he was looking after my mum. I need to look after her now, I need some money to do this.

Resolution state

I have money in my bank account from a Carer's Allowance.

Once we know these timings, we can help service teams work to reduce them and meet these needs.
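As a back-of-the-envelope illustration of the combined metric, the figure we'd ultimately want to show (and help reduce) is simply the two durations added together. The numbers here are invented.

    # Tiny sketch: combine the two timings into one end-to-end figure.
    # Both durations are hypothetical examples, not real service data.
    from datetime import timedelta

    time_on_transaction = timedelta(minutes=14)   # completing the online form
    processing_time = timedelta(days=21)          # application being processed

    problem_to_resolution = time_on_transaction + processing_time
    print(f"Problem state to resolution state: {problem_to_resolution}")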

Product audits have been really beneficial in helping us identify why and how our users are (or are not) using the things we are building. They're also a massive help in prioritising future development work.

You can download a PDF of my playback here.