Consistently building reflection time into the end (or indeed the middle) of projects is something few companies do well, and many end up treating it as a luxury. In the rush of the day-to-day we get very good at being relentlessly forward-focused, immediately moving on to the next thing, seldom pausing to really understand what happened, why, and how it might be done better next time.
And yet given how important developing a learning culture now is for just about every business, it's surely something we should all be doing more of. There are some well-known examples of companies that have created the space for employees to explore new ideas (Google's 20% time of course, 3M's Time to Think, GDS's Firebreak, Facebook's Hackathons, Spotify's regular hack days and so on), but in the age of continuous experimentation it's also about reflecting on what we're learning as we go along. I like the way that Pinterest, for example, goes to great efforts to embed reflection time in its culture and practice so that gathering learning as they go becomes a habit.
One of the simplest, and therefore the best, frameworks that I've come across for this is the so-called 'after action review'. It originated in the US military, which used it in debriefs as a way of improving performance, and it features four simple questions that can be answered after an action of some kind:
1. What did we expect to happen? Knowing that you have to answer this question afterwards means that you go in with a greater clarity of objective and desired outcome.
2. What actually happened? A blameless analysis that identifies key events, actions and influences, and builds a consensus.
3. Why was or wasn't there a difference? What were the gaps (if any) between desired and actual outcomes, and why did they occur?
4. What can you do next time to improve or ensure these results? What (if anything) are you going to do differently next time? What should you do more of, the same, or less of? What needs fixing? What worked and is repeatable or scalable? The idea is that at least half of the review's time should be spent answering this question.
Sounds obvious. But then the most useful things often do.