'If you want an answer in the future you will ask a machine. The role of humans will be to ask questions. Great question creators will generate new industries, new brands and new possibilities.' Kevin Kelly
Monday evening saw the latest Google Firestarters event, and for this one we brought together the planning and performance marketing communities (something we've done only once before) to tackle what is surely an increasingly critical cross-industry question: how, with the growing ubiquity and integration of automation, machine learning and algorithms into industry practice, we can achieve the optimal balance between human and machine capability. To help us tackle this thorny topic we had a variety of excellent perspectives from four amazing speakers.
British author, broadcaster and technology philosopher Dr Tom Chatfield kicked us off with an outside-of-the-industry view focusing on three key themes: Time, Tests and Questions. We are thankfully now moving away from the 'technology as magic' mindset, he said, to more focused questions around how the application of technology can support quality. In the current age of collaboration there is less and less that we accomplish individually and more and more that we achieve together, working with technology as the enabler. His focus on time led him to talk about how human attention is under continuous assault - there has been a horrific undervaluing of attention ('your mission is to be part of the signal rather than add to the noise') and the correction that we're now starting to see is inevitably leading to more machine-driven filters between people and data. There are fundamental differences between machines and humans - the former thrive on constant feeds of huge amounts of data through ultra-high bandwidth, the latter on a narrowcast focus on depth and understanding - and it is recognition of these elemental and increasingly divergent differences that enables us to put them together in the best ways.
Tom then talked about tests, and the universal psychological challenge that comes from the natural biases we have in dealing with data and developing understanding. We can be brilliant at interpretation but are also particularly good at looking for things that confirm our existing world view, and this ingrained confirmation bias is compounded when lots of data meets our narrow-beam focus, creating a cognitive recipe for continuous self-deception. The curse of data is that we can more or less prove anything. The scientific method is the way in which we can overcome confirmation bias, since we are inherently trying to disprove, rather than prove, a theory or hypothesis. The challenge here, then, is that a test cannot be a good one unless it can be failed. Every test should have a definition of failure, and that's not always easy to accept in many organisational cultures.
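To make that point concrete, here is a minimal sketch of what pre-registering a definition of failure might look like for a marketing experiment. The metric names, rates and the 2-point lift threshold are all hypothetical illustrations, not anything from the talk itself - the point is simply that the failure condition is written down before the results are seen.

```python
# A minimal, hypothetical sketch: declare the failure condition for a
# test up front, so the result can genuinely disprove the hypothesis.

def evaluate_test(control_rate, variant_rate, min_lift=0.02):
    """Pre-registered criterion: the variant must beat control by at
    least min_lift (2 percentage points by default) or the test fails."""
    lift = variant_rate - control_rate
    return {"lift": round(lift, 4), "passed": lift >= min_lift}

# Variant lifts conversion by only 1 point, below the pre-agreed
# 2-point threshold - the test fails, and that failure is itself
# useful information rather than something to explain away.
result = evaluate_test(control_rate=0.10, variant_rate=0.11)
print(result)
```

The discipline is cultural as much as statistical: because the threshold was agreed in advance, a 'failed' result cannot be quietly reinterpreted as a success after the fact.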
Data and technology are amazing at giving us answers but terrible at giving us questions (Tom gave a lovely example here of how the commercial success of a restaurant is not only down to the food, the atmosphere and setting, but how the design of the menu helps frame the choices for us). If machines are big-data miracles and humans are narrow in bandwidth but big on imagination, then the right questions can help to feed the creativity and imagination that we are so good at. We need more 'wait, but why' to develop better understanding, design better algorithms and capitalise on the real opportunity that exists in bringing the best of both human and machine together. It was an amazingly insightful twenty-minute talk.
Ann Wixley, Exec Creative Director at Wavemaker, built on this by drawing out some key differences in the way that humans and machines think ('an algorithm will never win a Pulitzer but a data scientist might') and how we seldom think of algorithms as things that have been created and designed (machines on the left, humans on the right in the visual below). Machines are fundamentally characterised by logic and pattern recognition but are poor at making lateral connections.
When outputs go wrong it's often a fault of the data set rather than the algorithm (personal example - my daughters' 'polluting' of my perfectly crafted Spotify personalisation with teen pop tunes). People create problems, which is why people are good at solving them, but to do this well we need to spend more time and focus on problem definition and understanding (there was an interesting discussion on the panel afterwards about how short-termism is undermining agencies' ability to do this well). Ann used a Kevin Kelly quote to describe how this can mean that we need to adopt a different, new type of thinking:
'To make humans fly, we had to invent a different type of flying. Through AI, we’re inventing new types of thinking unlike human thinking. This intelligence does not replace human thinking, but augments it'
The challenge comes when we have black boxes that can obscure our understanding, or unsupervised learning that can lead us to unforeseen places, or under-appreciated biases that can skew outputs - she used a wonderful example here of how Volvo highlighted the flaws in data collection and application from crash tests. Women are 17% more likely than men to die in a car crash because the default crash-test data was based on a 70kg, 20-30 year old male, always seated in the driving seat.
Ann went on to use a number of other Creative Data winners from Cannes to demonstrate what can happen when humans and machines are brought together in compelling ways. As an industry, said Ann, we need to get better at fusing the two religions of brand strategy and performance marketing and holding two thoughts in our head at the same time. Adept application of technology gives us access to greater diversity of ideas but also the wherewithal to deliver them at scale.
Rob Estreitinho, Senior Strategist at VCCP Partnership and curator of the weekly Salmon Theory, brought together philosophy and strategy quite brilliantly in a talk structured around three pillars: judgement, speed and cycles. Echoing some themes from Tom and Ann, he described how we need to get better at debate and discussion (fallibilists believe that truth and debate are closely related and that we can never be absolutely certain that what we believe and value is right). We need to be less binary about the impact of new technology. We're all experts in something, but expertise can often lead to the assumption that we already know what the answer is, so we should cultivate more of a beginner's mindset:
'In the beginner’s mind there are many possibilities, but in the expert's there are few.' Shunryu Suzuki
Simply following best practice can restrict thinking and limit results so a beginner's mindset opens us to better learning.
It's easy to assume a handover relationship between judgement and automation, but really it's more of a symbiotic one: judgement informs automation, which informs judgement, which informs more or different automation, and so on. Numbers are objective but how we choose them isn't. In his point about speed, Rob highlighted the difference between speed and velocity - the former is simply being fast, but the latter combines speed with direction.
“Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast gets all our attention, slow has all the power.” Stewart Brand
Pure automation without judgement can be dangerous, and whilst fast data can be seductive it can also limit impact. These tensions between machines and humans will be with us for the long term, but the balance will come from managing those tensions rather than ignoring them. We need more debate, less of a culture of being right, and less tribalism around the impact of technology.
Our final speaker Jon Fisher, Paid Search Associate Director at iProspect, picked up on some of these themes and gave us a real-world application, describing some of the benefits but also trade-offs that can happen when we apply automation and algorithms to improve marketing and advertising performance. He talked about how performance marketing has become far more complex over the past decade but also how automation has effectively simplified the management of that complexity. But this hasn't come without potential downside, mainly in the form of a potential reduction in specific controls and understanding (the idea of setting determined outputs which are then delivered by a black box). Being aware of these trade-offs is key to managing growing complexity well, but also to continuing to understand what's really working and what can lead to better results. He used the example of the British rowing team, who went from last place in the 1996 Olympics to winning Gold in Sydney four years later by ensuring that everything the team did focused on answering one simple question: will it make the boat go faster? This level of focus can help us to navigate the complexities of human/machine by keeping the fundamental questions at the centre. Alongside this we need to recognise the need to find more time to craft the right questions, to remember that perfect solutions don't exist, and to know that ultimately it's about 'being less wrong'.
That point about asking the right questions was a consistent strand throughout each of the talks, which were all different but combined to create a compelling narrative around how we can make smarter decisions on the optimal combination of human and machine capability. It was a truly fascinating event. Group Think compiled a good Twitter thread which also details a number of the key takeouts, and as always Scriberia did an excellent job of visualising the talks - you can see the final visualisation in all its glory here. My thanks to Google for hosting, to all who came on the night, and of course to our wonderful speakers. The next Google Firestarters will be in late November so look out for that.