Last week 300 planners packed into Google HQ in London to talk all things AI, robotics and machine learning. Though the field has been with us for a long time, it has suddenly become a major focus for the industry, not least because of the significant shifts in behaviour, advantage and opportunity it will bring. We wanted to take as broad a perspective as possible with this event, and so we had five exceptional speakers, each with a unique point of view on the theme.
Author and designer Tom Chatfield kicked us off by talking about how much of the recent focus on AI has been around the ‘usurpation narrative of human-machine interactions…a creation is pitted against its creators, aspiring ultimately to supplant them’. Science fiction is full of machines that are out to eliminate or hoodwink us, and the application of AI in game scenarios (Go, Chess) always means that there's a winner (usually the machine) and a loser (the human creator). Yet since the faculties of machines are nothing like ours, the reality is that rather than being in competition with information technology, we are instead 'busily adapting the fabric of our world into something machines can comprehend'. Tom went on to make some fascinating points about how even the smartest AI is far more inflexible than the most intransigent human ('we either do things the way the system understands, or we don’t get to do things at all'), which can potentially lead to what is sometimes known in social science as 'minority rule': the ability of a small proportion to influence a much broader context through intransigence. In this case the minority whose rules apply are those designing the machines and algorithms:
'Even the smartest AI will relentlessly follow its code once set in motion – and this means that, if we are meaningfully to debate the adaptation of a human world into a machine-mediated one, this debate must take place at the design stage.'
By the time it gets to “Computer Says No”, said Tom, it’s too late. The setting of rules increasingly needs to take account not only of productivity and profit but of softer values such as compassion, freedom, opportunity and justice. What happens, for example, if a business uses huge amounts of interaction data to train an AI to deliver improved services, and a user then requests that their data be deleted? The data may be gone, but the system has still learned to identify that individual and model their behaviour based on a unique combination of factors:
'The AI is a black box, immune to reverse-engineering—neither you nor anybody else can tell exactly how it comes up with the wonderful propositions it produces.'
This means that we need to sharpen our thinking around the regulation, translation, and accountability of human-machine relationships, and traditional corporate and regulatory structures (limited by human-related time and attention factors) are not fit for this purpose. Machine speed and scale are required to hold machine learning accountable. We'll need certified, transparent AIs on boards: AIs designed to help us think better about other AIs. A successful collaboration between humans and machines is one in which humans are able to transparently assess a system's incentives and either influence its direction or debate its alteration.
Paul Chong from IBM Watson echoed the responsibility that corporates and institutions have to debate how this technology is shaping our future. The discussion that has until now sat mostly in academia needs to become far broader, especially when the rate of change in the area is speeding up in the way that it is. Paul touched on some of the key accelerating factors in AI (echoing some of the accelerants that Azeem Azhar discusses here, including the data explosion, new forms of collaboration and flows of information), and also some of the key challenges. The latter include natural language processing: the challenge for machines, which are essentially binary, of successfully interpreting the myriad complexities and nuances of human language. So it is critical, he said, that we concentrate on understanding this difference between what machines are good at and that which is uniquely human. Systems that are trained today are typically focused on narrow domains of expertise, but the real potential comes with the extension and democratisation of AI technologies.
Dr Nicola Millard from BT followed Paul, framing her talk within the context of machine learning's role in effective service delivery. An AI veteran, Nicola passionately argued for the importance of working with users in the design of systems that actually make sense to customers ('any system is only useful if people use it'), and for quality input of data ('AI is only as good as the data you give it'). Building on what Paul had talked about, she spoke of how machines are good at filtering, assimilating, aggregating and deciphering at scale, but still lack critical human qualities like empathy, emotion, creativity, the ability to negotiate, to innovate, to care. So whilst we can automate the transactional stuff, we still need human input to design, interpret, feel.
Rushi Bhavsar built on an excellent talk that I saw him give at the recent APG conference, which I've also embedded here:
Rushi focused on the relationship between AI and culture, and how there is too much discussion around 'team human' and 'team machine' when in reality we are augmented already, and machine learning is the only way to deal with the complexity and scope of the data we already have access to. Culture is essentially self-referential, with so many in-built feedback loops that machine learning has a potentially useful role to play in navigating this complexity ('to sample the zeitgeist'). There are already many data sets focused on arts and culture that we can use to train neural networks in intangible qualities like style. This allows us to play with style and to combine elements in new ways (a combinatorial explosion), opening up new opportunities to use AI in the creative process. Amid all the doomsday scenarios, we forget the immense possibilities that are open to us right now, and we forget that the true purpose of technology is to augment human capability.
This thought led nicely on to our final talk from robotics expert Stuart Turner of Robots and Cake. Stuart spoke at last year's Dots conference and built on his amazing talk there, exploring the transformational power of robotics. In 2004 he began losing function in his arms and legs due to cervical spina bifida and eventually became a quadriplegic. But he refused to accept the limitations of his shrinking world, and has since developed a whole series of technologies to augment his capabilities and enable him to lead as 'normal' a life as possible:
Stuart talked about the potential that comes from hacking together different tools and technologies to augment what humans can do, and the possibilities that come from leaving enough 'wiggle room' within technology so that people can adapt it, shape it, and recombine it to address new needs. Stuart has flown drones remotely using micro-movements, has toured the museums of the world through technical interfaces, and has made his own smart home. Technology has transformed his life, but it has also shown how a critical mass of interacting technologies can augment human activity in new and transformational ways. He captured this in his concept of the 'extensible self': how we use technology to extend ourselves, to view or interact with parts of the world that we couldn't otherwise reach except through machines. Designing technologies with enough 'wiggle room' can lead to an explosion of possibility.
It was a wide-ranging and truly thought-provoking event, so my thanks to our amazing speakers, and as always to Google for hosting, and to those who came along on the night. As usual Sciberia did an excellent job of visualising the event talks, which you can see below (and in all its glory here). Our next Firestarters will be in September, so watch this space or sign up to my weekly email for more news on that.