For a feature-length piece in York University's new research magazine, Brainstorm, Megan Mueller from York's Research Communications Team and I sat down last week to talk about the consumption of AI and the challenges of designing captivating AI experiences. The next issue of Brainstorm will cover perspectives on the topic from computer science, robotics and health to philosophy and marketing, and I really look forward to learning from my York colleagues. In the meantime, I thought I'd share some raw bits from our own conversation.
AI-driven personal devices, or ‘assistants,’ including Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana are successful in the marketplace. What makes consumers buy, or not buy, into artificial intelligence solutions?
In one word, stories. For me as a marketing expert specializing in technology experiences, the success of AI rises and falls with clever storytelling. Does Amazon Alexa give me an opportunity to question and re-discover what it means to be human? If so, I want to try it.
Just think about two very simple types of AI stories that we consume every day. Let's call them magical tales and lab tales. Magical tales highlight a technology’s magical properties, the idea that this or that device acts and responds to us in human-like terms, as if by magic. These tales are beautifully illustrated at the moment in IBM’s Watson commercials, which raise the prospect that Watson can actually teach a human genius like Bob Dylan something about romantic songwriting, for instance.
Lab tales, on the other hand, highlight human ingenuity and educational aspects. In IBM’s case, we can see this genre in the countless documentary-like heroic engineering and programming stories told around Watson. This second genre of storytelling is as important as the first because it reassures the human audience that all of this magical stuff is still a straightforward human creation and that, despite all of these amazing abilities, humans are still in charge. For this reason, companies must carefully balance their storytelling across these two genres: enough magic to captivate, enough lab to reassure.
What are some examples of the negative implications of machines mimicking humans too closely? And what marketing lessons can be learned in these cases?
Understandably, a lot of managerial attention goes into designing an AI object so that consumers don't fear how it looks or what it does. The Amazon Alexa team, for instance, knows very well that abstract light movements help make Alexa's acts of speaking and listening human enough but not too human. This draws on Freud's notion of the uncanny and on the observation that as an object's resemblance to a human being increases, our emotional affinity for it does not rise steadily but dips sharply at near-human likeness, the so-called uncanny valley. But that's not all there is to it, I think.
One of the biggest mistakes marketers can make is thinking that the product IS the experience. But it’s not. In order to be successful, AI managers also need to be good sociologists who understand that power never resides in an object. Power, and many companies get this wrong, is distributed across networks of people, things, and institutions. So it's one thing to argue that we have an incredibly powerful AI technology, and another to design a society of people and institutions that agree. For this reason, IBM is spending enormous resources on naturalizing cognitive computing and cognitive reasoning. That's an act of redistributing power across social and technological networks, in a manner intended to make Watson indispensable to multiple social and business interests and agendas.
What’s the tipping point between power-enhancing and power-stealing?
There is not one tipping point. Instead, the tipping point is slightly different for each category of AI and each cultural context – and it is constantly in motion as we debate the benefits and risks of these technologies. What is fascinating to observe as a researcher is that the negotiation of where this tipping point lies is happening in multiple domains right now – from self-driving cars to the question of whether I should automatically grant home access to the mailman through Amazon’s smart lock initiative.
What is important to remember is that marketers have used storytelling to push this tipping point further and further. I would never have dreamed that Apple could one day be in possession of my health data through the Apple Watch, or that Amazon would know at what time I typically go to bed and what music works best to make me fall asleep.
What are the key marketing-related questions when it comes to AI?
On the research side, we see a lot of activity at the moment in the area of relationship styles, that is, work that asks what kind of relationships consumers can forge with AI and how those relationships really differ from the relationships we forge with brands, for instance. Two colleagues of mine, Donna Hoffman and Tom Novak, have recently published some groundbreaking research in this domain. Another important question is this: how do we as managers, consumers and researchers really think about AI? My colleague Eileen Fischer and I are currently working on the mythology behind AI and the Internet of Things, how it can inspire consumers and managers but also constrain their perspective.
On the practitioner side, the companies I work with are interested in integration but also in naturalization. How can we make AI as natural and as invisible as possible? This includes asking not only what this or that device can do, as software engineers like to do, but how and why this or that capability matters to my identity, my family, my home, and the societies we live in. Companies and also policy makers are asking these how and why questions, and many come to York for answers.
What happens to real relationships with real people if we become too attached to this software? Will real people be unable to measure up to our ‘perfect’ AI ‘assistants,’ and why should the manufacturers of these devices care about this?
In my research lab, the Big Design Lab, I work with both large and small technology companies on how to prevent AI technology from perpetuating problematic physical or socio-economic ideals and inequalities. For instance, how can we keep fitness trackers from fostering a normalized physique? Or how would we design a sleep app that sustains the idea of sleep as a domain of recovery and relaxation outside the market, rather than as a domain of competition and performance? My research helps companies minimize the risk that AI creates even more pressures and uncertainties for consumers.
What is York’s unique contribution to the AI discussion?
The main contribution that I see is that York University researchers like myself study AI and other transformative technologies not in a vacuum but in their specific economic and socio-cultural contexts – at work, on dates, in education, at home, and in the family. Such research is generally harder to conduct because it involves dealing with a whole lot of complexity, but it yields more interesting results.
The second benefit of the research that my colleagues and I do at York is that we study the influence of these technologies systematically over time. This matters greatly because AI constantly changes how we work, eat, sleep, solve problems, and ultimately, who we are. Such contextualized and longitudinal insight is extremely hard to find today and in high demand among companies, policy makers, and ultimately, consumers.