Open the Pod Bay Doors, Designer

Ben Sauer speaking at Front-End London in October, 2016

About this talk

In this talk we will look at the current capabilities of Voice UIs, how the APIs are connecting to apps, what’s next, and how this might change our design process and products.


Transcript


I'm going to talk to you today about voice interfaces. I'm going to start with this question that was bothering me for a few years when Siri came out. It's, "When might a VUI make a GUI redundant?" You can tell I'm really jet lagged. Ben: [00:27] You're going to get an unhinged Ben this evening. I'm going to be slightly incoherent and a bit sweary. Why was this question so important? We're talking about front end. We're trying to relate these two things together. When will certain interactions go away because voice interactions are a better fit than a screen? [00:49] I'm not the only person who's been thinking about this. Dan Saffer, ex Adaptive Path, said, "Future-proof yourself by ensuring that the kind of work you do cannot easily be replicated by an algorithm." Which I think is really interesting. Quick show of hands, who has played with the Amazon Echo? [01:08] Not too many, but you've all seen a video of it or something? The honest truth is I wrote this talk about two years ago when the Amazon Echo was getting ready to be popular. I was ahead of the game because it wasn't in the UK, so nobody had seen it. I'm glad not too many of you have seen it. [01:26] What is VUI like today? Voice interfaces, excuse the phrase VUI, but GUI was once new and awkward to say, and now we say it. What is it really like today? For a lot of people, it feels a bit like this, if we were comparing it to using a computer. To expand on that, I'm going to play you a clip from a film. It is called "Curious Rituals." [01:50] The obsession here is not a future that's really shiny and Microsofty, where you can flick graphs onto your fridge -- like you'd ever do that. It's more like, what's the mundane reality of the future that we're entering? When does it go wrong? You'll see what I mean. [02:04] [beginning of film] Bonnie: [02:04] Call Gerardo. Automated voice: [02:05] Name unknown. Bonnie: [02:06] Call Gerardo. Automated voice: [02:07] Name unknown. Bonnie: [02:08] Call Ger-air-doo.
Automated voice: [02:09] Calling Gerairdoo. [02:10] [phone ringing] Gerardo: [02:10] Hello. Bonnie: [02:10] Hey, Gerardo. Did you call? Gerardo: [02:52] Yeah, I was going for coffee. Bonnie: [02:53] I will be there. Gerardo: [02:55] I'm running a little late, but we'll be there by noon. Bonnie: [02:59] OK. Gerardo: [03:00] Bonnie? Bonnie: [03:00] Oh. Automated voice: [03:01] Call ended. Bonnie: [03:01] Call Ger-air-doo. Automated voice: [03:03] Calling Gerairdoo. [03:04] [phone rings] Gerardo: [03:04] Hello. [03:04] [end of film] Ben: [02:52] I actually highly recommend watching the rest of this. You saw with the gesturing, she was gesturing at somebody across the street, but then canceled the call. The film in its full length is full of these little moments, and it's really an interesting take. [03:04] While I was writing this talk, I started to think about, "Which apps are potentially more efficient using voice than screen?" A funny thing happened to me while I was thinking about that. I don't know if you know this...I've skipped something ahead, sorry. [03:25] Just a couple of definitions. Voice interfaces: humans and machines talking. ASR: turning speech into words; we're getting really, really good at that. This is where things start to get really interesting -- natural language understanding, turning words into meaning. We'll talk about this cusp, this last one. This interesting thing took a while to develop. [03:47] Back to apps. I discovered that you can use Shazam through Siri while I was writing this talk. Somebody told me you can do that. I use Shazam all the time, and I actually didn't know that you could do that through Siri. [04:04] Why is it that I didn't know you could do that through Siri? It's because voice has no affordance. There's no way for you to discover what it can do, unlike a screen-based interface. [04:18] There's this pattern when it came to Siri.
People would try it, and they would discover some cool things. They would very quickly hit the borderlines of like, "It can't do this. It can't do this. It can't recognize my mother's name." Then they would stop experimenting with what it could do, because there's nothing to see. [04:37] It's a bit like a goldfish. There's no clues. Discovery is very hard with a voice interface. For this reason, I compare using a voice interface to dealing with Manuel, because you have this limited shared vocabulary, very poor skills and coordination. [04:56] It doesn't seem to have a very good capacity to learn, although that is somewhat changing. You're never really sure whether the job's going to get done. That's the big, big thing that people experience when they use voice interfaces. [05:07] Here's the most important one. It's not worth thinking about Manuel in this scenario. It's worth thinking about Basil, because Basil turns into an asshole, doesn't he? Manuel is nice, but Basil turns into an asshole because of the misunderstanding. [05:27] Bill Buxton, who's now head of design at Microsoft, once wrote this. It's long-winded, but he was talking about modes of input and he said, "Everything is best for something and worse for something else. The trick is knowing what is what, for what, when, for whom, where, and most importantly, why." [05:45] I get his point. Keyboards and then the mouse came along, and then you realized a keyboard is great for this, but a mouse is rubbish for that and vice versa. The same is true for voice. A lot of people are thinking about this concept of conversational UI, and I wanted to just puncture the bubble a little bit. Because for me, the hype comes a lot from... [06:07] Oh my God. This is what comes from flying in from New York in the morning. The Chinese chat app, WeChat. WeChat. Everything is a conversation in WeChat. I was thinking about the fact that everyone's going, "Oh look, everything's chat-based.
Everything's conversational." [06:27] If you actually go look at WeChat -- which was hard at first because obviously none of us understood Mandarin -- the UI is actually very menu driven. It's not really this thing where you're interacting with Pizza Hut, it's not really a conversation. It's actually menus. There's lots of cool sub menus here. [06:43] The idea of conversational UI taking over, I'm not convinced. Ted Livingston of Kik said, "People are increasingly spending their time in chat apps so we're building experiences inside chat that allow people to do more while they're there." That doesn't mean you're having a conversation. [07:02] On screen and in audio terms, VUIs today, they're mostly transactional. You're not actually having conversations. Bill Buxton also talked about this concept of the long nose. What he means here is that tech sneaks up on us. If you think about something like multi-touch interfaces, that was in 1973. [07:27] There was a system called PLATO IV that could do that. It took a long time until 2007, until that became a consumer application. The mouse, 1965 even, I think. Really not until the early '90s do we see that on millions of desktops. Capacitive multi-touch 1984. 2007, iPhone, and then billions of revenue. [07:54] The question is are we at that point with voice interfaces, for me anyway? Let's talk about where VUI is going. The short-term premise is to simplify complex interactions into directly expressed goals. If you think about some of the interactions that smart phones have taken over. Something simple like going into your kitchen and playing some music. [08:22] The price of a multi-functional touch screen is the volume of the required input because actually if you count the number of steps involved in doing that, with, let's say, a dock and a smart phone, it's hideous. It's really a lot of work. It comes out my pocket. I have to get past the home button and the security. [08:39] Then, I have to find the right app. 
Then, I go and connect to Bluetooth. Now, I'm going to find my music. It takes ages to actually just play some music. So multi-touch screens are great, but they actually mean that it's harder to get to the things you want. [08:52] Compare that to a '90s CD player, a reasonably well designed one. It would be off, and you could hit the play button. That would be it. Music would be playing. OK, yes, you couldn't choose with the same degree, but still. VUI is an efficient shortcut to a lot of these interactions. [09:11] A lot of what I've been thinking about is beyond use cases. "Siri, remind my husband to put the toilet seat down." There's my contextual joke. Or this one, "If the doorbell rings while I'm in the garden, call me." You couldn't really design for this, is my point. [09:34] You couldn't design a product or a service which is just about this. If all these things were interconnected and you had a way of expressing them clearly, you could create these on the fly. If this, then that. [09:47] All the big players are heavily investing in voice right now. I know people in Silicon Valley who won't tell me what they're doing, but I know it's voice-based. There's a hundred-million-dollar start-up fund from Amazon to start building voice into your applications. [10:01] Apple have just opened a new lab for Siri in Cambridge. There's a lot of energy going into this. I know some other stuff that I can't talk about. Point being, the big players are just going, "This is it. This is the future, in some way." They're all working somewhat towards this idea of ubiquitous computing. [10:23] Anyone familiar with this idea? No. Ubicomp. This is really the idea, comes out of California as you might imagine. The future of computing is one where it will disappear into the background. That we don't have to give our attention to it. No more pulling phones out of our pockets, "The pressures, the pressures." [10:46] It's going to be just around us all the time.
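The "if the doorbell rings while I'm in the garden, call me" idea above is essentially a trigger-action rule composed on the fly. A minimal sketch of what that could look like, with all names and the context shape being hypothetical:

```python
# Hypothetical sketch: a spoken goal becomes a small rule linking a
# trigger event, a context condition, and an action.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str                       # event name, e.g. "doorbell"
    condition: Callable[[dict], bool]  # context check, e.g. am I in the garden?
    action: Callable[[], str]          # what to do when both match

def handle_event(event, context, rules):
    """Fire every rule whose trigger and condition match this event."""
    return [r.action() for r in rules
            if r.trigger == event and r.condition(context)]

# "If the doorbell rings while I'm in the garden, call me."
rules = [Rule(trigger="doorbell",
              condition=lambda ctx: ctx.get("location") == "garden",
              action=lambda: "calling your phone")]

print(handle_event("doorbell", {"location": "garden"}, rules))   # fires
print(handle_event("doorbell", {"location": "kitchen"}, rules))  # does not
```

The point of the sketch is that nothing here is a designed product; the rule is assembled from interconnected parts at the moment the user expresses the goal.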
Then, there's this magical goal of having an assistant, which some of you may have seen in this movie. Everyone, a bunch of people seen this? OK. I'm going to play you a clip from this film. I think this gets the idea of Ubicomp really right. [11:05] It's also rare because most Hollywood films don't really respect our profession, really, do they? They make silly UIs, and they put junky shit when there's a Unix terminal up, right? Actually, this film, Spike Jonze has done a lot to really think about how technology works, and how we would build it, how we would design it. [11:26] It's worth watching just for that alone. In this scene, I'm going to show you the protagonist, Theodore, has recently upgraded his voice assistant, and he's at home playing a game. [11:40] [beginning of movie clip] Automated voice: [11:40] Hey, you just got an email from Mark Lumen. [11:44] Game Character: What are you talking about? Theodore: [11:46] Oh, read email. Automated voice: [11:48] OK, I will read email for Theodore. Theodore: [11:49] [laughs] I'm sorry, what's Lumen say? Automated voice: [11:54] Theodore, we missed you last night, buddy. Don't forget it's your goddaughter's birthday on the 29th. Also, Kevin and I had somebody we wanted you to meet. So, we took it upon ourselves to set you up on a date with her next Saturday. She's fun and beautiful, so don't back out. Here's her email. [12:11] Wow! This woman is gorgeous. She went to Harvard. She graduated magna cum laude in computer science, and she was on the Lampoon. That means she's funny and she's brainy. [12:25] [end of movie clip] Ben: [12:25] Just a few things to observe there. One, the piss-takey tone when he spoke to her like a robot. That's actually what happens. We adopt these personas, these voices as we saw earlier. My kids are now doing this to the Amazon Echo in my kitchen. Because his previous assistant, you had to speak to it robotically. 
[12:44] Another thing that is crazy here is that she was reading out his emails to him. If you think about that, when do you do that? That's the most inefficient way of processing an email. She could just have shown it to him if it was really important. [12:58] Then, the final most important thing is multimodal, this dance between the modes. Very, very gracefully portrayed in this film. Let's be realistic here, and I'm pointing towards some really awesome things that may or may not happen. At the moment, I really don't see Deep Learning solving some of this. [13:19] AI for me is a little bit overhyped. I know it will produce a lot of societal change, self-driving cars, and so on. I don't really think that machines are ever going to have anything like common sense. It has to be trained in everything. [13:35] There's lots of stuff that we do and know that is not the result of training, that we have this sense of common sense, like a bunch of people are doing stuff, "Oh, that means this." We weren't trained in it explicitly. I'm not super convinced that we'll ever be able to talk to a machine in the way that the film describes, maybe in our lifetime, maybe not. I don't know. [13:55] Let's look at some promising products that are out now. This has, as you know, just been released in this country, three million plus sold in the US. Jeremy picked one up for me a long time ago in the US, because I was really fascinated, and brought it back for me. Thank you, Jeremy. [14:14] My parents now have one, and they're robotically shouting at it quite a lot. It's funny. When they saw it they were like, "We've got to have one. We've got to have one." It was fun to see. My kids absolutely love it. It's really changed the family dynamic in this space where we spend most of our time as a family, the kitchen. [14:33] Let's have a think about why it's so interesting. You can manage a shopping list, which is interesting. Then it sends it to your phone, which is really cool.
Sorry, I was jumping ahead in my animations. You can ask for any music from Spotify. My boy, Joey, he's five. He's always asking for AC/DC's "Back in Black," and then rocking around the kitchen. [15:02] Cooking timers, shopping lists, radio, podcasts, it's doing a lot for us. Why is it so good? They've really thought about, "What's the perfect context?" The kitchen, hands-free, they've really thought a lot about this. Its noise filtering and recognition is absolutely amazing. I can talk to it from the next room, and it generally will understand me. It's just something magical about it. It's really good at understanding. [15:30] It doesn't try to do too much. It's focused. It's not trying to boil the ocean, and I think that's a good thing. It's interruptible; it's one of the first voice interfaces that is. If you try a lot of voice interfaces, you realize you cannot interrupt most of them. This is the first one that you can, which is a little way along that journey to having an actual conversation. [15:51] The really big sign that this product was successful was that I used to use it for monitoring Minecraft use with my kids. I would limit it by going, "30 minutes" or whatever. Then, my kids started using it on us. If I said, "Joe, I'll come play with you in half an hour," he runs down to the kitchen. "Alexa, set a timer for 30 minutes." I thought, "Oh, my God." [16:15] [laughter] Ben: [16:16] My daughter said this -- I'm going to give you a super proud daddy moment here. She said, "Speaking to Alexa is a bit like a Jedi mind trick, isn't it daddy? Tell her what to do, she repeats it, and then does it." Ah! Ooh, insightful kid. [16:30] [laughter] Ben: [16:30] She could be a user researcher in the future. Let me talk about another promising product. This is called Hound. [16:38] [beginning of app clip] [16:38] Man: OK, Hound.
Show me hotels in Las Vegas that are on the Strip, cost less than $200 a night, and have over a four-star rating, March 1st through March 5th. [16:54] Automated Voice: Showing 10 results with availability, with more than four stars, near Las Vegas Strip, for Tuesday, March 1st, staying for four nights for maximum of 200 US dollars per night. [17:06] [end of app clip] Ben: [17:07] They have some work to do on the tone of that voice in their delivery. [laughs] That's pretty interesting that they can interpret the language in that way and give you a result like that. Of course, they have to stick to these limited scenarios, but it's very interesting that they've got the NLU down to this level. Let's talk about what I like to call the galactic empire of our industry. [17:27] [beginning of app clip] [17:27] Woman: Back on the home screen, Siri is also really great in helping me find something to watch, even when I'm not sure what it is. [17:35] Show me some action movies. This looks good. I really like "Skyfall." How about a Bond film, the James Bond ones? I'm in the mood for a classic, just the ones with Sean Connery. With Siri? [17:56] [end of app clip] [17:56] [applause] Ben: [17:56] I don't know if anybody's used the Hound, I'm sure you've probably figured out by now to what point... [18:02] It is kind of interesting, particularly if you start thinking about the multimodal stuff again, the dancing between button, voice, and screen touch. We're starting to think about how these things work together. Again, the context is really important. [18:15] I've started to think, "What would make it revolutionary? What needs to happen for things to really start to change?" The first thing is the shift in design mindset. Scott Jenson, an ex-Apple designer, once pointed out at one of our conferences that when cars were first...Sorry. [18:44] [laughter] Ben: [18:46] When cars were first designed, they had tillers because people brought the wrong metaphors with them.
Steering wheels did exist in some form, like on boats before that. The first thing they went to is tillers. It took them a while to figure out that actually a steering thing was a better mode of input. [19:07] I know personally that I'm stuck in thinking about layouts and screens and typography. I'm not so much in the mode of thinking about context or sensors and things like that, broadening what the design means. I do think that we might also be in a place where we're not quite...We're in a tiller phase, right? All this new stuff is happening, and we haven't figured out exactly how to use it yet. [19:37] Another thing is, obviously, social acceptance. If you watch the Google ads where people use a voice interface, I think every single one happens in a social context and not alone. That is because they want to normalize the act of talking to a computer around other people. [19:54] This is a still from the Google Home demo video. It's the Amazon Echo's competitor, which is here. It's weird because you see this family getting up and using it. They don't really talk to each other. It's just fucking weird. They all talk through Google Home. I'm done with that. [20:15] Anyway, another interesting thing that I think needs to happen, which I've talked about already briefly, is going beyond use cases. There's this thing called Viv, which is created by the creators of Siri. It's not released yet. [20:29] You probably can't read all this, but what you can see here is that when somebody queries this by voice, their claim is that all this stuff is basically building software on the fly and finding out which APIs to query based on the question. [20:46] It breaks down "Was it raining in Seattle three Thursdays ago?", finds out how it should respond to that query, and does it on the fly. They're claiming they write software on the fly. I'll believe it when I see it, but it's an interesting idea that takes us past one of its key problems.
[21:01] Another key problem that we need to get over is platform independence. I have caught myself picking up my iPhone and addressing Alexa a few times and vice versa, because everybody wants my business. They want to own my whole ecosystem -- Apple, Google, Microsoft. [21:19] I can't, for example, say, "Hey, Alexa, can you call my mom on Skype in the living room?" That's where my big screen is, where I'd like to talk to my mom on a big screen. My Skype is on my Xbox, and these things don't talk to each other. [21:33] Actually, we're screwed if these companies keep just trying to own everything and not actually link them up together. I don't know how to solve this problem, but we need to break out of these silos. [21:44] On that, there's this issue of connective tissue, or the APIs and standards: jumping between modes of input, devices, software being able to talk to each other. I don't know. Nothing is really doing this gracefully in IoT yet. The current ways to connect are pretty poor and immature. [22:05] Connective tissue is coming, the idea of linking software to the voice interfaces or devices. You now have SiriKit, and Google Now on Tap. Sorry, the last time I did this talk was an event which the theme was "Life, the Universe and Everything." It's 42, there you go. [22:32] When I started writing this, I started thinking, "When will I be able to say, 'Hey, Siri, call me an Uber'?" How will I not just be able to perform search queries through my voice interface on my device but actually perform actions? That is now possible, in fact, on the Echo. [22:50] As Benedict Evans said, "When you can ask Amazon Alexa to order you a car, the impossibility of returning to just the plain old web is clear." I think that's why I'm here today. [laughs] It's fun to give you a heads up about some of that stuff. [23:05] Back to apps where I think they might be redundant.
Shazam, if you still use the Shazam app instead of using it through Siri, is anybody here from Shazam? Just before I say this. [laughs] It's a little bit of a needy UI. It doesn't really need to exist anymore if I can access it through the platform and through the voice interface. [23:29] What does that mean? That means no more swiping for the user, no more tapping, no more icon, no more GUI. Potentially, no more brand. Does the brand matter anymore? I don't know that it does. No notifications, not trying to sell me other things and really misunderstanding what my user needs are. [23:49] Anyway, this has happened before. This progression of software becoming hidden has happened before. Because if you think about it, Unix is thousands of tiny bits of software that run in your pocket. There was a time when people who used Unix knew what those pieces of software were. [24:06] I did a bit of sysadmin in a previous life so I know a few of them, but how many of us actually know the processes that run these things? Not many of us. [24:15] You guys will probably be pretty close to knowing some of them because you have to use them to come online now and again. This stuff just disappears. This is a very natural progression. This stuff just disappears into the background. It doesn't need to shout at us anymore as users. [24:29] Steve Jobs understood this very well, because when he tried to buy Dropbox, he said this: "Dropbox is a feature, not a product." He's absolutely right. It's just something that sits in the background. Do we need to know what it is? I think in the future we won't. [24:47] I want to talk about designing VUIs. It'll just take a moment to give you some grounding in the present day. Has anybody played with prototyping voice interfaces here? No? Cool. This'll all be new. [24:59] This is what it's like to write stuff for the Amazon Echo right now. You literally write dozens of versions that say the same sentence.
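The "dozens of versions of the same sentence" step can be sketched as a small generator. The phrasings below are made up, but this is roughly the shape of the sample-utterance list you end up feeding an interaction model:

```python
# Hypothetical sketch: expand alternative phrasings into every sample
# utterance a voice platform would need to match one and the same request.
from itertools import product

def expand(*alternatives):
    """Join every combination of the phrase alternatives into a sentence."""
    return [" ".join(combo) for combo in product(*alternatives)]

utterances = expand(
    ["set", "start", "give me"],
    ["a timer", "a countdown"],
    ["for thirty minutes", "for half an hour"],
)

print(len(utterances))  # 3 * 2 * 2 = 12 variants of one request
print(utterances[0])    # set a timer for thirty minutes
```

Even this tiny grammar produces twelve sentences for a single intent; a real skill's vocabulary multiplies out much further, which is why the writing feels so repetitive.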
It's like the most horrifying and scary design process I could possibly dream of. [25:13] This is because the Echo is incredibly good at understanding which words you said but not particularly good at parsing sentences. It's actually not as smart as it may seem. [25:25] When you write an app for it or you prototype something, you just literally have to feed it lots and lots of variations. That's quite manual. You can get it running fairly quickly. That's kind of scary. I thought, "What, this is my job now? [laughs] I'm a designer. I'm just going to write lots of different versions of the same sentence. That's horrifying." [25:44] There are tools that can do this kind of work now. I suspect this practice, in terms of building, won't be around for very long, because as the platforms get smarter, you won't need to do this. [25:54] Let's talk about the roles that a voice interface can have. The one that's been around the longest is the idea of dictation or secretary. Dragon Dictate, back in the '90s. Everyone was using that. [26:07] Banks were actually doing a lot of work on this to prototype security interfaces -- you are who you say you are. The one that we're using the most at the moment is for gofers, just performing simple short tasks. The one that we keep being sold, which is complete bullshit, is this one, an assistant. Siri and the Echo are not magical assistants. They are gofers. The way they're sold to us oversells it, I think. [26:41] This next thing I'm going to talk about is out there, but it's actually a really common design method amongst teams, the Wizard of Oz method. You pretend to be the machine in order to prototype what you're doing. One of you is the user, one of you is the device. [27:01] The mature way of testing this is to actually have the machine voice sit in another room with a script, and the person who's coming in as a test participant doesn't actually know that it's not a machine.
[27:12] I have no idea why this guy dressed up as a phone in an iPhone [inaudible] but he did. The Wizard of Oz method is actually how the Amazon Echo was created. What Jeff Bezos did over time was challenge the team who were building it, saying, "I want response times faster, faster, faster." [27:29] They had huge headaches from all the late night effort that went into making it respond as quickly as possible. That's how it got so good. [27:38] Wizard of Oz method, if you want to try something really different with your team, try being the thing. It's interesting. I should say, I'm probably going to run some workshops about this around the world and probably some in London. If you think this might be of interest, let me know. [27:58] Voice interfaces, I want to talk about what they're no good at, and you've all experienced this. Large amounts of input. Imagine doing forms this way. Sucks. Really not good at that. Presenting choice. Choice may be going away. Let's be honest. [28:12] Because if you ask Siri, "Where should I eat?", you are not going to process all the answers, because with voice interfaces, unless there's a screen involved and we switch back to multimodal, you're going to be read a list of things. That's a sucky way to process the information. [28:27] I suspect voice interfaces will just be making choices for us. In fact, that is Amazon's long game. At the moment, I'm not sure if you can use Prime here. Certainly, you couldn't when I had one in the kitchen. The game of Prime and the Echo is so you can just stand there and ask for things. [28:42] Can you sell me some AA batteries? Same-day delivery, batteries show up. That's pretty cool, although quite dystopian because everything goes to firms anyway. Another thing voice interfaces are no good at, the natural way we talk. They're not really achieving conversational interface yet. Although, Echo, as I said, does a little bit better at this. [29:07] Another thing is fuzzy tasks.
Imagine going to a shopping center or to a market and knowing that you need to get a gift for somebody, but not really being able to say what it is. You'll just know it when you see it. Voice interfaces are really bad at this. [29:25] Abby Jones, who's a voice designer at Google, who I might be doing some workshops with, she carried a voice recorder with her and issued commands to the world just as an experiment. She said, "It made me realize how hard it was to say what I wanted in the world." There's a real cognitive load involved in saying what we want. [29:46] Security and privacy, you all know that one, and noisy environments. The kid thing is interesting, because as soon as there's three kids in my kitchen, it just basically fails hard. Of course, they all want to play with it when there's three kids. Tough. [30:08] I think I'm going to skip that one. [laughs] You all know Postel's law from open source? There's a rule involved, I guess. Riley talked about this a lot. That's one of the philosophies behind open source. Be conservative in what you send, be liberal in what you accept. The same is true for voice interfaces. [30:32] When you design these things, you tend to find that they've got to be amazing at interpreting all kinds of things, but actually the voice interface is better if it says less. It follows Postel's law. So a few quick opportunities. I know you weren't expecting me to talk about all that stuff. [30:50] The first one is platform integration. If you have an app where it's providing search for something, you can now start to put the hooks into Android and iOS to actually have it searchable, so that when somebody searches for a restaurant or a recipe or food or whatever, yours will come up in the search result. If you have an app, I would suggest you do that. [31:16] Another thing I've been thinking about is search results in apps, so anyone from ASOS, here? No, OK.
Imagine the amount of effort you need to put into using an ASOS app to, let's say, find a red dress. You open up the [inaudible] nav. You spin around and figure out what to use, you tap it and then you need to find something else. Then, you put the keyword in. If you actually count the number of steps, it's a lot of work. [31:45] Imagine if there was a little mic button here somewhere and you could just go, "Show me red dresses in size 10." Then, you'd be done, a lot less work for the user. You'd have to let them know that the app does that. I think there's a case. [31:59] SoundHound, that thing that I showed you earlier, they have an API now. You can start using Hound to build a voice interface into your app. Simple gofer tasks, as I mentioned. I don't think anybody uses Pennies. That's how I track expenses, but this is an example of a task where you think as a user you're doing something very short and quick. You should start thinking about, would this work in voice? [32:20] I guess one of the big things is figuring out context. Do you work on something where voice fits the context of what you're doing? Think about working in an office like this, like we do. [32:37] Voice interfaces are completely inappropriate there, but if they become really mature and we have voice assistants in the future, then I might see a case for creating these sound barriers around our desks. [32:49] We're already feeling the pain of open plan offices, but imagine nightclubs now, they can create these zones of silence. If those mics surrounded us in the offices of the future, we could have discreet conversations. I think that would be useful now with conference calling anyway. [33:08] Yes, you can also play with the Amazon Echo yourself. You can get yourself set up really quickly with a Raspberry Pi. There's a set image; you just stick it on the SD card. You can use Amazon's Lambda and build it all in Node. I had a bit of playing around with it, but it's painfully slow.
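To give a feel for the Lambda end of an Echo skill: the platform posts a JSON request naming the matched intent and its slots, and your function returns the speech to say back. This is a hedged, simplified sketch of that request/response shape (shown in Python rather than Node; "PlayMusicIntent" and the "Artist" slot are made-up names):

```python
# Hypothetical sketch of a Lambda handler for a custom voice skill:
# dispatch on the intent name, pull slot values, return speech text.
def lambda_handler(event, context=None):
    intent = event["request"]["intent"]["name"]
    if intent == "PlayMusicIntent":
        artist = event["request"]["intent"]["slots"]["Artist"]["value"]
        speech = f"Playing {artist}"
    else:
        # Postel's law for voice: accept broadly, respond tersely.
        speech = "Sorry, I can't do that yet"
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText",
                                          "text": speech}}}

# The platform has already done the ASR and intent matching; the handler
# only ever sees structured data like this, never audio.
request = {"request": {"intent": {"name": "PlayMusicIntent",
                                  "slots": {"Artist": {"value": "AC/DC"}}}}}
print(lambda_handler(request)["response"]["outputSpeech"]["text"])  # Playing AC/DC
```

Notice that all the hard "dozens of sentence variations" work lives in the platform's interaction model; the code itself is a plain dispatcher.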
I would imagine it goes to crap in a minute. [33:29] You can also build prototypes with this, TinCan AI. It cuts out a lot of the building and the playing around to get to some of your designs, if you're a designer messing around in this. That's the bulk of what I have to say to you today, but I'm just going to summarize. [33:45] I think voice interfaces are going to render many screen-based interactions kind of redundant. You need to keep an eye on that, but you can't replace everything. Multimodal is probably the future, as screens and the other modes you can bring in are still ideal for some things. [34:02] In that case, start thinking about and designing for multimodal, and beyond use cases. How would this integrate with something that's not just solving that one problem? How can we link these things together so you have if-this-then-that on the fly? That is what I'm thinking about. [34:21] Ursula Franklin, about 20 or 30 years ago, did a series of lectures called "The Real World of Technology." She was surveying technology throughout history, and she -- I'm paraphrasing here -- said technology is not things, it is practice. [34:37] Her view is that with technology, the things don't matter. It's how they change us. What are the ways that systems change, that people change, that social interactions change? Those are the things. She thinks that a lot of technology has enslaved people, historically. She's looked at this across thousands of years. [34:58] For me, what I think about a lot is outcomes, not outputs. I have to remind lots of the people I work with, particularly clients who are fetishizing apps -- "Build me an app." Well, what for? What do you want to change in the world? This is a mantra we talk about at Clearleft a lot. Outcomes, not outputs. [35:16] I don't want to be Basil Fawlty. That's the reality. In fact, lots of people, if you go look at the media, are worried about this with Amazon's Echo, that the kids are turning into rude little fuckers.
"Give me this, give me that. Turn the radio on." I can sort of see it in my kids. I'm going to start training them to say please and thank you soon -- to the robots. [35:36] I mean, there's other things we need to consider here. At the moment, all these platforms are basically training us to think that women are servants. I don't like this. This is not cool, but that is the choice that all these platforms...with the possible exception being Apple. That isn't good. [35:58] I'm going to leave you with a quote by Mark Weiser, the father of ubiquitous computing. He really had the idea. He said, "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." [36:13] I can't think of a more fitting way to talk about voice interaction. Thank you.