Blink and You'll Miss It - Building a Progressive Web App With HTTP2

Dean Hume speaking at Bristol JS in July, 2017

About this talk

Progressive web apps are a total game changer for the web, enabling developers to build lightning-fast, engaging experiences. Drawing on firsthand experience at Settled, Dean shares a step-by-step guide to implementing this technology in your own projects.


Transcript


- Hello. Thanks for coming out tonight on this Wednesday evening. We're gonna be talking tonight about progressive web apps and HTTP/2. So who's this guy up here right now? Well, my name's Dean Hume. I'm a Google Developer Expert, so I specialise in web performance, and I also work for a company out in East London called Settled. And we help people buy and sell homes. But now you're probably thinking to yourself, "This guy, he's just an estate agent, what's he doing? He's just one of the fat cats." But actually, that's not what we do. Instead, everything we do is completely online and it's all transparent, so from the start of buying your home to selling your home it's all there, all in one place for you. And the whole idea behind the name is that Settled helps you get settled in your new home.

So we had a part of our site that customers regularly access, and we started to look at our data and we found that around 56% of them came to us via mobile phones. So this is using 3G and 4G and they're on the go. Which meant, as you know, 3G and 4G, you drop down to 2G sometimes, can be flaky at the best of times, signals drop. So we wanted to build an experience that was accessible on any device, no matter if it was old school or brand new, that was super fast, 'cause no one likes waiting. We also wanted something that worked with little or no internet connection, as well as using one codebase. We're a small startup, we don't have unlimited resources, as I'm sure some of you in the audience will know with your companies, and we're gonna go into that a little bit in a sec. We also wanted something that has an app-like feel, so it feels fast and snappy and the user can get to what they need quickly. And this is what led us to progressive web apps. And they're all of these things. So they work offline, they're super fast, and they help you engage with the user. And this talk is really just the story of our journey and the things that we learnt along the way, and hopefully I can share this stuff with you and you'll take it away for yourselves.

So this talk is broken down into four key sections. I'm gonna start off with the basics of progressive web apps. I know on the web right now it's kind of the new hotness and it's buzzing about, but some of you may have never used it before, so I'm gonna go right back to the basics and just talk about it. I'll also talk about how we used this technology to make our web app as fast as possible; reliable, so it works when connections are flaky and when connections drop; and also how to improve the look and feel, so that app-like feel.

So let's take it right back to the basics of progressive web apps. I really like this quote by Alex Russell from Google. He says, "These apps," these web apps, "aren't packaged and deployed through stores, they're just websites that took all the right vitamins." And I think that last bit there, the key thing, is all the right vitamins. The idea behind that is, imagine it's kind of like Super Mario levelling up, or putting your website on steroids; that's what progressive web apps give you. And they aren't some new thing you have to go out there and learn, they're built using HTML, CSS, and JavaScript; all the things we know and love.
So you have your website already, there's a thing called a service worker, which we're gonna go into in some detail, something called a manifest file which helps describe your app, again we'll look at this shortly, and it needs to run over HTTPS, so your site needs to be secure. Just waiting to see if you got that shot.

So again, I like the quotes. This is from Jeff Posnick at Google. He says, "Think of your web app requests as planes taking off. Service Worker is the air traffic controller that routes these requests." So we're jumping onto Service Worker, and it's really a key part of what makes a progressive web app. Now, when you make a request for something, let's say you're requesting a CSS file or a JavaScript file, as you're making the request the Service Worker is able to intercept it and say, "Hey, I want you to go over there," or, "I want you to go over there," or, "I want to cancel the request." It puts you in control.

So how does it work? Behind the scenes, the user types the URL they want to go to into their address bar and they navigate. But Service Workers sit on a separate thread, they're just a simple JavaScript file, and they're able to intercept anything coming or going. So you can inspect the HTTP requests and responses. You can inspect the headers, you can respond with different things, you can even decide to not respond from the server at all and just respond directly from the client, which is pretty cool. And Service Workers are the key to unlocking the power inside your browser. They're there already. Most modern browsers already support this and you could start using it if you want to. Think of them as all the vitamins. They're there ready to use. I also really like the way they've been described as the perfect progressive enhancement. Because if the user's browser supports them, they get these features, they get the vitamins. If it doesn't, it just falls back. So it's a win-win for your users.

Now, it has to be HTTPS only. Now you're probably like, "Ugh." But the reason for that is because the Service Worker is so powerful; imagine what would happen if it got into the hands of dodgy people. They'd be able to intercept these requests, put them somewhere else, intercept your banking requests; it'd be pretty dangerous. And that's why they need to be HTTPS only. But if you have a blog and you just want to start goofing around with this stuff, or something on the side, there's free SSL right now. It's open and available. CloudFlare, I really like CloudFlare, it's free. You can sign up, it's kind of like a service, it's probably one of the easiest DNS services I've ever used. You literally point it at your website, change your DNS entry and go HTTPS. Done in like 10 minutes. It's really cool. They also have some other features. I don't work for them, I'm not trying to sell them. GitHub Pages also, I think they have HTTPS by default now when you do static sites, and Let's Encrypt have free certificates that you can just download and get started with.

But what about support? So, most modern browsers today support Service Workers. But you'll notice, first of all let's get onto Edge: Microsoft Edge support is currently in development and it's coming this autumn, so that's gonna get rolled out, which is pretty exciting. However, you may notice there's a little orange amber there next to Safari.
Well, Safari, it's on their long-term road map, they've indicated interest, but remember that we're building for everyone on the web, we're not trying to just build for people with the new stuff. The thing about this is they're progressive enhancement, so it just falls back. If they don't support it, well, they still get a good experience. If they do, they get an even better experience. You know, at this point I normally lose a lot of people 'cause they're like, "Well, we develop for Safari a lot," and that is important, but when we looked at our users we discovered that 75% of them coming to our site, we're quite lucky, came on newer browsers. And all of these browsers supported Service Worker features, which is pretty cool. So it was kind of a no-brainer for us. But have a look at your data. And believe it or not, you're probably already using them without even knowing. So, Google's Inbox uses it under the hood, sites like Twitter's mobile website, there's also Forbes. Loads of companies are starting to take this on and it's a good step forward for the web.

So, we've kind of covered the basics, but I'm gonna jump into how we used these different features to make our web app as fast as possible. There's a few areas I'm gonna talk about: starting off with caching, HTTP/2, and some techniques. Now, when I made this slide first, these were supposed to be like speed coming off Mr. Bolt at the back, but I actually realised it's just like one big fart coming out. Whatever, I just kind of left it as it is. For the lols.

So let's have a look at caching. Remember, with Service Workers, and you're gonna see this diagram a lot tonight, HTTP requests are coming and going and you're able to intercept them. And you're able to say, "Look, I already have this thing cached, so I'm gonna just respond straight away from what's already on my device." And we actually do this at Settled. So when the user's logged in and accesses their dashboard, the part of the site we call the hub, and they're moving around, we cache the resources that they need as they go to the rest of the site. And this means that as they navigate through the site they have what they need already. Now, this makes their experience a whole lot faster as they're coming or going. Every single page just gets faster and faster.

But what about Service Workers? How do they actually work and what happens under the hood? So I really like to think of Service Workers as kind of like this traffic light system that they go through when they're getting activated on a page. First of all you've got your red light, and that's when the Service Worker is being registered on the page. So you include this little bit of JavaScript at the bottom and tell it to start using its Service Worker. What it does is it downloads this JavaScript, it parses it, and if it says, "Yep, it's all good, this looks good to go," it then starts to install it. Now, when the installation happens, behind the scenes it executes the code inside your Service Worker and it says, "Hooray, good to go, everything works, I've cached what I needed to," or whatever you want to tap into at this point. And then, finally, it's activated. This means that the Service Worker is really running on your page and it's controlling the requests that are coming or going.

Let's have a look at a really basic example. This is kind of just the 101s, taking it right back. At the top of the page I've declared a variable called cacheName and I've given it pageCache.
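Roughly, the sort of code being walked through here looks like this. It's a minimal sketch rather than the actual example from the talk, and the file names are placeholders:

    // In the page, near the bottom: register the Service Worker if the browser supports it.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }

    // sw.js
    var cacheName = 'pageCache';

    self.addEventListener('install', function (event) {
      // Open the cache and add a couple of files up front.
      event.waitUntil(
        caches.open(cacheName).then(function (cache) {
          return cache.addAll(['/css/site.css', '/js/site.js']);
        })
      );
    });

    self.addEventListener('fetch', function (event) {
      // If the request matches something in the cache, respond from the cache;
      // otherwise carry on to the network as normal.
      event.respondWith(
        caches.match(event.request).then(function (response) {
          return response || fetch(event.request);
        })
      );
    });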
This is just a string, you could call it whatever you wanted to. Next what I'm gonna do is tap into the install event. So remember that traffic light? I'm tapping into it. Everything about Service Workers is event based, which is pretty cool. To be honest, I'm actually not very good at JavaScript, I kind of just get by. And there's a lot of smart people that I work with, but I found getting started with Service Workers is easy. It's literally just: I want this event, tap into it. I want that, I can do something with it. So let's go back to this. We're listening to the install event and we're saying, when this is getting run for the first time on the page, what I want to do is open a cache. You'll see this is JavaScript promise-based. And when the cache is open I want to install, or add to the cache, two files. I'm gonna cache a CSS file and cache a JavaScript file. Now I know this is an array of two things and I know this is just an example, but when this page loads again those files will be there ready to retrieve, so it doesn't have to go over the network. And then what we do is we tap into the fetch event. And the fetch event fires every time something's getting fetched or retrieved on your page. What happens is it simply goes, "Do I have this in cache? Does this match the current request that I'm looking for?" If it does, it will return that, else it will go off and just complete as normal.

Now, that code was kind of like boilerplate code and you might find yourself changing it, and it's quite cool. But one of my favourites, I guess if you want to call it a library or set of modules, is something called Workbox, Workbox JS. It does all this for you and takes the pain out of it. We're gonna be looking at an example a little bit later, but I thought I'd recommend checking this out if you want to get started with it.

So what did this mean for our users? At this point all we've done is added caching. Now, this isn't an exact science, but we were getting around four and a bit seconds for our page load. That was pretty good. This is after we'd rebuilt it and improved it, so this is what you'd get without Service Workers. Then we started to improve it and we got around 2.25 seconds. Big jump, and this is snappy. Once this page was cached we were getting, I didn't even put it on there 'cause normally most people don't believe me, but we were getting sub-500 millisecond response times. That's like the blink of an eye. The page is navigating, it doesn't even feel like it's moved, it's that snappy. Our page weight, originally we were just under a meg per page. Let's see what's happening, there we are. After this update we were around 50K, and then once the Service Workers kicked in, you were literally just getting 1K over the wire. That means your users are practically downloading nothing every time they're moving between pages of your site, which is a great step forward.

What about HTTP/2? So there's a lot of talk about HTTP/2, but how many people are actually actively using this right now? One of the things I really like about HTTP/2, there's a load of great features that come with it, but the one that I think is my favourite is this concept of multiplexing. With HTTP/1.1 there's this concept that if you want to make a request for three different files, you have to make three separate TCP connections. So this opens it up over the wire and it goes, "Can I have this? Can I have this? Can I have this?" And it means it's not really efficient, it's not as efficient as it should be.
But with HTTP/2 there's this concept called multiplexing and it's really cool. If I make a request for those three things, the same thing, what it'll do is open one TCP connection and kind of give it to me all at once. That's a big improvement. Now, HTTP/2 runs over HTTPS, it's kind of like a requirement, but if you've got your web app up and running and you're already thinking about Service Workers, maybe this is a great step forward, using HTTPS. And, believe it or not, it's a lot easier than it sounds to just get this up and running.

Now, this is a fantastic package. I don't know if any of you are Node developers, and there's other packages out there that are pretty similar, but there's this node package called SPDY, S-P-D-Y; you get it up and running and we're gonna have a look at a code sample shortly. Or if you're one of the cool kids you might use Yarn and add the package to your project. So with HTTP/2, this is a bit of Node. Again, you can use a different example depending on your stack, but we require it at the top of the page, we're requiring it, we're using Express. And then, again this is a sample, but we're making a request for home. At the bottom of the page all we're doing is using that spdy.createServer, providing it with two keys. And you know we could get the keys off um-- Not GitHub. Let's Encrypt, there you are, thank you very much. And voila, you are running HTTP/2 over the wire. It's that easy.

Now, when you turn it on you'll start to notice. So this is in the developer tools, spun it up, seeing what's happening. You'll notice, first of all, there's a protocol column over there, and the protocol is H2 coming down the wire. That's HTTP/2, and it also means that our waterfall chart looks like this long straight line. Remember, that TCP connection is just saying, "Here, have all these things in one go," instead of staggering it slowly. Now, the great thing about this package, if you're interested in looking into it, is that it's also very configurable. So you can say to yourself, "Look, I want to provide an array of different protocols." There's HTTP/1, maybe you don't even want to provide that and jump straight to HTTP/2; you can narrow that down by just passing an array. There's also something called a connection window, and I hadn't heard of this until I started messing around with it a bit, but the idea behind this is that with H2, that TCP connection, you can open it up a lot and you can say, "Look, I want to send down a huge chunk," or, "I only want to send down a little bit." But remember that the clients on the other side can sometimes get overwhelmed with the amount of information that's coming through. And there's something called flow control which is built into browsers. And if you wanted to send bigger chunks down the wire, this is where you could start looking at it and configuring it. To be honest, we used it literally out of the box. That code sample is not actually far off from how we used it. You literally just drop it in and it's ready to go.

Now, there's a few other techniques that we implemented to make our site as fast as possible. We used no JavaScript frameworks. And I realise that isn't really the thing you should say at Bristol JS, I know, right? But, but hang on, let me back this up a little bit. So first of all, we knew that a lot of people come in over the wire with 3G and 4G connections, so we wanted to keep it as lean as possible. The things that we were sending down the wire: just the minimal JavaScript required.
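Going back to that Node sample for a moment, a rough sketch of the sort of spdy-and-Express setup being described might look like this. The certificate paths, route, and port are placeholders, and the optional spdy settings shown are the kind of thing the protocols array and connection window tweaks would touch:

    // server.js: a sketch of HTTP/2 with the spdy package and Express.
    const express = require('express');
    const spdy = require('spdy');
    const fs = require('fs');

    const app = express();

    app.get('/home', function (req, res) {
      res.send('Hello over HTTP/2');
    });

    const options = {
      // Certificate and key, e.g. from Let's Encrypt (paths are placeholders).
      key: fs.readFileSync('./certs/server.key'),
      cert: fs.readFileSync('./certs/server.crt'),
      // Optional: narrow down the protocols, or tweak the connection window size.
      spdy: {
        protocols: ['h2', 'http/1.1'],
        connection: {
          windowSize: 1024 * 1024
        }
      }
    };

    spdy.createServer(options, app).listen(3000);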
Now, going without frameworks isn't to say we don't like JavaScript frameworks, and it isn't to say we won't consider using one in the future, but because of our use case we wrote things as vanilla as possible. We have a few libraries in there, but in general we don't use anything like React, we don't use anything like Angular, we keep it lean. Which means we kind of went a bit old school. It's not a single page app. We render the page completely server-side, so we're doing the whole GET and POST thing as people are moving through the pages. One of the things we have noticed is that, because we are using Service Worker caching, the page is blink-of-an-eye fast. It almost does feel a bit like a single page app because it's loading so quickly. So maybe something for you to consider in a project.

We also experimented with H2 Push, or HTTP/2 Push. Have any of you played with it or heard about it before? HTTP/2 Push? Okay, it's pretty cool, it's a cool concept. So the idea behind it is that if you open up your network tab in Chrome, or whatever your browser of choice, and you look at the network requests: we've enabled HTTP/2, down the wire we've got our CSS and our JavaScript being sent, but what actually happens is when you type that address into a URL bar, the browser's gonna go get that page, read the HTML, and scan through it and go, "Oh yeah, I need that file, I need that JavaScript, I need that CSS," go and get these things for me. So it kind of does it in a staggered step. So you'll see it downloads and then it gets what it needs. But with H2 Push the idea is a little bit different. What happens is you almost override the default browser behaviour and you tell it, "Look, we know better than you, so what I'm gonna do is when I give you this HTML page, at the same time I'm gonna send you all the assets that you need." So there is no jaggedy staggered delay, it's all at once: "There you go, you can have it." You may notice it looks a little bit like this. Our initiator is now pushing something down the wire and the asset's already downloaded. You'll see the pip almost happens straight in line. Pretty cool. And there's this whole concept that HTTP/2 and Service Workers are almost like this perfect match, because H2, if you're using Push, gives you that first load as quick as possible. Bam, you've got it on the screen. But then when Service Workers kick in you've got things cached, so you've got your cache returning. So it's almost like end-to-end seamlessly fast and amazing, supposedly.

But there's a bit of a caveat with H2 Push. So we started experimenting with it and it's actually really hard. We didn't end up using it, we just started playing around with it a bit. And subsequently, not long after, Jake Archibald from Google wrote this really good article that kind of sums it up. It was a little bit of configuration for us and there wasn't much gain in it; we weren't actually noticing really improved page load times. The difference was almost insignificant and there was more configuration involved. This isn't to say you shouldn't use it, but this is just kind of the experience that we had with it. If you'd like to learn more about it and just mess around, there's a fully working sample. It's using that SPDY package that I was talking about earlier, and it takes you through the different steps of how it's getting pushed down the wire. So there's an example if you wanted to have a play.

We also used a technique called prefetch.
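As a preview, it's literally one tag in the head of the page. The URL here is just a placeholder:

    <link rel="prefetch" href="/walkthrough/step-4">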
Prefetch is pretty cool. When someone lands on our site and signs up for the first time, what we do is take them through a walkthrough: we introduce them, tell them a little bit about ourselves, tell them what we do. It's a bit of a jarring experience if they just come straight in and don't actually know why they're here. And as they're going through this, we kind of know that once they finish the walkthrough they want to get to where they really want to go. So they've gone next, next, next and they want to land on the starting page of the dashboard. So we used something called prefetch. And the idea behind this is, it's actually really easy to use. You just include this link in the head of your page and you say, "I want you to prefetch another page." In this case we said, when you hit the third page in our walkthrough, go and fetch the fourth page, which is the landing page. We looked at our data and we knew that for almost 90% of people going to that third page, it's very likely they're gonna go to the fourth page. So adding this in just sped things up massively. What happens behind the scenes is, when the browser lands on that page, it goes, "Cool, I wanted to render this page, but behind the scenes I'm actually gonna go and fetch this other page at the same time." So that when you navigate to it, it's there already. It's kind of done, the browser's already done the hard work and it knows where to go. There is a slight caveat with this. It's no real caveat, it's more about being a responsible developer, I guess. You could put this on every single page, but it's a bit naughty, because what'll happen is when the user visits your site they'll start to download pages they don't necessarily need, and they might actually navigate away, they might just bounce at that point. So be a little careful about where you use this, but it's good stuff.

We also use something called Brotli. You may have heard of Gzip compression; this is when something's being sent over the wire, and it's a way of kind of taking your files and squishing them right up, taking out the white space and compressing them, kind of like a zip file. But Brotli is a bit of a revisit of Gzip and it makes things even more compressed. We noticed that we got around a 10% improvement in our file sizes just by turning this on. And you're probably wondering why this pastry is over here. Brotli was actually named after a Swiss pastry. I don't know. It matched the slide so I put it in there. Again, you can turn this on, it's really easy to do. You can simply add a package; if you're running H2 already, this can be added in. So free gains, really.

We also tested using a tool called Lighthouse. I don't know how many of you have heard of this before, but Lighthouse is a tool that lets you audit your progressive web app. There's a few different ways you can get to it: in Chrome, in the developer tools, if you head over to the Audits tab you can actually run one right now against your website. Think of it as Google PageSpeed, but on steroids. What it does is it kind of goes through your site, runs a number of tests and tells you the areas that you can improve. It doesn't just do progressive web apps, it does performance, it also does accessibility, which is really cool. It tells you if your links aren't quite right and areas you can improve. And your score gets bumped up and up as you start to plug these holes.
You can also get to it via a Chrome extension, which will run it inside your webpage and just audit it. Or, I think there's a node package that you can also use. So let's say, for example, you wanted to run it in a suite of tests and fail if you went below a certain score; this is quite good for regression testing, especially if you're building progressive web apps. This is a shortened URL. I recommend heading over to this to just check it out. And I'll be sharing these slides afterwards so you'll be able to get to these links.

So what did this mean for us? At this point it meant our users consumed 15 times less data. That was a big win for us. Once they started navigating, once the Service Worker kicked in and the cache was working, this was a big step forward. It also meant that the pages across the site loaded three times faster. Made us happy, we liked this. And Mr. Bolt likes it too.

So we talked about caching, HTTP/2, and some of the techniques that we used. But next up, one of the things that I think is important to talk about is reliability, especially when you're building for the mobile web. Now, I don't know how many of you have ever been on a train and you go under a tunnel and your signal just drops and you're trying to do something important; it's pretty painful. Well, building reliable apps doesn't actually have to be that hard. The web has been tough previously, but using the features of progressive web apps you can build offline applications. Remember, with Service Workers, as you're making an HTTP request, you can decide not to go to the server. So you can serve the resources that are already cached on the client.

Again, this is just a simple example, but what I'm gonna do is tap into the fetch event. Remember, it's all about the events, so I'm listening for this event and what I want to do is check, I'm inspecting the request that's coming and going: is someone trying to make a GET request? Are they trying to navigate somewhere? And are they trying to ask for, I'm inspecting the header there, are they trying to ask for an HTML page? If they were asking for a CSS or JavaScript file, maybe I could fetch that from cache, but in this case I want to decide how I want to respond. Sorry about that, I had to try to fit it all in one go. But now that I can confirm that someone is actually trying to navigate somewhere, what I want to do is check if the fetch request that they're trying to make errors. Is anything going wrong with it? If it errors, what I'm gonna do is go and get anything in cache that matches something called the offline page. Now, that was just a string. You could call that whatever you like, really, and during the Service Worker install what you would do is get all the resources you need for this offline page, put them into cache, and then when they're offline it's there for them to retrieve; they just go and fetch it as they need it. And we actually do this inside our site. So as the user is navigating through the site, remember earlier on I was talking about how we cached the different resources that they needed, and what this means is that as soon as the network drops, they have everything they need. Now, we use an approach called network first. There's a number of different caching strategies you can use.
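A minimal sketch of the offline fallback just described might look like this. It assumes an '/offline' page (the name is a placeholder) was added to the cache during the install step:

    self.addEventListener('fetch', function (event) {
      var request = event.request;

      // Only step in for navigations: GET requests that are asking for an HTML page.
      if (request.method === 'GET' &&
          request.headers.get('accept') &&
          request.headers.get('accept').indexOf('text/html') !== -1) {
        event.respondWith(
          // Network first: try the network, and if that errors,
          // fall back to the offline page already sitting in the cache.
          fetch(request).catch(function () {
            return caches.match('/offline');
          })
        );
      }
    });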
For some apps, imagine for example you were making a Kindle reader online, you basically could load the whole book right from start to finish, so if someone was offline it doesn't really matter, you just fetch it straight out of cache. That would be a cache only approach. We use the network first approach, which means that we always go to the network, but if we can't get a connection then we fall back to what's in cache. Now, the reason we did this was because we release often and we didn't want to get into a state where the user had something that was behind where it needed to be. Especially messages, messages that are coming and going between buyers and sellers. If they had a message that they didn't get because the cache had expired, that's no good for us. So we use the network first approach. And if they didn't have something that they wanted, we kind of had this default page that just says hang tight and lets them know that they can still get to the other parts of the site, instead of just a blank fallback. And if you want to start experimenting with this, this is a shortened URL of a GitHub example of this in action, and you can start playing with this regardless of your stack. It's just HTML and JavaScript.

Now, I don't know if there's any product managers in the room or analytics people, but a lot of people really like seeing how data is moving through the site. And there's this great package, it's called offline analytics, there's actually another one in Workbox, but the idea behind it is that if you include this in your Service Worker, what'll happen is, if you use Google Analytics that is, it will monitor the usage as someone's offline, save those Google Analytics requests and store them offline. And as soon as they come online it'll replay them back again. So it'll send them all to the server, and you never actually lose anything when someone's offline. Which is pretty cool to make sure that the parts of your site that you expect to behave in a certain way still do that while they're offline. But I'd recommend caution with this because, you know, Service Workers can be quite dangerous. I don't know if you guys have ever played with them before, but you can essentially cache something and leave it stuck in cache infinitely. So I would recommend testing your code. There's a fantastic library Google's Matt Gaunt wrote, bit.ly/sw-testing, I think it's an article on Medium, and it kind of shows you how you can test this. That's one of the reasons we actually use the network first approach: to make sure we never get stuck in a bad place.

What about third party scripts? They can be prickly fellas. So a third party script, for example, is something like a Facebook like button or a Twitter share button. Something like that, something that's outside of your control. So you know those widgets you'll include on your site? All those sorts of things? Those are third party scripts. Now, the problem with those scripts is that if you make a request to their server, say for example, let's pick on Facebook. I'm sure they don't go down, but let's imagine the Facebook like button is on your blog and the Facebook servers go down. Now, what'll happen is, as you make this request, your site is gonna hang and the request is gonna fail. Now, I'm gonna show you an example of what this looks like, let's see how this works out. So on the right hand side, my right your left, is just my blog.
It's running normally and it loads in around a normal amount of time. On the left hand side is a single point of failure, or what's happening when the third party script server's not returning. You're still seeing a white screen and we're around 18, 19, 20 seconds. This is how long your user will have to wait with just a white screen. Because, remember, resources like JavaScript and CSS are render blocking. So the browser has to wait for them before it can continue with anything. And you can find yourself in a dangerous position, especially if you're an ecommerce site. I actually used to work for ASOS and we had this happen one day. We included some scripts in our page for people to share, those social buttons, and we didn't get the response and we were like, "Our servers are up, what's the problem?" Everything was responding as expected, but the pages were just blank and everyone experienced this. So it's something to keep an eye on.

But remember, with Service Workers you're in control of what's coming and going from the network. So there's a pretty cool trick that you can do. If a request fails to respond from the server in a certain amount of time, you can say, "Look, I'm gonna just respond, I'm gonna give you something even if it's not the thing you're looking for." But that at least gives the browser a chance to go, "Okay, I can do what I need to do." Now, this is a very cool script. It's pretty basic, really. It was shown to me by Patrick Hamann; he used to work at the FT and he now works at Fastly, I think. But on the screen you can see there's just a JavaScript timeout and it's being promisified. So what's happening is we're passing in a delay, a certain timeout amount, so in this case let's say we're gonna pass in an amount of six seconds. And what's gonna happen is, if this timeout goes past a certain amount, we are gonna respond with a new response. We can build up HTTP responses like this inside Service Workers. And we can say just do a 408 status and say that this thing timed out. It's not ideal, but it means that at least the browser will be able to continue and you won't be stuck with like 20 seconds of nothing. And then what we can do in the Service Worker is tap into the fetch event and add a Promise.race. Now, I hadn't heard of this before, but it's a pretty cool feature of JavaScript promises called Promise.race. And the idea behind it is you can set off, in this case I've done two, but it's an array of promises, and you can say, "I want these two to go off against each other and the first one to return wins." It's basically a race. And as they're going, what you'll see is we've got timeout, which is calling that timeout that I've promisified and passing in 6000, that's six seconds, or you could put that to be whatever you want your threshold to be. And then at the same time we're making the fetch request and just saying go get it as normal. So whichever one returns first is your winner. And it's a pretty quick way of just falling back naturally.

Now, that library that I was talking about earlier, called Workbox, kind of makes that code a little bit smaller and easier to use. This is how you would do it. Once you've included this in your Service Worker, what I want to do is, let's say for example you're using Google Fonts on your website, and Google Fonts are really cool, a free resource, but for whatever reason let's imagine that the Google Fonts server doesn't respond.
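Before the Workbox version, here's roughly what that hand-rolled timeout idea looks like. A sketch only, with six seconds and a 408 as the example values from above:

    // Promisified setTimeout: resolves with a 408 response after `delay` milliseconds.
    function timeout(delay) {
      return new Promise(function (resolve) {
        setTimeout(function () {
          resolve(new Response('', { status: 408, statusText: 'Request timed out' }));
        }, delay);
      });
    }

    self.addEventListener('fetch', function (event) {
      // Race the real request against the timeout; whichever settles first wins.
      event.respondWith(
        Promise.race([timeout(6000), fetch(event.request)])
      );
    });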
Back to the Workbox version: what we want to do is listen to any requests that match that Google Fonts route, and we want to use a cache first approach. So we're gonna put it in the cache and always go to the cache first and then fall back to the network. And I'm gonna call this cache googleapis; you could call it whatever you like or just leave it out completely. But the key here is that network timeout. And that little bit of code is just saying, "Listen, if the network times out after four seconds, just fall back either to what we have in cache or just fall back completely." Really, really small, and it gives you a lot of action for very little code. And if you want to play around with that example that I talked about, it's in there: bit.ly/sw-timeout. There's a coding example in there. Okay, how are we doing? We're good.

What about the look and feel of our site? Now, we are a small startup, we've grown quite a lot since then and we're almost double the size, but we don't have unlimited resources. We would love to be able to build for every platform out there, but we are sticking to the web for now. And using progressive web apps gives us a lot of these features, and I just want to talk a little bit about something called a manifest file, which we'll run through, and something called add to home screen.

So what's a manifest file? Well, a manifest file is just a simple JSON file that allows you to control how your app appears to the user. What happens is your browser reads this file and it says, "Okay, this is the name of your web app, okay, I understand that." But there are also a number of other features in there. So, for example, take these two examples. This is one of the pages on our site, an adverts page, so the user's logged in. On the left hand side you can see that the address bar at the top is grey, and on the right hand side the address bar is matching our brand colours. Now, the manifest file allows you to control and tweak little things like this, to adjust it according to how you want it to look. And this is all part of that whole app-like feel. So a manifest file looks a little something like this. And this is actually our manifest file that we have in the site. We've got a name and a short name there; the short name is really what's gonna appear on the home screen, which I'll talk about in a second. We also tell it how we want it to look. You see there it's got display standalone. You can choose to hide the address bar, you can choose to make it full screen if you really wanted to. It literally appears exactly like an app. We're also providing it with some icons, and there's the start URL, so where you want it to go when you first land on the page. If you did nothing else as a progressive web app and you just put this into your site, you'd get all these features. There's not much involved in it, really. I talked about that. And all you have to do, really, is include it in your head tag. Just link off and say this is a manifest file and it's manifest.json. And the browser will read this and understand what you're all about. Whoops. If you'd like to learn a bit more about the different features in there and what they all mean, head over to this URL and it'll kind of explain the different options with the UI and how you can lay it out. Now, the add to home screen function is pretty cool. You almost get it for free with your manifest file.
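For reference, a manifest file along the lines of what's being described, the JSON that gets linked from the head as manifest.json, might look roughly like this. The values below are illustrative rather than Settled's actual file:

    {
      "name": "Settled",
      "short_name": "Settled",
      "start_url": "/",
      "display": "standalone",
      "background_color": "#ffffff",
      "theme_color": "#2b2e4a",
      "icons": [
        { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
        { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
      ]
    }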
You've got your manifest file in place, and what happens is you get this little popup appear and it says add to home screen. Now, you almost get it for free, but you need a manifest file; it needs a start URL, sorry, I say URL, I mean start URL, so it can actually add it to your home screen, and an icon so it can actually appear on your home screen and look like an app. Your site needs to be using a Service Worker so that it understands that you are a progressive web app. And the user has to have visited your site at least twice, with five minutes between visits. Now, that's kind of a bit of a catch, and at first I was like, that doesn't really make sense. But I don't know how many times you've visited those sites where, as you land on it, it's like, "Hey, please install our native app," and I don't know about you, but if I was a web developer on a site that did that I'd be pretty bummed. Because the web experience can be great, too. So the idea behind this, and the reason the browsers put it in there, is to stop this being spammy. Imagine if every site you landed on wanted you to add to home screen, that'd be pretty annoying, right? So the idea behind this is that you're more engaged; they want to check that you're actually using the site and coming back to it.

And, again, you're familiar with this: we've got the manifest file, and that then gives you something like this. So under the right criteria, the user is on our site, they get to this page, they see this popup, add to home screen. They go, "Yeah, cool, I'm gonna add it." What happens is, on their home screen they then get our logo, which we've specified, as well as the name that's also in the manifest file. When they tap on that logo they then get a splash screen. And with the splash screen, you define the colours in there, you define the branding, everything is up to you. The idea behind the splash screen is that it gives your user a sense of perceived performance. It's almost like, okay, cool, something's happening, and then your page loads. So it really just feels smooth, like this gentle intro into your web app. And then finally they're there. And I may have cropped this slightly wrong, but you'll notice that the address bar's missing at the top of this; that's because that's how it would look. The first one should have an address bar on it, this one doesn't, because that's how it's been set up.

So, I've talked a lot. But let's summarise. I've talked about how we made our website fast using Service Worker caching and HTTP/2, all the things that are kind of there ready for you to start playing with. How we made it more reliable, so it worked offline and also fell back in case of third party servers failing. And also the look and feel, so using the manifest file and getting that add to home screen functionality. So we're still learning. There's been a few things that we've put in place and we've gone, "This doesn't really work," and we've taken them out. But if you want to start experimenting with this stuff, I recommend you don't have to go full blown and have every single feature; start off with a few little features, see how it goes and see what you can pass on to your users. But so far it's been a great success. Thank you very much.