Service Worker & Streams

Jake Archibald speaking at Milton Keynes Geek Night in January 2017

About this talk

In this talk, Jake Archibald gives a tour of what's coming next for service workers and streams on the web: async iterators, transform streams, navigation preload, foreign fetch, background fetch, and navigation transitions.


Transcript


It's fair to say that this is not my first talk about service workers. I'm kind of known for doing this topic. I've kind of become a little bit typecast. Remy Sharp says that if you say "service worker" three times in the mirror, I will appear behind you and start talking about offline first. I do wish that was the case, because it would make international travel a lot cheaper for me if I could just phone someone up. Although, I'm sure if that were the case, people would troll me. I'd end up having to call my girlfriend in the middle of the night and say, "Yeah. Sorry. Someone did the 'service worker' mirror thing again. I think it's Sydney this time. Yes, I was asleep as well. Could you go to the mirror and bring me back?"

But back in 2013, my talks went a little bit like this: "The new thing is the service worker. Actually, I think this is the first talk on it. There's nothing to play with in the browser yet." Nothing in the browser yet. This was before anyone had ever written a service worker. There was nothing in the browser at all. But now, we have two independent implementations: in Blink in Chrome, and in Gecko in Firefox. That means other Chromium browsers come along for the ride as well, so things like Opera and Samsung Internet, and loads of others too. Microsoft have an implementation in progress, and little bits and pieces are starting to appear in their Insider builds. Safari still haven't given any public commitment, which is kind of the usual thing they do. But they have been giving implementation feedback on the specification, the kind of questions you would ask if you were implementing it, so we kind of have clues there. They are implementing Fetch, which is one of the prerequisites, so fingers crossed.

But thanks to progressive enhancement, we've gone from having nothing in any browser to hundreds of millions of page loads handled by a service worker every day. That's just in Chrome; we don't have numbers for Firefox. And I'm not just talking about service workers that are used for push messages or whatever, because there are loads of those as well. These are service workers with fetch events: they're working offline first, and they're managing how the page loads.

So I could, today, talk about real things that we have shipped and sites that are using them, unlike in 2013, where I kind of just made stuff up for 30 minutes. I mean, this slide in particular is a sort of work of fiction, really. All of this is sort of just made-up stuff. But it was good fun doing this, so I'm going to do it again. Because there's a lot of stuff that we're starting to implement in the browser, or starting to think about, and I'd like to share it and see what you think about it. And hear from you: which things would you like to be in the browser right now, which things don't you really care about, and which things should we deprioritize? I really should have called this talk, "Seven Things That Don't So Much Exist Right Now, But I'm Pretty Excited About and So Might You Be."

It's going to be a journey to the future. This is a page from the FAQ of the Arriva Trains website in Wales. Their FAQ has this one question that says, "Can I buy train tickets for future travel?" To which their answer is just, "Yes." This is amazing. Like many people here, I've been to Wales before, and it is like time travel. Just not forwards.

So what kind of stuff do we have coming up? Streams. This is one of the things. I really love streams.
A lot of them are already in the browser. You can already fetch a URL and get a reader for the body, and sort of read the parts out one by one. So here I'm going to do "while (true)" and call "reader.read". That gives me a promise for an object, similar to what iterators return, so it has two properties, done and value. If done is true, we've reached the end of the stream. Otherwise, value is a chunk of the data as it's arriving from the network. This could be nicer. "While true" kind of works, but it makes me nervous when I have a "while true". It feels like I'm going to get stuck there, and the tab is going to crash or whatever.

This brings me to the first future feature that I want to talk about, and that's async iterators. Now, I have learned from the mistakes I made in 2013, so I'm going to try and correct this. This is the vagueness graph, and right now I would say that async iterators are about this vague on the vagueness graph. But do bear in mind that the vagueness graph itself is about this vague, so you can kind of work out the relative vagueness from those two. I hope that clears everything up.

Async iterators are being specified right now. They're at Stage 3 in the ECMAScript process. Stage 3 means they're waiting on browsers to actually start implementing them, so we expect to see things happening pretty soon. Hopefully early 2017, mid-2017. How do they work? Instead of that while loop, and having to get a reader, we don't have to do any of that. We just write "for await (const value of stream)". That gives us the values out, and it's a kind of async for loop. It works just as well as the while loop that we had before. When these land in JavaScript, we'll start seeing DOM APIs using them, things like the cache API for iterating over the caches, or iterating over items in caches, and maybe even things like IndexedDB for iterating over cursors. These things could become async iterable. To do this, all you have to do is define a function, Symbol.asyncIterator, that returns promises, and then it just works. You can see more on async iterators on GitHub, in the TC39 repo there. And if you can't wait, you can actually use them today, using Babel. This is async iterators working as expected in the Babel REPL. I only show you this screenshot because it's an excuse for me to say "Babel REPL", which is really, really fun to say. I love the way we name things in this industry. We just make shit up. I mean, look, I saw this tweet recently: "My tiny yelp clone (built with redux) is now up on ember-twiddle." That is a legitimate sentence in our industry. It's amazing.

So when you stream values like this, each value is a Uint8Array of bytes. Streams can be of anything, but when you get them from fetch, they are bytes like this. But often you don't want bytes; you want some other format, like text. You can actually do this today with the text decoder. So here, I'm going to create a new text decoder and loop over the values as before, but this time, I'm going to pass each one through decoder.decode. That will convert these bytes into strings. But having to call "decode" on each value, and passing those values through JavaScript, is a bit of a pain. It would be nice just to be able to get a stream of text.
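To make that concrete, here's a minimal sketch of the three variations just described. It assumes a hypothetical /article.html endpoint and an async context; at the time of this talk, async iteration over response bodies was still a proposal.

```js
// 1. Reading a response body chunk by chunk with a reader:
const response = await fetch('/article.html');
const reader = response.body.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;    // we've reached the end of the stream
  console.log(value); // a Uint8Array of bytes from the network
}

// 2. The same loop with async iteration (Stage 3 at the time; assumes
// response bodies become async iterable):
for await (const value of (await fetch('/article.html')).body) {
  console.log(value);
}

// 3. Decoding each chunk to text. { stream: true } tells the decoder a
// chunk may end mid-character, so it holds those bytes back:
const decoder = new TextDecoder();
for await (const value of (await fetch('/article.html')).body) {
  console.log(decoder.decode(value, { stream: true }));
}
```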
That's actually going to get a whole lot simpler, thanks to transform streams, which are hopefully going to land in 2017 as well. I'd say transform streams are about as vague as async iterators, maybe even a little bit less. They're still being specified, but there's a JavaScript implementation of them, and work has begun in the browser. Writable streams have recently appeared in Chrome Canary, and transform streams kind of build on top of those.

So before we introduced the decoder, we were kind of streaming stuff straight from the network into the log. Thank you. That's okay. I was hoping someone would laugh at that. All I did is went to the Noun Project, where I get all of my icons from, typed in "log", and that appeared, and I was like, "Yes. That's a good image for a log. That's the one I'm using." But transform streams become this little bit that sits in the middle that can do something with the data coming in, and pipe out different data.

In terms of code, they look like this. You just call "new TransformStream", and then you pass it a few functions. "Start" is called straightaway. "Transform" is called every time a new value comes into the stream; you get that as a chunk, and you can pass something else out. "Flush" is called when the incoming stream has ended, but you might have additional stuff that you want to send. When you do "new TransformStream", the object you get back has only two properties: a readable and a writable. I really like the design of this. It's a lot better than the Node API streams. I'm not trashing Node streams at all; they've been through three or four iterations of streams now that have been kind of not really compatible with each other, as they've been trying to find a design. The Node community gave a lot of feedback into this design. Basically, they told us all of the mistakes they made, and we avoided most of them. So you have this transform stream, which is just a readable and a writable, and that means you can pass the individual parts to APIs without handing over the whole transform stream.

So say we wanted to create this text decoder as a transform stream. We would do it like this. I'm going to have a function that returns one of these things. I'm going to create the text decoder, because that's the tool we have to decode strings. I'm going to create a new TransformStream, and when something arrives from the incoming stream, I'm just going to call "controller.enqueue", which is like, "Here's a value that I want to appear out of the readable stream," and I'm going to call "textDecoder.decode" and pass the chunk through that. So again: bytes in, text out. If we go back to the fetch code from before, we can change it by taking our stream and piping it through the decoder there, just like that. pipeThrough connects the fetch stream to the writable of the transform and returns the readable, so now all of these logged values will just be text, rather than bytes.

Like async iterators, when this actually lands, it means DOM APIs can start using it, for things like compression and decompression. All the browsers know how to do gzip. Some of them know how to use Brotli. Some of them know how to use standard zip. All these things are in the browser already, but as developers, we don't get access to them. Now that we have streams, that'll be a great opportunity to expose some of that stuff. But the first DOM API that's likely to land with transform stream support (meaning we've kind of wasted our time by creating it ourselves) is the text decoder: it will itself become a transform stream.
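Here's a sketch of that decoder as a transform stream, following the start/transform/flush shape just described. The design was still settling at the time, and /article.html is again a made-up URL.

```js
function textDecodeStream() {
  const decoder = new TextDecoder();

  return new TransformStream({
    start() {
      // Called straightaway; nothing to set up for this transform.
    },
    transform(chunk, controller) {
      // Called for each incoming Uint8Array. Enqueue the decoded text
      // so it appears out of the readable side. { stream: true } lets
      // the decoder hold back bytes that end mid-character.
      controller.enqueue(decoder.decode(chunk, { stream: true }));
    },
    flush(controller) {
      // Called when the incoming stream has ended; emit anything the
      // decoder was still holding back.
      const remaining = decoder.decode();
      if (remaining) controller.enqueue(remaining);
    }
  });
}

// Usage: pipe the fetch body through the decoder, then read out text.
const response = await fetch('/article.html');
const reader = response.body.pipeThrough(textDecodeStream()).getReader();
```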
So once you do that, you'll get text out of this loop. If you want to dig into streams a little bit more, there's the Streams spec, which is part of the WHATWG. There are also links there to a JavaScript polyfill, a reference implementation, as well. I'm really, really excited about streams landing in JavaScript, and it's kind of about time that we got access to these, because streams have been behind the scenes in the browser for like 20 years now. If a page is well-built, you'll see this happen: it will render gradually, because as the browser is downloading stuff from the network, it pipes it through the HTML parser, which is fully stream-aware, and it can start creating elements on the page even though it hasn't got the whole network content yet.

So this is a silly sort of demo app that I made, Wiki Offline. But it makes good use of this ancient browser feature. It makes good use of streaming. On a low-end device over a 3G connection with an empty cache, the HTML for that page takes round about five seconds to download. But while all of that happens, the parser is processing it at the same time, and that means that something will appear on the page after about half a second. It's not real content at this point; it's just this sort of top banner that says, "This is Wiki Offline." But at least the user kind of feels like something is happening, that the server is responding. Then, at like 1.8 seconds, it gets to the first page of content in the actual article. The user can actually start reading at this point, and rendering continues as more stuff downloads.

As an experiment, I also built this site as a single-page app, which is a popular pattern with JavaScript frameworks. So here, I'm just serving this empty page with just a top banner and some JavaScript, and I'm letting JavaScript go and fetch the content and add it to the page. This changes the story a lot, because here the HTML fetching and parsing is very quick, because it's pretty much an empty document. There's nothing to it. Here, we get the first render, as long as we're not using blocking scripts, and I'm not in this case. This first render is, once again, just that top banner. So at the moment, performance is neck and neck with the server-rendered version. But while this is happening, the JavaScript is downloading, and then it executes. Then it goes and fetches the content for the page, and then inserts it into the page. We get a little bit of parsing there, and now we can render that first page of content. This is almost two seconds slower than the server-rendered version. And I'm actually being kind here: we regularly see single-page apps taking way longer than this to get on the screen.

This graph is slightly flattering as well. It makes it look like everything finishes here, which is earlier than the server-rendered version, but that's because there's a lot missing from this graph. You see, as the server-rendered version is downloading, as it's parsing, it discovers things like images, fonts, CSS, important stuff. It goes, "Oh, hang on. Some of these images are right at the top of the page. I really need to start downloading those now," and it can use priorities to distribute the bandwidth. That can't happen in the single-page app version, because the browser doesn't find out about any of those images until right here. So the whole page download takes way, way longer. So what can we do about this performance problem of the single-page app?
Well, we can bring in a service worker, and that means we can return the page shell from the cache, so that download time becomes way lower, because we're just fetching it locally. We can actually do the same for the JavaScript here as well, so that goes away. But the page content still has to come from the network; we can't cache all of Wikipedia. The problem is that JavaScript is initiating the download, so the download can't start until the JavaScript has arrived, parsed, and executed. We can avoid this using link rel="preload". This is only supported in Chrome right now, but it's a kind of declarative way to say, "Here's something I am going to start downloading, so begin that a lot sooner." That means we can take this content fetch and start it much sooner, in parallel with the JavaScript.

But so what? After all of those tricks, all of that optimization, our content render is still happening later than the no-cache server render. We've introduced things like the service worker and the HTTP cache, a lot of stuff that's quite a bit to maintain, and it's still slower. This is because we spend all of this time downloading content before we actually do anything with it. We've traded the gradual rendering model for a different model, one where we render absolutely nothing until we have everything. This is because there's no API, really, to stream HTML, and that's something I want us to solve in 2017 as well. But there's nothing there yet. Until then, we shouldn't be breaking performance by using a single-page app and then trying to limit the damage. We should be taking the well-performing server render and optimizing that even more.

Service workers combined with streams let you do this. Like we saw before with this pattern of streams, the same is true if you put a service worker in the middle. It just works. Even if the content is coming from a cache, it will still stream. Streaming is important even if you're caching locally: say you're playing a four-gigabyte video from the cache; you still want that to stream. You don't want to have to put all of that in memory just to play the start of the movie.

But ideally, we want a mixture of the two. We want to create a single response from the service worker that's made up of bits of cache data and bits of network data, because we mix things together on the server all the time. We'll mix together bits from a database, bits from a template, and merge them all together. Well, we want to be able to do that on the client as well. You can actually do this in Chrome Stable today. In a service worker fetch event, you can get all of the bits that you want. So here, I'm getting the start of the page from a cache, the middle of the page from the network, and the end of the page from a cache, and I'm going to merge them together by getting all of the readers for those streams and creating a new stream that's a combination of all of those. Then I can respond with that just by saying, "Hey, new response. Here's the stream. By the way, it's HTML." Unfortunately, there's this bit in the middle: populating that stream. It's big and it's a bit ugly, and I'm not going to talk through it (there's a sketch of it below). It's not pretty to look at, and we're passing every value through JavaScript as well, so it's not particularly fast.
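A sketch of that manual stitching, with hypothetical URLs and cache entries (assumed to have been cached at install time), pumping every chunk through JavaScript by hand:

```js
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.pathname !== '/article.html') return; // hypothetical route

  event.respondWith((async () => {
    // Start and end of the page from the cache, the middle from the
    // network (an include containing just the article body):
    const parts = await Promise.all([
      caches.match('/shell-start.html'),
      fetch('/article-include.html'),
      caches.match('/shell-end.html')
    ]);

    const readers = parts.map(part => part.body.getReader());

    // The big, ugly bit: manually pump each reader, in order, into one
    // combined stream. Every chunk passes through JavaScript.
    const stream = new ReadableStream({
      async pull(controller) {
        while (readers.length) {
          const { done, value } = await readers[0].read();
          if (done) {
            readers.shift(); // this part has finished; move to the next
            continue;
          }
          controller.enqueue(value);
          return; // one chunk per pull
        }
        controller.close(); // all parts have finished
      }
    });

    return new Response(stream, {
      headers: { 'Content-Type': 'text/html' }
    });
  })());
});
```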
Well, this is going to become a lot easier, thanks to identity streams, which is the next one of the 2017 features I want to take a look at. I would say these are a little bit more vague, mostly because the API changed quite significantly about a month ago, for the better I think. To use these in your service worker fetch event, again, I'm going to get the three parts, the start, the middle, and the end, and I'm going to create a new identity stream. An identity stream is just a transform stream that does no transforming: anything that goes into the writable just comes out of the readable. So now, I'm going to respond with the readable part of the identity stream. Then we just need to write stuff into the writable, write these three streams into it. To do that, I'm just going to create an async function here. I'm going to loop over all of the response parts that we have, the start, the middle, and the end, and for each one, I'm going to take the body and pipe it to the writable. preventClose here matters because, by default, if you pipe one stream to another, the stream you're piping to will close itself once the source is done. But here we're saying, "Hey, no. We've actually got more streams we're going to pipe in." Once that loop is done, we call "close" to say, "Hey, that's it. All the parts are done."

This is simpler, but it's also faster, because now we're not passing every value through JavaScript. The browser can be smart about this, because it knows that the fetch stream, whether it's from the cache or from the network, is happening behind the scenes; it's not happening in JavaScript land. And through piping, it now knows that the thing reading the output is also behind the scenes: eventually, the streaming HTML parser. So it can move all of that work behind the scenes. Once you do this, joining these two streams together and creating one stream out, things get a lot faster.
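A sketch of that identity-stream version, using the API shape as it stood at the time (it was still in flux, and the URLs are again hypothetical):

```js
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.pathname !== '/article.html') return; // hypothetical route

  // An identity stream is a transform stream that does no transforming:
  // whatever goes into the writable comes out of the readable.
  const { readable, writable } = new TransformStream();

  // Respond with the readable side immediately; we'll populate the
  // writable side as the parts arrive.
  event.respondWith(
    new Response(readable, { headers: { 'Content-Type': 'text/html' } })
  );

  event.waitUntil((async () => {
    const parts = [
      await caches.match('/shell-start.html'), // assumed cached at install
      await fetch('/article-include.html'),
      await caches.match('/shell-end.html')
    ];

    for (const part of parts) {
      // preventClose: don't close the writable yet; more streams follow.
      await part.body.pipeTo(writable, { preventClose: true });
    }
    await writable.getWriter().close(); // that's it, all parts are done
  })());
});
```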
The result of that: this is optimizing the top graph there, because we're taking the server-rendered end of the graph and optimizing that. Once we do that, the parsing starts much earlier, and that's because we start with that first bit of content from the cache. We're delivering this huge chunk right at the start, because we've got it and we don't have to go to the network for it. That means the first paint can happen sooner. Once again, that's just the heading. But the important bit is that all of the other paints happen much sooner as well. So this is us getting content in less than half a second, compared to the single-page app, which is slightly over two seconds.

With a model like this, I'm actually quite happy with full-page reloads when it comes to navigating. So here's an example I did. On the left here is a single-page app. Every time I click a link, JavaScript is going to go and fetch the content, replace the content on the page, and use things like pushState or whatever to update the URL. On the right-hand side here is a normal website. When I click a link, it will follow the link, and the browser will handle all of the navigation for me. When we run these at the same time, with the clicks on the links timed to happen at exactly the same moment, we can see that the server-rendered version, where the browser is doing all of the work for us, gets onscreen faster. Sometimes it's the same, but it's almost always faster. Now, your mileage may vary with this; it can depend on the content. But I'm not making this up. In fact, I had a real-world case of this relatively recently, a few weeks ago.

I was at Heathrow Airport, sort of doing some work before getting on a flight, and I was looking at GitHub. I noticed that if I click a link in GitHub, JavaScript handles all of the navigation. It does all of the fetching. It handles the page. But GitHub also does server rendering, so if you take a link and paste it into a new tab, the server render kicks in. I actually recorded this at the airport. What I'm going to do here is click one of the links, and then copy the link and paste it into the other tab. So on the left, we've got JavaScript, and on the right, we'll have a server render. So: click the link. Paste it into the other tab. This is airport Wi-Fi, so it's not great. But we see that the server render is like, "Oh, yeah. Here's your content." Meanwhile, the JavaScript one is still chugging along. It's dead slow. It finally gets onscreen now. It's that much slower, and bear in mind that the server-rendered one was started later and it still gets onscreen like... It wins by a country mile. A lot of JavaScript has been written at GitHub to make this really slow. They've had to write a lot of code to make JavaScript handle the navigation and to cause this performance problem.

Unfortunately, too often I hear people say that a progressive web app must be a single-page app, and I am not so sure. I don't think that's true. There's a lot of cargo-culting around single-page apps, and I know what happens when you just copy someone else without really understanding the situation. Like, a few years ago, I went out for a meal with Paul Irish. Yeah, that's right. I've had a meal with Paul Irish. Who wants to touch me? Anyway, I watched Paul Irish taste some wine. He kind of swilled it around in the glass, and he took this huge sniff of it, like really loud, and then took a sip of the wine. I watched him do this and I thought, "Wow. Paul is so cool. He looks like he really knows what he's doing." So a couple of months later, I was back in England and I was meeting some friends. We had a meal and we had some wine, and I thought, "I've got this. I've seen this done before. I can look as cool as Paul looked." So I took the wine and I swilled it around, and I took a big sniff of it. But I tipped the glass just a little bit too far and dipped my nose in it. I don't know if you've ever snorted wine before. It is not pleasant. That thing that happens if you get swimming pool water up your nose? Times that by a thousand. In fact, I just kind of sneezed the wine everywhere, and my friends just stared at me, covered in a kind of wine mist that had been somewhere inside me for a moment, wondering why I didn't drink it with my mouth rather than my nose. But the moral of that story is: you might not need a single-page app. Right? Like if you've got a... There's a link there. Come on. If you've got a service worker controlling your page loads and stuff, things can just be as fast.

Of course, if you are using a client-side framework, you must include server rendering. A lot of the client-side frameworks claim to support some kind of server rendering. Mm. It's actually a little bit shaky when you look at it. Ember have got it, but it's not production-ready. Angular 2 has kind of punted to the community for that work to be done. Web Components sort of support server rendering, but there's no declarative shadow root, so there's a lot of features missing there.
React has also punted to the community, but things are a lot better there. I've used Preact for a couple of projects, and their server rendering model is pretty good. That means you can get stuff on-page before the JavaScript fetches, before the JavaScript executes. So things are looking pretty good right here. But Facebook have been experimenting with this approach, and they identified a problem. They are serving content from a service worker, and there's also the startup cost of the service worker to take into consideration, which is zero if it's already running. But the service worker will shut down if it hasn't done anything for around 30 seconds. It's up to the browser to decide that number, and it can differ between devices, but for Chrome on desktop, it's 30 seconds. So depending on the user's device, or whatever else is happening on their machine at the time, starting the service worker up can add, in the worst cases, a few hundred milliseconds. That's the worst case. But the important thing is that it delays this HTML fetch. It delays the fetch of content.

We're looking at a few ways of fixing this. One of them is just to be better at service worker startup, but it's always going to be greater than zero if your service worker isn't already running. We're not going to just live with that, so we're introducing this new concept of navigation preload, which is the next 2017 feature. I'd say this is a little bit vaguer still. We have an implementation of this in progress, and the specification is starting to settle down. I did a couple of commits to it just yesterday, and people are agreeing with stuff, which is a good sign. So do still take this with a grain of salt; the API might shift around a little bit.

Our goal here is to start the HTML fetch in parallel with the service worker startup. To do this, we just want it to be a simple command that's part of the activate event of the service worker. You call "navigationPreload.enable", just that single line. You can do this whenever you want, but the activate event is kind of the most predictable time to do it. That means that now, for navigation requests, the browser will make the request to the network while the service worker is booting up. That will appear in the fetch event as event.preloadResponse. That's a promise for the response, and it will resolve with undefined if it's not a navigation, or if the feature isn't enabled. So if you're going to use this, it's always worth having a kind of .then: if it's falsy, just do a fetch to the network. That's a good fallback. What you do with this is up to you. You could respond from the cache: if there's an item there, return it; otherwise, fall back to the response of "networkFetch".
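A minimal sketch of that pattern, using the API shape described here (the spec was still settling, so treat the details as provisional):

```js
// In the service worker: turn navigation preload on at activate time.
self.addEventListener('activate', event => {
  event.waitUntil(self.registration.navigationPreload.enable());
});

self.addEventListener('fetch', event => {
  event.respondWith((async () => {
    // Respond from the cache if there's an item there.
    const cached = await caches.match(event.request);
    if (cached) return cached;

    // preloadResponse resolves with undefined if this isn't a
    // navigation, or if the feature isn't enabled, so always have a
    // network fetch as a fallback.
    return (await event.preloadResponse) || fetch(event.request);
  })());
});
```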
But given how early this kind of preloading starts, it becomes more realistic that the network can actually beat the cache, because the network response could be like 200 milliseconds into downloading by the time you even think about going to disk for this. We do see some crazy situations where fetching from the local device is actually slower than fetching from the internet. This is always Windows machines that we suspect have a series of virus scanners installed, like a chain of five of them, so actually getting content locally becomes slower than getting it from the network. In that case, you might want to race, to see which one returns first: "Are we going to go to the network? Or maybe we'll go for the cache, whichever arrives first."

But it's important to note that if you're going to do something like this, Promise.race, which is kind of the standard function for this, is not your friend here. It doesn't do what you think, or what a lot of people think anyway. It didn't do what I thought it would do. When you give Promise.race an array of promises, it takes the result of whichever one settles first, not whichever one succeeds first. So take this race. I would say this race is in progress, because no one has won yet. Promise.race, on the other hand, would say, "Look at that. She fell over. That's the result of the race. Don't care about anything else. The whole race is a failure because of her." Promise.race is basically a dick. Don't use it. It will trip you up. So you'll need to write your own racing function that does the correct thing of taking the first promise that successfully resolves with a non-falsy value (there's a sketch of one below).
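Picking up that racing point, here's a sketch of a "first non-falsy success" race; firstSuccess is a made-up name:

```js
function firstSuccess(promises) {
  return new Promise((resolve, reject) => {
    let pending = promises.length;
    const miss = () => {
      if (--pending === 0) reject(Error('Everything failed'));
    };

    for (const promise of promises) {
      Promise.resolve(promise).then(value => {
        if (value) resolve(value); // first non-falsy result wins
        else miss();               // e.g. a cache miss resolving with undefined
      }, miss);                    // rejections count as misses too
    }
  });
}

// Usage in a fetch event: whichever source produces a response first wins.
// const response = await firstSuccess([
//   caches.match(event.request),
//   fetch(event.request)
// ]);
```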
So what does this actually do for us? If we take our streaming code from before, where we were picking up the three parts, you might think this isn't necessarily going to work, because a preload isn't useful to us here: we're actually requesting something different. We're requesting an include of the middle of the page, not the same page that would be requested if the service worker wasn't there. This, thankfully, isn't a problem, because all of these preload requests go out with a special header (Service-Worker-Navigation-Preload), so your server can see that and go, "Oh, this is a service worker preload request. I'm not going to serve the full page; I'm just going to serve the include, because I know that's what it's expecting." So in the code there, you can just look for the preload response and, if it's there, use it. Otherwise, go to the network. This means that for navigation requests, this stuff will happen in parallel, and that will bring the first render even closer.

There's lots more we can do with this. If we have that hint, if we know you want this request to go out anyway, then if the user has added the website to their home screen, we can start this request even as the browser is booting up. So we can save the 500 milliseconds to a second that it takes to actually boot the browser up; we can have this stuff ready to go. If you want to dig into this any more, there's a long GitHub issue about it. I will post these slides up later with links and stuff.

What else have we got? Ah, okay. So the current way the service worker works is that requests from your page go via your service worker like this, and you can respond via a cache, put stuff in the cache, whatever you want. Even if those requests go to another origin, like some font service, things are still going to go through your service worker. This is by design, because it means you can cache things like images and fonts that are on CDNs, even if the destination server hasn't thought about how to make something work offline. The downside is that many sites might end up with the same logic for font caching or analytics, or images, or whatever. In the future, we might dedupe storage, so if we know that the same item is in two caches, we can just keep an internal reference to it. We don't do that at the moment. But a better way we can do it is to use foreign fetch. This is another one of the features that's landing next year. I would say this is a little bit higher on the vagueness scale.

There's actually an experimental implementation of this in Canary today, but I'm pretty sure the API is going to change slightly; there are parts of it I'm not too happy with. Still, you can play with it today and see what it actually does. So with foreign fetch, the font service, the other origin, will have its own service worker. If you make a request to the font service, it first goes through your service worker. But if you send that request on to the actual network, on to the font service, it will go through their service worker. How they respond is up to them, which could be to just pull something from the local cache if it's there and respond with that. So if another site makes the same request to the font service, it will get the same caching benefit, because it can be responded to from the same cache.

So if you were the font service and you wanted to make this work, in the service worker you would listen for this new event, "foreignfetch", and then call "respondWith", very similar to how you would with a fetch event. You can do things like look for a match in the cache, fall back to the network if there's nothing in the cache, and then return the response. This is where it's a little bit different from a standard fetch event: you return an object with a response property, rather than just returning the response itself. When you do this, the thing that receives it, the other site, will have no visibility into this response. It will be like an image request to another origin. If you want the other site to be able to see the actual bytes of it, like the text or the pixel data, you need to state the origin that you want to be able to access it. You can do things like have a whitelist of friendly origins that you trust with the data you have visibility into. It's kind of like a reimplementation of CORS, but on the client.

One detail that's missing here is: how does the service worker get installed if the user is never going to visit the other site, if they're never going to visit the font service, if they're never going to visit analytics? The solution is that once you request something, like the CSS from the font service, it can respond with a header saying, "Hey, here's my CSS. But also, go and fetch this service worker, download it, and install it." So if that's of interest to you, we have a really nice article on Web Updates by Jeff Posnick that goes into the whole API and how to use it.
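Putting those pieces together, here's roughly what the font service's worker might have looked like in that Canary experiment. The event names and the shape of the returned object follow the proposal as it stood at the time (the API changed later and was eventually removed), and the scope and origins values are made up for illustration.

```js
// Opt in to foreign fetch at install time.
self.addEventListener('install', event => {
  event.registerForeignFetch({
    scopes: ['/fonts/'], // which of our URLs other origins may request
    origins: ['*']       // which origins may use this worker
  });
});

self.addEventListener('foreignfetch', event => {
  event.respondWith((async () => {
    // Look for a match in the cache; otherwise fall back to the network.
    const response = await caches.match(event.request) ||
                     await fetch(event.request);

    return {
      response,
      // Without this, the requesting page gets an opaque response.
      // Naming the requesting origin gives it visibility into the body,
      // a bit like a client-side reimplementation of CORS.
      origin: event.origin
    };
  })());
});
```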
What's next? Oh, yeah. Last year, we shipped a feature called "background sync", which means that you can defer single tasks until the user regains connectivity. Say the user updated some setting in their profile, or sent a chat message while they had no connection. Background sync lets you queue that work up. The user can navigate away, or even close their browser, and get on with their day. Then, later, once the user regains connectivity, the service worker can wake up and send those details to the server. This is a feature that's shipped in Chrome, and it's in development in Firefox. It's really good for small bits of data. But if the user is uploading images or downloading a movie, it doesn't really work, because while the synchronization is happening, the service worker has to be awake. That also means the browser has to be in memory, and that's bad for privacy. It's bad for battery, too, because we don't want the service worker running for a long time when the user is not on your site. We don't want you to be able to abuse it by bitcoin mining, and all the sorts of things people would do if they could just run JavaScript in the background as much as they wanted.

So to cater for this long upload/download situation, we're looking at background fetch. It's kind of early days for this one, so it's pretty vague, quite high on the chart there. All we have now is a kind of API sketch that I've been throwing together, but we're starting to explore the issues and get a feel for how this thing could work. We're going to start implementing this at the start of next year. It's actually going to be done by the Chrome team in London.

So here's the idea of how it works. From your page, or in your service worker, as long as you can get hold of the registration, you call "backgroundFetch.fetch", give it a tag name, and then just provide it with all of the requests that you want to be made. So for the movie, it would be the video resource, or all of the video resources, and maybe some metadata as well, subtitles and such, and that's it. The fetch will happen in the background, even if the user closes the page, or even the browser. Once the fetch completes, the browser will wake up the service worker. You'll get an event saying, "background fetched," or whatever. The event object will hold all of the requests and responses, so you can do what you want with those. Have a look at the tag name: "Oh, they've downloaded the movie." I'm going to open a cache called "movies", and I'm going to take all of the fetches and output them there. I don't know why that last line isn't appearing. There we go. So, map all of the requests and responses, and put those into the cache.

While that fetch is happening, we want this to be visible to the user, for the same privacy reasons as before. So during the fetch, there'll be something like a notification showing the progress and the site that it's coming from. Once that completes, you'll be able to show your own notification saying, "Your movie download completed. Tap this to watch it," or whatever. That's how we think it will be working. That, by the way, is the first and only Android app I ever developed. All it is, is an app that shows that notification, because I couldn't find a video of it online at all, so I had to develop my own. So this is something we'll be developing early next year. We just need to make sure we get things like privacy right and make sure it isn't abusable. If this is something you're interested in, you can take part on GitHub. It's just on my GitHub page right now, but next week I'm going to move it to one of the standards organizations, probably the WICG. That URL will redirect, though.
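Here's a sketch following that API idea. It was an early sketch, so every name here ("backgroundfetched", event.fetches, and so on) is provisional, and the URLs are made up:

```js
// From the page (or the service worker), queue up the downloads;
// assumes an async context:
const registration = await navigator.serviceWorker.ready;
await registration.backgroundFetch.fetch('movie-123', [
  '/movie-123.mp4',
  '/movie-123-subtitles.vtt' // metadata, subtitles and such
]);

// In the service worker, once everything has downloaded in the
// background (even if the page or the browser was closed):
self.addEventListener('backgroundfetched', event => {
  if (event.tag !== 'movie-123') return; // "Oh, they've downloaded the movie."

  event.waitUntil((async () => {
    const cache = await caches.open('movies');
    // Map all of the requests and responses, and put them in the cache.
    await Promise.all(
      event.fetches.map(({ request, response }) =>
        cache.put(request, response)
      )
    );
  })());
});
```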
So, the final thing I want you to look at. Earlier, I was showing that standard navigation can be faster than having JavaScript do all of the work. But I know why people do this. I know why people get JavaScript to do it, and it's because you get the ability to do nice transitions from page to page, like having things slide or fade, or whatever. People introduce large frameworks just to do this, and it's kind of a shame to dump 100k of JavaScript into the browser just for a simple feature like this. That's why we're going to take another look at navigation transitions, which is something we've looked at a few times before, and not just us; other browsers have as well.

Right now, this idea is very, very vague. In fact, we kind of have to rescale the whole graph just to see the top of it. So take this stuff with a big bag of salt. It's kind of just stuff we've made up so far, although it's not the first time we've looked at it. You used to be able to do this in IE4. You would specify this meta tag and this, whatever that is. You had a small selection of transitions that you could take off the shelf. So when you had this on your page, when the user clicked a link to go to the next page, Internet Explorer would... Well, it would normally crash. That was my experience of it. But it was supposed to fade or something, or slide. Yeah, that was that transition. Both Mozilla and Chrome have pitched ideas for this before, but they're both solutions that exist entirely in CSS, and I don't think they're expressive enough. I want transitions like this to be possible from page to page, something where all of this slides and whatever. It'll utilize streaming and everything, and you'll get the back and forward buttons working for free, because that's just how the browser works. You don't have to reimplement all of that.

But let's take a closer look at this transition, because there are sort of two parts to it. The first part is this box kind of expanding and going up, and we can do that without any data from the network. Because we know when the user clicks that box, it's going to expand. We already have the image of the clock. We already know it's a clock. That's fine. So we can just do that. That's easy. That will improve the perception of performance as well, because the user taps and something happens instantly. While that's happening, we can go to the network and get the real data, and fade that in as it arrives. That's fine, because if the content from the network arrives while we're doing that transition, we can just bring it in as the transition is happening. The transition out here is a little bit different, and it's difficult. We can acknowledge the user's click to go back straightaway, but that's just a little ripple effect. As that thing comes down, we really need the network data for all of this content, so that's something we would have to wait a little bit longer for. And a lot of stuff depends on it, because when you go back, the browser will also try to restore the scroll position, so you need to know that as well. Transitions are a piece of choreography between the exiting page and the entering one. It's actually quite complicated. So I want an API that'll allow you to do that.

I was sort of thinking of something like this, where you would have a navigate event on your page, and you could say, "Hey, I'm actually wanting to do a transition here, so let me take control for a bit. Tell me about the new window that's arriving, by the way. If there's no new window there, then just return." There'll be no new window object if you're navigating to a different origin, or if the navigation terminates for some reason, like it turns into a download rather than a navigation. But otherwise, you'll have full access into this window. So you can go into it and say, "Hey, actually, set the opacity to zero," and then wait until content has actually appeared on the new page, and fade it in using the Web Animations API. This is a very simple transition. But once you've got full synchronous access into both documents, you can do this choreography. You can move things around in the current page, move things around in the new page, and create this transition between the two.
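As a sketch of that idea (this is pure proposal: the navigate event, transitionUntil, newWindow, and the contentReady helper are all made-up names to illustrate the shape, not a real API):

```js
// On the outgoing page: take control of the transition.
window.addEventListener('navigate', event => {
  event.transitionUntil((async () => {
    // newWindow would be undefined for cross-origin navigations, or if
    // the navigation turns into a download.
    const newWindow = await event.newWindow;
    if (!newWindow) return;

    // Full synchronous access into the incoming document lets the two
    // pages choreograph the transition together.
    const incoming = newWindow.document.documentElement;
    incoming.style.opacity = '0';

    // Wait until real content has appeared (hypothetical signal), then
    // fade it in with the Web Animations API.
    await contentReady(newWindow);
    await incoming.animate(
      [{ opacity: 0 }, { opacity: 1 }],
      { duration: 300, fill: 'forwards' }
    ).finished;
  })());
});
```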
If that's interesting to you, there is information on GitHub as well; that's also on my GitHub page. We're either going to look at an API that'll just do this in the browser, or a polyfill, and we'll work on the stuff under the hood that'll let you do that.

The term "progressive web app" is kind of just over a year old, but work has been happening on this stuff for years, really, and we're not done. We really want to hear from you, from developers. We want your feedback on this stuff. What do you like? What don't you like? We want to hear feedback on the early-stage stuff on GitHub, but also on the stuff that's landing in Canary. Is it working for you? Is it bad? Because we can still change it.

But basically, I can't really put it any better than this image in a shop window here. "We're you're not 'til not happy." No, that's not it, is it? "We're not happy 'til you're not happy." No, that doesn't work either, does it? "'Til..." No, I don't know. Anyway, thank you very much. Cheers.