
Front-End Performance at Argos

Jack Franklin speaking at JS Monthly London in June, 2017

About this talk

This talk covers the performance optimisation work the team has been doing while building the new UI for Argos in React.


I should say that I'm a full-stack engineer, so I'm not technically a UI developer, but I'd like to take you through some of the UI performance work we've targeted and some of the changes we've been through over the past seven months, and perhaps explain the need for optimising code and the need for optimising development. We do have performance metrics and results at the end, so I'll give you the exact numbers for what we've optimised in the three months since we moved to Webpack.

I'd like to start with the business case. One of the reasons we're trying to improve performance is that retention decreases for pages that take over three seconds to load: people get impatient navigating the page and clicking through products, and it's one of the main reasons they don't convert, so we'd like to speed it up. For us, at the moment, I think the page loads in around six seconds, and I think the benchmark is five seconds. An engaging experience converts more; if we had a performant page, people would spend more time on it rather than waiting for the page to reload or for images to load. The other part is that faster development opens more doors, in the sense that we can iterate faster and get something released within the week. We don't have to release once every two weeks; ideally, we'd like to be releasing twice every week.

Okay, so, the first thing to mention is: what's the traffic? We have a lot of customers coming onto the website. These numbers come from the PDP, the product display page, which is the team I work with; it's the main point where people add to the trolley, just before they buy a product. The statistics show that we have more mobile users than desktop and tablet combined, so for us, mobile performance is really important.
At the moment, this is obviously not as fast as it could possibly be: we don't have progressive web apps, and we're not optimised for slow connections on 3G networks and so on. These are weekly visits; we have about four million visits every week coming from mobile. We also have some statistics on browsers. We support all of these browsers; these are the top 15, and there's a longer list which we've hidden. Most of the people on the page are coming from Safari, which includes Safari on iPhone as well as iPad and desktop, and then we have Google Chrome, both on desktop and on mobile. The next slide has devices. We have a split here between Apple iPhone and Apple iPad, and then the rest of our users come from Android, with a lot of Samsung Galaxy devices for some reason. Some are unspecified, which means we weren't able to capture the device; unfortunately, that's the second highest. You'll see the numbers are flipped a little, but that's for a different reason: the main client is the Apple iPhone.

Now, our optimisation roadmap. We've tried a few things to improve performance and optimise the bundle size of our main application for mobile. What we've accomplished so far: we migrated to Webpack a couple of months ago. We had Gulp as our build tool with Browserify; migrating to Webpack minimised the bundle, and we were also able to use the Webpack visualizer, a tool for displaying all the packages in the bundle. We've also migrated to BrowserStack, which isn't strictly about website performance but helps us test more, test more often, and test on more devices. We've migrated to Jest + Enzyme, and to Yarn as well. Yarn installs were faster for us before npm 5; now that npm 5 has come along, they're very comparable, and I have some numbers to show you.
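The talk doesn't show the actual build configuration, but the Webpack setup described (minifying the bundle and inspecting it with a visualizer) might look something like this minimal sketch. The entry/output paths are hypothetical, and the visualizer plugin shown is the community `webpack-visualizer-plugin`, which may or may not be the exact tool the team used.

```javascript
// Hypothetical sketch of a Webpack (v2/3-era, 2017) config along the lines
// described in the talk: minify the bundle and visualise what's inside it.
// Paths and plugin choice are illustrative, not Argos's actual setup.
const path = require('path');
const webpack = require('webpack');
const Visualizer = require('webpack-visualizer-plugin'); // renders bundle stats as an interactive chart

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.[chunkhash].js', // cache-busting hash per release
  },
  module: {
    rules: [
      { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
  plugins: [
    new webpack.optimize.UglifyJsPlugin(),        // minification
    new Visualizer({ filename: './stats.html' }), // see per-package bundle weight
  ],
};
```

The visualizer output is what lets you spot heavy vendor packages worth splitting out of the main bundle.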
Jest + Enzyme, as I'll show on the next slides, actually decreased the time it takes to run the tests. Before, it would take us perhaps a minute or two, and this is a pre-commit hook for us, so every time a developer tries to commit something, the tests run and take two minutes of their time. If they do that ten times a day, that's around half an hour of their day spent just running tests. In progress: we're trying to migrate to HTTP/2. At the moment we're not fully migrated to HTTPS, but we'll try to do both at nearly the same time. HTTP/2 would improve performance for us because we could push the bundle as the page loads, which addresses one of the main reasons our page isn't interactive sooner; we also have third-party scripts that block rendering. Then there are two internal projects I'd like to talk about: Argos Bolt and Argos Dexter's Lab.

I think I've gone through most of these already, but let's recap the development pain points: bundling and development time, unit tests taking over a minute, and 25 minutes to deploy to our testing environment. We're using AWS, with jobs that deploy whatever's been built from the UI, and that would take up to 25 minutes from commit to deployment. Our automated tests, which now run on BrowserStack, used to take 45 minutes, so this was a really slow development cycle whenever we wanted to make a change. I have some results, as in numbers, for how much faster it is. Our deployment time is now down to 10 minutes, so as soon as we commit something, we can have it on the test environment in 10 minutes. For installing packages we had Yarn, and maybe we'll now go back to npm, because npm 5 and Yarn are pretty much the same; but a week ago, we were seeing Yarn take half the time to install the packages and dependencies we need.
Bundling with Webpack is very similar for us in terms of build time, but obviously there are benefits in terms of size and so on. For testing, I did get a current number, which is 20 seconds, considerably less than two minutes. We can't really get the number for Gulp with Karma because those tests don't run anymore; I did go back to the commit where we had Gulp and Karma, but there are some issues with third-party packages which we cannot fix, so unfortunately we can't run them. I do remember it was around two minutes; that's the longest I've seen it run.

Some reasons to optimise code: our bundle size, compressed, was almost half a megabyte, and that's something almost every visitor on our website had to download every week, because we released once a week. Page load is over five seconds, and we're still trying to get down to five seconds and below. As an example, take an estimated number of unique visitors per week on the PDP page: we definitely have more than 800,000 unique visitors. With our bundle size now, which is nearly half of what it was, we'd be shipping around 200 GB of bundle every week. That's something we're paying for and obviously want to minimise. The benefits are faster load time and faster time to interaction. The bundle size now is 276 KB; I wish I could show you the old bundle for comparison, but in terms of page load time, we've saved around one second, which for us is a lot. We've gone down from about 6.5 seconds, and I have some numbers we got from Lighthouse, a Google Chrome plugin.
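The back-of-the-envelope maths here is simple to sketch. The visitor count and bundle size are the rough figures quoted in the talk; the function name is just illustrative.

```javascript
// Weekly bundle bandwidth: unique visitors x bundle size, in decimal GB.
// Figures are the approximate ones quoted in the talk.
function weeklyBundleGB(uniqueVisitors, bundleKB) {
  return (uniqueVisitors * bundleKB) / 1e6; // KB -> GB (decimal units)
}

// ~800,000 unique PDP visitors per week at a 276 KB bundle:
console.log(weeklyBundleGB(800000, 276)); // ~220.8 GB shipped weekly

// At this traffic, every 100 KB trimmed from the bundle saves roughly:
console.log(weeklyBundleGB(800000, 100)); // 80 GB per week
```

This is why a bundle-size cut translates directly into bandwidth cost savings, not just faster page loads.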
Overall, this amounts to (let me get the mouse working here) a 35% decrease, because we've removed some of the vendor packages from the bundle. That doesn't necessarily mean we've reduced the total amount of code you need to download, but the bundle itself is 35% smaller, which saves us around 80 GB shipped weekly.

Now, the internal projects. I have some examples to show you, starting with Argos Bolt. The idea behind Bolt is to be mobile-first, because most of our customers are mobile users, to have minimal impact, and to share component features across teams; we don't want repeated code that every team includes, making everyone's bundle larger with unnecessary code. Here's one of the snippets. I know most of you won't be excited by HTML code, but I wanted to show what it looks like: it uses Flexbox, it's much smaller in size, and it makes better use of mixins, which we didn't have before when we were based on Bootstrap. If I can get the other window up (I shouldn't have left Skype open), these are components that have been built with it. The grid, the green layer we've created, is built on Flexbox, in case you haven't used it before, and as we add and remove elements, they take different proportions of the page and the rows. One of the interesting things it does, if I can grab that bit, is that it's mobile-friendly; how the elements behave at different breakpoints is built in. I don't have an actual number for how much smaller it is at the moment. We're releasing this soon, so maybe within a week the Argos search page will be using Bolt. Some of the other pages will still use the old one until we move over, because our teams are split and release at different times.
The other internal project I'd like to mention is Argos Dexter's Lab. We had a problem running A/B tests on the website: we would see flickering, where a running test would visibly modify the page in front of the customer. The reason is that we cache pages through Akamai, while our testing tool applied variants only on the client side. We've changed that with this internal project: we render the page on the server and cache each variant separately on Akamai. It improves performance, and there's no flicker anymore for customers. I have a small demo as well. This page is served from where our instances run; it's before Akamai, so it's not a cached page, it comes directly from our UI service. The variant is this button, either under "check stock" or above it, and as the code shows, we essentially specify a cookie name and then, depending on the value of that cookie, we render a variant. The cookie isn't set by customers, it's set by Akamai. (What happened here? Sorry, okay.) If the cookie had value two, you'd see the button rendered on top. To show you the difference, I can do the same in production. What would happen there is we'd sometimes see it flicker on a slow connection: the page looks one way first, and then it gets flipped. That's what we wanted to get rid of, and one of the main reasons it happened is that the page would sometimes take a while to download and mount the bundle, because it was actually the client-side code that decided which variant is shown. We've moved that away from the front end and render it on the back end. I thought this would take just 15 minutes, but going back and forth with the mouse takes a while. Okay, so what's improved?
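The talk shows the real Dexter's Lab code only on screen, but the core idea, server-side variant selection keyed on a cookie that the edge (Akamai) sets and caches against, can be sketched as a small pure function. The cookie name and variant labels below are entirely made up for illustration.

```javascript
// Hypothetical sketch of server-side A/B variant selection as described:
// the edge sets an experiment cookie and caches one page per value, so the
// server renders the chosen variant and the client never flickers.

// Parse a raw Cookie header ("a=1; b=2") into an object.
function parseCookies(header) {
  const cookies = {};
  (header || '').split(';').forEach((pair) => {
    const idx = pair.indexOf('=');
    if (idx > -1) cookies[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  });
  return cookies;
}

// Pick the variant for an experiment, falling back to the control.
function pickVariant(cookieHeader, experiment, variants, fallback) {
  const value = parseCookies(cookieHeader)[experiment];
  return variants[value] || fallback;
}

// e.g. render the buy button below or above "check stock" (names illustrative):
const variant = pickVariant('ab_pdp_button=2', 'ab_pdp_button', {
  '1': 'button-below-check-stock',
  '2': 'button-above-check-stock',
}, 'button-below-check-stock');
console.log(variant); // button-above-check-stock
```

Because the variant is decided before the HTML is sent, the CDN can cache one page per cookie value and the customer never sees the page flip after load.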
We have some numbers, and the first metric we use is first meaningful paint, which means the primary content of the page is visible. The second metric is first interactive, when people can actually first interact with the page, and the third is consistently interactive, which means they can keep interacting: they can add to the trolley, see different images, and so on. There's a little star here noting that it also requires the CPU to be idle, among some other conditions, depending on which of those two metrics it is. We ran the same test on the same laptop against our codebase from seven months ago and the current one, and these are the results, all from Lighthouse. Before, the performance score was 21, which is not great; now it's 30, so it's a little better. In terms of meaningful paint, we've saved about a second. First interactive might sound awful at around 60 seconds, but this is on a slow connection, loading the page and the whole bundle; we've reduced it by two seconds, and we've gained around a second on consistently interactive. There are some other metrics too, such as the speed index that Lighthouse gives, but overall we've saved about a second in terms of performance.

What's next? Our roadmap is to keep adopting newer things. Perhaps switching to Preact, which is a lighter alternative to React. React Metrics is something we could use for our analytics code; that's one of the slowest parts of our page, as we have Adobe scripts that block the rendering of the rest of the code and have to load first. Using React Router would be a minimal but still real improvement. Service Workers could be very useful for mobile users, which is why they're on the list: they could speed up navigation through the site, and they work on slower connections and even without a connection.
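On the Preact roadmap item: the usual way to try Preact in an existing React codebase (not shown in the talk, so this is a hedged sketch rather than Argos's plan) is to alias React imports to `preact-compat` in the Webpack config, so components keep working while shipping Preact's much smaller runtime.

```javascript
// Hypothetical Webpack snippet for the "switch to Preact" roadmap idea:
// alias React imports to preact-compat so existing React components run
// unchanged on Preact's smaller runtime.
module.exports = {
  resolve: {
    alias: {
      react: 'preact-compat',
      'react-dom': 'preact-compat',
    },
  },
};
```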
And Web Components are a more experimental thing that people, mainly Google Chrome developers, are pushing forward, and something we'll perhaps consider using alongside React. And I think that's all, thank you.