
A Year With AWS

Andrew Clarke speaking at London Node User Group in January, 2017

About this talk

Moving from a company where everything ran on 'bare metal' in a DC on Goswell Road to a startup where everything was already running in AWS has proven to be an interesting learning curve. This talk walks through lessons learned along the way and some of the ways to use parts of the AWS toolkit.


I'm Clarkie, I work for Tido, and I'm going to talk to you about my first year with AWS. I was working at a company where we ran everything on bare metal, in Goswell Road, not very far from here, just off Old Street. And then I joined Tido, a proper startup, and everything was already in AWS. This is my introduction to it; this is my life, basically, from my first year at Tido. At Tido, we're building a bunch of apps to help people learn and discover more about music. We're starting with piano music, predominantly classical, but we're looking at expanding to other genres, instruments, all sorts of stuff. And we've built a bunch of stuff... it's cool, but that's another talk. This is about infrastructure. So I'm going to talk you through one of our APIs, the one driving the main content inside the app: you can browse, have a look, see what you want to download and play, and all that sort of stuff. It's a pretty simple app, a bunch of Node servers. Our data model is really simple; we've basically got four things, maybe five now. We've got users; volumes, which in a more traditional audio world would be an album; pieces, which are just tracks, whatever your favourite hip-hop artist releases; and then masterclasses, which are deep dives into one of those pieces. A masterclass gives you the ability to look at a bunch of performances by a true professional playing the piece, so you can see what they're doing. This is really useful if you're learning: you can see which fingers they're using and all that sort of stuff, and the way they position their elbows is apparently really important. I'm not actually that much of a musician. And then we also have an explore section where you can scroll through the notation, and the professional will talk you through why the composer may have chosen certain things for that piece of music.
It's really cool, but there's not a lot to it, actually, from a data point of view. We're using hapi. hapi's cool; no one needs to say it anymore. So, the requirements: I needed a database. It needed to be clustered; I didn't want it to break. This is kind of my job on the line. Backups, because if someone did a drop table users, that would be really sad. I needed an application server; we're a Node shop, so Node, it's obvious. But I needed to run that on some servers, and someone had told me once it needed to be clustered, so yeah, cool, we'd do that as well. And I wanted it to be really easy to deploy, so continuous integration and deployment. This is all stuff that I'm really passionate about and have been for a long time, so it needed to do all of that as well. What actually happened at my previous place was this, and this is what happens when you get an infrastructure team involved. Unfortunately, the infrastructure guy who did this with me isn't here anymore, but there is some really cool stuff in it. This is a best-in-class way of deploying if you're running on your own bare metal: Jenkins, Puppet, MCollective, app servers. In fact we had three different levels of app stuff going on, and there was stuff over here that we weren't even doing, which was really bad. Anyway, I didn't want that anymore, because now it was just me doing this. So I came across AWS, and there are a lot of icons, a lot of acronyms, and a lot of jargon, and basically, this is my introduction. So we're going to play a game. Who's used AWS? All right, loads of you, so you're going to be good at this. What's this? - [Man 1] [inaudible 00:04:12] - No, close. - [Man 2] RDS. - Yes, RDS: Relational Database Service. Super simple. You can choose whatever database you want.
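Creating one of those databases from the CLI might look roughly like this. It's a sketch only: every identifier and credential here is a placeholder, and the Multi-AZ and backup-retention flags are the ones that cover the clustering and backup requirements above.

```shell
# Sketch: a Multi-AZ PostgreSQL instance with automated backups
# retained for 7 days. All names and credentials are placeholders.
aws rds create-db-instance \
  --db-instance-identifier api-db \
  --engine postgres \
  --db-instance-class db.t2.small \
  --allocated-storage 20 \
  --master-username apiuser \
  --master-user-password 'change-me' \
  --multi-az \
  --backup-retention-period 7
```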
Mine is, within reason, SQL, so relational. MySQL, PostgreSQL, Microsoft SQL Server, Oracle, or Aurora... I can't even pronounce it, whatever that one is. Super simple setup: fill in a few boxes and you're off, you've got a database. Automated backups, done. It's Multi-AZ, so it's clustered across multiple different data centres. That just takes care of half of my requirements. It's perfect, done. So what about the application? Next one, this one. - [Man 3] [inaudible 00:04:53] - No, it's the other one. They look so similar, it's really annoying. This one's Elastic Beanstalk. So what is Elastic Beanstalk? "Platform as a service", or "infrastructure as code". It's kind of a bit like Heroku, although I've not really used Heroku, so I can't really say that. But it's basically a layer of abstraction over CloudFormation. And what's CloudFormation? CloudFormation is the configuration and infrastructure stuff underneath; Elastic Beanstalk is like a higher-order version of it. It's really cool, so simple to use. Setting it up, you get this big list of platforms to choose from. We're going to choose Node, obviously. The next thing is the environment type. I realise that's a bit low; you might not be able to see it. You get to choose whether you want a single instance or a load-balanced, auto-scaling environment. Why would you choose one over the other? For all of my proofs of concept that I just completely throw away, I go for the single instance: it's dirt cheap. In fact, it comes in under the free tier in AWS. AWS gives you a whole bunch of stuff for free, including basically a whole single load-balancing environment, which you should use. And I probably shouldn't say this, but you could actually get more than one free tier as well, if you just change your email address. I may have done that. You then get to choose your instance type, and this is where I was like, "...What do I want?"
There's lots to choose from, and it completely depends on your application and what you're doing. If you're basically just running a simple Node app, it doesn't... I'm going to say it doesn't really matter, but no two apps are the same. Try one and see; you can always change it later, and that's what's really nice: it's all configurable later on as well. We run pretty much all of our stuff on t2.small, and the only reason we don't go for the micros is that npm installs actually take up a lot of CPU. It took way too long and ran out of memory on the micros to install some of the packages we were using. So if you're an npm module maintainer, consider what you're actually choosing to push out, because it can have an impact on this sort of stuff. You're costing me money with all your free software. No, it's really cool. And then we actually run a bunch of the bigger ones. The T family is general purpose and burstable, so if you have spikes in your load it's quite cool, because you build up credits over time and then spend them under high load. The M family is non-burstable, so if you're under consistent load, just use those. The C ones have higher CPU, for really compute-intensive work; we could probably run a couple of those for some of our audio alignment stuff, which is really intensive C code. And then there are memory-intensive ones... there's loads of documentation on this. Try one and see: does your application work? Does it fit? Test it, put it under some load, see what happens, and move on. So, I've just shown you the user interface for setting up your application, but there is also a CLI, a command line. Why would you use one over the other? The first couple of environments I built, I did through the user interface, just because it was a comfortable thing, but don't do that.
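For reference, the environment-type and instance-type choices above map onto EB CLI flags. A hedged sketch with made-up environment names; the `--single` flag is the single-instance option, and omitting it gives you the load-balanced, auto-scaling kind.

```shell
# Throwaway proof of concept: a single instance, no load balancer.
eb create poc-env --single --instance_type t2.micro

# Production-ish: load-balanced and auto-scaling on t2.small
# (npm install was too heavy for the micros), starting with two
# instances.
eb create api-prod --instance_type t2.small --scale 2
```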
Longer term, the UI is rubbish: it bombs out, there are some really bad things in there, and I don't think everything you can do on the CLI is in there. The CLI is also scriptable, so you can put it in your build server and deploy whole applications on PRs, or however you want. I'll go into a bit more of that later on, but the scriptable nature of it is really, really nice. All of our builds look exactly the same for all of our environments, so we could extract that and centralise our build system, which is quite nice. So, one more. Who's this guy? Okay, disappointed. This is Lang Lang, a Chinese pianist. He's a superstar. Remember the app that we're building: he's massive in China, like really big. And he caused us some problems, big problems. One of the things I had to learn very quickly about AWS was regions, AZs, all of this stuff; I'd never had to deal with it. Everything was in Goswell Road, I could walk up there. None of this made sense to me. So: a region is basically a geographic area, pretty simple. There's one in Ireland, there's now one in London, they're opening one in Paris. There's also one in Tokyo, which is quite cool. And an AZ, an availability zone, is basically a data centre, or probably a cluster of data centres, in that geographic region. Between them they have super low latency, so you can deploy servers in each one and they can talk to each other really, really quickly. If you're clustering things, you want to put them in those same AZs. If you're doing the full geographic thing, you need to allow for fault tolerance with longer-distance traffic. These are the regions AWS have now. The green ones are the new ones: there's one opening in Paris soon, and I think that's the second one in China. So they do have a presence in China.
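Standing up a copy of an environment in a second region can be sketched with the EB CLI's saved configurations. Everything here is a placeholder, and this assumes the saved configuration file is available locally when you re-initialise against the new region.

```shell
# Save the running environment's configuration to a named config...
eb config save api-prod --cfg prod-config

# ...point the EB CLI at the Tokyo region...
eb init my-app --region ap-northeast-1 --platform node.js

# ...and create an identical environment there from the saved config.
eb create api-prod-tokyo --cfg prod-config --scale 2
```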
But there is a big but, and we will come to that. So, I had my application running in Ireland, with a database and everything else, and then Lang Lang came along. Trust me, I didn't know about this guy either. He came along, and I ran this command, and I now had this application, with its configuration, in Tokyo, with two servers, exactly the same as the one in Ireland. It took like five minutes and it was there, done. Cool, right? But it still didn't solve the problem, because we also have a bunch of content. Our application, the storefront, is just like e-commerce: you can go in, choose, and browse. But we actually had big zip files of content. A masterclass is four videos, three videos... "N" videos, all in one big zip file that we were delivering to the application. Now, don't criticise that, we can fix it later. But anyway, we were delivering large packets of data onto an app over long distances. So, one more. - [Man 4] S3. - Yes, well done. Lots of people have used S3: it's just buckets you can stick blobs of data in. In fact, Nelson did a really good talk on building a chat application on top of S3, and it works, so it's an interesting use; I'd never seen it before. You basically have buckets, which are just somewhere you can throw stuff. Think about the way you're going to structure the keys in there before you start throwing stuff in, because you can end up with a lot of mess. Permissions: again, consider how you want to access this data. We had a lot of problems accessing some stuff we thought we could access, just last week. Consider it before you start. You can do things like set positive permissions and then set negative permissions on top, and that's what kind of screwed us over. Object versioning is quite nice: you can keep throwing assets in with the same keys.
Your application chooses the key it wants to store an object under, and AWS will basically increment a version number underneath and can return that history. It's not a truly nice Git-like thing, but you get the history of everything you did with that single object key, all the way through. It can also do static website hosting, which is quite nice: you click a checkbox and it will start serving the bucket as if it's a website. We run all of our static sites through there now. But we still had this problem; Lang Lang keeps coming back. I've never met the guy, though. So, last one. What's this? - [Man 5] CloudFront. - Yes, it is CloudFront, but it's different from the icon in the AWS user interface. Why? It's different. It's not helpful. Anyway, we can come back to that. CloudFront is a content delivery network, and it's really easy to configure it to sit in front of both S3 and your Elastic Beanstalk applications. So: I've got problems in China, I just need to get everything over there as quickly as possible. This was going to be the golden bullet; it's beyond silver, this is amazing. But, and this is a big but, when we did this there wasn't even the Beijing PoP, the point of presence inside China for AWS. They do now have a point of presence inside China, which is really good, but there's still a but with that. These are some metrics that we ran with a China-specific CDN, a China-specific content delivery network. Yes, they're selling it to me, so it's obviously a little bit weighted. But this was actually what we were seeing: we had 80,000 downloads in one week in China, and 80% of them had problems downloading our content. And that was through the Apple CDN. If anyone was going to get it right, we thought Apple might be all right, but apparently not. So, we had real issues.
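The versioning and static-hosting settings just mentioned are each a single call from the CLI. A sketch with a hypothetical bucket and key; note the bucket still needs a policy allowing public reads before the website endpoint is actually reachable.

```shell
# Turn on object versioning: re-uploading to the same key keeps the
# old object as a previous version instead of overwriting it.
aws s3api put-bucket-versioning \
  --bucket tido-content-example \
  --versioning-configuration Status=Enabled

# Inspect the history of a single key.
aws s3api list-object-versions \
  --bucket tido-content-example \
  --prefix masterclasses/example.zip

# Serve the bucket as a static website.
aws s3 website s3://tido-content-example/ \
  --index-document index.html --error-document error.html
```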
So we thought we'd bring it out, externalise it, and have a go at doing it ourselves. And sure enough, we could have done a lot better: we could have got up to 90% delivery rather than 20%. But the minimum pricing for this was £1,000 a month, and we hadn't proven the market in China yet, so we basically just went, "Sorry, dude. We're kind of out in China." This was a big problem for us, so think about where you want to position your application before you start trying to deliver content. So what happened next? That's a really sad story, right? But it's not all bad. Our application is still running, and it works in most of the world. Most. We have a bunch of auto-scaling Elastic Beanstalk APIs. We also have a bunch of auto-scaling Elastic Beanstalk instances behind a queue. I've not covered those, but you can basically stick them behind an SQS queue to do offline processing: if it doesn't need to be real time and you can do it a bit later, just stick it behind a queue and it will run automatically. We have our S3 buckets, and we have two CloudFront CDNs. This is quite a nice little trick that I picked up: it takes a long time for configuration to propagate on CloudFront distributions, so if you're not quite sure what the configuration should be, just set "N" of them up. You can always take them down again. I might have done that a few times. It's all learning, though, right? So this is what we've got now. We're running 13 Elastic Beanstalk applications and environments. These are all clustered or single instances, some of them behind SQS queues, as I said; a bunch of S3 buckets; a few CloudFront distributions, some of which may just be configuration changes, I'm not sure, I need to check. ElastiCache for all of our session storage: you can choose Redis or Memcached, and it's really nice and super simple to set up.
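The queue-backed offline processing mentioned above is roughly two steps: create the queue, then create an Elastic Beanstalk environment in the worker tier that polls it. A sketch with placeholder names:

```shell
# Queue for offline jobs that don't need to be real time.
aws sqs create-queue --queue-name audio-alignment-jobs

# A worker-tier environment: instead of serving web traffic,
# Elastic Beanstalk polls the queue and POSTs each message to the
# application for processing.
eb create alignment-worker --tier worker --instance_type t2.small
```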
One little caveat on ElastiCache, though: you can only access it from inside the AWS world. If you want to access it from your local machine, you basically have to proxy in, and it's a hassle. So you may want to choose to run Redis on an EC2 instance instead, but again, it's all a balance of maintenance versus hassle. And then we've actually scrapped RDS in favour of DynamoDB, but that is another story, and still a lot of hassle. So, AWS is great. It really is. I learned a lot in the year, and one thing I would say: it's a lot better than running your stuff on Goswell Road. My old boss used to have nightmares about an oil tanker blowing up outside the data centre on Goswell Road. It's a slight risk, but we don't have that anymore, I think. The one big problem, though, is vendor lock-in. AWS has a whole bunch of stuff, and every day I find new things that they do. My latest ones are Certificate Manager: free SSL certificates, free wildcard SSL certificates, but you can only use them on their services. Cognito: user management, really nice auth stuff, done. CodeCommit: GitHub, but in AWS. CodeBuild... so much stuff. You basically just have to stop and accept it, it's fine. And it's still not great for China. They are getting better, they have a point of presence, and you can run stuff outside of China; however, you still have to go through the Chinese government, which at the last count meant a minimum of about fifteen grand of initial outlay to set up a business in China, to then be able to use AWS services, or anyone else's, inside. There are also hidden limits. I found this out last week, or the week before, one of the two. I did an eb clone to get up a new set of instances to try a different version of a particular application, and it basically said, "No. You've reached your limit." And I was like, "What?
We're only running like 26 servers. You told me I could scale to 10,000." But behind the scenes, they actually put a cap on the number of instances you can have of any one type, and that's 10. We'd managed to get by by having basically 10 of each type, and then when I tried to deploy two more, it said, "No." Then you basically just have to ring them up and say, "Can I have a bit more?" and they go, "All right, cool. Your limit is now 50." I think it's to try and keep budgets down, but I'm not entirely sure. And the documentation is all there, but, yeah. So that's my introduction to AWS. It's been a really fun year. I've learned a lot, sometimes pulled my hair out, sometimes did one of those, "Yeah!" Any questions? Whoa. Go on, start over here, Sam. Mic, mic, brand new mic. - [Man 6] Elastic Container Service, have you used it? What's your take on it? - No, but... so we're doing some hefty audio alignment stuff, which takes a MIDI file and a performance recording and tries to match them, in partnership with Queen Mary University. Their stuff is dirty C code and it only runs on Ubuntu, and this is all running on AWS's own Linux, so we couldn't get it to work. So it was like, "Sod it. Stick it in Docker, go on, off you go." And you can actually run Docker through EB, Elastic Beanstalk. But no, I haven't used Elastic Container Service. Yes? - [Man 7] How do you manage all those services? Are you just using the [inaudible 00:20:43] interface, or have you stuck it into some kind of dashboard? How do you manage all the services? - So we use TeamCity to do all of our builds. We run a Prod and a Dev branch, and we fork off Dev. We do all of our work on per-ticket feature branches; those get merged into Dev and automatically deployed straight out onto Dev.
We can QA, then we go through a review process into Prod, which then automatically gets pushed straight out to Prod. It's lightweight, but it's continuous delivery. Pretty hands off; we trust our devs. He's one. Any quest... Over here. Cyril, run. - [Man 8] I have just a few remarks on your talk, and then one question. [inaudible 00:20:34] The CLI gets all the betas far before they're released in the UI, so you may notice a one-year gap between them. So the [inaudible 00:20:25] manuals... the CLI updates every day, so you can have features that no one knows about. And S3 has a huge number of files. If you try to deploy [inaudible 00:21:55] application, like a single-page application [inaudible 00:21:59], something like that. You [inaudible 00:22:01] data, let's say, one data approach to make a [inaudible 00:22:05] application [inaudible 00:22:07]. And that's [inaudible 00:22:08] from, even, you're not thinking about every different [inaudible 00:22:13]. You cannot do SSL on your own domain without [inaudible 00:22:17]. And one question: I realise that you're looking to optimise cost. Maybe you have other jobs which you're doing on SQS, or on [inaudible 00:22:31]. Have you found a [inaudible 00:22:23]? - Yes, so, we have. But again, most of our offline stuff is this audio processing, which is in C, which, as far as I'm aware, probably wouldn't work very well with Lambda. - So, Lambda can run Node, JavaScript, so it can handle most of your online packets, stuff like that. For [inaudible 00:22:52] it's amazing, because it scales almost infinitely; the only limits are memory and the five-minute execution time. So, quite a long [inaudible 00:23:00]. Or you can [inaudible 00:23:02] them with JavaScript over that. So quite a choice. - Cool. Everything's an experiment, right? Any more? One more.
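For the Node side of that Lambda discussion, creating a function from the CLI might look like the sketch below. The role ARN, account ID, function name, and zip contents are all placeholders; the 300-second timeout is the five-minute execution cap mentioned above.

```shell
# Sketch: package a Node handler and create a Lambda function.
# The role ARN, names, and files here are all hypothetical.
zip function.zip index.js

aws lambda create-function \
  --function-name offline-job-example \
  --runtime nodejs4.3 \
  --role arn:aws:iam::123456789012:role/lambda-exec-example \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --timeout 300 \
  --memory-size 256
```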
- [Man 9] A quick question: for a new project or a new business, would you go straight to AWS, set up a Beanstalk instance and get running with it? Or start with bare metal, Ansible provisioning or [inaudible 00:23:30] on whatever server is available? - That's a good question. From my experience (I tried to spin up a business on my own with a friend; disaster), that was when I learned that you can actually use two free tiers quite nicely to experiment with. The most important thing when you're setting up a business is to go as fast as you can, and if you're spending time on configuration, don't bother. I think the quote from Elastic Beanstalk's documentation, I missed it, is that it's the fastest way to spin up an application on AWS, and it really does not take very long. I was going to do a little demo, but then I got a bit of stage fright, so I thought, "No, I won't do that." But it really is quick. And to clone it, if you want to say, "Yeah, fine, that's working, but I just want to try something over here," you just clone, change, deploy, and you're done. Something else you can do really nicely with Elastic Beanstalk, which I've never done as part of my deployment, is switch the URLs. So you can do proper blue-green deployments, with two colours, where you deploy your latest application over here, get it all up and running alongside your existing one, and then you switch the URLs, and behind the scenes CloudFront doesn't know any different; your traffic just gets routed. That's really nice. So, for me, it's whatever tool you're comfortable with. Elastic Beanstalk, I use it all the time, I love it. I'm not getting paid for this, by the way. It's so simple, but it's also super flexible; you can go really deep with the configuration. So, you asked about Elastic Container Service; that's really nice.
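That URL switch is a single API call once both environments are up. A sketch, with hypothetical environment names:

```shell
# Blue-green: deploy the new version to the "green" environment,
# smoke-test it, then swap the CNAMEs so traffic moves over.
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name api-blue \
  --destination-environment-name api-green
```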
But we didn't need ECS for that. We could wrap our application in a Docker container and just deploy it, and it works; and we still get to choose all of these different instance sizes and deploy it however we want. You can have up to 10,000 instances behind a single load balancer. I am not scared that this is going to fall down, as long as people stay outside of China. Okay, we're done. - [Girl 1] Well, if there are any more questions, maybe we can carry on at the pub. - One more thing: we're available to download in the App Store. It's 2.99 a month, just got to get that in there. - Thanks, Clarkie.