
Ava Test Runner - A Fresh Take On JavaScript Testing and Growing an Open-Source Project

Mark Wubben speaking at The JS Roundabout in August, 2017

About this talk

AVA is a Node.js test runner. We'll start by asking ourselves why we write tests, and which types of tests we end up writing. We'll discuss how AVA helps you get your tests done, without getting in your way. Finally we'll talk about AVA as an open source project and community.


Hello everybody. My name is Mark. I'm a software developer based here in London working as a contractor. I also work on a bunch of open source projects, of which AVA, a Node.js test runner, is the biggest. So we're going to start this talk by asking ourselves why we write tests and which types of tests we end up writing. We'll then discuss how AVA helps you get your tests done without getting in your way. And then finally, we'll talk briefly about AVA as an open source project and community. And yes, there will be plenty of gratuitous stock photos in the background, courtesy of the Unsplash community. I don't know about you, but I enjoy writing code more than testing the code. And that goes for when I'm not being paid to write code, when it's just for fun, but also when I am being paid to write code. And especially when people are paying you, there's pressure just to get stuff done, and that gets in the way of having any tests at all. There's a fantastic blog post by Sarah Mei in which she proposes five reasons why we write tests, and this really resonated with me. So let's just go through those reasons. Even without proper tests, you still need to know if your code is working, right? So you can click around in the UI, maybe you can start up a Node REPL and type some stuff just to see the outputs. That gets a little bit annoying and repetitive. So the best way to get confidence in your code is to write some tests for it. And then once you have tests, you can use them to make sure you didn't accidentally break your code or, assuming you're not working alone, that you didn't break somebody else's code. In the words of Sarah Mei, this prevents regressions, a fancy word meaning things that used to work, but don't anymore. Your tests can be a way of documenting what your code does. It's just one way of documenting it, maybe not the best way. But often enough, it's the only way you're ever documenting what your code is doing.
And testing your code forces you, at least a little bit, to design it better. Because if you can't test it, you should probably find a different approach to writing it in the first place. And then finally, tests can come in handy when you need to change your code. This is less about preventing accidental breakage. It's more about supporting you when you go about making changes. You need to have tests at different levels of your code base, and that's not always the case, and organising it can be difficult. But still, you will be changing your code at some point, and then it's nice to have some tests for it. So why do we test? Because it gives us confidence that the code we write actually works, that we haven't broken anything, to communicate what the code is doing, and so that we can change the code. Still though, writing tests can be a slog. The Wikipedia article on software testing lists about 20 types of testing. We're not going to go through all 20 types, but I thought we would focus on some of the types of testing that software developers engage with day-to-day. So mostly, we'll write unit tests. We'll take a class, or a module, or a function, and then we just test it by itself. A different level is integration testing. The goal here is to test how different pieces of your program work together. So don't get hung up on whether you're doing unit testing or integration testing. Often, if you have small helper functions, you might end up testing them completely just by testing something else, so you don't have to waste time carefully unit testing each helper function. Whenever we find a bug, it's good practice to write a test that reproduces it. Then when we fix the bug, we know it's fixed and we can be sure it stays fixed. This is regression testing. Destructive testing is when we deliberately cause a function to fail, and it helps ensure that the code can safely handle invalid or unexpected inputs.
And functional testing involves making sure your program does what your user needs it to do. I suppose it's a form of integration testing, but you're not really testing the internals of your program. So, for instance, if you have a web app, you could be using Nightmare to test how it's supposed to be navigated by the user, and then make sure that the user ends up at the page they are supposed to end up at. So we've learned that testing matters, and we've learned about the types of testing we might encounter as we go about our day. Still though, is any of this actually fun? I think testing is a slog, but AVA will help you through it. AVA was started in November of 2014 by Sindre Sorhus and Kevin Mårtensson. The other members of the team are Vadim Demedes, Jeroen Engels, James Talmage, Juan Soto, and myself. We'll talk more about the project shortly. But first, I want to show you how AVA gets your testing done with these previously discussed factors and testing types in mind. AVA aims to have a small surface area so there's less for you to remember. This is immediately apparent in its test interface. There is no BDD, TDD, or exports object; there's just a test function. And you call it with a title and the implementation. You can see that here. We import test from AVA, and then we have a very simple test, one plus one equals two, and we make sure that that's true. So you don't have to worry about whether you're using the test interface correctly. You just give your test a descriptive enough title and you're done. And if you have multiple files, we'll make sure to prefix the test output with the file name. AVA has only eight types of assertions. (My clicker is a bit fast today.) We can get away with having so few assertions because we use power-assert by Takuto Wada, which captures the values of the JavaScript expressions you use in your tests. So, for example, we could write that array.indexOf of some value must equal two, and we say that has to be true.
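Those two examples look roughly like this. To keep the sketch self-contained and runnable with plain Node, a two-line stub stands in for AVA; in a real test file you would write `import test from 'ava';` instead. The array contents are invented for illustration, since the slide doesn't show them:

```javascript
// Minimal stand-ins for AVA's `test` and `t` so this sketch runs on its own.
// In a real test file: import test from 'ava';
const t = { true: (value) => { if (value !== true) throw new Error('assertion failed'); } };
const test = (title, impl) => { impl(t); console.log('passed:', title); };

// AVA's whole interface: one `test` function taking a title and an implementation.
test('one plus one equals two', (t) => {
  t.true(1 + 1 === 2);
});

// With power-assert there's no assertion method to memorise: a plain JavaScript
// expression is enough, and on failure every sub-expression value is printed.
const array = ['foo', 'bar', 'baz']; // invented example data
test('finds baz at index two', (t) => {
  t.true(array.indexOf('baz') === 2);
});
```

The stub obviously doesn't reproduce power-assert's rich failure output; it's only there so the example executes.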
So if this expression results in true, then the test passes. If it doesn't, it fails, but AVA will print out what all the values are in that expression. And I'll have an example of that later. This means you don't have to memorise, like, a large assertion library, 'cause you can just write JavaScript. AVA runs each test file in its own process. This means you can safely modify global state without accidentally affecting a different test, or an unrelated test that you weren't expecting to affect. You can also run multiple test files concurrently, each in their own process, to hopefully speed things up a bit. It does have a performance overhead, though, so it kinda depends how many test files you have. And we're looking at defaulting to a lighter level of isolation that should still work for most practical use cases. Tests fail. AVA will help you understand why, so you can fix your code, or your test in case your code was actually correct. So when assertions fail, we'll print the values used by the assertion and try to give you really detailed and readable output. So in that t.true array example I just showed you, what it ends up looking like is this. We run AVA, and you can sort of see, it's a bit small, but there's the line of the expression that failed, nicely in red, and then it shows you the result. It's false. Two is the number two. array.indexOf returns three. Zero is zero. And then at the bottom, it shows you the array. So it gives you a lot of context to figure out what was actually going on in that assertion, and that hopefully helps you debug your test. We can do the same when comparing values. So this does a deepEqual comparison between two objects, and it prints a nice diff with what's actually there that's wrong, and then what was expected. So you can hopefully look at this and then figure out where the problem is. Tests should contain assertions. Without them, what are you testing?
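For instance, it's easy to write a test that can accidentally finish with zero assertions. Here's a self-contained sketch: `getArray` is a hypothetical function under test, and the tiny `runTest` helper mimics one AVA behaviour, namely that a test completing without having run a single assertion is a failure:

```javascript
// A tiny stand-in for AVA's runner that, like the real thing, fails any test
// that completes without having executed a single assertion.
function runTest(title, impl) {
  let assertionCount = 0;
  const t = {
    true: (value) => {
      assertionCount++;
      if (value !== true) throw new Error('assertion failed: ' + title);
    }
  };
  impl(t);
  if (assertionCount === 0) throw new Error('no assertions ran: ' + title);
  console.log('passed:', title);
}

const getArray = () => []; // hypothetical function under test; empty here

// Risky: with an empty array this loop never asserts anything, so AVA
// would fail the test for running zero assertions.
//   runTest('all items are true', (t) => {
//     for (const item of getArray()) t.true(item);
//   });

// Better: assert the shape first, so at least one assertion always runs.
runTest('getArray returns an array of true values', (t) => {
  const array = getArray();
  t.true(Array.isArray(array));
  for (const item of array) t.true(item);
});
```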
So AVA will detect when your test completed without running any assertions, and make such tests fail. So, in this example, we're looping over an array and making sure every item in that array is true. But it's unclear. Is getArray supposed to return an empty array, maybe? And if it does, then what are we testing? Because we're never running any assertions if the array's empty. And when we talk about having your tests be documentation for your code, this doesn't document what getArray is supposed to do. So we can improve this test. We can assign the array to a variable and make sure it's an actual array. So now we know it's an array, and if it's empty, that's probably fine. If it has items, we can make sure they're all true. Asynchronous code can get stuck. So AVA will detect when the event loop in Node empties while a test is still pending. So here we have a test, and maybe we have to get something from a mock database or whatever, and it's just hanging. Nothing's happening. What AVA will show you when you run that test and everything just stops running, with nothing left to do, is that your test returned a promise but it never resolved. So it's probably not working the way you're expecting it to. AVA's committed to automatically supporting the stage 4 language features in your test files. So no matter what Node version you're running, you can use async/await or anything else that is now in the latest Node 8, or anything that the TC39 committee is going to move on to stage 4 soon. We'll be able to support that in AVA before we have the Node version that supports it. Importantly, though, we don't modify any built-in objects. So you won't have, say, a String padLeft function unless you're on a Node version that actually supports it. When debugging, it can be useful if you can skip certain tests or focus on specific tests that you know are failing. So we have skip and only.
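The skip and only modifiers might be used like this. Again a self-contained sketch: the stub mimics AVA's behaviour (skip a marked test; if any test is marked only, run just those), and in a real file you'd simply `import test from 'ava';`:

```javascript
// Minimal stand-ins for AVA's test/skip/only modifiers so this runs on its own.
// In a real test file: import test from 'ava';
const registered = [];
const test = (title, impl) => registered.push({ title, impl, only: false, skip: false });
test.skip = (title, impl) => registered.push({ title, impl, only: false, skip: true });
test.only = (title, impl) => registered.push({ title, impl, only: true, skip: false });

const t = { true: (value) => { if (value !== true) throw new Error('assertion failed'); } };

test('runs normally', (t) => t.true(true));
test.skip('skipped while debugging', (t) => t.true(false)); // never executed
test.only('focus on this one', (t) => t.true(1 + 1 === 2));

// Like AVA: if any test uses `only`, run just those; always skip `skip`.
const hasOnly = registered.some((r) => r.only);
for (const r of registered) {
  if (r.skip || (hasOnly && !r.only)) { console.log('skipped:', r.title); continue; }
  r.impl(t);
  console.log('passed:', r.title);
}
```

Because of the `only` on the last test, only that one actually runs here; the failing skipped test never executes.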
And you can also, from the command line, match a test by its title. You can sketch out tests that you still have to write. So you can use test.todo, and this will print a nice little list of tests that you're still supposed to be writing. This is a way to just leave some notes for yourself, for when you have time to come back to that code. Sometimes you find a bug, but you don't have the time or the knowledge to fix it. With AVA, you can still write a test that fails while the bug is present, but it won't break your CI. So here's a nicely contrived example for the Game of Thrones fans. We're gonna assume that Daenerys's favourite animal is a dragon. But maybe whoever programmed this just thought it was a dog or a wolf. So you say test.failing. Assuming this assertion fails, test.failing will mark that in the output of the test run, but it won't fail your CI build. So there's one known failure. There should be dragons. Now when you fix that bug and you don't remove the failing modifier, then it will fail, because it's supposed to fail but no longer does, which is a little inverted, logic-wise. But basically, then your CI will be breaking, and you realise, oh, I need to mark this test as no longer failing. And then, there you go, you have a test case for the bug that you just fixed. AVA makes it easy to test with multiple inputs. So, for instance, if you're writing a destructive test, you're throwing lots of data at a function and making sure it handles everything that could go wrong. So let's take a sum function. We're going to add the left to the right, and they both have to be numbers. Otherwise, we throw a TypeError. So the happy path is easy to test, right? We call sum and that must equal 11. But doing this for all the various wrong inputs gets quite annoying. So strings, null and undefined, two and true: it's very repetitive. So AVA lets you write a macro function that you can use in multiple tests.
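Put together, the sum function and its macro might look like this. The happy-path arguments (5 and 6) are assumed, since the talk only mentions the expected result of 11, and the `test` stub again stands in for `import test from 'ava';` so the sketch runs on its own:

```javascript
// The sum function from the talk: both arguments must be numbers.
function sum(left, right) {
  if (typeof left !== 'number' || typeof right !== 'number') {
    throw new TypeError('left and right must be numbers');
  }
  return left + right;
}

// A minimal stand-in for AVA's test function and macro support.
// In a real test file: import test from 'ava'; extra arguments after the
// implementation are passed on to it, which is how AVA macros work.
const test = (title, impl, ...args) => {
  const t = {
    is: (actual, expected) => { if (actual !== expected) throw new Error('failed: ' + title); },
    throws: (fn, ErrorType) => {
      try { fn(); } catch (error) {
        if (error instanceof ErrorType) return;
      }
      throw new Error('expected ' + ErrorType.name + ': ' + title);
    }
  };
  impl(t, ...args);
  console.log('passed:', title);
};

// The macro: one implementation, reused with different invalid inputs.
const checkInputs = (t, left, right) => {
  t.throws(() => sum(left, right), TypeError);
};

test('happy path', (t) => t.is(sum(5, 6), 11));
test('rejects strings', checkInputs, 'a', 'b');
test('rejects null and undefined', checkInputs, null, undefined);
test('rejects a number and a boolean', checkInputs, 2, true);
```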
So we can write a checkInputs function, which has the assertion that it throws a TypeError, and then we just do three test calls with the values. This is a lot easier to write than copy-pasting and changing the values, and it's much clearer what's going on. AVA also comes with an intelligent watch mode which reruns just the right files when you edit your code. So you can just start AVA in the corner and then go about your day editing code, and it will keep rerunning the tests. So AVA is an open source project, which means it's not about those in the core team, it's not about me, it's about building a fantastic test runner. Nobody in the core team understands all of AVA. You know, we're quite familiar with it, but there's different pieces and we don't have time to really study it all. And that's fine. Major contributions have been made by people who are not on the core team. We're there to help with issues and contributions from others. On the metrics side, we have commits from well over a hundred contributors in the main repository alone. There's 15 recipes that explain how to best make use of AVA, ranging from code coverage to MongoDB testing. The documentation has been translated into 8 languages, from French to Chinese, by almost 30 contributors. On the vanity metrics side, we're nearing 11 thousand stars on GitHub, and we have about 400 thousand monthly installs across npm and Yarn. I like the GitHub stars one. It's complete nonsense, it's just a lot of people clicking a button; it doesn't really mean it's any good. I mean, AVA's good, but it just makes you feel nice when you can go to the repo and see a high number. So our guiding principle is to be kind to everybody. This goes for support requests, for issue reports, for helping people land contributions. And yes, the minimum definition of kind is spelled out in our code of conduct. Contributing to an open source project can be daunting. But there's many ways to make contributions.
Translating documentation is immensely valuable. Bug reports too. Writing a recipe on how to use AVA is a great way of sharing your knowledge. We also try to label issues by how accessible they might be to new contributors. You'd be surprised how much low-hanging fruit there still is in AVA. It's usually a good idea to leave a comment if you're working on an issue, and to discuss beforehand if you're about to do a large amount of work, just so we don't waste each other's time. You might start working on something, and then somebody else starts working on it, and if they finish before you do, then your work might have been for nothing. So you can just say, hey, I'm working on this, and then people will let you work on it. If you're gonna spend a lot of time fixing some really complex issue, you might wanna check in advance whether your approach is the right approach, just so you don't waste your valuable time on something that needs a lot more work. Code contributions are best discussed in a pull request. We automatically run tests across our supported Node.js versions, both on Linux and on Windows. We make sure your code is consistent with our linter; we're using XO by Sindre. New code should have tests. (Ironically, AVA's tests are written using node-tap, not with AVA.) You should expect some back and forth to make sure the code solves the problem in a good enough fashion, and solves the right problem. But not to worry, we'll help you every step of the way. You don't need to be an expert in Git. If we need to fix something up just to land your code, we'll just do that, and we'll always credit you with your contribution in the commits as well as in the release notes. So we've discussed how AVA is run as an open source project with contributions from many people. We've learned that testing matters. We've learned about the types of testing we might encounter as we go about our day.
Testing may be a slog, but we've seen how AVA helps you through it. So I hope you'll give AVA a shot, and I'd be over the moon if you joined us. You can find us on GitHub at avajs/ava; that's where the main repository is. We do the occasional tweet from @ava__js. Stack Overflow is great for asking support questions, just tag it with AVA. And we're on Gitter. So thank you very much.