
Thinking Functionally

John Stovin speaking at Dot Net North in April, 2017

About this talk

Recent versions of C# have seen the introduction of many features that originated in the functional programming arena - but where did they come from, and where do they lead to?


Just for some background, I actually live in the Peak District, and I work in Sheffield, but it's nice to be back in Manchester, as I was a student here, back in the late '80s. So it's all a bit different from back then. They've stolen the maths tower. But yeah, it's nice to be back. So this is a talk I gave back in, when was it, October, last year, at DDD North? I've changed it a little bit, but not much, so if anybody saw it at DDD North, you can leave now, because there's nothing much to see that's different. And if anybody's going to DDD South West late next month, I'll see you there, as well. So yeah, as I say, I'm like a lot of developers, I like shiny new stuff. I have to thank a friend of mine for doing this cover, it's rather fun. And one of the great things about Sheffield is we have lots of shiny stuff. It's the home of stainless steel, so we've got lots of shiny things in Sheffield, so it's a nice place to live if you like shiny stuff. So my latest shiny thing is F#. I thought I'd see what all the fuss was about and play around with it for a bit. I've probably spent the last 18 months to two years starting to get to grips with it, and I'm really still only starting. But I find that one of the best ways I learn stuff is by teaching it to other people, so you're my guinea pigs, this is how I learn stuff. As a side job, I do a bit of teaching as well, so I kind of like doing that sort of thing. I tend to lurch from imposter syndrome to kind of Dunning-Kruger and back again. So either I think I'm rubbish or I'm brilliant, so this is why I'm kind of experimenting on you. But I'm totally in imposter mode tonight, because I really don't know a lot about this stuff, I'm still learning a lot, but I will impart to you some of the knowledge that I've learned along the way. Has anybody here done any F# programming at all? Played with it? Good, anybody done more than dip a toe in it? Anybody kind of think they're an F# wizard?
Well that's a relief, I won't embarrass myself, then. I found starting to learn functional programming was very much like when I moved, back in the '80s, from writing C to writing C++: there was a huge paradigm shift to make, to suddenly start thinking in terms of objects instead of functions. And going to functional programming, it's kind of moving back again, so you've got to pull apart your functions and your data. So I'm gradually making that paradigm shift, but it's interesting, one of the reasons I wanted to give this talk is to say that there are a lot of lessons you can learn from functional programming that you can take back into your everyday C# programming and apply, to help you write better code, or at least I think that's what I'm doing. I found this lovely quote from a guy called Ed Post, who wrote this back when FORTRAN was a thing: "The determined Real Programmer can write FORTRAN programs in any language." I'm determined not to do that. It's very easy, because F# lives on top of the .NET stack, to write object-oriented code in F# if you really want to, but I really wanted not to do that; I wanted to learn what the basic idioms of F# were, why they worked, how they hang together, and then I'm trying to bring that back and explain it to you guys. So just a bit of history first of all, history of functional languages in particular. Functional languages have been around for a long time. Lisp was 1958, which is almost contemporary with Fortran, I'm not quite sure which came earlier. The first Lisp compiler was 1962. John Backus gave the Turing Award lecture "Can Programming Be Liberated from the von Neumann Style?" Well, history seems to suggest no, but maybe we're getting there now.
There was a sort of big resurgence in functional programming in the late '80s, which picked up just as I was at university, actually, so I did a bit at university with Miranda, and then Haskell followed on later. We're now getting to the point where processors are powerful enough that we can run functional languages on virtual stacks like the Java stack and the .NET stack, and still get reasonable performance out of it. One of the problems with early implementations of things like Lisp and Miranda was that the hardware just wasn't powerful enough to give you the performance you needed to write sizeable applications in those sorts of languages. So they were a bit toy at the time. There's a lot more mathematics behind some of this functional stuff than there is perhaps behind some of the more object-oriented languages. I'm not going to go into any of the theory, partly because I don't really get much of it myself, I'm much more of a practical, applied sort of person. But I'm beginning to pick up some of the ideas, and when you start understanding how the ideas fit together, it does help. So here's a quick list of the topics I'm going to talk about this evening. We'll get somewhere through this list, and then I'll just have a break when the pizza arrives, so I won't keep you from your pizza for too long. I'm going to start simple and work up, because these things build on each other. When I wrote this talk, it actually helped me to understand how a lot of these things fit together, and I went, "Oh, aha! That's how that works! And that's how that works with that!" And I hope that I can give you some of that and short-circuit some of that learning for you guys.
So we're going to start with functions, immutable data, types, cover a bit of recursion, lists and sequences, higher-order functions, collection operations, pattern matching, and then a little bit on what's called railway-oriented programming, and then conclusions and questions. And if you've got any questions, just shout. Don't save them to the end, just shout out if you've got any questions. So, kicking off with functions. One of the core principles of functional programming is to write pure functions, functions that don't have side effects. So functions that take data in, return data, but don't do anything in between. There are ways to do side-effecty things while keeping your functions pure. They tend to use things called monads, and I won't go there, I haven't got all day. But one of the things that functional programmers don't like is exceptions, because they break the flow of your code. There are other ways to handle exceptional circumstances than throwing exceptions, and I'll cover that later. The other thing is, keep functions small. It's better to write small functions that do one thing and compose them than to write big functions. I'm sure you know that from your C# programming, but it's worth reiterating. Because if you have a small function, it's easier to reason about, you have fewer code paths, and it makes it easier to test. So write small functions, write pure functions, compose them together to do complicated things. One of the other core principles of functional programming is immutable data. So data does not change; there is no assignment, or rather, there's only single assignment. You only give a value to an object or a piece of data at the point at which you create it. If you want to change it, you don't modify it, you make a copy with the new value.
And most functional programming languages make that a very easy process, and make it very hard, or uncomfortable, to make things mutable. That makes your code easier to read and understand, I think. It also gives you other useful benefits, which I'll talk about in a minute. So in F#, reassignment is a code smell, it's something you don't want to do. There is a mutable keyword, so this says create me a variable called x and make it changeable, and you have to use the left arrow operator, <-, to change the value instead of the equals operator. So mutable is ugly and painful to type, which tells you you don't want to make things mutable. And that's a nasty operator to use. So if you're writing F# code and you see that, you go, "Oh, I don't like that. What can I do to avoid it?" As I was saying, there are other advantages to having immutable data, and one is thread safety. Because if you can't change the data, you can pass it around across threads without any problem at all, because you know that once you've created it, it's only readable, which means there's no risk that some other process or thread is going to come along and change your data underneath you. In a threaded application, if you context switch, and another thread has access to the data you've just been working on, you cannot guarantee that that thread won't change the contents of the data, and then when the context switches back again, bang, you've suddenly got different values in your data. That makes it very hard to reason about mutable data in a threaded context. And now that we have multi-core processors and all this extra stuff that we're getting these days, it means that safety guarantees are very hard to provide. So immutable data gives you a lot of thread safety. So how do I write an immutable class? You can do this in C#.
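As a sketch of the two versions being described in the next bit (assuming a simple 2D point class with a Translate method, as on the slides):

```csharp
// Mutable version: anything holding a reference can change X and Y under you.
public class MutablePoint
{
    public double X { get; set; }
    public double Y { get; set; }

    public void Translate(double dx, double dy)
    {
        X += dx;   // modifies the point in place
        Y += dy;
    }
}

// Immutable version: values are only assigned in the constructor,
// and Translate returns a *new* point instead of modifying this one.
public class Point
{
    public double X { get; private set; }
    public double Y { get; private set; }

    public Point(double x, double y)
    {
        X = x;
        Y = y;
    }

    public Point Translate(double dx, double dy)
    {
        return new Point(X + dx, Y + dy);
    }
}
```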
This is, I'm sure, something like what you've probably written as a sort of DTO or a POCO in C#. So you put getters and setters on all your properties, and you make them public. This is some sort of point class for 2D graphics or something like that, and we have a method Translate that says move it by X and Y in the plane. But there's nothing to stop something else coming along and changing the values of X and Y while this is in use. So we don't want to do that. So let's do it this way instead: only assign values in the constructor. So once I've constructed this thing, I can't change these, 'cause the setters are private. I can still read them, but I can't change them. And instead of changing the values in place, I return a copy with the new values when I call Translate. So just to compare with the version that was there before: that one changes its own data in place; this one returns a new version of that same data. So I can happily pass this around knowing that nothing will change it once it's been created. And it doesn't take much more code, and in some ways it's much easier to reason about. And also, just as an aside, in C# Six you don't even need the setters. Okay, so null. Tony Hoare, who I believe was a professor at the university here at one time, was a significant mover in the development of ALGOL, and he was responsible for putting null references into ALGOL W. He looked at databases and thought, well, we have null values in databases, we should have a comparable null value in the language to reflect this. And as time went by, he realised that it was the biggest mistake he'd ever made, and probably the biggest the whole of computer science had ever made. I think it's more than a billion-dollar mistake, I think it's several billion dollars now. I mean, think of the amount of time and effort you have to put into making sure that nulls don't get propagated or used in the wrong place.
We even have a special exception for the fact that you've got a null when you weren't expecting a null, which seems to me to be a fairly crazy way of doing things, really. He did make up for this by later on inventing communicating sequential processes, CSP, which was a forerunner of several major parallel programming paradigms, including the actor model, so he kind of made up for it a bit, but still, he's very embarrassed about that. Is he still alive? Does anybody know? So how do you avoid nulls in C#? I should have said, in F# there is no null. There are ways that you can have no-valued values, but not null. You can use null, I mean it's there in the language, but it's generally hard to get at, as it were. I haven't done that much interop, to say exactly how you'd do it, but it's definitely one to avoid. And if you're writing pure F#, you just don't use null. Unfortunately, in C# you're stuck with it, and it makes your code harder to read because you have to put null checks in everywhere. And it's like async code: once you put it in somewhere, you've got to put it in everywhere. Null checks just propagate madly everywhere once you start using null. So there are things you can do to avoid using null. You can use the null object pattern. Hands up if you know what the null object pattern is. Well that's nice. For those of you who don't: if you have an interface definition or a type definition for a class of types, you can define one particular instance of that type which contains null values and in some way signals the fact that it's a null value. But because it's actually an instantiation of a real interface, it's not null, and you can pass it around happily. You still have to code around it, but it's a lot safer than passing nulls. Use code contracts. Does anybody here use code contracts? Yes, good.
And stick NotNull attributes on your function parameters and get your code contracts to check for them, so that you can be certain that you're not passing nulls around. Think about using an option type. This is one of the main ways that functional languages get around the lack of null. You have a type which can either be some value, or none. And you just say, "Is this thing None, or is it Some value?" And if it's some value, then pull the value out and use that. A bit like nullable types, but safer. You can create a generic option type in C#. It's a bit tricky, and it's harder to use without pattern matching, although, as I'll say later, now we've got some sort of pattern matching in C# Seven, in Visual Studio 2017, it's a bit easier. But it's a good defensive pattern to avoid using null wherever you can. Types are fundamental to F# and to most other functional programming languages. One of the core concepts in F# is that when you define functions, you consider the type transformations that you perform in the functions that you're using. There's far too much C# code that I've seen where naked ints and strings get passed around from function to function, and we do string matching, rather than wrapping them up in some sort of more meaningful type. So don't pass a string for a name; create a name class or an address class or whatever, something that has meaning, and for which you can create a null object so you don't pass around null values. So try to avoid using raw, fundamental types in C#; that way you get the benefits of the avoidance of null, as I was talking about before. C# now has tuples, which is great. It still doesn't really have a record type, well, it kind of does, I guess a struct is about the closest we get. And we still don't have a useful discriminated union in C#, but it's coming, I think.
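A minimal generic option type along the lines being described might look like this in C# (the names here are illustrative, not a standard library type):

```csharp
using System;

// A minimal Option type: either Some(value) or None, never null.
public sealed class Option<T>
{
    private readonly T value;
    public bool HasValue { get; }

    private Option(T value, bool hasValue)
    {
        this.value = value;
        HasValue = hasValue;
    }

    public static Option<T> Some(T value) => new Option<T>(value, true);
    public static Option<T> None { get; } = new Option<T>(default(T), false);

    // Callers must handle both cases explicitly, a bit like pattern matching.
    public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none)
        => HasValue ? some(value) : none();
}
```

The point of the Match method is that the caller has to decide up front what "no value" means, instead of risking a NullReferenceException later.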
The more I see of the way that C# is developing, every iteration, the F# team seem to feed more ideas back into the C# team, and with a lot of those ideas, C# is actually becoming more of a functional language. There are too many naked strings and ints, and for that matter floats, in C# code. Use types everywhere you can. So wrap them up at the boundaries of your code, pass around concrete types that mean something, and then unwrap them as they go out of your code on the other side. It helps you to make your intent clear. You can look at the code that you've written and understand it better. Yeah, and as I said earlier, use null objects that won't store an invalid value and won't throw exceptions when you hand them around. Here's a little bit of F# code, just to see. This is what's called a discriminated union in F#. You can't do this in C#; it would be really nice if you could. You can kind of do it with interfaces, but you can't really. So I can declare a type of shape. I can say, well, it's either a rectangle, which has two float values, or it's a circle with one value. And then I can define a function, area, that takes a shape. And then match, so I can say if this shape, s, is a rectangle, then to find the area, take the two values and multiply them together, and if it's a circle, take the single value that I've got, multiply it by itself and multiply it by pi: pi r squared. So despite the fact that these things have different type signatures, I can kind of wrap them up in a more general container, and then discriminate on the contents of that container later on. So I can pass these things around in sets and lists and things, and then treat them differently at the times when I have to treat them differently, and treat them as the same thing when I don't need to treat them differently.
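The F# being described on the slide looks roughly like this (my reconstruction from the description):

```fsharp
type Shape =
    | Rectangle of float * float   // width and height
    | Circle of float              // radius

let area s =
    match s with
    | Rectangle (w, h) -> w * h
    | Circle r -> System.Math.PI * r * r
```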
You could do something like this in C#, but you'd need some sort of empty interface at the top level to derive from, or some sort of non-functional base class. There's a lot of functional code that makes use of these sorts of constructs, and it would be really nice to have something like that in C#. One of the things that we've got here is pattern matching. So I'm saying I've got a shape: is it a rectangle or is it a circle? When I gave this talk in October, there wasn't anything in C# that would let you do that; there is now, now we've got C# Seven. But this is a lot more flexible than lots of if statements or switch statements. You can match by type, you can match by value, you can match by the absence of a value, and what's more, you can use pattern matching to pull values out of more complicated types, or smaller objects out of big objects. For example, if you've got a sequence of objects, you can use pattern matching to give you the head of the sequence and the tail, the rest of the sequence. So you can apply that recursively and keep pulling the front thing off the list. There's really nothing quite equivalent in C#. It would be nice to have something. But as I say, you can fake some of this stuff. You can at least try to use inheritance to fake some of it, but it's not quite as satisfactory. You can use inheritance, you can use typeof, so you can match on type, kind of. I'm going to mention in a bit what you can do with pattern matching in C#, but it's not particularly pleasant to use, and you end up with lots and lots of boilerplate code, either lots of ifs or a switch with lots of cases, neither of which is particularly pleasant to look at. We do have some pattern matching now in C# Seven. Has anybody had a look at that, the pattern matching in C# Seven, anybody? Yes, good. So they've overloaded the is operator. So up to C# Six, you could say, is this thing of this particular type? You can now do an is on a static value.
So you can now say, is this thing null? Which you couldn't do before. You can now say, is this int value 42? So you can do a comparison on a static value. You can still match on type, as you could before. And there's also the var pattern, so that you can say, if o is var x, I haven't got an example, sorry, you're going to have to follow, and then you can use that value x, and it will automatically cast x to the most derived type that o is. So you can give it an object which is actually a point, for example, and x will be of type point. So you don't have to go grovelling around in reflection to find out what the type of this thing really is. So that can be quite useful. And you can use it in a switch statement, so you can switch on the type of something. You can say case Foo f, for example. But there's still no structural matching, so you can't use it to pull head and tail off a sequence, for example, and you can't use it to pull elements, actually, maybe you can use it to pull elements out of a tuple, now that we've got tuples in C#. I can't remember, does anybody know? No, okay. So, as I say, C# is gradually getting to the point where we've got some of these useful functional features, but I think the language designers are struggling to work out what they can fit into the framework of the existing language that sort of fits with what's there already. I remember seeing some of the discussions about this stuff, and there were several features that got put into the early pre-releases and then taken out again, because they were just too complicated to use. So C# is never going to be as expressive in functional terms as F# is, but there are features that are coming. Moving on: recursion. Functional languages are heavily reliant on recursion. It's something that ordinary, curly bracket language programmers don't use much. It is usually inefficient, yeah, absolutely.
It's risky, because if you're not careful you can blow the stack, and it's often inefficient, as Bill says. But often, if you can take a loop and rethink it in recursive terms, once you get your head around the ideas of recursion, and you start getting used to reading recursive functions, which does take a little bit of practice, then, as I say, beauty is in the eye of the beholder, they're often easier to understand and simpler. You end up with fewer lines of code. No, well, I don't know, sometimes, because if you've got a big loop with ifs in it, is it harder to get all the concepts in your head at the same time? So just as an example here, something simple: let's count an enumerable. That's how you do it with a loop. But in a functional language, the problem is that your loop counter here has to be mutable, because every time you go around the loop you have to increment the count. This is the main reason that functional programmers like recursion: you can write recursive functions that use immutable data, because you return a new value from each call of the function that's different from the previous value, so you're creating a new value rather than assigning a value to a mutable variable. It's harder in C#, because you've got to write a recursive function that passes this count around, but you can see here at no point do I actually assign to it. I start with zero, and each time I go around I return the next value in turn, and when I get to the end, all those values pop off the stack and I end up with the result I wanted. So I avoid the sin of assignment, but I have to write more complicated code. It's a lot easier in a language that helps you do these things. F# actually, if you write recursive functions in F#, and it's tail recursive, I won't explain what tail recursion is, the rules are quite complicated.
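A sketch of the two counting versions in C#: the loop needs a mutable counter, while the recursive version only ever creates new values.

```csharp
using System.Collections.Generic;

public static class Counting
{
    // Loop version: count has to be mutable, because we reassign it each time.
    public static int CountLoop<T>(IEnumerable<T> items)
    {
        int count = 0;
        foreach (var item in items)
            count = count + 1;   // reassignment
        return count;
    }

    // Recursive version: no reassignment; each call returns a new value.
    // Note C# doesn't guarantee tail-call optimisation, so a long enough
    // sequence can still blow the stack here.
    public static int CountRecursive<T>(IEnumerable<T> items)
    {
        using (var e = items.GetEnumerator())
            return CountFrom(e, 0);
    }

    private static int CountFrom<T>(IEnumerator<T> e, int soFar)
        => e.MoveNext() ? CountFrom(e, soFar + 1) : soFar;
}
```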
But basically, you pass the last value out as the last thing you do. F# will spot this and compile it back into a for loop, so that you don't run the risk of ever blowing the stack if you write a recursive function in F#, as long as it's tail recursive. And the compiler will tell you if it's not tail recursive. C# isn't that clever, so you've got to do a lot of the checking yourself if you want to write recursion. So if you're writing code in C#, you only really want to use it if you're absolutely certain that your recursion will terminate at some point, and you won't actually blow up the stack. A lot of functional programming concepts revolve around doing operations on standard collection types over your particular data type. So in the same way that C# has lists and IEnumerables and arrays, most functional languages have two or three fundamental collection types. The two main ones are lists and sequences. So a list is an ordered collection that you can't modify in place, and sequences are ordered but lazily evaluated, so they're potentially infinite. The nice thing about them is that you can define them recursively: as I was saying earlier, you can think of them in terms of a head item followed by zero or more elements at the end, so that you can recurse across this structure, take the top item off, look at the rest of it, and keep going until you've got nothing left. And there are usually good language constructs for doing those operations, and you can create lists by pushing things onto the head. So you've got a list, you stick something on the front, you've got a bigger list. You can fake most of this stuff in C#. You can use IEnumerables, you can use yield in loops, which I use all the time. It's a really nice way of creating sequences.
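For example, yield return lets you build a lazy, even infinite, sequence in C# and take just what you need from it:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Sequences
{
    // An infinite sequence of square numbers, produced lazily.
    // Nothing is computed until somebody enumerates it.
    public static IEnumerable<long> Squares()
    {
        for (long n = 1; ; n++)
            yield return n * n;
    }
}

// Take the first five: 1, 4, 9, 16, 25.
// var firstFive = Sequences.Squares().Take(5).ToList();
```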
One of the things I do in my spare time is audio programming, and thinking of audio as infinite sequences of samples is a really nice model to work with, and you can fake all that with IEnumerables. So, I'll just wander off at a tangent here for a bit and talk about higher-order functions. A higher-order function is a function that takes another function as an argument. If you've ever used LINQ, you've used higher-order functions, because Select or Where or anything like that in LINQ is a higher-order function. It takes a lambda as one of its arguments. Having functions as first-class objects is a fundamental concept in functional programming. You can pass a function to another function, and then use the first function to apply the second function to the data that you're working with. Does that make sense? I think it does. You can do very complicated operations while still having a simple concept of what's going on. A lot of functional language constructs involve, for example, iterating over a list by applying a function that you've defined to every element in that list in turn. As I say, use LINQ wherever you can, I find it's really useful. Erik Meijer, who was one of the founding members of the team, who actually brought the idea to Microsoft when he was working there, is a functional programming fiend. It's a very good way of introducing yourself to many of the concepts in functional programming. Also, again, as a kind of side note, if anybody's looked at the reactive extensions, give those a try, they're great. In the same way that you can use LINQ to compose operations on sequences, the reactive extensions do the opposite.
They let you compose operations on events, so that instead of having to spread your event handling around across multiple functions, multiple different event handlers that handle an event and then raise another event, you can chain operations on events to filter them and to modify them. It's great if you're doing anything with UI coding, it's really nice, and it works very well with WPF, if you're ever doing desktop development. If you use LINQ, you'll come across these ideas, except that they're called different things in LINQ. So filter is called Where, and map is called Select, because they decided to use database terms, because they thought when they first wrote LINQ that it would be mainly used to connect to databases. One of the ideas in functional languages, again, is that you should have a uniform set of methods that apply to all your collection types. So it doesn't matter whether you've got an IEnumerable, or a sequence, or a list, or whatever, you can apply filter, where p is a predicate that says if the element conforms to this particular rule, then return it, and if not, ignore it. And again, you can map over any collection type, and f is a function that takes the element type and returns you a modified version of that, or a transformed version of that, or anything else you want to do. If you're doing work with collections, use LINQ, because it's functional, it works on sequences, you can compose it, and it's really elegant, and you can do complicated things in single lines. If you're not using LINQ, you're missing a trick. I did speak about the reactive extensions before; they're really clever, really nice. If you ever have to do anything event driven, take a look at them. Again, they're the brainchild of Erik Meijer, who was the guy who gave us LINQ, and they're just great for UI code, or if you're doing anything that's stream based or event based. Has anybody here used them? Good, there's always a few, that's good, okay.
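To make the naming concrete, filter and map are just Where and Select in LINQ, and both are higher-order functions taking a lambda:

```csharp
using System.Linq;

var numbers = new[] { 1, 2, 3, 4, 5, 6 };

// filter: keep the elements where the predicate returns true
var evens = numbers.Where(n => n % 2 == 0);

// map: transform every element with the given function
var squares = evens.Select(n => n * n);   // 4, 16, 36

// Composed in one chain; nothing runs until the sequence is enumerated.
var result = numbers.Where(n => n % 2 == 0)
                    .Select(n => n * n)
                    .ToList();
```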
I'm nearly done, actually. The last topic I wanted to talk about is what's called railway-oriented programming. If you're interested in this in particular and you want to read up a bit more about it, have a look at a website called F# For Fun and Profit; there's a whole section on this. But it's very F#-oriented, so I'm trying to explain some of the ideas. Going back to what I was saying earlier: write small functions with no side effects, and one of those side effects is throwing exceptions. So how do you deal with that? Well, one way is to make sure that every time you return a value from a function, you also return some sort of indication of whether the operation that that function was expected to do has succeeded or failed. So the idea is that you return, again, this is F#, you remember our discriminated union from earlier? This is the sort of result type you expect to return from a function, and it's got two cases. It's got a success value, but it's a generic type, so this is some sort of type that you return when your function succeeds, and it can be any type you like; and again, there's some sort of type that you return if the function fails. So you return one of these results, whatever happens, and then you pull it apart, and if you've got a success result, you pass that success value on to the next function in your chain. If you've got a failure, you can bypass it; a bit more about that in a minute. So think of your function as a set of railway points. This is why it's called railway-oriented programming. If we're travelling from left to right and we have some sort of data input here, we call our function and we return success or failure. Then, if the function succeeds, we extract that. Pardon me, pizza went the wrong way.
We take that success value, and if we fail, we look at the failure outcome and do something with that. And we can use this to compose more functions. So we can build some sort of generic operation that we wrap around the function itself, and we chain these generic operations together. This gets a bit complicated, but essentially, within your generic operation, you call the function, you look at the outcome, and then you decide which path to follow. So if my first function succeeds, I take the value and I pass it into my second function, and if that succeeds, I pass the result out of the end. But if my first function fails, I take that failure result, but I don't do anything with it. I pass it into this next section, so this is slightly misleading in that you have to think of this as a function that takes one of these result types as an argument. And at this point, if the result is a failure, we just carry on passing the result through; we don't do anything with it. But if it's a success, we do something more with it, and pass it on. So you can chain these together, and then at the end, you've got a success or failure result, and you don't care really where in the chain it failed. If you do care, you could encode it in the failure result, but the idea is that at each point in this chained series of functions, you look at the result from the previous one. If it succeeded, you do more actions on it; if it failed, you just pass it on to the next one. And then at the end of the sequence, you pull it apart and see what the result was and whether you succeeded or failed. But it means you can write composable functions that you can just glue together, and you don't have to wrap everything up in try-catches; you just pass this value on, and each time you call it, you inspect the value to see whether you've got what you want. - [Audience Member] It sounds a little bit like promises. - It's much the same concept.
The other thing you can do, if you're using a functional language, and I'm not going to go into any detail, but for those who are interested in the concept: you can use map and bind to build a function that takes one of these results, pulls out the success value, and passes it into a function that works on that underlying type. So the idea is that you hide all the processing of success and failure inside another function, a higher-order function, and then you just pass in a function that does all the hard work. So you can think of these points as a higher-order function that takes a function, and this is the same higher-order function that takes a different function as an argument, and takes in this data and processes it, so that you can pass these results down the chain. Once you get your head around this concept, you can write composable functions that handle success and failure gracefully, without having to write lots of try/catch blocks. It's a very useful concept, especially, I found, in web pipelines for web API calls and things like that, where you're taking a request, pulling it apart, doing a bit, doing a bit more, then producing a result at the end, which you're returning. I find it's really useful in, for example, web API applications. There's loads more stuff that's also applicable, that you can learn from and apply to your C# code. Currying and partial application, the idea that you can break a function with N arguments into N single-argument functions and compose them, has lots of applications. Map, apply, and bind, as I was saying earlier: use these to move between the world of plain values and this wrapper type of success and failure results, and back again.
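Currying and partial application, mentioned above, can be sketched like this; in F# every multi-argument function is curried automatically:

```fsharp
// A two-argument function is really a chain of one-argument functions:
// add has the type int -> int -> int
let add x y = x + y

// Partial application: supplying only the first argument yields a
// new function that is still waiting for the second.
let addFive = add 5

// addFive has the type int -> int
let result = addFive 10   // 15
```

This is what makes composition so natural: any function can be specialised by fixing some of its arguments, and the remainder slots straight into a pipeline.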
I haven't got time to explain it in detail, and it's quite mind-blowing when you start to understand it, and monads even more so; the two are quite closely connected. Don't ask me to explain monads. You can find lots of online explanations that make no sense. I think the thing about monads is you get them or you don't: if you don't get them, it doesn't matter how many explanations you read, and if you do get them, then you don't need the explanations. Again, it's one of those aha moments once you understand them. So what I've been trying to say in this talk is that reasoning about code in multi-core, multi-threaded environments is hard, and functional languages give you a level of abstraction above the von Neumann machine's sequential processing and branching model, which helps you write cleaner, more understandable code, I think. A lot of these abstractions you can still apply in ordinary curly-bracket languages that are much more von Neumann-oriented, and by using them you can write cleaner code that your colleagues will understand more easily, and that will be less buggy and more likely to be right the first time. I think that's it. So this is my take on the quote I had earlier: if you're really determined, you can write functional code in any language. And if you're interested in pursuing F# and other functional languages, there are a few things. There's Scott Wlaschin, I think that's how you pronounce his name; his website F# For Fun and Profit is great. You can start simple and work up. The F# Foundation is an independent foundation that's there to help people learn, and to try to spread the word about F#. There are lots of online resources. There's a really good online community who are really friendly; there are a couple of really good Slack channels for F#. And there are quite a few books.
That's a nice starting point if you want to think about thinking functionally without worrying too much about language-specific details. And that's it, has anybody got any questions? - [Audience Member] What kind of polymorphism support does F# have outside of what you get in C#? - Polymorphism in F# is much more akin to what you might call, if you've come across the concept of duck typing in languages like Python; it's structural rather than inheritance-based. In other words, if it looks like a duck and it quacks like a duck, then it is a duck. Which means that you can apply rules to much more disparate types, because you can just say if it has this particular property then do something with it. So as you saw with, to go back here to my one little bit of example, this one here. We can say that a shape can be a rectangle or a circle; they don't have to look anything like each other at all, because this isn't object-oriented. We don't carry around the methods that operate on this data with the data. So we have a function here, called area, that works on shapes, and it looks at the particular shape and says match the shape with this particular case. So it's much more flexible. - [Audience Member] On this code, what tells it to run the area? - So there's nothing, I haven't put an example. There would be another line here that would say let a equal area of rect, or area of circ. - [Audience Member] It's not so much a question, I think. So S has got an inferred type, I guess? - Yes, F# will infer the types for you pretty much all the time. You can be specific if you want to, but it will infer them if you're not, and will tell you if it can't do something. For example, this match is a complete match, because it matches all the cases in this shape type.
If I added a third one called triangle, then the compiler would tell me that this match wasn't complete, because I'd missed a particular case out, and things like that. So it's pretty clever about its type inference. Okay, thank you.
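The shape example discussed in the questions can be sketched like this (the exact slide code isn't shown in the transcript, so the names are illustrative):

```fsharp
// A discriminated union: a Shape is either a Rectangle or a Circle.
// The two cases don't have to resemble each other at all.
type Shape =
    | Rectangle of width: float * height: float
    | Circle of radius: float

// area works on any Shape by matching on the case.
// The compiler infers s : Shape, and will warn if a case is missing:
// adding a Triangle case to Shape makes this match incomplete.
let area s =
    match s with
    | Rectangle (w, h) -> w * h
    | Circle r -> System.Math.PI * r * r

// What tells it to run area: a call like this.
let rect = Rectangle (3.0, 4.0)
let a = area rect   // 12.0
```

Because the functions live apart from the data, a new operation over shapes is just another match, rather than a change to every class in a hierarchy.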