Why monads
The key to understanding this is to see that a line of code performs an operation, and then decides what value to pass to the rest of the code that follows. How would we model the normal control flow of just running the next operation in this way? This isn't a precise definition of a monad, but each of these cases is a monad: a different way of linking the result of an operation with the rest of the code that follows it.
You can probably think of lots more examples along these lines now that you have seen some, and that's the point: monads appear all over the place. Monads aren't an invention, they are an observation. There's a lot of code that we write all the time that has a similar form. So that sort of answers the original question of this article: why are monads useful?
We prove them to be useful by using them all the time, even if we don't write functions like the ones I wrote above. That's not very satisfying though. Why do people go on about monads if they are just simple things like this? It's a good question, and you can be perfectly happy continuing as you have, and just knowing that monads are a common programming pattern. However, what do we do when we spot a repeated pattern in programming? We try and formalise it and do something useful with it.
Usually when people are talking about monads, they are talking about having special support for them in a programming language. As we saw above, monads are very common and every language supports them, but not many have special constructs for handling them.
Why would we want such a thing? The first reason is that most languages don't have great support for dealing with "the rest of the code". There are usually two ways to do this: early returns and `if` blocks. Return statements are very useful, but can become pretty repetitive; consider Go's error handling, where `if err != nil { return err }` follows almost every call. `if` blocks are great for reducing the duplication if there is a common exit pattern, but can become very deeply nested very quickly. There are better ways to write this, but let's just say for a minute that a better way of doing this might be useful.
What might that way be? We have a common pattern that we want to repeat lots of times, and for error handling we know what we want that pattern to be: it's the familiar check-and-return, written here as `result <- someOperation`. This means: run `someOperation`, then if there is an error return it, otherwise assign the result to the variable and execute the rest of the code.
We can then chain these operations easily. This would have the same behaviour as before, but it hides all the error handling away behind those arrows. So far you may not be that impressed: we've invented a new bit of syntax to do something specific, and ignored things like wrapping the errors.
That's not very impressive. Remember, though, that we had several different examples that I said were similar. Can we re-use this for another of those cases? What if `someOperation` doesn't return an error, but returns a nullable type instead? In that case we want the code that we put in the middle to do a nil check instead of the error check. I realise I have completely glossed over how we change what the arrow does in the two cases.
Obviously we need a way to do that; I won't cover how in this article, but it clearly requires language support. If we figured out all of the details, this would work for anything that is a monad, meaning that we would have a single consistent interface for describing how to thread operations together whenever a monad is involved. In the case of a list we need to execute the rest of the code several times, so there the arrow is doing something more like our original functions.
In the list case, the second operation wouldn't receive the list as a whole, but would be called once for each item in the list. Maybe I still haven't convinced you that this is useful yet, so let's look at one more example.

Why does the abstraction exist? Because it was found that many patterns of computation form monadic structures. Abstraction of a structure allows for writing code that works across all instances of that structure. To put it more concisely: code reuse.
In functional languages, the most powerful tool found for code reuse has been composition of functions: the good old `(.)` operator. It makes it easy to write tiny functions and glue them together with minimal syntactic or semantic overhead. But there are cases when the types don't work out quite right: one function produces a `Maybe b` where the next expects a plain `b`. You just want a bit of leeway. You want to be able to treat `Maybe b` as if it were basically `b`.
It's a poor idea to just flat-out treat them as the same type, though. That's more or less the same thing as null pointers, which Tony Hoare famously called the billion-dollar mistake.
So if you can't treat them as the same type, maybe you can find a way to extend the composition mechanism. In that case, it's important to really examine the theory underlying `(.)`. Fortunately, someone has already done this for us: it turns out that `(.)` together with `id` forms a mathematical construct called a category. But there are other ways to form categories. A Kleisli category, for instance, allows the objects being composed to be augmented a bit. A Kleisli category for `Maybe` would consist of functions of type `a -> Maybe b`, a composition operator that short-circuits on `Nothing`, and `Just` as the identity. And suddenly, we've extended the power of composition to things that the traditional `(.)` can't handle.
This is a source of new abstraction power. Kleisli categories work with more types than just `Maybe`: they work with every type that can form a proper category, obeying the category laws, namely that identity is a left identity for composition, that it is a right identity, and that composition is associative.
As long as you can prove that your type obeys those three laws, you can turn it into a Kleisli category. And what's the big deal about that? Well, it turns out that monads are exactly the same thing as Kleisli categories. `Monad`'s `return` is the same as Kleisli `id`, and while `(>>=)` isn't literally Kleisli composition, each can be defined in terms of the other. So why go through all this bother? Why have a `Monad` abstraction in the language?
As I alluded to above, it enables code reuse. It even enables code reuse along two different dimensions. The first dimension of code reuse comes directly from the presence of the abstraction. You can write code that works across all instances of the abstraction. There's the entire monad-loops package consisting of loops that work with any instance of Monad. The second dimension is indirect, but it follows from the existence of composition.
When composition is easy, it's natural to write code in small, reusable chunks, the same way that having `(.)` available encourages writing small, reusable pure functions. So why does the abstraction exist? Because it has proven to be a tool that enables more composition in code, resulting in reusable code and encouraging the creation of more reusable code. Code reuse is one of the holy grails of programming. The monad abstraction exists because it moves us a little bit towards that holy grail.

A type system can be regarded as calculating a kind of static approximation to the run-time behaviours of the terms in a program.
That's why a language equipped with a powerful type system is strictly more expressive than a poorly typed language. You can think about monads in the same way. As Carl and sigfpe point out, you can equip a datatype with all the operations you want without resorting to monads, typeclasses or whatever other abstract stuff.
However, monads allow you not only to write reusable code, but also to abstract away all redundant details. As an example, let's say we want to filter a list. A slightly more complicated version of `filter`, one that also passes an accumulator from left to right, is `filterAccum`. We can then redefine the `nub` function, which removes duplicate elements from a list, in terms of `filterAccum`, with the list of elements already seen passed as the accumulator.
However, it's not possible to safely leave the IO monad (i.e. to extract a pure value from an IO computation). Another example is mutable arrays: after you have left the ST monad, where a mutable array lives, you cannot update the array in constant time anymore. So we need the monadic filtering function `filterM` from the `Control.Monad` module. And as a final illustration, `filterAccum` can be defined in terms of `filterM`. This example illustrates that monads not only allow you to abstract away the computational context and write clean reusable code (due to the composability of monads, as Carl explains), but also to treat user-defined datatypes and built-in primitives uniformly.
I don't think IO should be seen as a particularly outstanding monad, but it's certainly one of the more astounding ones for beginners, so I'll use it for my explanation. The simplest conceivable IO system for a purely functional language (and in fact the one Haskell started out with) is a program that is just a function from the input text stream to the output text stream.
With laziness, that simple signature is enough to actually build interactive terminal programs, though very limited ones. Most frustrating is that we can only output text. What if we added some more exciting output possibilities? But then you'd also want some way to read from files. Any chance? Well, if we could trigger that file-reading from within the Haskell language itself, we could make the output depend on file contents. There's one problem here: we don't really have a notion of when the file is read.
The `[Output]` list sure gives a nice order to the outputs, but we don't get an order for when the inputs will be done. Now you may spot an imbalance: you can read a file and make output dependent on it, but you can't use the file contents to decide to, e.g., also read another file. The obvious solution: make the result of the input events also something of type IO, not just Output. That sure includes simple text output, but also allows reading additional files, etc.
That would now actually allow you to express any file operation you might want in a program (though perhaps not with good performance), but it's somewhat overcomplicated: in practice, you wouldn't want to use plain constructors to define all your programs. There would need to be a good couple of such fundamental constructors, yet for most higher-level stuff we would like to write functions with nice high-level signatures.
It turns out most of these would look quite similar: accept some kind of meaningfully-typed value, and yield an IO action as the result. We'd better make that requirement explicit.

The way such actions compose differs across the existing monads, resulting in different behaviours. The confusion about monads comes from their being so general. One interesting thing about monads is that the result of the composition is always of type "M a", that is, a value inside an envelope tagged with "M".
This feature happens to be really nice for implementing, for example, a clear separation between pure and impure code: declare all impure actions as functions of type "IO a", and provide no function, when defining the IO monad, to take the "a" value out of the "IO a". The result is that no function can be pure and at the same time take a value out of an "IO a", because there is no way to extract such a value while staying pure (a function must be inside the "IO" monad to use such a value).
Monads are just a convenient framework for solving a class of recurring problems. First, monads must be functors (i.e. they must support a mapping operation over the values they contain). Second, they must come equipped with a bind operation and a return operation. Finally, bind and return must satisfy the monad laws: left identity, right identity and associativity.
Alternatively, one could define monads to have a flattening (join) operation instead of bind. The list monad is commonly used to deal with non-determinism. The bind operation selects one element of the list (intuitively, all of them in parallel worlds), lets the programmer do some computation with them, and then combines the results in all worlds into a single list by concatenating, i.e. flattening, the nested list.
A permutation function, for example, can be defined directly in this monadic framework. It should be noted that the list monad is in no way a side-effecting computation: a mathematical structure being a monad (i.e. supporting the operations and obeying the laws above) does not imply side effects, though side-effecting computations often fit the monadic framework nicely.

You need monads when you have a type constructor and functions that return values of that type family.
Eventually, you would like to combine these kinds of functions together. These are the three key elements needed to answer why: a type constructor, functions that return values of that family, and a way to combine them. Let me elaborate. At first you have plain functions and ordinary composition; life is good. Then, one day, you need to create a new family of types. It could be because you need to consider the possibility of returning no value (`Maybe`), returning an error (`Either`), multiple results (`List`), and so on.
Notice that `Maybe` is a type constructor: it takes a type, like `Int`, and returns a new type, `Maybe Int`. First thing to remember: no type constructor, no monad. With the new family of types, you can't easily combine your functions anymore; the types no longer line up. Life is not good anymore. And here's when monads come to the rescue: they allow you to combine that kind of function again.
You just need to change the composition.

Yes, alright: you're probably trying to learn Haskell, and that's why you eventually ended up here. (Bang-patterns are an extension of Haskell.) Yes, that's it: no passing of results to continuations, no bundling of results with some abstract state, just a plain ordinary result.
All you need to do is provide an all-new OI value; what a novel concept! Well, the way we're using these OI values is similar to the use of timestamps as described by F. Warren Burton in "Nondeterminism with Referential Transparency in Functional Programming Languages"; the main difference is that timestamps must first be retrieved from trees, whereas OI values can be used directly.