First posted Thursday 2 April; updated Sunday 5 April.

Some major monads; comparing Reader to State

In the homework session on Wed Apr 1, we began by discussing the different monads we've encountered so far. We've considered the super-simple Identity Monad, as well as the Option/Maybe, List, and Reader monads. Other commonly-referenced monads include leaf-labeled Trees; State, which we discussed in the seminar on Thursday; Writer (State is a kind of unified melding of Reader + Writer); and Error, which is like Option/Maybe except that what corresponds to the None/Nothing variant now has a parameter, so you can attach error messages or the like. A sample type declaration of this sort would be:

type ('bad,'good) error = Error of 'bad | OK of 'good

Haskell uses:

data Either a b = Left a | Right b

in this way. (Conventionally, they understand the Right variant to be the successful/good/correct one.) Another major monad is the Continuation monad, but we'll get to this later in the course when we discuss continuations.
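
Just to make the comparison with Option/Maybe concrete, here's a hedged sketch of what mid and >>= might look like for that error type (our own toy code, using this course's names for the operations, not the code of any particular library):

type ('bad,'good) error = Error of 'bad | OK of 'good

(* mid wraps a successful value, the way Some does for Option *)
let mid (x : 'good) : ('bad,'good) error = OK x

(* >>= feeds a success on to k, but passes an Error through untouched *)
let (>>=) xx k =
  match xx with
  | Error e -> Error e
  | OK x -> k x

For instance, OK 3 >>= (fun x -> mid (x + 1)) evaluates to OK 4, whereas Error "oops" >>= (fun x -> mid (x + 1)) evaluates to Error "oops".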

Those are the most famous Major Players in the Monad world. There are lots of other monads, many of them simple little things which are easy to understand once you've grasped these major players. There are some other quite substantial and interesting monads that we won't study --- for example, there's a Probability monad. But the ones listed above are the most commonly-used and -referenced.

When designing a monad, you have to choose how to implement the box type (in other terms your programming language already understands), and then you have to combine that with definitions for, on the one hand, mid/unit/return/pure/eta/whatever-you-call-it --- what we're symbolizing as mid --- and, on the other, one of the other key monadic operations, like >>= or <=< or join. Many implementations use >>= here. Let's focus on the choice of box type for the moment.

For the Identity and Option/Maybe and List monads (and also the leaf-labeled Tree monad), these box types are non-functional. Consider:

type 'a identity = 'a
type 'a option = None | Some of 'a
type 'a list = [] | :: of 'a * 'a list
type 'a tree = Leaf of 'a | Branching of 'a tree * 'a tree (* these are trees with no label on the inner nodes *)

If your tree type has a parameter for inner node labels:

type ('leaf,'inner) tree = Leaf of 'leaf | Branching of ('leaf,'inner) tree * 'inner * ('leaf,'inner) tree

you could also use that as a monadic box type by specifying unit for the 'inner parameter. I believe it's not impossible to use trees with more informative inner node labels as monads, but it requires complex design choices that we're not going to explore. The trees with labels only on their leaves are simpler to think about, to code, and to work with, so they will suit our purposes. It's often helpful to think of Option/Maybe, List, and Tree together, as comparing them helps you to see what the common patterns are, in a way that abstracts from the specifics of those different structures.
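
For concreteness, here's a hedged sketch of the mid and >>= one might write for the leaf-labeled tree type (again our own toy code, not the library implementation): mid makes a single leaf, and >>= grafts a whole new tree in place of each leaf.

type 'a tree = Leaf of 'a | Branching of 'a tree * 'a tree

let mid (x : 'a) : 'a tree = Leaf x

(* replace every Leaf x with the tree k x, keeping the branching structure *)
let rec (>>=) (xx : 'a tree) (k : 'a -> 'b tree) : 'b tree =
  match xx with
  | Leaf x -> k x
  | Branching (l, r) -> Branching (l >>= k, r >>= k)

For example, Branching (Leaf 1, Leaf 2) >>= (fun x -> Branching (Leaf x, Leaf (x * 10))) evaluates to Branching (Branching (Leaf 1, Leaf 10), Branching (Leaf 2, Leaf 20)).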

Notice that none of those types is a functional type, that is, none of them are of the form something -> something_else. This is also true for some other monads listed above. For the Reader and State monads, on the other hand, where we're turning our attention to now, the box types do have a functional form. Here is the box type for the reader monad:

type 'a reader = env -> 'a

As some of you noticed, and as was indicated in last week's presentation, this is really parameterized on a choice for the env type too. So we're really going to have:

type env = ...something...
type 'a reader = env -> 'a

But I've written it in this form, rather than as the more polymorphic:

type ('env,'a) reader = 'env -> 'a

because when working with a specific Reader monad, the choice of the env type will be fixed once and for all (for that Monad). Whereas that single Monad can be used as a box around ints or bools or int -> bool, or any type as an instantiation for its type parameter 'a. (Even the box types of other monads can be used for 'a --- or even the box type for other Reader monads, which may have, but needn't have, made different choices for the env type. It will emerge (though don't expect it to emerge fully in these notes) why it might be useful to have two different Reader monads that are using the same env type, rather than just using one.)

Some example choices for the env type are: when using the Reader monad to implement the binding of multiple variables, the env will be some kind of association between variables and their denotations. Perhaps it will be a list of pairs of variables and denotations. Or it might (itself) be a function from variables to denotations. For a homework exercise, we'll work with the simple idea that the environment is just a triple of denotations, where variable x is associated with the first member of the triple, variable y with the second, and variable z with the third. (This simple model only permits three variables; there's a sketch of it just below.) In (the monadic construal of) Jacobson's semantics, the environments are just the denotations themselves for a single pronoun---that is, it's like the homework idea but with only a single variable. If you want to have two pronouns, her system would require you to have two different Reader monads, somehow coordinating so as to not step on each other's toes. Exploring how to do that is another homework assignment. If you want to use the Reader monad to implement intensionality, the envs will be possible worlds, something like this:

type world = int (* you could choose to implement the worlds another way, too; perhaps as complex state-descriptions *)
type env = world
type 'a reader = env -> 'a
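
Going back to the homework idea for a moment, here is a hedged sketch of the triple-of-denotations environment (our own toy code; the lookup_x/lookup_y/lookup_z names are just for illustration):

type env = int * int * int   (* denotations of variables x, y, and z, here taken to be ints *)
type 'a reader = env -> 'a

(* boxed values that look up each variable's denotation in the environment *)
let lookup_x : int reader = fun (x, _, _) -> x
let lookup_y : int reader = fun (_, y, _) -> y
let lookup_z : int reader = fun (_, _, z) -> z

So for example lookup_y (10, 20, 30) evaluates to 20.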

As we saw when discussing the State monad, it makes a different choice about the box type. Essentially the State monad's box type is this:

type 'a state = env -> 'a * env

That is, it's a function from an env to a pair of an 'a and a (possibly altered) env. However, to emphasize the different role that the env type is playing here, it's customary not to call it an env but rather something different---like a store. Hence, for the State monad we'll really write:

type store = ...something...
type 'a state = store -> 'a * store

Ok, let's go back to the Reader monad. So we discussed one design choice: what to make the box type. Now we have to implement the mid and >>= functions. Sometimes when designing a monad you may have some latitude about how to do this; that's why we say that the Monad consists of the package of the box type plus your implementations for the key operations. Your choices for some of those will constrain your choices for the others, because they have to work together to satisfy the Monad Laws. Also, they have to be defined for any choice of parameter type 'a. So we can't define mid x = x + 1, for instance, because that only works when x is a number. For the Reader monad, the mid function is defined to be:

mid x = fun e -> x

that is, the constant function that takes an env and ignores it, just returning the payload x. This is our friend the K combinator. The type of the function mid is 'a -> 'a reader, where the argument x is of type 'a and the fun e -> x result is of type 'a reader, that is, env -> 'a.
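
In OCaml notation, with the types written out explicitly, that looks like this (a minimal sketch, assuming some fixed choice for env):

type env = int list   (* any fixed choice of env will do here *)
type 'a reader = env -> 'a

let mid (x : 'a) : 'a reader = fun _ -> x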

The choice for >>= for the Reader monad is a bit more complex. I'll write it out longhand first; that will be easier to compare to the State monad when we look at it instead. You will see that the >>= for Reader is a kind of simplification of the >>= for State. Here is the longhand version of the >>= for Reader:

(>>=) xx k = ...

The type of this operation >>= has to be 'a reader -> ('a -> 'b reader) -> 'b reader. I'll use the variable xx for the instance of the boxed type 'a reader, in a way reminiscent of how we used xs as a variable for lists or such. (Which are themselves a boxed type, so on this naming scheme we could also call them xx.) I'll use the variable k for the "Kleisli" function of type ('a -> 'b reader). Ok, so the application of >>= to xx and k has to have the result type 'b reader, which we said is env -> 'b. So we should expect our definition to have the form fun e -> SOMETHING, where SOMETHING has type 'b:

(>>=) xx k = fun e ->
               let x = xx e in
               ...

First we supply the e to the xx, extracting its "payload" which I'll call x. Next we supply that to k, getting back a new boxed value:

(>>=) xx k = fun e ->
               let x = xx e in
               let yy = k x in
               ...

What should the last line be? It's tempting to say just yy. But remember that the whole bit after the fun e -> has to have type 'b, and yy has type env -> 'b instead. So in fact what we do is to supply the same env to yy. We re-use the same environment that we get as input both for xx and for yy. This is a key design feature of the Reader monad. In the State monad things work differently, and you don't necessarily supply the same env/store to what corresponds to xx and to yy. So here is the completion of our definition of >>= for Reader:

(>>=) xx k = fun e ->
               let x = xx e in
               let yy = k x in
               yy e

That's the long-hand version. If you'll do the calculations, you'll see it can be written more compactly as:

(>>=) xx k = fun e -> k (xx e) e

which is what you'll often see instead in our notes (and elsewhere). It may be helpful to notice some equivalent transformations of fun e -> k (xx e) e. That's the same as fun e -> flip k e (xx e). (flip k works just like k but expects its arguments in "flipped" order. flip is defined as fun f x y -> f y x.) But then that's of the form fun e -> (something) e (something_else e). Which is just S something something_else, using our friend the S combinator. So fun e -> k (xx e) e is equivalent to: S (flip k) xx. This is also what Jacobson's Z combinator does. Thus for the Reader implementation of >>=:

     xx >>= k
<~~> fun e -> k (xx e) e
<~~> S (flip k) xx
<~~> Z k xx

We point this out not because the S ... or Z ... versions are easier to think about. If you're like us --- or, we expect, like most people --- they won't be. But it can be helpful to have seen these equivalences so that you might recognize the Reader monad working in some places where it's not explicitly invoked. For instance, Jacobson herself doesn't discuss monads.
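
If you want to check those equivalences for yourself, here is a hedged sketch (flip, s, and z here are our own definitions for the purpose of the check, not library functions):

let flip f = fun x y -> f y x
let s f g = fun x -> f x (g x)   (* the S combinator *)
let z k xx = s (flip k) xx       (* Jacobson-style Z, built from S and flip *)

(* the Reader >>= written two ways; both should behave identically *)
let bind_long xx k = fun e -> k (xx e) e
let bind_via_z xx k = z k xx

(* a quick check: xx reads the env, k adds its payload to the env *)
let xx = fun e -> e + 1
let k = fun x -> fun e -> x + e
let () = assert (bind_long xx k 10 = bind_via_z xx k 10)   (* both give 21 *)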

I'll give the long-hand form of the State monad's >>= for comparison. For the State monad, we have instead:

(>>=) xx k = fun s ->
               let (x,s') = xx s in
               ...

Here when we supply the store --- our analogue of e in the Reader version --- to the boxed value xx, we get back not just its payload x but also a (possibly different) store, here called s'. We use that store instead as we move forward:

(>>=) xx k = fun s ->
               let (x,s') = xx s in
               let yy = k x in
               yy s'

You can see that the key difference from the Reader >>=, repeated here:

(>>=) xx k = fun e ->
               let x = xx e in
               let yy = k x in
               yy e

is that the State boxed value xx returns a possibly altered store s', and that's what's supplied to yy at the end. There is also the key difference between the State and Reader box types that makes this possible:

type 'a reader = env -> 'a
type 'a state = store -> ('a * store)
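
Putting those pieces together, here is a minimal hedged sketch of a State monad in OCaml, taking the store to just be an int (tick is our own illustrative value, not part of any library):

type store = int
type 'a state = store -> 'a * store

let mid (x : 'a) : 'a state = fun s -> (x, s)

let (>>=) (xx : 'a state) (k : 'a -> 'b state) : 'b state =
  fun s ->
    let (x, s') = xx s in
    k x s'

(* an example boxed value: return the current store as payload, and increment the store *)
let tick : int state = fun s -> (s, s + 1)

(* sequence two ticks, starting from store 0:
   the payload is (0, 1) and the final store is 2 *)
let result = (tick >>= fun a -> tick >>= fun b -> mid (a, b)) 0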

How to handle name-clashes?

OK, now going back to the top, one of the themes I mentioned, and which has been illustrated in this discussion (though that was not its primary aim), is that the same symbols --- mid and >>= for example --- can be used ambiguously. This is not just an example of the polymorphism we discussed before, as when [] could be an int list or bool list, and where head can be a function that works for either of those types. In that kind of polymorphism, there is one underlying implementation, that just (may) give us different result types depending on the types of the inputs. But the computational machinery is constant. In what we're seeing now, though, we have (possibly, and usually) different computational implementations for the different instances of >>=.

We need to talk about how different languages sort this out. These are design choices. They aren't as fundamental to the programming languages as things like its type system and its evaluation order are. But they do affect your day-to-day interaction with the language in pervasive and noticeable ways.

For organization, I'll talk about four different ways of handling this, though several of these subdivide.

The first, simplest method is to just insist on different names. Restricting our attention to just the map function, we could say, well you should use as variables not simply map but instead list_map, tree_map, option_map, reader_map, and so on. Maybe one of them is so useful that it could just be unadorned map; or maybe not.

That works fine for a while, and it's what we've been using in the course so far in an ad hoc way. But it soon becomes unwieldy. And for better or worse, the languages we're working with have chosen more systematic alternatives, which we need to become acquainted with.

Handling name-clashes with dotted names

The second method is to have different namespaces or libraries or modules. (The latter two of these terms are used interchangeably. The first may arguably differ from them in some ways, but not in ways relevant to our present discussion.) A common notational convention for these is to use "dotted" names. Thus, in both Haskell and in OCaml, you'd refer to the map function from the List module or library as List.map. In fact, though, Haskell doesn't have any module named List; its name instead is Data.List, so you'd really say Data.List.map. The initial Data.List is the name of the module, and the map is the value (in this case a function value) that we're referring to that's defined in, and "exported by" (more on this in a moment) that module.

Another detail is that Haskell also uses the . to represent function composition. Thus the expression f . g (or you could also write just f.g) is equivalent to \x -> f (g x). Some of the potential ambiguity between this and the other use of . as in Data.List.map is resolved by the fact that modules in Haskell always have to start with capital letters. That also happens to be true in OCaml. (For comparison of the type declaration syntax and capitalization rules for Haskell and OCaml, see our Rosetta2 page.)

In the Juli8 libraries we're supplying for OCaml, you instead express function composition with %.

Some programming languages demand that you have just one module per source-code file, and even when they don't, that is often the custom. When you write a stand-alone Haskell or OCaml program, you'll have one source-code file that has your topmost program logic, usually declaring a function called main. This is what gets invoked when you tell your operating system to run the program, at the command line or by double-clicking its icon or whatever. That source-code file may and usually will also use functions (and other values) from various other libraries/modules, residing in separate files. In an interactive Haskell or OCaml session, you will also often want to use functions (and other values) already defined in various libraries/modules, rather than ones you input right now at the interactive prompt.

There are several routes to using these other modules (I'll just stick with "modules" henceforth, rather than always saying "libraries or modules"), and they often involve several steps. First, the file that contains the module, either in source code format or pre-compiled into some binary form, has to be located somewhere where your Haskell or OCaml system knows where to find it. Let's call the list of locations they know to look the module search paths. It will be a list of one or more directories on your computer.

Second, the Haskell or OCaml system has to load that module into memory, and prepare to use it. They typically don't do this for all the modules that reside in directories they know about. One reason is that it would take up substantial time and memory. This is especially true if the modules exist in source code format and haven't been compiled into binary form; but it's still true even when the pre-compiled binaries do exist. A second reason is that some of the modules may not work perfectly with each other. So typically only a small selection of modules is automatically pre-loaded. In Haskell, among the automatically pre-loaded modules is a special one called Prelude, and in OCaml a corresponding one called Pervasives. We'll discuss what makes these special shortly.

Third, there is the question of how to specify the functions (and other values) defined in the module. As we said, you can do this in Haskell and OCaml using the "dotted" forms Data.List.map and List.map. There are also some shortcuts, if you are going to want to often refer to several values from the same library. One shortcut is to rename a long module name like Data.List to just L. In Haskell you do this with:

import qualified Data.List as L

In OCaml the same effect can be achieved by:

module L = List

That defines or declares a new module variable L, and specifies the module it should refer to using the longer name. It's the module-level equivalent of what let f = ... is for values.

Another shortcut is to specify that symbols declared in (and exported from) the module can be referred to without using the List. or even shorter L. prefixes. In Haskell you can do this by saying just:

import Data.List

In OCaml it's instead:

open List

After either of these statements, all of the symbols like map can just be used like that, without any prefix. They will "shadow" (replace) any bindings for those symbols that existed beforehand. So if you then open Option and that also declares map, then simple map will then mean Option.map rather than List.map.

In Haskell programs it's necessary to have one such import statement for any libraries you want to use things from. So you can't just start writing Data.List.map without having some kind of import Data.List ... statement already. In the interactive Haskell ghci program, though, you don't need to do this. There you can just write Data.List.map without an explicit import statement. In OCaml, you can always write just List.map without any earlier open statement. So long as the OCaml system sees a file called list.ml or list.cmo or something like that (the first being a source-code file, the latter being one of the several kinds of compiled binary files that OCaml knows about) in a location it knows to look in, that will often be enough. Sometimes though you have to take special steps to force OCaml to "load" the binary files.

In Haskell the module Prelude is not just pre-loaded, it is also imported in this way so that all the symbols there are usable without any explicit Prelude. prefix. In OCaml the same is true for the Pervasives module.

Now there's an additional capability that Haskell has that doesn't have any (comparably simple and direct) analogue in OCaml. This is that in Haskell, you can say you only want to import some specified subset of a module's symbols. So you can say:

import Data.List (partition, group)

and that will import only the partition and group symbols from the Data.List module (in this case, import them in such a way that they can then be used without any prefix). This can often be useful if different modules you are relying on overlap in the symbols they declare, for instance with map. This way you can explicitly say which module you want to take the map function from, rather than relying on that being the last module you had in your list of imports.

In addition to only importing a specified selection of symbols, Haskell also has the capacity to only export a specified selection of symbols, and this is a functionality also present in OCaml, though the way you do it looks different. This will be important for what we're doing.

One motivation for this is that the source code file in which you implement the module may have some helper functions that you don't want to expose to clients of the module. Well, the programmers can usually read your source code, so they can see that those helper functions are there. But normally in day-to-day interactions with your module, as consumers they don't need to see those gritty details, and it might confuse them into thinking these are functions they ought to be using, if they see their names show up in the list of symbols that your module exposes to them. A second, more fundamental motivation for not exposing some parts of your module is to enforce abstraction barriers, which is important for us and we'll discuss shortly. We're getting there.

Okay, so suppose a Haskell programmer writes a module that defines the symbols foo, bar and qux, and she wants clients/programmers who use her library to have access to the first two but not to qux. For whatever reason. The way she does this is to just specify an export list for her module that includes the symbols foo and bar but which excludes qux. The specific syntax for doing this isn't important; just get the idea.

An OCaml programmer who wants to achieve the same end does this a bit differently. Recall that in OCaml we have module definitions/declarations like module L = ... that parallel value definitions/declarations like let foo = .... In the previous example we had:

module L = List

supplying a name for, or module variable bound to, the module we were talking about. At some point, though, you can't just keep using names for modules, you have to provide the module itself. The way you do that in OCaml is with the syntax struct ... end. In the middle you can have any of the ordinary top-level declarations you can make in an OCaml file, or at the interactive OCaml session prompt. By "top-level" declaration, we mean things you are allowed to say unembedded in other expressions. So for example you can write things like let x = 5 in x*x embedded in larger expressions: let y = (let x = 5 in x*x) in .... But at the top-level, you can also say simply let x = 5 with no further in .... That defines x to be the value 5 for the rest of the session (if it's an interactive session) or module (if it's a module or source code file).

Side note: OCaml also uses the term "Toplevel" to refer to their interactive program, that starts up when you just type ocaml (rather than ocaml file.ml); this is the analogue to Haskell's ghci. It's confusing that "toplevel" is used in these two ways. I won't use "toplevel" to denote the OCaml interactive program.

Thus you can write:

module M = struct
  let x = 5
  let foo y = y * y + x
  (* that's the same as `let foo = fun y -> y * y + x` *)
  type color = Red | Green | Blue | Gray of int
end

and that defines M to be the specified module. Now this module has a type, just like values have types. By default, the type of the module includes all the types declared in the module (here, the type color), and the types of all the variables bound at the top-level in the module (here, the variables x and foo). Thus if you typed the above in the interactive OCaml program, it would say:

module M :
  sig
    val x : int
    val foo : int -> int
    type color = Red | Green | Blue | Gray of int
  end

The sig ... end is the way of specifying the type of an OCaml module. The interpreter saying module M : sig ... end is just like its saying val x : int = 5 after you type let x = 5. (By the way, you have to type ;; at the end of stuff you tell the OCaml interactive interpreter, to mean you want it to stop accepting input and process what you've typed so far. You've seen this already. I'm just pointing out this is part of the special syntax for interacting with this interactive session, not strictly part of the OCaml language. To make things convenient, the OCaml language will ignore the ;; in many syntactic contexts. So it does no harm if you include them.)

The only difference between the module case and the let x = 5 case is that in the latter, OCaml tells you not just the type but also the specific contents of the value that x has been bound to. It can't always do this. For instance, after you type let square y = y * y ;; (by itself, not as part of a module declaration as above), OCaml will instead just say val square : int -> int = <fun> meaning that it doesn't know how to display this specific function, so it just writes <fun>. But the type of the function is int -> int. And the name of this function is square. And it tells you that this is a val --- that is, a value --- as opposed to a type or a module. If you type just a simple expression that doesn't bind a top-level variable, you get instead of val square : ... just - : .... Witness:

# 3 * 2;;
- : int = 6

Here int is the type, and OCaml can display the specific value, = 6, and no variable was bound to this, so it's just - : ... rather than variable : .... Okay, but now in the module case, like the function case, OCaml always acts like it can't display the specific instance, but rather than module M : sig type of the module end = <module>, it just displays module M : sig type of the module end.

Inside the sig ... end, you can see the same type color = ... declaration you used when supplying the module, and also the type declarations for the values x and foo that you declared/defined when supplying the module. They appear with the same val x : and val foo : prefixes we just talked about getting from the interactive interpreter. So now you know what some more of this stuff means.

Okay, all those specifics aside the general point here is that modules themselves look like struct ... end; they have types, which are expressed by sig ... end; and the way to bind a module variable (which must begin with a capital letter) to a module (specified either with another variable, or supplied literally with struct ... end) is to say module M = .... You can omit the type and OCaml will infer it, just as it does with values. Or you can explicitly specify the type, by saying, for example:

module M : sig ... end = struct ... end

or:

module M : sig ... end = List

Okay, now the key new idea I want now to introduce is that you don't have to supply the full type of the module. You can restrict the type in some ways. Just as when you say:

let id = fun x -> x

you don't have to say:

let id : 'a -> 'a = fun x -> x

or equivalently:

let id (x : 'a) : 'a = x

Instead you can specify that you just want to be defining an identity function on integers, by saying:

let id : int -> int = fun x -> x

or:

let id (x : int) : int = x

Somewhat analogously, with the modules, you can restrict the type signature you assign to M. With modules, you do this by leaving out some details. Thus for example if we said:

module M : sig
  val foo : int -> int
  type color = Red | Green | Blue | Gray of int
end = struct
  let x = 5
  let foo y = y * y + x
  type color = Red | Green | Blue | Gray of int
end

That would use the definition you gave for x for the implementation of foo, but it would not expose that symbol x to the outside world, who interact with M. Thus, I can then write M.foo, but M.x will give me Error: Unbound value M.x, because OCaml acts as though there is no M.x. Somewhat similarly, we can also write:

module M : sig
  val foo : int -> int
  type color
end = struct
  ...
end

This declares that there is some type color, but it's not telling you the specifics of how it's implemented. OCaml knows, but it won't show it to you or allow you to write as though you know it. This puts limits on how you can interact with the type. Witness the session:

# let (x: M.color) = M.Red;;
Error: Unbound constructor M.Red
# let color_id (x : M.color) = x;;
val color_id : M.color -> M.color = <fun>

Usually, when a module partially exposes a "private type" in this way, it will also expose operations that permit you to do more interesting things with values of that type than just write identity functions (there's a sketch of this below, after the next example). A module might also leave the type color out of its type altogether:

module M : sig
  val foo : int -> int
end = struct
  ...
end
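
Going back to the point a couple of paragraphs up, when a module keeps a type like color private, it will usually also expose some operations on that type. Here is a hedged sketch (the red, gray, and to_string names are our own illustrations):

module M : sig
  type color                    (* abstract: the variants stay hidden *)
  val red : color
  val gray : int -> color
  val to_string : color -> string
end = struct
  type color = Red | Green | Blue | Gray of int
  let red = Red
  let gray n = Gray n
  let to_string = function
    | Red -> "red"
    | Green -> "green"
    | Blue -> "blue"
    | Gray n -> "gray " ^ string_of_int n
end

Now M.to_string (M.gray 5) works fine, even though M.Gray 5 is still rejected as an unbound constructor.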

All of these techniques are the OCaml analogue of a Haskell module only exporting some of the symbols (whether for values or for types) that it defines. I've gone into this at such length because you need some familiarity with it to use the monad libraries we're supplying for OCaml, which strongly (but not exactly) parallel those for Haskell. More on those later.

Side note: If you think about it, you may notice a disanalogy between what's happening in OCaml when we restrict the type of a value --- there we make the type more specific, that is, less general. And what's happening when we restrict the type signature of a module --- there we expose less information about the module, and so in a way make the type more general.

If you remember, we were talking about different ways languages handle conflicts in names. And we were on Option 2, namely different namespaces or modules/libraries. And we were discussing the Haskell and OCaml ways of doing this, and we had a long side discussion about the different ways they have of only importing or exporting some restricted subset of the symbols defined in a module implementation. There are more options for how to handle the name conflicts. Really they might be, and often are, just extensions of Option 2, rather than competitors with it. We will get to them soon. That will flesh out the background we've started to provide for OCaml and Haskell.

But before we proceed to the other options, there are two more topics connected to what we've been saying so far that I'll address first. First, special commands to the interactive session, and second, abstraction barriers. Then we'll go back to discussing handling name conflicts.

Special interpreter commands

Okay, special commands to the interactive programs ghci or ocaml. These are different from top-level declarations. You can't include them in ordinary source code files. But you can type them to the interactive prompt. In Haskell the interactive prompt looks like this:

Prelude>

What appears before the > may be different. At the prompt you can type special commands that begin with a :. You can get a list of some of them by typing :? then return, or :help then return. We will discuss elsewhere the most important ones of these to learn early, as well as how to define some additional helpful special commands. But two of the most useful are illustrated below:

Prelude> :type map
map :: (a -> b) -> [a] -> [b]

Prelude> :info map
map :: (a -> b) -> [a] -> [b]   -- Defined in ‘GHC.Base’

Prelude> :info Monad
class Monad (m :: * -> *) where
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b
  return :: a -> m a
  fail :: String -> m a
   -- Defined in ‘GHC.Base’
instance Monad (Either e) -- Defined in ‘Data.Either’
instance Monad Maybe -- Defined in ‘Data.Maybe’
instance Monad [] -- Defined in ‘GHC.Base’
instance Monad IO -- Defined in ‘GHC.Base’
instance Monad ((->) r) -- Defined in ‘GHC.Base’

The first special command :type symbol shows the type of symbol. You can also abbreviate this to :t symbol. The second special command :info symbol shows roughly the same information for values --- here there's just the additional information about what module the value is defined in. But as you can see, you can also say :info Monad, and Monad is not a value, it's what Haskell calls a "typeclass" (see class Monad ...). More on those below. So you can get :info about Monad, but not a :type. :info symbol can be abbreviated to :i symbol.

Other important Haskell special commands are :load filename or :add filename. These tell Haskell to load any libraries or source-code files that you want to use. They differ in that the first says also forget about anything else you've already loaded, whereas the second is incremental. We'll discuss these more elsewhere. Anyway, the general picture is there are three stages for using a module:

  1. it resides somewhere Haskell knows about, or that you coerce Haskell into knowing about
  2. it is loaded; sometimes, like for system-supplied libraries like Prelude or Data.List, this step isn't necessary
  3. its symbols have been imported, perhaps for use without any prefix though this depends on the specific import syntax you use. With Prelude this isn't necessary, symbols from that module are automatically imported.

With OCaml we have the same three stages, though somewhat different syntax.

The OCaml interactive prompt looks like this:

#

and the OCaml special commands begin with #. One example of such commands was the #trace foo command Chris showed in a previous week, that makes OCaml announce to you every input that the function foo ever gets, and what results it outputs. If you want to turn that off, you can type #untrace foo.

If you're using a version of OCaml >= 4.02 (you can see the version as Sys.ocaml_version), there's also a special command #show symbol that works like Haskell's :info symbol.

The other useful OCaml special commands are:

  • #directory "/path/to/dir/on/my/disk" makes OCaml know about a directory, that modules may be located there. You can get the same effect by starting OCaml from a terminal where that is the current directory, or by starting OCaml with the command ocaml -I /path/to/dir/on/my/disk.
  • #load "path/to/file.cmo" is the way to load a pre-compiled OCaml module. For most (but not all) system-supplied modules, this is unnecessary. For modules you compile yourself (you might not end up doing that, though for the full-featured version of the interpreter from last week's homework, we were guiding you towards doing that) it will be necessary. It looks for path/to/file.cmo underneath the various directories it knows about or you have told it about. Pathnames that begin with / are from the top of your disk. .cmo is one of the suffixes for binary files that OCaml knows about; there are others.
  • #use "path/to/file.ml" is something like an analogue to the previous command, except that in this case the files loaded are uncompiled source-code files. Also, OCaml reads these files more-or-less as though you had just typed them directly into the interactive session yourself. One interesting difference this involves is that #loaded files are always modules, which you still need to explicitly open. (open is a part of the ordinary OCaml language, so it has no # prefix.) Whereas with #used files, you may not need to do any opening. That depends on whether the #used file defines a symbol directly at its top level, or defines it inside a module M = struct ... end.

Abstraction barriers

Say that you have to write some implementation of sets (where the elements are of some specified type elem). There are various ways you could do this. One way is to just use simple elem lists. When you get a new elem, you just cons it to the front of the list without any special checking. That has some advantages: it's simple and fast to add new elements to the list. But also some disadvantages: it can make it slower to check whether other elems are members of the list, and can make it more complex to define operations like difference. A second way is to use lists, but to curate them so as to ensure that the lists never contain duplicates. A third way is to curate the lists so as to ensure that they're additionally always sorted. A fourth way might just be to implement a set as an int. If there are only a small finite number of elems, then bit 0 of your int could be on if that elem belongs to the set, else off; bit 1 could be for the next elem; and so on. Another way that sets are often implemented is with trees. Here there is the possibility that different concrete trees, for example Branching (Leaf 2, Branching (Leaf 3, Leaf 5)) and Branching (Branching (Leaf 2, Leaf 3), Leaf 5), might represent the same set.

Now you may not want your clients to have to keep track of the details of this implementation. So you may want to expose to them only that your module supplies a type elem set, without saying whether that's an elem list or elem tree or what have you. This can make things easier for them, in that they're not presented with details they don't have to concern themselves with. It can also keep them from writing code that makes specific assumptions about the details of your implementation. If you only supply the type elem set without supplying the actual implementation or definition of the type --- this is called supplying an abstract type --- then clients (programmers who use your module) can't apply functions like List.head to sets they build with your module. Even if behind the scenes, your sets really are lists. This is called an abstraction barrier. From the perspective of the world outside your module, your set types are opaque and even if they are in fact concretely implemented as lists, the world outside your module can't treat them as such. Inside your module, though, you can.
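
Here is a hedged sketch of what such a module might look like, taking the elems to be ints and using the curated-list implementation described above (toy code of our own, not a real library):

module IntSet : sig
  type t                        (* abstract: clients never learn it's an int list *)
  val empty : t
  val add : int -> t -> t
  val mem : int -> t -> bool
end = struct
  type t = int list
  let empty = []
  let add x s = if List.mem x s then s else x :: s   (* curate: no duplicates *)
  let mem = List.mem
end

Inside the struct ... end you can freely treat sets as lists; but outside, something like List.hd (IntSet.add 3 IntSet.empty) is a type error, which is exactly the barrier we want.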

It is not profoundly different from the way that OCaml's primitive underlying representations of None and 0 and [] in memory during runtime might be the same, but OCaml won't let the programmer say things like 1 + [] or head None.

That's the basic idea of an abstraction barrier. These are in general a valuable and important part of programming design. It is practically helpful, for reasons sketched above, and also conceptually helpful, because it forces the library author to think hard about what should be exposed and what shouldn't. Another way of describing that is as thinking about what the abstract algebra governing the library should be. These are like our Mappable and Monad Laws.

The Monad library we provide for OCaml, and some of the Monad facilities in Haskell, have these kinds of abstraction barriers. Thus to use the List Monad functions in our OCaml Monad library, you'd have to say:

Monad.List.mid

You can shorten this by saying:

module L = Monad.List
L.mid

and so on. For the Reader Monad it's more complex. We'll explain the details later, but just to give a quick example, you'd need to say:

module R = Monad.Reader(struct type env = ... end)
R.mid

and so on.

Now once you have a monadic module like L or R to work with, you can go around saying things like:

let xx = L.mid 1;;
L.(>>=) xx (fun x -> L.(++) (L.mid x) (L.mid x));;

The ++ is supplied as part of our List monad library for OCaml. It works like ++ does for Lists in Haskell, only our OCaml ++ is defined on instances of the List monad type, not the native OCaml lists. As we're discussing, these may be implemented the same way behind the scenes but OCaml treats them as different types. More on Haskell vs OCaml notation on things like ++ later. Anyway, the (>>=) and (++) notation is to use infix operators like >>= and ++ in prefix position. As far as I can tell, this is the only way to prefix them with module names like L.. In OCaml, you can't write things like:

xx L.>>= k

(You can in Haskell.) Alternatively, you could say:

open L
let xx = mid 1
xx >>= fun x -> mid x ++ mid x

and all of the symbols mid and >>= and ++ will be construed with the meanings given them in the L module. An alternative, which I find to be the most convenient form, is to write like this:

let xx = L.(mid 1)
L.(xx >>= fun x -> mid x ++ mid x)

You can nest these, thus you can have L.( ... Monad.Option.( ... ) ...). In any case, after evaluating those commands, then behind the scenes what you'll have ended up with is the list [1; 1]. But that's not what OCaml will show you. It will show you instead:

- : int L.t = <abstr>

That is, some unnamed value whose type is int L.t. _ L.t is the monadic box type from module L, here it is parameterized on the type int. So we know we have an int using the List monad box type. But OCaml doesn't display this as an int list, and won't let you apply the usual list functions like List.take to it, and also can't display the list's contents, because the type in question is, as OCaml says, <abstr>.

However, the OCaml Monad libraries also supply a function we call run, mostly following Haskell, though we might also have called it expose. This function takes you from the abstract List monad box type to its real implementation. Thus if we say:

let xx = L.(mid 1);;
let yy = L.(xx >>= fun x -> mid x ++ mid x);;
L.run yy;;

we get:

- : int L.result = [1; 1]

Here the type L.result is an alias for the real implementation of the List monad box type, and OCaml can show us the value. Computationally, run here is just an identity function, but it takes us from the one type to another type, where the underlying implementation of the types is the same. (In a few of the monads, the run function does more than this.)
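
So, abstractly, the relevant part of such a module's signature might look something like this hedged sketch (not the actual Juli8 signature, which contains more):

module type LIST_MONAD = sig
  type 'a t                                  (* the abstract box type *)
  type 'a result = 'a list                   (* the concrete type that run exposes *)
  val mid : 'a -> 'a t
  val (>>=) : 'a t -> ('a -> 'b t) -> 'b t
  val (++) : 'a t -> 'a t -> 'a t
  val run : 'a t -> 'a result
end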

Why make things so hard, Linmin asked. If it's really just an int list behind the scenes, why make us jump through these extra hoops to get at it? Four points to consider in response. (I hesitate to call all of them strictly "responses".) First, this parallels what Haskell does with some of its Monadic operations. Thus if I say:

Prelude Control.Monad.Reader> let x = return 1 :: Reader [Int] Int
Prelude Control.Monad.Reader> :t x
x :: Reader [Int] Int

Haskell shows me that what I've got is an instance of the type Reader [Int] Int (here [Int] is the type I specified for the environment, and Int is the type of the payload 1). In fact behind the scenes Haskell implements that as an [Int] -> Int, but it doesn't tell me that. Also, Haskell won't let me say things like x []. I have to say, more verbosely, runReader x []. The runReader is like our run; it exposes the real implementing type of x, which is a function which does accept the argument [].

Haskell lets me get even more abstract:

Prelude Control.Monad.Reader> let x = return 1 :: MonadReader e m => m Int
Prelude Control.Monad.Reader> :t x
x :: MonadReader e m => m Int

Here I don't even specify that x is a value of the specific Reader monad type, I only say that x is a member of some type m Int, where m is some type operator (box type) satisfying the constraint that it implements the interface of the MonadReader type class, parameterized on an environment type e. Basically what this means is that m is a box type that acts like the Reader monad. We'll see examples of how there could be such things which aren't identical to the Reader monad in Thursday's session, when we discuss combining different monads.

In any case, that's the first response point: Haskell has these kinds of abstraction barriers too. That doesn't by itself constitute a justification for them, of course, but it helps to see that this is not just an idiosyncratic choice made by your teachers, but is also the choice made by teams of language designers interacting with thousands of programmers. The second response point is that there is pedagogical value to these abstraction barriers. If you've got something that is a boxed value, and you want to manipulate it or use it as input to other monadic machinery, you have to do so (using our library) using the specified machinery that's part of the monad modules' interfaces. Sure, you can always apply run to it and then manipulate the underlying implementing value, now that its concrete type has been exposed. But then the result will no longer be what our libraries recognize as monadic, so you can't feed the result into >>= anymore. That is, when:

xx >>= k

works, this won't work: (run xx) >>= k. Even though xx and run xx may be the same underlying data in OCaml's memory. Forcing you to use the monadic machinery to manipulate xx, rather than doing it by hand, has pedagogical and conceptual value. That is the second response point. If you want to write your own implementations of the monadic operations, which is pretty straightforward for the simple, atomic monads we've been looking at so far, you don't need to introduce the abstraction barriers we have, and so you can do dirty non-monadic hacking on your boxed values and still use your own monadic operations on them if you like. But this takes us to the third response point. This is that pretty soon we are going to be working with combinations of monads, not just the atomic Reader and List and so on, but combinations of two or three monads at once. The types for these can get pretty complex and intertwined. In the general case, the combination of a box1 type and a box2 type, parameterized on type 'a, will not just be an 'a box1 box2. There generally have to be more complex ways for the types to be intertwined. When you look at the concrete implementations of some of these complex monadic types, it can be pretty confusing what's going on, and what type in your mental model the thing you're looking at really belongs to. Whereas if OCaml tells you this is an int Foo.t, you can say OK, now I know what this is, even if Foo.t is a complex box type that combines the behavior of several monads and has a gnarly concrete implementing type.

The fourth response point is that sometimes you might be using the same implementation for different theoretical roles. I'll make this point first using a non-monadic example. Go back to our discussion of different ways to implement sets. Perhaps you choose one of the implementations where int sets are just int lists. Now you might also have an implementation of int multisets (multisets are similar to sets in ignoring order, but dissimilar in that they care about multiplicity: so the int multiset that contains 3 once is not the same as the one that contains it twice). And you might also implement them as int lists. But now if you get from one source an int list, perhaps that was intended to be a set, but you forgot and went on to use it as though it were a multiset. That could make for conceptual trouble. I don't mean your code will crash; perhaps it won't. But assumptions you were relying on in order for the code to do what you want may be violated, because you thought you had an instance of one type satisfying one algebra, and instead you had an instance of a different type (with the same concrete representation in memory) satisfying a somewhat different algebra. You can avoid this kind of problem by introducing abstraction barriers. They prevent you from using int sets as int multisets, and vice versa, even when both are implemented behind the scenes as the same int list.
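
Here's a hedged sketch of that point, with both toy modules implemented over int list behind their own abstraction barriers (reusing the IntSet idea from the earlier sketch):

module IntSet : sig
  type t
  val empty : t
  val add : int -> t -> t
end = struct
  type t = int list
  let empty = []
  let add x s = if List.mem x s then s else x :: s   (* sets: no duplicates *)
end

module IntMultiSet : sig
  type t
  val empty : t
  val add : int -> t -> t
end = struct
  type t = int list
  let empty = []
  let add x s = x :: s                               (* multisets: keep duplicates *)
end

Both are int lists behind the scenes, but IntSet.t and IntMultiSet.t are distinct abstract types, so OCaml will reject code that hands one off where the other is expected.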

The same point applies in the monadic case too. You might have two Reader monad types, each implemented on an int environment type, but in the one case it's playing the role of Jacobson-style variable binding for a single pronoun (of type int), and in the other case your ints are possible worlds and it's playing the role of modeling intensionality. The same bytes in memory could be used for each purpose, but you won't yourself want to become confused and use the one thing as the other. This is just like sets and (perhaps only some) multisets having the same implementation. Abstraction barriers can help you keep these apart.

Okay, that completes the discursion on abstraction barriers. Let's return to our main organizing thread, how to handle name conflicts.

Handling name-clashes with overloaded symbols

We said Option 2 for handling name conflicts was namespaces or modules, and looked at some of the twists and design choices made by Haskell and OCaml about this.

Option 3 --- which doesn't have to compete with Option 2 but can be combined with it --- is to overload some of your symbols. OCaml does this in a very limited way, just with the symbols = and < and >. In fact it's debatable whether it even does it there. As mentioned before, this is not like [] being able to be polymorphic for the empty list of any element type 'a, or for fun x -> x to be polymorphic for any argument type. The overloaded symbols we are talking about here have a different computational implementation depending on the type of the argument. OCaml does this hardly at all. For instance, they don't do it with +. You have to use different symbols for addition applied to ints and addition applied to real ("floating point") values.

Haskell and other languages do this much more extensively.

In what's called object-oriented programming, you specify various interfaces, called classes. Some of these classes "inherit" from, or in other words extend, others. Finally, you can have values that are instances of some of these classes, conventionally these values are called objects. As an example, perhaps I have the general class Animal, and then one inheriting subclass will be Dog. Dogs will have the same interface that Animals do, but may have additional interface elements too. And then fido may be an object that belongs to (both of) these classes. I might then have some functions that expect an Animal as argument, and any Dog like fido would be acceptable input to these functions; other functions might more specifically demand Dog inputs, and not be defined for other types of Animal. In some cases the hierarchy of class interfaces we're working with might not have a simple tree structure. Perhaps fido is, as well as being an Animal, also a HouseholdOccupant, and this class may be partly disjoint from the class of Animals. (Furniture also occupies my household but isn't an animal.) How to deal with the complexities that arise here can become difficult. A famous example discussed in the literature is "the Nixon diamond". Nixon belonged to the class Quaker but also to the class Republican, and naively we might model Quakers as having some interface choices --- for example, being pacifists --- that we don't model Republicans as having. Figuring out how to sort this out gets complicated, and is important both for modeling reasoning, and for designing programming systems that use this general strategy for specifying interfaces.

Haskell's design is in this general family. They have what they call "typeclasses", and these are instantiated by specific "types". So typeclasses aren't types but rather properties or families of types. What defines a typeclass are certain constraints --- perhaps to belong to typeclass so-and-so, you also have to belong to some others --- and that you provide some implementation or other for certain symbols. But the implementation could be very different from instance to instance. Also, in many cases the typeclass will be associated with some algebraic laws, like the Laws we've seen in our discussions of Monads. These aren't anything that the computer tries to verify; but they are assumptions that the programmers rely on in designing and working with these typeclasses, so if you violate them some things may turn out to be broken.

As an example, we could define a typeclass:

Prelude> class Dot t where { (*) :: t -> t -> t }

This means that in order to belong to this typeclass, a type t has to define a single operator * that takes two ts and yields a third t. We can then declare some new types and make them instances of this typeclass, that is make them provide that interface. Here is one:

Prelude> data Sum = Sum Int deriving (Eq, Ord, Show)
Prelude> instance Dot Sum where { (*) (Sum x) (Sum y) = Sum (x+y) }

The data ... is one of the (several) ways Haskell has for declaring a new type. The deriving (Eq, Ord, Show) at the end means you want Haskell to figure out automatically how to apply == and < to values of these types (using the underlying parameter type Int), and also how to print such values. The second line says that we do satisfy the interface we're calling Dot, and in particular here is how to implement the operations that one needs to implement to count as doing so... Note that in the definition of the type I used the same symbol Sum to name both the type and the tag/variant label/constructor. You don't have to use the same symbol, but it's common to do so. In OCaml these have different capitalization rules, so the corresponding type declaration looks like this:

type sum = Sum of int

There's nothing in OCaml corresponding to the deriving ... part. (In fact, all OCaml values can interact automatically with = and < anyway.) Nor is there anything corresponding to class and instance in OCaml. OCaml has to come at this differently.

In any case, back to our Haskell example. We can declare other types that implement the same interface differently:

Prelude> data Prod = Prod Int deriving (Eq, Show, Ord)
Prelude> instance Dot Prod where { (*) (Prod x) (Prod y) = Prod (x Prelude.* y) }

Note I had to say Prelude.* here to get the ordinary, multiplicative meaning of *, rather than recursively calling the same function I was defining for Prod arguments. Okay, now both of these types implement * but they do so differently:

Prelude> Sum 2 * Sum 3
Sum 5
Prelude> Prod 2 * Prod 3
Prod 6

I can define other functions that only expect their argument to be of some type t satisfying the Dot interface, and don't care about which, like this:

Prelude> let { square :: Dot t => t -> t; square x = x * x }
Prelude> square (Sum 3)
Sum 6
Prelude> square (Prod 3)
Prod 9

The Dot t => at the beginning of the type declaration for the function square is a "type constraint". It essentially means "for any type t satisfying the Dot interface...". And then in the definition of square, the symbol * is used (not with its ordinary necessarily multiplicative meaning, but) with whatever implementation t happens to provide for *. That's why square (Sum 3) and square (Prod 3) give such different results.

We can also have such constraints on our original class declarations. Whereas we had:

class Dot t where ...

Haskell can also have declarations like:

class Semigroup t => Monoid t where ...

meaning that in order to have the Monoid interface, type t also has to have the Semigroup interface. (This example is not in fact yet part of the official language.) And so on.

Haskell uses this technique extensively for its Monad interfaces. Monadic box types are specified in terms of the interface they have to supply, analogously to our Dot interface and the Sum and Prod types.

Okay, that was all about Option 3 for handling name clashes/ambiguities. Haskell embraces this by letting different types define the symbol differently, and then it figures out what definition to use by figuring out what the argument types are. There are just some common constraints: for example, with the Dot interface, the * function has to take two arguments of the same type and return a result of that type. With other examples, we might also have that if you declare yourself to satisfy the interface, you have to supply several different operations. A loose analogy might be that when I talk to my family, Mom might have one meaning, with some paired meaning for Dad, but then in the AI lab these have different meanings (two different computers) and in MI6 two yet different meanings. (I don't know if there's a "Dad" in MI6, but in the modern Bond movies they called Judi Dench's M character "Mom.")

OCaml's parameterized modules

OK, now let's turn to Option 4, which is OCaml's strategy. We've already discussed OCaml modules and how one might use their type declarations to only expose some part of the module's concrete definitions. A further quirk is that OCaml also permits you to define things that aren't modules but are rather module makers, that is things that take certain parameters (these are always other modules, usually small ones), and generate modules as a result. OCaml calls these "functors" which is a shame because Haskell (and category theory) use that term differently. (At least, I assume there is no underlying connection between OCaml's use and the category theory use, though I don't know.) I'll just call them "module makers". The specific syntax for declaring these is not important. What is important is how to use them.

Recall the example from before:

module R = Monad.Reader(struct type env = ... end)
R.mid

Here Monad.Reader is a module maker, and struct type env = ... end is the parameter (you have to fill in the ... with an actual type, perhaps int list or string -> int, where strings are how you represent variables). First we bind the module variable R to the module made by supplying that parameter to the module maker. Then R is a monad library, just like Monad.List and Monad.Option are.
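
To demystify what a module maker is, here's a hedged sketch of how you could write a tiny Reader-style one yourself (our own toy code, not how Monad.Reader is actually implemented; in particular this version doesn't hide its box type behind an abstraction barrier):

module type ENV = sig type env end

module MyReader (E : ENV) = struct
  type env = E.env
  type 'a t = env -> 'a
  let mid (x : 'a) : 'a t = fun _ -> x
  let (>>=) (xx : 'a t) (k : 'a -> 'b t) : 'b t = fun e -> k (xx e) e
  let ask : env t = fun e -> e
end

(* make a Reader whose envs are int lists *)
module R' = MyReader(struct type env = int list end)

(* a tiny use: read the env and add its length to a payload *)
let demo = R'.(mid 3 >>= fun x -> ask >>= fun e -> mid (x + List.length e))
let five = demo [10; 20]   (* evaluates to 5 *)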

Here is some code showing how to generate the common monad modules, and also some additional values defined in each module, beyond the core monad operations. This code assumes you have installed the Juli8 libraries for OCaml.

module O = Monad.Option
O.(test, mzero, guard)
module L = Monad.List
L.((++), pick, test, mzero, guard)
module T = Monad.LTree (* LTree for "leaf-labeled tree" *)
T.((++))
module I = Monad.Identity
module R = Monad.Reader(struct type env = int list end) (* or any other implementation of envs *)
R.(ask, asks, shift) (* same additional interface as Haskell has; we'll explain them later *)
module S = Monad.State(struct type store = int end) (* or any other implementation of stores *)
S.(get,gets,put,modify) (* same additional interface as Haskell has; we'll explain them later *)
module Ref = Monad.Ref(struct type value = string end) (* this is essentially a State monad, but with a different interface *)
Ref.(newref,getref,putref)
module W = Monad.Writer(struct type log = string let empty = "" let append = (^) end) (* or any other implementation of logs *)
W.(listen,listens,tell,censor)
module E = Monad.Error(struct type err = string exception Exc = Failure end) (* or other specifications of type err and exception Exc of err *)
E.(throw, catch)

These mostly have to be entered as individual lines in the interactive interpreter, separated by ;; and returns.

There remains a final major Monad, the Continuation monad, that we'll discuss and add to the library later.

We'll discuss the different ask, shift, pick, and so on functions on another page. But just to give some examples, in the List monad, mzero corresponds to [] and ++ as we mentioned before corresponds to List.append, which OCaml also writes as the infix operator @. For some reason I don't like that operator, so I mostly avoid it. Maybe I don't like it because Haskell uses it with a different meaning. In Haskell, var @ pat is what OCaml writes as pat as var. (Discussed on our Rosetta1 page.) So I tend to write instead just List.append. But when working with Lists as abstract monadic values, in OCaml, you need to use ++ instead of List.append. OCaml will act like it doesn't know that abstract monadic Lists are really lists. test takes a predicate of plain (non-monadic) lists, and if the monadic list that is its second argument satisfies it, returns that list unchanged, else returns mzero (the abstract list version of []). guard takes a boolean argument, and if it's true returns [()] (as an abstract list) else returns mzero. These correspond to the Haskell operations of the same name.

Here is an example of using the List monad.

L.(let xx = mid 1 ++ mid 2 ++ mid 3 in let yy = test (fun xs -> List.mem 3 xs) xx >>= fun x -> mid (x+1) in run yy)

That will construct (an opaque monadic version of) the list [1; 2; 3], bind the variable xx to it, and then run test ... xx >>= fun x -> ... on that xx. The test operation checks whether the list we're working with has a 3 as a member; if it didn't we'd get the empty list and the rest of the ... >>= ... chain would be ignored. (One of the laws for mzero is that mzero >>= ... is always mzero.) In this case we pass the test, and so we extract the "payload" of our monadic value. In this case there are multiple payloads and our definition for >>= binds them to the x in fun x -> ... in turn, and assembles the results properly. In this case what we do to each payload is increment it by one, and then we have to return mid of the result --- since simple ints aren't monadic values, but the result of ... >>= fun x -> ... has to be a monadic value. Then we bind the variable yy to the monadic value we've thereby constructed. Finally, we apply run to the result to remove the abstraction curtain and see what's there. If you've been following, it will be completely expected that we get:

- : int L.result = [2; 3; 4]

Well, that's a lot of abstract machinery and verbose code for something so simple. Yes, in this very simple example the monadic machinery is more complex than if we did the same thing by hand. But in more complex examples, the monadic machinery will be only somewhat more complex, while the by-hand version would be enormously more complex and profoundly harder to keep straight in our heads. So cut the monadic machinery some slack. In the pedagogic examples as you're becoming familiar with it, it will generally look like it's making things harder, not easier. But it "scales" much more elegantly.

Haskell has this convenient "do-notation" for working with monads. You could write the above example like this in Haskell:

let { xx = pure 1 ++ pure 2 ++ pure 3;
      yy = (if (\xs -> 3 `elem` xs) xx then xx else mzero) >>= \x -> return (x+1) } in yy

(With the List monad, Haskell doesn't require or even have any run operation; and what we have as test they have to write longhand.) But you could also write it like this in Haskell:

do { let { xx = pure 1 ++ pure 2 ++ pure 3};
     x <- if (\xs -> 3 `elem` xs) xx then xx else mzero;
     return (x+1) }

The key point here is that zz >>= \x -> blah can be written in Haskell as do { x <- zz; blah }. In this case it doesn't help much, but often it makes things much shorter and easier to read. OCaml doesn't have this natively, though I've seen third-party extensions that offer some analogue of it.

Some other things you might encounter are that in Haskell, often instead of return (x+1), people will write return $ x + 1. $ is the symbol for function application, also expressed by plain juxtaposition. So f $ g is the same as f g. But it parses differently, so that f $ g x is f (g x), whereas f g x would be (f g) x. And return $ x + 1 means return (x + 1), whereas return x + 1 would mean (return x) + 1. (return and pure, remember, are two Haskell names for our mid.) Haskell's $ is right-associative, so that f $ g x $ h z means f (g x (h z)), not f (g x) (h z).

OCaml has something similar to Haskell's $, but for complex reasons they can't get symbols starting with $ to be right-associative, so instead they use @@. Again, I'm not a fan of this orthography but there it is. OCaml also has the related operator |> which is just @@ with its arguments flipped. That is,

f @@ g x

and:

g x |> f

are the same as Haskell's:

f $ g x

also known as:

f (g x)

The g x |> f notation in OCaml is convenient when the g x is something monadic and the f is run. Thus we could write:

L.(... |> run)

instead of:

L.(let yy = ... in run yy)

Note though that fun x -> ... captures everything that comes after it, so if you have:

L.(let yy = ... >>= fun x -> ... in run yy)

That'd have to be translated with some parentheses to close off the fun x -> ..., as follows:

L.(... >>= (fun x -> ...) |> run)

These little shortcuts can sometimes make life easier, but given the complexity of having to remember them (and explain them to your students), I'm not sure they're worth it.