$\begingroup$

As I've been struggling to get a deeper understanding of monads in Haskell, I started reading about functors and their counterparts in category theory. Keep in mind that I have no background in the latter.

According to Milewski(1), any polymorphic function of the type F a -> G a, where F and G are functors, automatically satisfies the naturality condition, although I have yet to figure out why. Anyway, this should hold in particular when F is the identity functor, which, if I understand correctly, would correspond to the aforesaid function having the type a -> G a. Now, for simplicity let:

data G a = C a

If G is a functor, then since C :: a -> G a, C is a natural transformation. Conversely, suppose C is a natural transformation. Then, for all functions f:

(fmap f) . C == C . f

which I imagine is what leads to the wrapping-unwrapping analogy. In fact, if we render C x as {x} as in "wrapped x", then the previous can be re-written as:

(fmap f) {x} == {f x}

which quite literally means that applying the transformed f to the wrapped value is the same as unwrapping, applying f to the unwrapped value, and re-wrapping it. By the previous equality:

law 1) (fmap id) {x}      == {id x}
                          == {x}

law 2) (fmap (f . g)) {x} == {(f . g) x}
                          == {f (g x)}
                          == (fmap f) {g x}
                          == (fmap f) ((fmap g) {x})
                          == ((fmap f) . (fmap g)) {x}

It follows that G is a functor.
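(For concreteness, here is this G with the evident Functor instance, together with a spot check of the naturality square and the two laws at sample values; the instance and test values are my own:)

```haskell
data G a = C a deriving (Eq, Show)

-- the evident Functor instance: unwrap, apply, re-wrap
instance Functor G where
  fmap f (C x) = C (f x)

main :: IO ()
main = do
  let f = (* 2) :: Int -> Int
  -- naturality of C: fmap f . C == C . f
  print (fmap f (C 21) == C (f 21))
  -- functor laws, checked at one point each
  print (fmap id (C 21) == C (21 :: Int))
  print (fmap (f . (+ 1)) (C 20) == (fmap f . fmap (+ 1)) (C 20))
```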

So is it correct to say that requiring the functor laws on a type constructor is equivalent to requiring that its data constructors satisfy the naturality conditions or something along these lines? And if not, then what is the purpose of the functor laws from a "practical" point of view?


Edit: I just wanted to clarify that I don't really need an answer to the previous question. What really troubles me is that most tutorials on Haskell's functors make it seem like their wrapper-like nature (on which the intuition behind applicatives and monads is often based) stems from their laws when in reality it seems to be a consequence of the naturality conditions due to Wadler's free theorems(2), unless I'm completely mistaken. What role do the functor laws play in their interpretation as abstractions of some functional programming pattern?

(1) Milewski, B.; Category Theory for Programmers; 164
(2) Maguire, S.; Review: Theorems for Free

$\endgroup$

2 Answers

$\begingroup$

I'm having a little trouble working out exactly what question you're asking, but I think it might help if we went through the connection between homomorphisms and free theorems.

But here's the executive summary of what I think you want to know:

  • The fmap laws are not redundant when you have free theorems. On the contrary, you need fmap to state the free theorem for any function that involves a Haskell Functor.
  • Specifically, if a function has the type of a natural transformation, its free theorem is the naturality condition expressed in terms of fmap.

First off, let's get a bit of intuition about what the parametricity theorem states. An informal way of stating it is this: For any function which has a "forall" in its type, that "forall" really does mean "for all types". The function cannot make any assumptions whatsoever about the type.

To explain what this means when we involve types with an algebra, we need the language of homomorphisms. So let's review what a homomorphism is. (Sorry if this is revision for you, but I think it's worth going through this.)

Let's consider monoids. A monoid is a set $M$ along with two operations, $0_M$ and $+_M$, which obey the laws:

$$\forall x \in M, x +_M 0_M = 0_M +_M x = x$$ $$\forall x,y,z \in M, x +_M (y +_M z) = (x +_M y) +_M z$$

A monoid homomorphism is a function between two monoids $f : M \rightarrow N$ which "respects" the two operations:

$$f(0_M) = 0_N$$ $$f(x +_M y) = f(x) +_N f(y)$$

In Haskell, we might represent monoids as a type class:

class Monoid m where
    mzero :: m
    mplus :: m -> m -> m

If M1 and M2 are instances of Monoid, a Monoid homomorphism f :: M1 -> M2 must obey the laws:

f mzero = mzero
f (mplus x y) = mplus (f x) (f y)

Remember that the monoid operations on the left-hand side of these "laws" are different from the monoid operations on the right-hand side. The left-hand side are the operations for M1 and the right-hand side are the operations for M2.
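To make this concrete, here's a small sketch (using a primed class name to avoid clashing with the Prelude's Monoid, and names of my own choosing) in which length is a monoid homomorphism from the list monoid to the additive Int monoid:

```haskell
class Monoid' m where
  mzero :: m
  mplus :: m -> m -> m

-- lists under concatenation
instance Monoid' [a] where
  mzero = []
  mplus = (++)

-- Int under addition
newtype Sum' = Sum' Int deriving (Eq, Show)

instance Monoid' Sum' where
  mzero = Sum' 0
  mplus (Sum' x) (Sum' y) = Sum' (x + y)

-- length is a monoid homomorphism ([a], ++, []) -> (Int, +, 0)
len :: [a] -> Sum'
len = Sum' . length

main :: IO ()
main = do
  print (len (mzero :: String) == (mzero :: Sum'))
  print (len (mplus "ab" "cde") == mplus (len "ab") (len "cde"))
```

Note that the mzero and mplus on the left of each check are the list monoid's, while those on the right are Sum's, just as in the laws above.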

Now consider the following typeclass (which isn't the same as Haskell's Ord typeclass, but is close enough):

class Ordered a where
    lessThan :: a -> a -> Bool

Again, there are laws that any reasonable "less than" operator must obey. But what I want to concentrate on here is the Bool part.

What does it mean for a function to "respect" this algebra? It essentially means that the Bool has to be the same. That is, if A and B are both instances of Ordered, an Ordered homomorphism f :: A -> B must obey the law:

lessThan (f x) (f y) = lessThan x y

Think about what this means semantically: this law is stating that for any two elements, f doesn't change their relative ordering. That is what it means for f to "respect" the ordering algebra.
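For instance (with my own toy instance), any strictly increasing function on Int, such as (+ 10), is an Ordered homomorphism from Int to itself:

```haskell
class Ordered a where
  lessThan :: a -> a -> Bool

instance Ordered Int where
  lessThan = (<)

-- (+ 10) is strictly increasing, so it respects lessThan:
-- lessThan (f x) (f y) == lessThan x y at every pair we try
main :: IO ()
main = do
  let f = (+ 10) :: Int -> Int
      pairs = [(1, 2), (2, 1), (3, 3)]
  print [lessThan (f x) (f y) == lessThan x y | (x, y) <- pairs]
```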

OK, now back to the parametricity theorem. The free theorem for the function:

sortBy :: forall a. (a -> a -> Bool) -> [a] -> [a]

is this, written out in a slightly fuller form:

for all f :: A -> B,
if ltA :: A -> A -> Bool and ltB :: B -> B -> Bool are functions such that
    for all x, y :: A, ltA x y = ltB (f x) (f y)
then
    sortBy ltB . map f = map f . sortBy ltA
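We can spot-check this at A = B = Int, where a single predicate (<) serves on both sides and f = (* 2) preserves it. Real Data.List.sortBy takes an Ordering comparator, so here is a hand-rolled insertion sort with a Bool predicate instead (all names mine):

```haskell
-- insertion sort driven by a Bool "less than" predicate
sortByLt :: (a -> a -> Bool) -> [a] -> [a]
sortByLt lt = foldr insert []
  where
    insert x [] = [x]
    insert x (y:ys)
      | lt x y    = x : y : ys
      | otherwise = y : insert x ys

main :: IO ()
main = do
  let xs = [3, 1, 2] :: [Int]
      f  = (* 2)
  -- free theorem: sortByLt (<) . map f == map f . sortByLt (<)
  print (sortByLt (<) (map f xs) == map f (sortByLt (<) xs))
```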

We can say the same thing with type classes:

sort :: forall a. Ordered a => [a] -> [a]

The free theorem here is, essentially:

for all f :: A -> B,
if f is an Ordered homomorphism
then sort . map f = map f . sort

Which says, of course, that if f doesn't change the ordering of elements, then performing the substitution map f and then sorting is the same thing as sorting and then performing the substitution.

The complication with Functor is that it's a constructor class, so the intuition about what a Functor homomorphism should be is a little less clear. But let's step back a moment and talk about category homomorphisms.

Categories are algebras just like any other. A category has two operations, $1$ and $\circ$, which satisfy the laws that you know and love. So a category homomorphism $F$ is a function which satisfies:

$$F(\hbox{id}) = \hbox{id}$$ $$F(g \circ f) = F(g) \circ F(f)$$

If you think of a Haskell Functor as a functor from the "Haskell category" (objects are Haskell types, morphisms are Haskell functions) to itself, then these category-homomorphism laws are exactly the Functor laws.

fmap id = id
fmap (g . f) = fmap g . fmap f
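(A quick spot check of both laws for the list Functor at sample values; of course this isn't a proof:)

```haskell
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
  -- identity law
  print (fmap id xs == id xs)
  -- composition law
  print (fmap ((* 2) . (+ 1)) xs == (fmap (* 2) . fmap (+ 1)) xs)
```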

Now here's the important part: The fmap operation is the homomorphism precondition that you need for free theorems involving Functor.

Consider, for example, something with the same type as fmap but which isn't necessarily fmap:

instance Functor F where ...

somethingLikeFmap :: forall a b. (a -> b) -> (F a -> F b)

The free theorem for this function is:

if q . f = g . p,
then somethingLikeFmap q . fmap f = fmap g . somethingLikeFmap p

From this you can conclude a bunch of interesting things, such as that if somethingLikeFmap id = id then fmap = somethingLikeFmap. In particular, somethingLikeFmap can be characterised entirely by what it does with id.

This has to be the case. The parametricity theorem, which generates these free theorems, states that the "forall" in a parametric type really does mean "for all types".

If the functor is a container-like type, for example, the function cannot make any decision about what to do based on the value of an element in the container. But it could do anything with the structure of the container which doesn't depend on the values in it.

somethingLikeMap1 f = reverse . map f
somethingLikeMap2 f = map f . tail
somethingLikeMap3 f = const []
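Here's a spot check of the free theorem above for lists, taking somethingLikeFmap f = reverse . map f and hand-picked q, f, g, p satisfying the precondition q . f = g . p (all names and values are mine):

```haskell
-- something with fmap's type for lists, but not fmap:
-- it also reverses the list
somethingLikeFmap :: (a -> b) -> [a] -> [b]
somethingLikeFmap f = reverse . map f

main :: IO ()
main = do
  let f = (+ 1) :: Int -> Int
      q = (* 2) :: Int -> Int
      p = id    :: Int -> Int
      g = \x -> (x + 1) * 2   -- chosen so that q . f == g . p
      xs = [1, 2, 3] :: [Int]
  -- precondition, checked pointwise on xs
  print (map (q . f) xs == map (g . p) xs)
  -- conclusion: somethingLikeFmap q . fmap f == fmap g . somethingLikeFmap p
  print ((somethingLikeFmap q . fmap f) xs == (fmap g . somethingLikeFmap p) xs)
```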

If a polymorphic function does have a way to make decisions based on the values (e.g. lessThan in the case of sort), then the free theorem states that this is the only way it can make a decision about what to do; so if the mapping doesn't affect the decisions (e.g. the values returned by lessThan), then it doesn't affect the operation of the polymorphic function (e.g. what sort does).

So this is why the fmap laws are not redundant with respect to free theorems: they define the homomorphism between the Haskell category and the (endo-) functor category that you need to state the free theorems.

Finally, what is the free theorem of a function which has the type of a natural transformation?

instance Functor F where ...
instance Functor G where ...

eta :: forall a. F a -> G a

It is:

fmap_G f . eta = eta . fmap_F f

This is exactly the naturality condition. This is a good way to think about what's so "natural" about a natural transformation.
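A handy concrete example is maybeToList from Data.Maybe: it has the type of a natural transformation from Maybe to [], and the two fmaps in the naturality square are Maybe's and the list's:

```haskell
import Data.Maybe (maybeToList)

main :: IO ()
main = do
  let f = (+ 1) :: Int -> Int
  -- list fmap after eta == eta after Maybe fmap
  print (fmap f (maybeToList (Just 41)) == maybeToList (fmap f (Just 41)))
  print (fmap f (maybeToList Nothing)   == maybeToList (fmap f (Nothing :: Maybe Int)))
```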

$\endgroup$
  • $\begingroup$ It’s most likely my fault but I could draw hardly anything from this. I can’t pinpoint where you prove the first claim but I was aware of it even if informally, but let’s just take it for granted. What if we didn’t have free theorems? What if we had different laws? I guess what I was looking for was a more direct relation, if any, between the functor laws and functors behaving like containers. But maybe I'm just trying to read too much into it. $\endgroup$
    – giofrida
    Commented Jan 23, 2020 at 13:48
  • $\begingroup$ Sorry about that. I probably pitched this at the wrong level. I assumed that you knew enough algebra to know what a homomorphism is, because that's what you need to know to understand the motivation for category theory. Can you let me know a little more about where you are at? $\endgroup$
    – Pseudonym
    Commented Jan 23, 2020 at 20:50
  • $\begingroup$ Don't worry, I wanted to understand monads and related classes a bit better but not at the cost of learning an entire theory. I thought that knowing more about functors would explain their use as "containers" but maybe they just happen to fit? Anyways, I understand the first part of your answer but it gets cryptic when you talk about somethingLikeFmap, I just can't follow your reasoning. $\endgroup$
    – giofrida
    Commented Jan 23, 2020 at 22:06
  • $\begingroup$ I have a suggestion if that's the point where you got stuck. Get Wadler's original "Theorems for Free" paper, and implement the free theorem generator in Haskell. You will learn a lot. $\endgroup$
    – Pseudonym
    Commented Jan 23, 2020 at 22:43
  • $\begingroup$ I have another suggestion. Stay away from "Theorems for Free", it is completely unsuitable as tutorial material for practical programmers, even for advanced programmers who are motivated to learn about monads and functor laws. It's all written at the wrong level, it's for researchers and not for learners. $\endgroup$
    – winitzki
    Commented Dec 15, 2022 at 18:39
$\begingroup$

Let me answer all your questions as stated, keeping to the practical programmer's point of view.

Is it correct to say that requiring the functor laws on a type constructor is equivalent to requiring that its data constructors satisfy the naturality conditions or something along these lines?

No, this is not quite correct. One of the functor laws can be derived from naturality but not the other. Also, note that not all functors have simple data constructors like in your example. Your example is too simple; it's the identity functor whose fmap is an identity function. Any law will always hold for an identity function. An identity functor is at the same time a monad, an applicative, and a traversable. You need a more complicated example (such as Either a (r -> a)) to see what laws follow from naturality.

And if not, then what is the purpose of the functor laws from a "practical" point of view?

The purpose of functor laws is to express the programmer's expectation about what fmap should be doing to a data structure. The programmer's expectation is that fmap should transform all the data stored within the data structure but should not modify the shape of the data structure. For example, List's fmap should apply a given transformation to each value separately but should not change the length of the list. Since the shape of the data structure does not change, composing two fmaps should give the same result as a single fmap with a composed function. This is the functor's composition law.
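A small sketch of this shape-preservation expectation for lists (sample values are mine):

```haskell
main :: IO ()
main = do
  let xs = [10, 20, 30] :: [Int]
  -- fmap transforms the elements but never the length (the "shape")
  print (length (fmap show xs) == length xs)
  -- hence composing two fmaps gives the same result as one fmap
  -- with the composed function
  print (fmap ((* 2) . (+ 1)) xs == (fmap (* 2) . fmap (+ 1)) xs)
```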

Programmers also expect that fmap should work in the same way for all choices of type parameters. So, fmap should not try to make decisions based on inspecting the types of values stored in the data structure, fmap should always work in the same way for all types. This gives rise to the naturality law.

Another expectation is that fmap id should leave the data structure entirely unchanged. This is the functor's identity law.

In this way, you can explain all the laws of fmap (naturality, composition, identity) as equations that formalize a programmer's reasonable expectations about what fmap should do.

Tutorials on Haskell's functors make it seem like their wrapper-like nature (on which the intuition behind applicatives and monads is often based) stems from their laws when in reality it seems to be a consequence of the naturality conditions due to Wadler's free theorems(2), unless I'm completely mistaken. What role do the functor laws play in their interpretation as abstractions of some functional programming pattern?

The special property of functors is that, for a given covariant type constructor, there is only one way of implementing fmap that satisfies the laws. So, in some sense, the functor's fmap is a direct consequence of a functor's type expression.

One proof of that goes by structural analysis of a given type constructor. A covariant type constructor F a must be one of the following: a constant type F a = Int (independent of a), or F a = a, or F a = (G a, H a), or F a = Either (G a) (H a) where G and H are also covariant, or F a = G a -> H a where G is contravariant and H is covariant. In each case you can show that there is only one way of implementing fmap that satisfies the laws. (The inductive assumption is that G and H already have their own unique fmaps that satisfy the laws.) A similar procedure works for contravariant functors.
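For instance, here is the derived fmap for one such combination, F r a = Either (r -> a) (a, a) (my own example, combining the Either, function, and product cases above):

```haskell
-- a covariant type built from the Either, function, and product cases
newtype F r a = F (Either (r -> a) (a, a))

instance Functor (F r) where
  fmap f (F (Left g))       = F (Left (f . g))        -- function case: post-compose
  fmap f (F (Right (x, y))) = F (Right (f x, f y))    -- product case: map each component

main :: IO ()
main = do
  let F (Left g)  = fmap (* 2) (F (Left (+ 1)) :: F Int Int)
      F (Right p) = fmap (* 2) (F (Right (3, 4)) :: F Int Int)
  print (g 5)   -- (5 + 1) * 2
  print p
```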

Once you go through these proofs, you will be able to write fmap for any type constructor, such as:

  type F a = Either ((a -> Int) -> a) (a, a, Int, Either (Int -> a) [a])

You will be able to quickly recognize whether any given type constructor is covariant, contravariant, or neither, and implement their methods accordingly.

$\endgroup$
