Monads or triples?
From my own reading it seems that monad is more common in general but
triple does seem to continue to live on in the homological algebra
community. Barr and Beck both continue to use it. I found their paper
"Acyclic Models and Triples" to be very enlightening - such a simple
idea.
I am working with a CS advisor who studies connections between category theory and
programming languages. Computer scientists have found it useful to use monads to
structure sequential composition of programs, so monads come up a lot
there - see Moggi's "Notions of Computation and Monads". Really, a lot of the reason I am learning homological algebra is
to understand the purpose and meaning of these "canonical resolutions."
I
am picking up French a little at a time to read the book by Godement,
which is excellent. His goal in the book is to cover the homology theory
of simplicial sets and simplicial complexes, briefly cover spectral
sequences, and cover the essentials of sheaf cohomology theory in only
300 pages, for someone who has never seen any algebraic topology at all.
So far, he seems to be succeeding wonderfully; I really like his
exposition. It is a very good example of working in the right level of
generality - he does not go into a painful amount of detail anywhere (he
does not allow himself the space), but he does point out that a great
deal of the structure of algebraic topology (for example the cup
product) really arises at the level of simplicial sets. So the fact
that Čech cohomology is the cohomology of the nerve of an open cover,
which is a simplicial set, means that we can define a cup product in
Čech cohomology. The only component of the
presentation that I think would now be considered outdated is the fact
that he chooses to switch back and forth, as is convenient, between
"simplicial sets" defined over \(\Delta\), the category of nonempty finite
ordinals, and "simplicial sets" as defined over a skeleton of the
category of finite sets. However, it is completely clear at all times
that the choice of the underlying category is more or less irrelevant,
so long as its objects are naturally in bijection with the natural
numbers, and that there are enough "face maps" around that one
can construct a chain complex by taking alternating sums. In
particular, every "simplicial set" over a skeleton of the category of
finite sets contains a true simplicial set, so this doesn't really
concern me too much. (Nevertheless, I think even this is a more modern
approach than Hatcher, who doesn't even mention simplicial sets!)
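(For the record, the alternating-sum construction alluded to above is the standard one: given the face maps \(d_i : X_n \to X_{n-1}\) of a simplicial set \(X\), one defines a boundary operator on the free abelian group \(\mathbb{Z}X_n\) by

\[ \partial_n = \sum_{i=0}^{n} (-1)^i d_i, \]

and the simplicial identity \(d_i d_j = d_{j-1} d_i\) for \(i < j\) makes the terms of \(\partial_{n-1}\partial_n\) cancel in pairs, so \(\partial^2 = 0\) and one obtains a chain complex.)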
What
drew me to the book is Weibel's comment 8.6.15 on p. 285 that Godement's
famous resolution is constructed by iterating the application of a
certain monad - the first monad, historically speaking. The fact that
monads give rise to simplicial resolutions, which is the main point of
section 8.6 and the old paper by Barr-Beck, means that the resulting
cohomology has all this nice structure arising from the simplicial
structure of the sheaf resolution. I am curious about Barr and Beck's
claim that comonadic cohomology is the right notion of a cohomology
theory for many of the structures of modern algebra. Barr's book Acyclic Models
contains many nice examples.
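(To sketch the construction, as I understand it from Weibel: a monad \((T, \eta, \mu)\) applied to an object \(F\) yields an augmented cosimplicial object

\[ F \xrightarrow{\ \eta_F\ } TF \rightrightarrows T^2 F \cdots \]

whose coface maps \(T^i \eta T^{n-i} : T^n F \to T^{n+1} F\) insert \(\eta\) in each possible position, with codegeneracies induced by \(\mu\); taking alternating sums as usual turns this into a cochain complex. For Godement's monad on sheaves, \(T\mathcal{F}\) is the sheaf of discontinuous sections, \((T\mathcal{F})(U) = \prod_{x \in U} \mathcal{F}_x\), and iterating it gives his resolution.)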
There is a
beautiful application of comonads in logic. Bart Jacobs has developed
this very nicely in his book on categorical logic and type theory.
Perhaps the simplest example of a comonad is that, working in the
category of sets, if \(X\) is any set, then \(X\times - \) is a comonad. The
projection \(X \times A \to A\) is the counit; the comultiplication \(X \times A \to
X \times X \times A\) is the diagonal. In terms of formal logic, one can see the
comonad as taking the set \(A\) one is working over - considered as say, the
current domain of discourse, where one's working variables are ranging
over - and adding a dummy variable \(x\) ranging across \(X\). Any sentence or
formula \(a: A\vdash \phi(a)\) with a variable ranging across \(A\) defines a subset of \(A\);
by taking the preimage of this set across the projection map \(X \times A \to
A\), one can form an interpretation of \(x: X, a:A \vdash \phi(a)\), which simply ignores \(x\).
This corresponds to the structural rule of "weakening" in formal logic: https://en.wikipedia.org/wiki/Structural_rule
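This pullback can be made completely concrete. Here is a minimal sketch in Python (not from any of the sources above; the function names are mine), modeling a formula as a boolean-valued predicate:

```python
# The product comonad X x (-) on sets, sketched with plain functions.

# Counit: the projection X x A -> A.
def counit(pair):
    x, a = pair
    return a

# A formula  a : A |- phi(a)  is modeled as a predicate A -> bool.
# Weakening pulls phi back along the projection, producing a predicate
# on X x A that simply ignores the dummy variable x.
def weaken(phi):
    return lambda pair: phi(counit(pair))

# Example: over A = the integers, phi(a) = "a is even".
def is_even(a):
    return a % 2 == 0

phi_weak = weaken(is_even)  # interprets  x : X, a : A |- phi(a)
```

Whatever value occupies the \(X\)-slot, `phi_weak` depends only on the \(A\)-component, which is exactly the content of weakening.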
Likewise,
the comultiplication \(X \times A \to X \times X \times A\) can be interpreted as
saying that if we have a formula \(a:A,\, x_1:X,\, x_2:X \vdash \phi(a,x_1, x_2)\) over three variables, we
can always identify the two distinct variables of type \(X\) to obtain \(a:A,\, x:X \vdash \phi(a,x,x)\). This is called "contraction" in the link above. Jacobs has
shown that, in some great deal of generality, the richer facets of logic
- in particular, quantifying across variables, equality of terms - are
all controlled by this weakening and contraction comonad, in the sense
that the definitions of quantification and equality can be given with
respect to it. I find it a very elegant presentation and would like to
learn more about these comonads.
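Contraction can be sketched the same way (again a minimal illustration in Python, names mine): the comultiplication duplicates the \(X\)-component, and pulling a three-variable formula back along it identifies the two \(X\)-variables.

```python
# Comultiplication of the product comonad: X x A -> X x X x A,
# the diagonal on the X-component.
def comult(pair):
    x, a = pair
    return (x, x, a)

# A formula  a : A, x1 : X, x2 : X |- phi(a, x1, x2)  is modeled as a
# predicate on triples (x1, x2, a).  Contraction pulls it back along
# comult, yielding  a : A, x : X |- phi(a, x, x).
def contract(phi):
    return lambda pair: phi(comult(pair))

# Example: phi says the two X-variables are equal; after contraction
# this holds identically, since both slots receive the same x.
def phi(triple):
    x1, x2, a = triple
    return x1 == x2
```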