Posted on 2023-06-12 · 7 min read · maths

Classically, (algebraic) symmetric operads are defined as certain graded objects, each level coming equipped with a nice action of the symmetric group, which are also monoids in some sense. While it is often noted that the grading and action correspond exactly to the data of a functor, the fact that virtually all of the structure needed to define operads can be expressed categorically is often passed by, which I think is quite the shame! In this post I want to explicitly calculate the Day convolution for symmetric operads in the category of vector spaces—though the argument holds for all nice enough target categories—in order to show that it is nothing but the usual tensor product of modules.

While there is more to this story—all of which is wonderfully explained in [@kelly05:operads]—I think focusing on the tensor product of \(\mathbb{S}\)-modules in the case of vector spaces already gives one a clue as to how this whole translation works in general.

If we have nice enough categories \(\mathcal{C}\) and \(\mathcal{V}\),
then the functor category \([\mathcal{C}, \mathcal{V}]\) inherits many of the properties of the two parent categories.
One of them is being *monoidal*; if there are nice functors
\[
\otimes_{\mathcal{C}} \colon \mathcal{C} \times \mathcal{C} \to \mathcal{C},
\qquad \qquad
\otimes \colon \mathcal{V} \times \mathcal{V} \to \mathcal{V},
\]
that are associative and unital in appropriate ways,
then there is also a nice monoidal structure—called the *(Day) convolution product*—on \([\mathcal{C}, \mathcal{V}]\).^{1}

Intuitively,
one can think of the Day convolution much like the tensor product of vector spaces.
Given functors \(F, G, H \in [\mathcal{C}, \mathcal{V}]\),
a *bilinear map* is a natural transformation
\[
\beta ≔
\big\{
\beta_{v, w} \colon
Fv \otimes Gw \to H(v \otimes_{\mathcal{C}} w)
\big\}_{v, w \in \mathcal{C}}.
\]
Just as in the concrete case,
maps from the convolution product \(F \star G\) to \(H\) now correspond to exactly these bilinear maps,
and can be seen as some sort of “linearisation”.
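
In symbols, this universal property reads (a sketch, with the naturality conditions left implicit):
\[
[\mathcal{C}, \mathcal{V}](F \star G, H)
\;\cong\;
\big\{ \text{bilinear maps } \beta_{v,w} \colon Fv \otimes Gw \to H(v \otimes_{\mathcal{C}} w) \big\},
\]
naturally in \(H\).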

Setting \(\mathcal{V} ≔ \mathsf{Vect}_{\mathtt{k}}\)—for some field \(\mathtt{k}\)—one
can also give a definition in more explicit terms:^{2}
\[
(F \star G)x
≔ \int^{c,d \in \mathcal{C}}
\mathtt{k}\mathcal{C}(c \otimes_{\mathcal{C}} d, x) \otimes Fc \otimes Gd.
\]
The \(\mathtt{k}\mathcal{C}(c \otimes_{\mathcal{C}} d, x)\) notation is meant to indicate
the linearisation of the hom-set;
i.e., we take the free vector space with basis \(\mathcal{C}(c \otimes_{\mathcal{C}} d, x)\).
The little integral sign above is called a *coend*.
These are nice universal objects, and show up all the time when working with functor categories.
Still in the case of \(\mathcal{V} ≔ \mathsf{Vect}_{\mathtt{k}}\),
suppose that \(P \colon \mathcal{C}^{\mathrm{op}} \times \mathcal{C} \to \mathcal{V}\) is a functor.
In general, one can speak of the coend \(\int^{c \in \mathcal{C}} P(c, c)\) of that functor;
a more explicit description can be given as a certain coequaliser:^{3}
\[
\bigoplus_{f \colon c \to d} P(d, c) \rightrightarrows \bigoplus_{c} P(c, c) \twoheadrightarrow \int^{c} P(c, c).
\]
For a morphism \(f \colon c \to d\),
the two parallel arrows are induced by
\[
P(f, c) \colon P(d, c) \to P(c, c) \quad \text{and} \quad P(d, f) \colon P(d, c) \to P(d, d).
\]

To get a feeling for these things,
consider the following example in the case of \(\mathcal{V} ≔ \mathsf{Set}\).
We know how coequalisers look in the category of sets: they are quotients by certain equivalence relations.
Squinting at the induced arrows,
one wants to identify \(P(f, c)(x)\) with \(P(d, f)(x)\),
for \(c, d \in \mathcal{C}\),
\(x \in P(d, c)\),
and \(f \colon c \to d\).^{4}
In the special case that \(P\) is the hom-functor \(\mathcal{C}({-},{-})\),
the induced maps are
\[
{-} \circ f \colon \mathcal{C}(d, c) \to \mathcal{C}(c, c)
\qquad \text{and} \qquad
f \circ {-} \colon \mathcal{C}(d, c) \to \mathcal{C}(d, d).
\]
More plainly, given \(x \colon d \to c\) and \(f \colon c \to d\),
we have \(x \circ f \sim f \circ x\).
Thus, the coend here can be seen as a kind of abelianisation of arrows.

Consider the following category \(\mathbb{S}\): objects are the natural numbers, and morphisms are given by \(\mathbb{S}(n, m) = S_n\) if \(n = m\), and \(\mathbb{S}(n, m) = \varnothing\) otherwise, where \(S_n\) is the symmetric group on \(n\) letters.

Again staying firmly in the case that \(\mathcal{V} = \mathsf{Vect}_{\mathtt{k}}\),
an *\(\mathbb{S}\)-module* is a family of vector spaces \(F = (F0, F1, F2, \dots)\),
where each \(Fn\) is a left \(\mathtt{k}S_n\)-module.
Alternatively, it is a functor from \(\mathbb{S}\) to \(\mathcal{V}\)—this is,
of course, where the convolution product comes into play.
The category of \(\mathbb{S}\)-modules is usually denoted by \(\mathbb{S}\text{-}\mathrm{Mod}\).

This construction might seem somewhat artificial at first, but—as mentioned before—(symmetric) operads turn out to be \(\mathbb{S}\)-modules that are also monoids with respect to a certain monoidal structure (not the Day convolution, but a related one). As such, \(\mathbb{S}\)-modules are quite well studied as a category.

One could put quite a few monoidal structures on \(\mathbb{S}\),
but what is usually called the *tensor product of \(\mathbb{S}\)-modules* is defined as follows:
given \(F, G \in \mathbb{S}\text{-}\mathrm{Mod}\), let
\[
(F \otimes G)r ≔ \bigoplus_{n + m = r} \mathsf{Ind}_{S_n \times S_m}^{S_r} Fn \otimes_{\mathtt{k}} Gm,
\]
where \(\mathsf{Ind}_{S_n \times S_m}^{S_r}\) denotes the induced representation.^{5}
Alternatively, one can write this using shuffles:
\[
(F \otimes G)r ≔ \bigoplus_{n + m = r} \mathtt{k}\mathrm{Shuf}(n, m) \otimes_{\mathtt{k}} Fn \otimes_{\mathtt{k}} Gm,
\]
where \(\mathrm{Shuf}(n, m)\) denotes the set of \((n, m)\)-shuffles: permutations in \(S_r\) that preserve the relative order of the first \(n\) and the last \(m\) elements.
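
A quick example to see that the two descriptions agree: take \(F\) and \(G\) concentrated in arity one, with \(F1 = G1 = \mathtt{k}\) the trivial representation. Then only the summand \(n = m = 1\) contributes in degree two, and
\[
(F \otimes G)2
\;=\; \mathtt{k}\mathrm{Shuf}(1, 1) \otimes_{\mathtt{k}} \mathtt{k} \otimes_{\mathtt{k}} \mathtt{k}
\;\cong\; \mathtt{k}^2
\;\cong\; \mathsf{Ind}_{S_1 \times S_1}^{S_2} \mathtt{k}
\;=\; \mathtt{k}S_2,
\]
the regular representation of \(S_2\).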

This tensor product doesn’t look so different from the convolution product above, but it remains to check that the equivalence relation generated by the coequaliser really glues things together in just the right way. Let’s try that.

In a slightly more general setting—now considering an arbitrary (monoidal) category \(\mathcal{C}\) instead of \(\mathbb{S}\), but still fixing vector spaces over \(\mathtt{k}\) for \(\mathcal{V}\)—the Day convolution \(F \star G\) evaluated at \(x \in \mathcal{C}\) can be expressed as

\[ \bigoplus_{\substack{f \colon a \to a' \\ g \colon b \to b'}} \mathtt{k}\mathcal{C}(a' \otimes b', x) \otimes_{\mathtt{k}} Fa \otimes_{\mathtt{k}} Gb \rightrightarrows \bigoplus_{a, b \in \mathcal{C}} \mathtt{k}\mathcal{C}(a \otimes b, x) \otimes_{\mathtt{k}} Fa \otimes_{\mathtt{k}} Gb. \]

Looking at the induced arrows, and—as usual—considering only elementary tensors, on the left side we have “tuples” of \[ h \colon a' \otimes b' \to x,\quad v \in Fa,\quad w \in Gb. \] They are then mapped to either \[ h \circ (f \otimes g) \colon a \otimes b \to x,\quad v \in Fa,\quad w \in Gb \] or \[ h \colon a' \otimes b' \to x,\quad (Ff) v \in Fa',\quad (Gg) w \in Gb', \] and these two representations are identified.

In the special example of operads, the above coequaliser is easier to understand—remember that \(\mathbb{S}\) is a category with only endomorphisms. Thus, since \(\mathbb{S}(a + b, x)\) vanishes unless \(a + b = x\), all summands of the coproduct with \(a + b \neq x\) are automatically \(0\). As such, the whole thing transforms into

\[ \bigoplus_{\substack{\sigma \in S_n \\ \tau \in S_m \\ n + m = r}}\!\!\! \mathtt{k}\mathbb{S}(n + m, r) \otimes_{\mathtt{k}} Fn \otimes_{\mathtt{k}} Gm \rightrightarrows \!\bigoplus_{n + m = r}\!\! \mathtt{k}\mathbb{S}(n + m, r) \otimes_{\mathtt{k}} Fn \otimes_{\mathtt{k}} Gm. \]

The identifications \[ h \circ (\sigma + \tau),\quad v ,\quad w \qquad\sim\qquad h ,\quad (F\sigma) v,\quad (G\tau) w \] now look an awful lot like identifying some left actions with some right actions. Indeed, because of the extra condition that \(n + m = r\), every permutation \(h \in S_r\) factors uniquely as an \((n, m)\)-shuffle composed with a block permutation \(\sigma + \tau\), for \(\sigma \in S_n\) and \(\tau \in S_m\)—so every equivalence class has a unique shuffle representative! Overall, the expression \[ \big(\!\!\!\! \bigoplus_{n + m = r} \mathtt{k}\mathbb{S}(n + m, r) \otimes_{\mathtt{k}} Fn \otimes_{\mathtt{k}} Gm \big) / {\sim} \] simplifies to \[ \bigoplus_{n + m = r} \mathtt{k}\mathrm{Shuf}(n,m) \otimes_{\mathtt{k}} Fn \otimes_{\mathtt{k}} Gm, \] or, in different notation, \[ \bigoplus_{n + m = r} \mathsf{Ind}_{S_n \times S_m}^{S_r} Fn \otimes_{\mathtt{k}} Gm, \] which is exactly the kind of formula that we wanted to end up with. Neat.
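
As a quick sanity check of the graded dimensions, here is a small Python sketch (the function name and the encoding of \(\mathbb{S}\)-modules as lists of dimensions are my own), using the fact that \(\dim \mathsf{Ind}_{S_n \times S_m}^{S_r} V = \binom{r}{n} \dim V\)—the index \([S_r : S_n \times S_m]\) counts exactly the \((n, m)\)-shuffles:

```python
from math import comb

def tensor_dims(dims_F, dims_G):
    """Dimensions of the graded pieces (F ⊗ G)r, given the dimensions
    of Fn and Gm as lists indexed by arity.  Each summand contributes
    binomial(r, n) * dim Fn * dim Gm, the number of (n, m)-shuffles
    times the dimensions of the two factors."""
    r_max = len(dims_F) + len(dims_G) - 2
    out = []
    for r in range(r_max + 1):
        total = 0
        for n in range(r + 1):
            m = r - n
            if n < len(dims_F) and m < len(dims_G):
                total += comb(r, n) * dims_F[n] * dims_G[m]
        out.append(total)
    return out

# F and G concentrated in arity 1 with F1 = G1 = k:
# (F ⊗ G)2 should be the regular representation kS_2, of dimension 2.
print(tensor_dims([0, 1], [0, 1]))  # → [0, 0, 2]
```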

Posted on 2023-01-10 · last modified: 2023-01-12 · 4 min read · maths

I have a new preprint on the arXiv! It is joint work with Sebastian Halbig, and concerns itself with the interplay of different structures on monoidal categories that give rise to a notion of “duality”. At five pages, it is a very short paper; yet I’d still like to give a little teaser as to what kind of question we sought to answer.

We mainly concerned ourselves with three notions of *duality* for
(non-symmetric!) monoidal categories: closed monoidal categories,
*-autonomous^{1} categories, and rigid (monoidal) categories. It
is well-known that these concepts are all connected in the following
way.

Every *-autonomous category is closed monoidal. For all \(x, y \in \mathcal{C}\), the internal-hom \([x, y]\) is given by \(D^{-1}(Dy \otimes x)\), where \(D\) is the duality functor.

Every rigid monoidal category is *-autonomous. The internal-hom then simplifies to \([x, y] = y \otimes x^*\), where \({-}^*\) is the duality functor.

An obvious next question one could ask is: does this already characterise rigid and *-autonomous categories? More explicitly, are there any conditions one could impose on the internal-hom, such that closedness already implies rigidity? What about *-autonomy?

We’ll start with a positive result for *-autonomy. So the question is this: given a closed monoidal category \(\mathcal{C}\) in which the internal-hom is given by tensoring with another object, is this category already *-autonomous?

More formally, is it true that \(\mathcal{C}\) is *-autonomous if for all \(x \in \mathcal{C}\), there exists an object \(Dx \in \mathcal{C}\), such that there is an adjunction \[ {-} \otimes x \dashv {-} \otimes Dx? \]

Almost! In good cases, we can recover what we want from just a little extra condition:

Let \(\mathcal{C}\) be a monoidal category. Suppose that for all \(x \in \mathcal{C}\) there exist objects \(Lx, Rx \in \mathcal{C}\), such that we have adjunctions \[ {-} \otimes Lx \dashv {-} \otimes x \dashv {-} \otimes Rx. \] Then \(\mathcal{C}\) is *-autonomous.

Using the notion of a *-autonomous category of [@boyarchenko13:groth-verdier]—that is, for every \(x \in \mathcal{C}\) the functor \(\mathcal{C}({-} \otimes x, 1)\) is representable by \(Dx\)—this becomes an exercise in “Yoneda Yoga”. More precisely, one uses the fact that the Yoneda embedding is fully faithful a lot. Try it yourself!
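
To give away the first step (my sketch, suppressing unitors): for all \(x, y \in \mathcal{C}\), the right adjunction gives
\[
\mathcal{C}(y \otimes x, 1) \;\cong\; \mathcal{C}(y, 1 \otimes Rx) \;\cong\; \mathcal{C}(y, Rx),
\]
so \(\mathcal{C}({-} \otimes x, 1)\) is representable by \(Dx ≔ Rx\); roughly speaking, the left adjoints \(Lx\) are then what ensures that \(D\) is suitably invertible.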

At first sight, it’s not even clear there is anything to show for rigidity. Something one is immediately tempted to do is to conjecture the following:

A closed monoidal category \(\mathcal{C}\) is rigid monoidal if for all \(x \in \mathcal{C}\) we have \([x, {-}] \cong {-} \otimes Dx\), for some object assignment \(D \colon \mathrm{Ob}\,\mathcal{C} \to \mathrm{Ob}\,\mathcal{C}\).

This seems sensible; after all, the snake identities of an adjunction
look almost completely the same as the ones for a dual!^{2} However, if
one sits down and actually writes down the diagrams, something doesn’t
quite fit. As a reminder, suppose we have an adjunction
\(F\colon \mathcal{C} \leftrightarrows \mathcal{C} : \! U\) with unit
\(\eta \colon \mathrm{Id}_{\mathcal{C}} \Longrightarrow U F\)
and counit
\(\varepsilon \colon F U \Longrightarrow \mathrm{Id}_{\mathcal{C}}\).
The snake identities for this adjunction look like
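
Spelled out as equations, they read
\[
\varepsilon_{Fc} \circ F(\eta_c) = \mathrm{id}_{Fc}
\qquad \text{and} \qquad
U(\varepsilon_c) \circ \eta_{Uc} = \mathrm{id}_{Uc},
\qquad \text{for all } c \in \mathcal{C}.
\]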

In particular, we get two such diagrams if we apply everything to the monoidal unit \(1 \in \mathcal{C}\). Specialised to the adjunction \({-} \otimes x \dashv {-} \otimes Dx\) the above then becomes
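
In equational form (suppressing unitors), the two specialised identities are
\[
\varepsilon_{x} \circ (\eta_1 \otimes x) = \mathrm{id}_{x}
\qquad \text{and} \qquad
(\varepsilon_1 \otimes Dx) \circ \eta_{Dx} = \mathrm{id}_{Dx}.
\]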

These are just the snake identities for duals if we make the definitions
\(\mathrm{ev}_x ≔ \varepsilon_1\) and \(\mathrm{coev}_x ≔ \eta_1\), right?
Wrong! In the latter case we, for example, require that
\[
(x \otimes \varepsilon_1) \circ (\eta_1 \otimes x) = \mathrm{id}_x.
\]
However, the above diagram does *not* say that! It says that the
relation
\[
\varepsilon_x \circ (\eta_1 \otimes x) = \mathrm{id}_x
\]
holds. This means that we would have to impose the additional
conditions that \(\varepsilon\) and \(\eta\) are morphisms of modules; i.e.,
\(\varepsilon_x \overset{\scriptsize{!}}{=} x \otimes \varepsilon_1 = x \otimes \mathrm{coev}_x\),
as well as a dual statement. This is not the case in general.

Finding a counterexample now works by exploiting exactly this fact: we
write down a syntactic category \(\mathcal{D}\) that is generated by a
family of morphisms
\[
\eta_{m, n} \colon m \to m \otimes n \otimes n
\qquad \text{and} \qquad
\varepsilon_{m, n} \colon m \otimes n \otimes n \to m,
\]
and impose relations guaranteeing the naturality of these arrows. There
is a subcategory \(\mathcal{C}\) of \(\mathcal{D}\) in which we additionally
require that \(\eta\) and \(\varepsilon\) satisfy the snake equations of an
adjunction. One can now show that the category \(\mathcal{C}\) is closed
monoidal, with the appealing adjunction
\[
{-} \otimes n \dashv {-} \otimes n.
\]
However, it is not rigid! The proof exploits certain strong monoidal
functors to the category of finite-dimensional vector spaces, and shows
that the subset of arrows *in \(\mathcal{D}\)* containing one side of a
snake identity for duals is (i) closed under exactly these relations,
and (ii) made up solely of morphisms of length at least two. Hence, if
we project any such morphism down to \(\mathcal{C}\), it can’t possibly be the
identity, and thus the snake identities for duals do not hold. If you
want more details, check the paper [@halbig23:dualit-monoid-categ]!

Posted on 2022-10-15 · last modified: 2023-03-13 · 16 min read · maths

If you’ve been doing category theory for any amount of time, you’ll
probably have stumbled upon enriched category theory as a way of
expressing categorical ideas internal to some context other than
**Set**. Reading into it, you might have come across these
foreign-sounding concepts like weighted (co)limits and wondered what that was
all about—and then got lost for a few days, trying to decipher what
Kelly is talking about and why symbols resembling tensor
products are suddenly being thrown around. At least that’s what
happened to me.

After scouring the internet for good resources, I found two really enlightening blog posts: one by Todd Trimble and the other by John Baez—and they’re too good not to share. Plus, people always say that you don’t understand a concept unless you can explain it to someone else, so here’s my shot at it!

I will assume familiarity with basic notions of category theory (limits, colimits, adjunctions, monoidal categories, …), as well as elementary abstract algebra (in particular, rings and modules). If you’re not comfortable with these and have a lot of time to kill, I recommend Category Theory in Context by Emily Riehl for the former and A Course in Algebra by Ernest Vinberg for the latter.

Really, it’s good if you have heard about enriched category theory before, as this is where weighted colimits tend to naturally crop up a lot; also because I can’t possibly do the topic justice in a single blog post. I will still try, of course, but be warned. Even if you’re not familiar with enriched categories, however, this post might still be of interest. Weighted colimits do also appear in ordinary category theory, so feel free to substitute \(\mathsf{Set}\) for \(\mathcal{V}\) whenever you feel like it. On top of that, most of the main part of the text doesn’t use enrichment at all.

Before we start I must note that—more-so than elsewhere—these are very much not my own thoughts. I’m just retelling the story in order to understand it better myself. Sources and resources for everything are linked at the end. The key insights come from the already mentioned blog posts by Trimble and Baez, as well as the accompanying (resulting) nLab article.

Before diving into the gory details, enriched category theory is perhaps
best explained a bit more intuitively at first. In short, instead of
ordinary categories—whose hom-*sets* are always sets—one studies
so-called \(\mathcal{V}\)-categories, whose hom-*objects* are objects in
some “environmental” category \(\mathcal{V}\). This category is what
replaces \(\mathsf{Set}\), so it will usually be assumed to have some very
nice properties. For the purposes of this blog post, I will assume that
\((\mathcal{V}, \otimes, 1)\) is a (small) complete and cocomplete closed
symmetric monoidal category.^{1} If you don’t know what some of these
words mean, you can read that as “It’s an environment with enough
structure so that a large chunk of ordinary category theory makes sense
internal to it.”

In addition, I would also like to fix a \(\mathcal{V}\)-category \(\mathcal{C}\) for the rest of this blog post. For the moment, you can think of this like an ordinary category such that for any two objects \(a\) and \(b\) in \(\mathcal{C}\), we have that \(\mathcal{C}(a, b) ≔ \mathrm{Hom}_{\mathcal{C}}(a, b)\) is an object in \(\mathcal{V}\). Naturally, all the usual axioms of a category—like associativity and unitality of morphisms—ought to hold in this new setting; however, expressing these laws is a little bit more involved now. The fact that \(\mathcal{C}(a,b)\) is an object in \(\mathcal{V}\) means that it’s a “black box”—we can’t peek into it anymore! Writing \(f \in \mathcal{C}(a,b)\) is no longer legal, so we somehow have to make do with not talking about individual morphisms. As such, a little bit more care has to be taken for the precise definition of an enriched category to make sense.

Before we get to that, however, a few examples should do wonders for
seeing just how widespread the concept really is in mathematics.
Thankfully—lest the world explode—categories enriched in \(\mathsf{Set}\)
are exactly ordinary categories. An equally familiar example should be
\(\mathsf{vect}_k\): the category of finite-dimensional vector spaces over
a field \(k\). It is easy to verify that the linear maps between two
vector spaces are again a vector space, and hence \(\mathsf{vect}_k\) is,
much like the category of sets, *enriched over itself*. So whenever you
do linear algebra, you’re in the setting of enriched category theory
already! Categories enriched over \(\mathsf{vect}_k\), usually called
\(k\)-linear categories, are plentiful “in the wild”; for example,
representation theorists might know Tannakian categories, or the
Temperley–Lieb category.

Other examples of enriched categories include 2-categories^{2} as those
enriched over \(\mathsf{Cat}\), and preadditive categories, which are
enriched over \(\mathsf{Ab}\). Last, but certainly not least, rings can
also be seen as categories; namely, they have just a single object
\(\star\) and \(\mathrm{Hom}(\star,\star)\) forms an abelian group—stay
tuned for more on this.

With all of these examples in mind, let us explore the technical definition of a category enriched over \(\mathcal{V}\). Formally, our fixed \(\mathcal{C}\) consists of:

- A collection of objects \(\mathrm{ob}\, \mathcal{C}\).
- For \(x, y \in \mathcal{C}\), a hom-object \(\mathcal{C}(x, y) \in \mathcal{V}\).
- For \(x, y, z \in \mathcal{C}\), a composition map in \(\mathcal{V}\): \[ \circ_{x, y, z} \colon \mathcal{C}(y, z) \otimes \mathcal{C}(x, y) \longrightarrow \mathcal{C}(x, z). \]
- For \(x \in \mathcal{C}\), an identity map \(e_x \colon 1 \longrightarrow \mathcal{C}(x,x)\).

Further, this data has to satisfy appropriate associativity and unitality conditions:

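Written out as equations (a sketch; the diagrams say the same thing), these demand that for all \(w, x, y, z \in \mathcal{C}\),
\[
\circ_{w,x,z} \circ (\circ_{x,y,z} \otimes \mathrm{id})
\;=\;
\circ_{w,y,z} \circ (\mathrm{id} \otimes \circ_{w,x,y}) \circ \alpha
\]
as maps \((\mathcal{C}(y,z) \otimes \mathcal{C}(x,y)) \otimes \mathcal{C}(w,x) \to \mathcal{C}(w,z)\), together with
\[
\circ_{x,y,y} \circ (e_y \otimes \mathrm{id}) = \lambda
\qquad \text{and} \qquad
\circ_{x,x,y} \circ (\mathrm{id} \otimes e_x) = \rho.
\]
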
In the above diagrams, \(\alpha\), \(\lambda\), and \(\rho\) respectively denote the associativity, left, and right unitality constraints of \(\mathcal{V}\).

If these diagrams remind you of a monoidal category, they absolutely should! Much like you can think of ordinary categories as multi-object monoids, a decent mental model for \(\mathcal{V}\)-categories is to think of them as multi-object monoids in \(\mathcal{V}\).

We furthermore need analogues for functors and natural transformations—they now also come with a \(\mathcal{V}\)- prefix. The functor laws get a bit more complicated, as we need to draw commutative diagrams and can’t simply express this property in an equation like \(F(f \circ g) = Ff \circ Fg\) anymore—remember that we can’t talk about individual arrows. However, most of the intuition one already has about functors and natural transformations should carry over just fine. I will leave the technical definitions of enriched functors and natural transformations as exercises to the reader; they are relatively straightforward to write down and not all that important for what follows.

Thinking further, the upshot one will arrive at is that, in order to do enriched category theory, we not only need analogues for functors and natural transformations, but also for all the other basic notions of ordinary category theory. Since limits and colimits are among the most important constructions, people naturally started to think about how one could express them in the enriched language—this is precisely what led to the development of weighted colimits!

One interesting thing I want to highlight about enriched functors of the form \(\mathcal{C} \longrightarrow \mathcal{V}\) is the induced arrow on morphisms that they always come with; namely, such a functor \(F\) induces an assignment \(\mathcal{C}(a, b) \longrightarrow \mathcal{V}(F a, F b)\). Because \(\mathcal{V}\) is closed monoidal, we can use its tensor–hom adjunction and rewrite the above to look more like an action:

\[ \mathcal{C}(a, b) \otimes F a \longrightarrow F b. \]

Likewise, a \(\mathcal{V}\)-functor \(F \colon \mathcal{C}^{\mathrm{op}} \longrightarrow \mathcal{V}\) is equipped with an action from the other side:

\[ F b \otimes \mathcal{C}(a, b) \longrightarrow F a. \]

This already frames functors as little more than generalised modules, and we will explore this connection in more detail later on.

One more important technical detail has to be covered before we get to
the fun stuff: copowers. The basic idea is that in any
ordinary—non-enriched—closed monoidal category \((\mathcal{A}, \otimes_{\mathcal{A}}, 1_{\mathcal{A}})\), we have the tensor–hom
adjunction (also called *currying*) \({-} \otimes b \,\dashv\, [b, {-}]\).
More explicitly, this means that there is a natural isomorphism

\[ \mathcal{A}(a \otimes_{\mathcal{A}} b, c) \cong \mathcal{A}(a, [b, c]), \qquad \text{for } a, b, c \in \mathcal{A}. \]

If we’re in an enriched setting, we want to somehow replace the tensor
product of the monoidal category with some action, say \(\cdot \colon \mathcal{C} \times \mathcal{V} \longrightarrow \mathcal{C}\), while
retaining an analogue of the above isomorphism. As such, the *copower*
of \(c \in \mathcal{C}\) *by* \(v \in \mathcal{V}\) is an object \(c \cdot v \in \mathcal{C}\), such that for all \(b \in \mathcal{C}\), there is a
natural isomorphism

\[ \mathcal{C}(c \cdot v, b) \cong \mathcal{V}(v, \mathcal{C}(c, b)). \]

Above I have slightly abused notation; \(\mathcal{V}({-}, {-})\) now
denotes the *internal* hom of \(\mathcal{V}\), instead of the external
one.^{3} If \(\mathcal{V}\) is clear from the context, one also often writes
\([{-},{-}]\).

The best thing about copowers is their existence when it comes to
\(\mathsf{Set}\) and ordinary categories. If \(\mathcal{A}\) has
coproducts, there is a canonical copower \(\cdot \colon \mathsf{Set} \times \mathcal{A} \longrightarrow \mathcal{A}\).^{4} For all \(X \in \mathsf{Set}\) and \(a \in \mathcal{A}\), it is given by

\[ X \cdot a ≔ \coprod_{x \in X} 1_{\mathcal{A}} \otimes_{\mathcal{A}} a \cong \coprod_{x \in X} a. \]

The fact that this is a copower follows from

\[ \mathcal{A}(X \cdot a, b) = \mathcal{A}\left(\coprod_{x \in X} a, b\right) \cong \prod_{x \in X} \mathcal{A}(a, b) \cong \mathsf{Set}(X, \mathcal{A}(a, b)), \]

for all \(b \in \mathcal{A}\). Because of their closeness to the tensor product, people sometimes call copowers “tensors” and write them with the same symbol as they write the tensor product.

Onto the main dish. The key idea is to reframe an ordinary colimit as something “looking like a monoidal product”. The weighted colimit then becomes something akin to the tensor product over a \(k\)-algebra \(R\). We like rings and modules, so let’s explore this further.

To recap, when looking at a right module \(A\) and a left module \(B\) over some \(k\)-algebra (ring) \(R\), we can define the tensor product of \(A\) and \(B\) over \(R\), in symbols \(A \otimes_R B\), as the coequaliser

\[ A \otimes_R B ≔ \mathrm{coeq} \left( A \otimes R \otimes B \rightrightarrows A \otimes B \right), \]

where the two parallel arrows are induced by the right and left actions \(\lhd \colon A \otimes R \longrightarrow A\) and \(\rhd \colon R \otimes B \longrightarrow B\), respectively.

For ease of notation, I will often write coequalisers like the above one as

\[ A \otimes R \otimes B \rightrightarrows A \otimes B \longrightarrow A \otimes_R B. \tag{1} \]

Categorifying this notion, the ring \(R\) can be seen as a one-object category enriched over \(\mathsf{Ab}\) with object \(1\). The multiplication is recovered as function composition in \(R(1, 1)\), and addition is given by the abelian structure. A right \(R\)-module \(A\) is then an enriched functor \(A \colon R^{\mathrm{op}} \longrightarrow \mathsf{Ab}\) and similarly a left \(R\)-module is an enriched functor \(B \colon R \longrightarrow \mathsf{Ab}\). Inserting the definition discussed above, we have that \(A\) consists of a single object \(A1\) and a single arrow \(A1 \otimes R(1, 1) \longrightarrow A1\). Likewise, we obtain \(B1\) and \(R(1,1) \otimes B1 \longrightarrow B1\) in \(\mathsf{Ab}\). Thus, we have induced maps

\[ A1 \otimes R(1,1) \otimes B1 \rightrightarrows A1 \otimes B1. \]

Let us forget about enrichment for a while and just study ordinary categories now. The second observation we need is the well-known fact that any colimit can be represented as a coequaliser. Suppose that \(\mathcal{D}\) is a cocomplete category. Given a functor \(F \colon \mathcal{J} \longrightarrow \mathcal{D}\) we can express its colimit as

\[ \coprod_{a, b \in \mathcal{J}} \coprod_{f \in \mathcal{J}(a, b)} F a \rightrightarrows \coprod_{b \in \mathcal{J}} F b \longrightarrow \mathrm{colim}_\mathcal{J} F. \]

Note that we can use what we learned about (\(\mathsf{Set}\)-valued) copowers above and write \(\coprod_{f \in \mathcal{J}(a, b)} F a\) as \(\mathcal{J}(a, b) \cdot F a\), or even \(\mathcal{J}(a, b) \times F a\), as \(\mathcal{J}(a,b)\) is a set in this case. Behold:

\[ \coprod_{a, b \in \mathcal{J}} \mathcal{J}(a,b) \times F a \rightrightarrows \coprod_{b \in \mathcal{J}} F b \longrightarrow \mathrm{colim}_\mathcal{J} F. \tag{2} \]

What’s left is to define the two parallel arrows.^{5}

One arrow is induced by the “projection” \(\pi_2 \colon \mathcal{J}(a, b) \times F a \longrightarrow F a\). Note that \(\mathcal{J}(a, b) \times F a\) is really a copower, so the existence of such an arrow is not immediately clear. Starting with the unique map \(! \colon \mathcal{J}(a, b) \longrightarrow \{\star\}\) to the terminal object, we apply the functor \({-} \times F a \colon \mathsf{Set} \longrightarrow \mathcal{D}\) to it, in order to obtain

\[ \pi_2 \!≔\; ! \times F a \colon \mathcal{J}(a,b) \times F a \longrightarrow \{\star\} \times F a \cong F a. \]

The other arrow is induced by a collection of actions of \(\mathcal{J}\) on \(F\), indexed by arrows \(f \colon a \longrightarrow b\) in \(\mathcal{J}\); i.e.,

\[\begin{align*} (\mathcal{J}(a,b) \times F a \longrightarrow F b) &= \left( \coprod_{f \in \mathcal{J}(a,b)} F a \longrightarrow F b \right) \\ &= \langle Ff \colon Fa \longrightarrow F b \rangle_{f \in \mathcal{J}(a,b)}. \end{align*}\]
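
For finite diagrams of finite sets, the coequaliser \((2)\) can be computed directly; here is a minimal Python sketch (all names are my own), with a union–find structure doing the quotienting:

```python
def colimit(objects, arrows):
    """Colimit of a finite diagram of finite sets via the coequaliser:
    quotient the disjoint union of all F(b) by x ~ (F f)(x) for every
    arrow f : a -> b of the diagram.

    objects: dict mapping an object name to the list of elements of F(name)
    arrows:  list of triples (a, b, f), f a dict encoding F f : F a -> F b
    Returns the set of equivalence classes, each a frozenset of pairs
    (object, element)."""
    # Union-find over the disjoint union of the F(b).
    parent = {(o, x): (o, x) for o, elts in objects.items() for x in elts}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    # One identification per arrow and element of its source.
    for a, b, f in arrows:
        for x in objects[a]:
            parent[find((a, x))] = find((b, f[x]))

    classes = {}
    for p in parent:
        classes.setdefault(find(p), set()).add(p)
    return {frozenset(c) for c in classes.values()}


# Coequaliser of f, g : {0, 1} -> {p, q, r} with f = (p, q), g = (q, r):
# the relations p ~ q ~ r collapse everything to a single point.
D_obj = {"A": [0, 1], "B": ["p", "q", "r"]}
D_arr = [("A", "B", {0: "p", 1: "q"}), ("A", "B", {0: "q", 1: "r"})]
print(len(colimit(D_obj, D_arr)))  # → 1
```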

So that’s the story with expressing colimits as coequalisers. We now need to completely reframe this in terms of actions. For the second arrow we are already done: \(F\) can be seen as a left \(\mathcal{J}\)-module.

Using the symmetry of the Cartesian product \(\times\) of sets, the arrow \(\mathcal{J}(a, b) \longrightarrow \{\star\}\) can be reinterpreted as the components of a right action of \(\mathcal{J}\) on the terminal functor \(\mathbb{T} \colon \mathcal{J} \longrightarrow \mathsf{Set}\) that sends every object to the one-element set \(\{\star\}\):

\[ (\mathbb{T}b \times \mathcal{J}(a,b) \longrightarrow \mathbb{T}a) = (\{\star \} \times \mathcal{J}(a,b) \longrightarrow \{\star\}) \cong (\mathcal{J}(a,b) \longrightarrow \{\star\}). \]

Putting these two observations together, we really have two induced arrows, each with type signature

\[ \mathbb{T} b \times \mathcal{J}(a, b) \times F a \longrightarrow \mathbb{T} a \times F a. \]

Inserting these into Equation \((2)\) yields

\[ \coprod_{a, b \in \mathcal{J}} \mathcal{J}(a,b) \times F a \cong \coprod_{a, b \in \mathcal{J}} \mathbb{T} b \times \mathcal{J}(a, b) \times F a \rightrightarrows \coprod_{a \in \mathcal{J}} \mathbb{T} a \times F a \cong \coprod_{a \in \mathcal{J}} F a. \]

This is exactly the way the tensor product of bimodules is defined in
Equation \((1)\), hence it is tempting to write the resulting coequaliser
as \(\mathbb{T} \otimes_{\mathcal{J}} F\). As such, a colimit of a
functor \(F\) over \(\mathcal{J}\) can be seen as a tensor product of
functors with the terminal functor. Now, the terminal functor is not
very interesting—what if we replace it with something more complicated?
Well, that’s exactly the point where weighted colimits come into play!
Using a *weight* \(W\) instead of \(\mathbb{T}\), we would end up with
something like

\[ \coprod_{a, b \in \mathcal{J}} W b \times \mathcal{J}(a, b) \times F a \rightrightarrows \coprod_{a \in \mathcal{J}} W a \times F a \longrightarrow W \otimes_{\mathcal{J}} F. \]
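
For \(\mathcal{V} = \mathsf{Set}\) and a finite diagram, this coequaliser can again be computed concretely; a minimal Python sketch under those finiteness assumptions (all names are my own):

```python
def weighted_colimit(F_obj, W_obj, arrows):
    """W-weighted colimit of a finite Set-valued diagram F, computed as
    the coequaliser above: quotient the disjoint union of the sets
    W(a) x F(a) by (W f(w), x) ~ (w, F f(x)) for every diagram arrow
    f : a -> b, every w in W(b), and every x in F(a).

    F_obj:  dict a -> list of elements of F a
    W_obj:  dict a -> list of elements of W a
    arrows: list of (a, b, Ff, Wf), where Ff : F a -> F b and
            Wf : W b -> W a (W is contravariant!) are given as dicts
    Returns the set of representatives of the equivalence classes."""
    parent = {(o, w, x): (o, w, x)
              for o in F_obj for w in W_obj[o] for x in F_obj[o]}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b, Ff, Wf in arrows:
        for w in W_obj[b]:
            for x in F_obj[a]:
                # identify "act on the weight" with "act on the diagram"
                parent[find((a, Wf[w], x))] = find((b, w, Ff[x]))

    return {find(p) for p in parent}


# With the terminal weight this recovers the ordinary colimit: the
# coequaliser of f, g : {0, 1} -> {p, q, r} below is a single point.
F_obj = {"A": [0, 1], "B": ["p", "q", "r"]}
W_obj = {"A": ["*"], "B": ["*"]}
arrows = [("A", "B", {0: "p", 1: "q"}, {"*": "*"}),
          ("A", "B", {0: "q", 1: "r"}, {"*": "*"})]
print(len(weighted_colimit(F_obj, W_obj, arrows)))  # → 1
```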

Because this looks like a tensor product—and it’s universal, due to it being a colimit—it should also support some form of currying: given an arrow \(W \otimes_{\mathcal{J}} F \longrightarrow c\), for an object \(c \in \mathcal{C}\), we should be able to obtain a map \(W \Longrightarrow \mathcal{C}(F, c)\). Now’s your chance to guess what exactly a weighted colimit will be!

Still in the non-enriched setting, let me now give you the formal
definition of a weighted colimit. Suppose \(\mathcal{J}\) to be a small
category. Let \(W \colon \mathcal{J}^{\mathrm{op}} \longrightarrow \mathsf{Set}\) be a presheaf—the *weight*—and suppose we have a functor
\(F \colon \mathcal{J} \longrightarrow \mathcal{A}\). The *\(W\)-weighted
colimit of \(F\)* comprises an object \(W \otimes_{\mathcal{J}} F \in \mathcal{A}\), equipped with a natural (in \(a \in \mathcal{A}\))
isomorphism

\[ \mathcal{A}(W \otimes_{\mathcal{J}} F, a) \cong [\mathcal{J}^{\mathrm{op}}, \mathsf{Set}] (W, \mathcal{A}(F, a)). \]

Note that, by the Yoneda lemma, the above isomorphism is uniquely determined by a natural transformation \(W \Longrightarrow \mathcal{A}(F, W \otimes_{\mathcal{J}} F)\), induced by the identity on \(W \otimes_{\mathcal{J}} F\). As promised, this is exactly the representation we arrived at above.
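
As a sanity check (my notation): taking the terminal weight \(W = \mathbb{T}\), a natural transformation \(\mathbb{T} \Longrightarrow \mathcal{A}(F, a)\) is precisely a cocone under \(F\) with tip \(a\), so
\[
\mathcal{A}(\mathbb{T} \otimes_{\mathcal{J}} F, a)
\;\cong\;
[\mathcal{J}^{\mathrm{op}}, \mathsf{Set}](\mathbb{T}, \mathcal{A}(F, a))
\;\cong\;
\mathrm{Cocone}(F, a),
\]
and hence \(\mathbb{T} \otimes_{\mathcal{J}} F \cong \mathrm{colim}_{\mathcal{J}} F\), as the notation already suggested.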

A pair of an object \(c \in \mathcal{A}\) and a natural transformation \(W \Longrightarrow \mathcal{A}(F, c)\) on its own—i.e., without the
universal property—is what one would normally call a *\(W\)-weighted
cocone*.

The enriched definition is now exactly the same! If \(\mathcal{J}\) is a
small \(\mathcal{V}\)-category and we have \(\mathcal{V}\)-functors \(F \colon \mathcal{J} \longrightarrow \mathcal{C}\) and \(W \colon \mathcal{J}^{\mathrm{op}} \longrightarrow \mathcal{V}\), then we can
define the *\(W\)-weighted colimit of \(F\)* as an object \(W \otimes_{\mathcal{J}} F \in \mathcal{C}\), equipped with a
\(\mathcal{V}\)-natural (in \(c \in \mathcal{C}\)) isomorphism

\[ \mathcal{C}(W \otimes_{\mathcal{J}} F, c) \cong [\mathcal{J}^{\mathrm{op}}, \mathcal{V}] (W {-}, \mathcal{C}(F {-}, c)). \]

This is the power of the formalism we developed: the definition extends in a straightforward way to the enriched setting. This may now be used to great effect; among other things, weighted colimits can be used to define the right notion of enriched coend.
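A quick sanity check: take \(\mathcal{J}\) to be the unit \(\mathcal{V}\)-category, so that \(F\) simply picks out an object \(c \in \mathcal{C}\) and \(W\) an object \(v \in \mathcal{V}\). The defining isomorphism then becomes

\[ \mathcal{C}(v \otimes_{\mathcal{J}} F, d) \cong \mathcal{V}(v, \mathcal{C}(c, d)), \]

natural in \(d \in \mathcal{C}\); that is, colimits weighted over the unit category are exactly *copowers* (also called tensors).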

It’s probably about time for some examples. For the first two, let us focus on cocones only; not worrying about the universal property at first makes things a little easier to understand (at least it did for me). I learned these from Richard Garner during BCQT 2022.

Let our diagram category have two objects and one non-trivial morphism; i.e., \(\mathcal{J} ≔ \{ \varphi \colon a \longrightarrow b \}\). Further, assume that the weight \(W\) picks out the unique arrow \(\{ 0, 1 \} \longrightarrow \{ 1 \}\) in \(\mathsf{Set}\).^{6} Suppose that the functor \(F \colon \mathcal{J} \longrightarrow \mathcal{C}\) sends \(a, b \in \mathcal{J}\) to \(x, y \in \mathcal{C}\) and \(\varphi\) to \(\theta \colon x \longrightarrow y\). Again by the Yoneda lemma, a cocone is given by a natural transformation \(W \Longrightarrow \mathcal{C}(F, c)\). In this restricted setting, the component \(Wb \longrightarrow \mathcal{C}(Fb, c)\) just picks out two morphisms \(f, g \colon y \longrightarrow c\). Thus, the whole thing amounts to the commutativity of the following diagram:
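Spelled out, this is the naturality square of the cocone \(\beta \colon W \Longrightarrow \mathcal{C}(F, c)\); here is a reconstruction in tikz-cd notation (my sketch, with \(\beta_a, \beta_b\) denoting the components of \(\beta\)):

```
\begin{tikzcd}
  Wb \arrow[r, "W\varphi"] \arrow[d, "\beta_b"'] & Wa \arrow[d, "\beta_a"] \\
  \mathcal{C}(y, c) \arrow[r, "{-} \circ \theta"'] & \mathcal{C}(x, c)
\end{tikzcd}
```

Since \(Wb = \{0, 1\}\) and \(Wa = \{1\}\), chasing both elements of \(Wb\) around the square yields the equation below.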

In more plain language, the following equation must hold:

\[ (x \xrightarrow{\;\;\theta\;\;} y \xrightarrow{\;\;g\;\;} c) = (x \xrightarrow{\;\;\theta\;\;} y \xrightarrow{\;\;f\;\;} c). \]

A slightly more complicated example is the following. Assume again that \(\mathcal{J} = \{ \varphi \colon a \longrightarrow b \}\) as above, only this time our choice of enrichment is not \(\mathsf{Set}\) but \(\mathsf{Cat}\). This means that the weight \(W\) is now a functor from \(\mathcal{J}^{\mathrm{op}}\) to \(\mathsf{Cat}\). Suppose it picks out the arrow

\[ \{ 0 \;\; 1 \} \hookrightarrow \{ 0 \cong 1 \}, \]

where the source and target are understood to be categories. In this setting, a weighted cocone becomes something 2-categorical. We still pick out arrows \(f\) and \(g\), but since the category \(\{ 0 \cong 1 \}\) contains a non-trivial isomorphism, the commutative diagram also becomes more complicated. Namely, we require the commutativity of
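The diagram now lives in a 2-category; reconstructed in tikz-cd notation (again my sketch, not the original figure), it consists of the two composites together with an invertible 2-cell \(\alpha\) filling the gap between them:

```
\begin{tikzcd}
  x \arrow[r, bend left=40, "f \circ \theta", ""{name=U, below}]
    \arrow[r, bend right=40, "g \circ \theta"', ""{name=D, above}]
  & c
  \arrow[Rightarrow, from=D, to=U, "\alpha"']
\end{tikzcd}
```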

Instead of requiring \(g \circ \theta\) to *equal* \(f \circ \theta\), we now only require the existence of an invertible 2-cell that mediates between the two.

A subcategory \(\mathcal{D}\) of \(\mathcal{E}\) is said to be *dense* in \(\mathcal{E}\) if we can, in some sense, approximate the objects of \(\mathcal{E}\) well enough with objects in \(\mathcal{D}\)^{7} (think of the density of \(\mathbb{Q}\) inside \(\mathbb{R}\)). Dense subcategories are nice because they often tell us a lot about their ambient categories and are sometimes easier to reason about. For example, the category of finite-dimensional (left-)comodules over any (possibly infinite-dimensional) Hopf algebra is dense inside the category of all comodules, which makes comodules much easier to work with than modules.

Formally, \(\mathcal{D}\) is dense in \(\mathcal{E}\) if the restricted Yoneda embedding along the inclusion functor \(\iota \colon \mathcal{D} \hookrightarrow \mathcal{E}\)

\[ \mathcal{E} \longrightarrow [\mathcal{E}^{\mathrm{op}}, \mathsf{Set}] \xrightarrow{\;[\iota, \mathsf{Set}]\;} [\mathcal{D}^{\mathrm{op}}, \mathsf{Set}] \]

is still fully faithful. Another way of saying this is that every object \(e \in \mathcal{E}\) is the \(\mathcal{E}(\iota, e)\)-weighted colimit of \(\iota\). Indeed, the isomorphism we have for a weighted colimit specialised to our situation looks like

\[ \mathcal{E}(e, a) \cong [\mathcal{D}^{\mathrm{op}}, \mathsf{Set}] (\mathcal{E}(\iota, e), \mathcal{E}(\iota, a)), \]

for all \(a \in \mathcal{E}\), which is exactly what it means for the above arrow to be fully faithful.

*Exercise*: Try to find a weight \(W\) such that a \(W\)-weighted cocone recovers the normal, unweighted notion.

*Exercise*: As you can imagine, the two examples above can be used to produce all kinds of relations between \(f\) and \(g\). As such, prove the following statements:

A variant of the first example: in the case of the weight being \(\{0, 1\} \xrightarrow{\;\;\mathrm{id}\;\;} \{0, 1\}\), we obtain a not-necessarily-commutative diagram.

A variant of the second example: in the case that the weight is \(\{ 0 \} \hookrightarrow \{ 0 \longrightarrow 1 \}\) (i.e., we only have an arrow between 0 and 1 and not necessarily an isomorphism), we get an ordinary (non-invertible) 2-cell as the weighted cocone.

And that’s it! I’ve found this intuition very helpful in trying to wrap my head around these concepts—hopefully other people will too. As a parting gift, I leave you with some more things to think about.

First, one of the most important examples of weighted colimits—and coends, of course—is the tensor product of functors. If you ever wanted to be a ninja, now is the time! It’s a fun operation to think about and play around with, and I would invite you to do just that.

Lastly, the category of weights \([\mathcal{J}^{\mathrm{op}}, \mathcal{V}]\) is actually very special: it is the free cocompletion of \(\mathcal{J}\). Every functor \(G \colon \mathcal{J} \longrightarrow \mathcal{A}\) extends uniquely (up to unique isomorphism) to a cocontinuous functor from \([\mathcal{J}^{\mathrm{op}}, \mathcal{V}]\) to \(\mathcal{A}\) via the assignment \(W \mapsto W \otimes_{\mathcal{J}} G\); note the tensor product of functors!
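One reason this works is the co-Yoneda lemma: colimits weighted by representables are trivial, in the sense that

\[ \mathcal{J}({-}, j) \otimes_{\mathcal{J}} G \cong G j, \]

so the extension \(W \mapsto W \otimes_{\mathcal{J}} G\) really does restrict back to \(G\) along the Yoneda embedding.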

Monoidal Category Theory:

Saunders Mac Lane: “Natural associativity and commutativity”. Rice Univ. Stud. 49.4 (1963), pp. 28–46. ISSN: 0035-4996.

Pavel Etingof, Shlomo Gelaki, Dmitri Nikshych, and Victor Ostrik: “Tensor categories”. Mathematical Surveys and Monographs, Vol. 205. American Mathematical Society, Providence, RI, 2015, pp. xvi+343.

Enriched Category Theory:

Max Kelly: “Basic concepts of enriched category theory”. London Math. Soc. Lecture Note Series 64, Cambridge University Press, 1982, 245 pp. ISBN: 9780521287029.

Republished as: Reprints in Theory and Applications of Categories, No. 10 (2005), pp. 1–136 (link).

Copowers:

Weighted Colimits:

Richard Garner: Bicategories; lecture series at BCQT 2022, Leeds.

Emily Riehl: “Weighted Limits and Colimits”; lecture notes.

Posted on 2022-05-01 · last modified: 2022-05-23 · 10 min read · emacs, maths, xmonad

After reading Gilles Castel’s excellent blog post about his research workflow, I decided that it was as good a time as any to write about mine—deeming it novel enough to hopefully contribute something to the discussion.

Just like Castel, I’m a new PhD student in mathematics, which means no lab work and—in my case—no code. Just you and your inability to understand basic concepts. As such, I often scribble things down on paper or a blackboard first and, when sufficiently convinced that the information is worth keeping around, type it up. Typesetting something is a surprisingly effective way to catch errors in handwritten manuscripts!

As basically my entire digital life happens in either Emacs or
XMonad, my setup is heavily skewed in that direction; I will make use
of these tools almost every step of the way.
As such, there are a lot of tangential, almost-relevant bits that I could cover here. However, since these aren’t directly related to my *research* workflow, and there are plenty of great resources out there already, I decided not to cover them here.^{1}

XMonad has a module called TopicSpace, which upgrades the X11 workspace—virtual desktop—concept to so-called topics. These are workspaces with a “theme” associated to them; for example, I have a topic for every project that I’m currently working on. This results in a clean separation of concerns. Plus, I always know where my windows are!

Every topic has associated to it a directory and a “startup hook”, which fires when the topic is switched to while empty. While most convenient for programming-related tasks (e.g., spawning `ghcid` in the relevant directory, or automatically building and opening this website), it’s also quite convenient for mathematical projects.

I have set up special keybindings to bring up an Emacs session in the topic directory, or spawn a terminal there. Switching to topics is done fuzzily via the XMonad prompt, which means I only have to type a few characters to get to my destination. This makes it feasible to have 30 topics, instead of the usual 9 or so, in the first place. As a result, it’s rather fast to go from thinking about a certain problem to working on it. When I’m already inside a project, I leverage Emacs’s built-in `project.el` library to search through files and the like.

Here I keep things relatively simple; I have a big “library” directory in which essentially all books or papers that I’ve ever read reside. This may sound a bit chaotic, but since I never interact with this as-a-directory it is actually the easiest and cleanest solution for me.

To keep a bit of order, all files are named in a consistent and descriptive way: `authors_title.pdf`, where `authors` is a list of the last names of all authors, separated by hyphens, and `title` is the title of the work, also separated by hyphens. For example:

`pastro-street_double-of-a-monoidal-category.pdf`
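As a toy illustration—this is not part of my actual setup, and the helper name is made up—the scheme is simple enough to sketch in a few lines of Python:

```python
def library_filename(authors, title, ext="pdf"):
    """Build an `authors_title.ext` name: hyphen-separated last names,
    an underscore, then the hyphen-separated (lower-cased) title."""
    def slug(s):
        # lower-case and join the words with hyphens
        return "-".join(s.lower().split())
    last_names = "-".join(slug(a.split()[-1]) for a in authors)
    return f"{last_names}_{slug(title)}.{ext}"

print(library_filename(["Craig Pastro", "Ross Street"],
                       "Double of a monoidal category"))
# → pastro-street_double-of-a-monoidal-category.pdf
```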

Also in this directory are `.xopp` files, for when I scribble on the relevant PDFs in xournalpp; more on that later.

Instead of navigating to it, all interaction with the library is done via hmenu, a small wrapper around dmenu to facilitate this kind of behaviour. I merely have to press `M-y`^{2} and can then fuzzy search through the directory. Once I’ve made a choice, PDFs are automatically opened in zathura and `.xopp` files in xournalpp.

My bibliography is organised in a similar spirit; see Citations.

For handwritten notes I… use real paper! A little elaboration is probably in order, having talked about `.xopp` files and xournalpp above. I do have a Wacom tablet lying around and I’m quite happy annotating PDFs with it. In lieu of printing everything out, this alleviates a little bit of the usual pain of reading papers, like coming back to one three weeks later and getting stuck on the same calculation as last time. I do love those annotations!

However, there is just something deeply psychologically pleasing about ordinary pen and paper—nothing beats drawing up the first version of many ideas there. It’s a very “pure” experience: there’s no noise or distractions, nothing that could break, no additional layer of abstraction between you and the maths. Chalkboards—but not whiteboards, with their ever empty markers—fall into this category as well, especially when collaborating with others.

Not without my quirks (as I’m sure you’ve noticed), I’m a bit picky
about the particular writing setup. It’s either completely white A5^{3}
paper, paired with a good (mechanical) pencil/a fine pen, or thick
dotted paper, paired with a fountain pen.

Enjoying the experience, I tend to write quite a lot of manuscripts by hand first. Of course, anything that’s supposed to be permanent should be typed up properly!

Not wanting to go insane, I use LaTeX for all of my digital note taking. My writing setup for `.tex` files is pretty similar to Karthik Chikmagalur’s—whose excellent post you should definitely check out—so I will not belabour the point too much here. The tl;dr is AUCTeX, CDLaTeX, and aas.

In case you’re not used to `prettify-symbols-mode`: the inserted LaTeX code was

```
\begin{definition} \label{def:day-convolution}
The \emph{Day convolution} of two functors $F$ and $G$ is
\[
F * G \defeq
\int^{C,D \in \cc} \cc(C \otimes D, \blank) \otimes FC \otimes GD.
\]
\end{definition}
```

I do use some smaller packages not mentioned in Chikmagalur’s article, like math-delimiters and latex-change-env. The former is for quickly changing between inline and display math, complete with slurping punctuation symbols into display math and barfing them out of inline math. For example, “`$1 + 1$.`” becomes “`\[1 + 1.\]`” (with line breaks) and back.
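Ignoring the line breaks that the real package inserts, the slurping and barfing behaviour can be sketched in Python—a rough approximation for illustration only, not the actual elisp:

```python
import re

def inline_to_display(text):
    """Turn $...$ followed by punctuation into \\[...\\], slurping the
    trailing punctuation into the display environment."""
    return re.sub(r"\$([^$]+)\$\s*([.,;:!?]?)",
                  lambda m: r"\[" + m.group(1).strip() + m.group(2) + r"\]",
                  text)

def display_to_inline(text):
    """The other direction: barf trailing punctuation back out of \\[...\\]."""
    return re.sub(r"\\\[\s*(.*?)\s*([.,;:!?]?)\s*\\\]",
                  lambda m: "$" + m.group(1).strip() + "$" + m.group(2),
                  text)

print(inline_to_display("$1 + 1$."))     # \[1 + 1.\]
print(display_to_inline(r"\[1 + 1.\]"))  # $1 + 1$.
```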

The `latex-change-env` package is for changing between different kinds of environments, including display math, while offering to rename labels across the project if necessary. When deleting a label from an environment, it also remembers this for the session!^{4}

One neat feature of AUCTeX that I find myself using more and more often lately is the in-buffer preview.^{5} Usually when writing a draft I’m not that interested in how exactly something looks in the PDF—that part comes later, believe me. In cases like these, just calling `preview-buffer` is quite convenient and lets me use the screen real estate that a PDF viewer would have taken up for something else.

I always use pure LaTeX for writing papers, drafts, or presentations.
However, I also take lots of notes in org-mode, which, as a crude
first approximation, is something like a markup language that’s *very*
well integrated into Emacs.

For the actual note-taking, I use the venerable org-roam—a free software alternative to the proprietary Roam Research program—to jot down things that I’d like to remember for more than three days. Org-roam describes itself as a “plain-text personal knowledge management system”, which fits the bill pretty well. In short, it’s a note taking system in the spirit of the Zettelkasten method, which is essentially about having lots of notes with lots of backlinks to related concepts:

In fact, using org-roam-ui, one can even visualise the entire Zettelkasten as an interactive and pretty graph in which notes become nodes and backlinks become edges!

Org-roam suggests keybindings for all of the most important concepts: creating notes, inserting them, showing all of the backlinks of a file, etc. An important extra that I’ve added is having two “types” of notes: `reference`s, where things that I learned but are otherwise known reside, and `novel`s, where I put my own ideas.

As I’m predisposed to quite easily forget details, I regularly engage with my Zettelkasten, so as to keep things fresh in my mind. This means reading through all of the notes that are relevant to what I’m currently working on, creating new backlinks, filling in gaps, even deleting old information and re-organising some local region of the graph. Indeed, I tag every new entry as a `draft` until further notice, forcing me to go back there especially. This results in pretty good recollection of the most important facts, even with my brain.

I use elfeed to query the arXiv for new preprints that are of interest to me. Thankfully, the fields I’m subscribed to tend to be moving slow-ish and so I can manage to at least read the abstract of every paper that pops up in my feed. There is also a little bit of elisp involved to print arXiv entries in a more readable way than the default formatting.

When the abstract interests me, I usually directly download the paper into my library and open it with zathura. This is fully automated via arxiv-citation—more on that later. I merely have to press `C-c d` while looking at a paper and magic happens!

In the above gif, on the right-hand side you can see a score associated to each entry. While reading every abstract has worked quite well for me thus far, it’s nice to get the papers that are “probably interesting” high up, so that I’m more likely to notice them sooner rather than later. I use elfeed-score for this, which integrates seamlessly into the rest of the machinery. It compares certain features of the entry (like the title and abstract) with a list of regular expressions, increasing the total score of the entry every time it matches something.
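In spirit—the rule format below is invented for illustration; elfeed-score’s actual rules are richer—the scoring boils down to something like this:

```python
import re

# Hypothetical rules: (regular expression, points) pairs to be matched
# against an entry's title and abstract.
RULES = [
    (r"operad", 200),
    (r"monoidal|enriched", 100),
    (r"\bHopf\b", 50),
]

def score(entry):
    """Total up the points of every rule that matches the entry's text."""
    text = entry["title"] + " " + entry["abstract"]
    return sum(pts for pat, pts in RULES
               if re.search(pat, text, flags=re.IGNORECASE))

entry = {"title": "Enriched operads",
         "abstract": "We study certain monoidal structures."}
print(score(entry))  # → 300
```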

Speaking of the arXiv, in XMonad I have bound `M-s a` to look up the given string there. Likewise, zbmath is searched with `M-s z`. When these commands get a “universal argument”—an Emacs concept that XMonad borrowed—they automatically start a search with the current selection instead. Briefly, pressing `M-u` before a command can modify it in different ways. All of my search commands act on the primary selection when given such an argument; `M-u M-s <letter>` will look up the currently selected text on the relevant “search engine”. One instance where this is useful is quickly switching between the arXiv and zbmath:

For citation management, I use a very simple system—no Zotero, JabRef, or similar technology. Concretely, this means that I have a blessed bibliography file somewhere within my home directory and I either symlink (when I’m writing something alone) or copy (when working with at least one coauthor) the file into the relevant project directory. In case of a copy operation, I only have to update a single variable in Emacs (`arxiv-citation-bibtex-files`), which is good enough for me and doesn’t seem to warrant a slightly more automated, yet probably much more complicated, solution.

Adding new citations is done via the now aptly named Emacs package arxiv-citation^{6} with a bit of plumbing on the XMonad side to get Emacs going. The basic idea is that—given an arXiv or zbmath link—we first look up the paper on zbmath to see if it was published and, if not, just use the arXiv data to construct our own bibliography entry instead. By default, my keybinding for this acts on the primary selection, so I merely have to highlight the link, press `M-o a`, sit back, and enjoy the show. The following gif should help drive home the point, also showcasing the format of a not-yet-published paper and a published one.
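The lookup order is easy to state as code; here is a Python sketch with stubbed-out network calls—the function names are invented for illustration and are not arxiv-citation’s actual API:

```python
def bibliography_entry(link, zbmath_lookup, arxiv_lookup):
    """Prefer the published (zbmath) entry; if the paper is not
    published yet, build an entry from the arXiv metadata instead."""
    entry = zbmath_lookup(link)  # None when zbmath knows nothing
    if entry is not None:
        return entry
    return arxiv_lookup(link)

# Toy stand-ins for the real network calls.
published = lambda link: "@article{pastro-street}"
unpublished = lambda link: None
from_arxiv = lambda link: "@misc{pastro-street}"

print(bibliography_entry("some-arxiv-link", unpublished, from_arxiv))
# → @misc{pastro-street}
```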

And that’s it! If nothing else, this post helped me to nail down some ideas that I had lying around and got me to finally clean up and publish many of the extensions talked about here—that’s already a win in my book.

I’m sure that some details will change over the course of the next three years as I mature mathematically and my needs change, but overall I feel pretty comfortable with this setup.

Thanks to everyone who reached out! I received some inquiries as to my configurations, so here are the most important bits again, for your convenience: my Emacs config, my XMonad config, org-roam, math-delimiters, arxiv-citation, latex-change-env, hmenu.