
Showing posts with label Category-Theory.

Tuesday, February 14, 2012

Feynman Diagrams and Motives

I have been re-reading the following book:

Alain Connes and Matilde Marcolli,
Noncommutative Geometry, Quantum Fields, and Motives,
Colloquium Publications, Vol.55, American Mathematical Society, 2008.

It turns out that Dr Marcolli taught a course on related material back in 2008! It mostly deals with the first chapter of the book.

Hopf Algebras and Feynman Calculations

There is a nice review of Hopf algebras used in Feynman diagram calculations:

Kurusch Ebrahimi-Fard, Dirk Kreimer,
"Hopf algebra approach to Feynman diagram calculations".
Eprint arXiv:hep-th/0510202v2, 30 pages.

For another review, specifically of the noncommutative approach discussed in Connes and Marcolli's book, see:

Herintsitohaina Ratsimbarison,
"Feynman diagrams, Hopf algebras and renormalization."
Eprint arXiv:math-ph/0512012v2, 12 pages.

What is a "Hopf algebra", anyways?

Pierre Cartier,
"A primer of Hopf algebras."
Eprint [math.osu.edu], 81 pages.

Hopf Algebras

What the deuce is a "Hopf algebra"? That's a very good question, and I'm very glad you asked it. Wikipedia has its definition, which may or may not be enlightening.

Let's consider a concrete example. Consider a finite group G and the field of complex numbers ℂ. We assert that the collection Hom(G,ℂ) of functions from G to ℂ is a Hopf algebra.

Recall we have multiplication of group elements. This is a mapping G×G→G.

Now, observe functoriality gives us a mapping: applying the contravariant functor Hom(−,ℂ) to the multiplication map G×G→G yields a map ℂ^G→ℂ^(G×G) ≅ ℂ^G⊗ℂ^G, where ℂ^G denotes Hom(G,ℂ). Let's call this map Δ.

Great, but what does it do? Good question!

Take some f∈Hom(G,ℂ) then what is Δ(f)?

It is a function of two variables, [Δ(f)](x,y). Functoriality demands that, if we fix one of the arguments to be the identity element e∈G of the group, then [Δ(f)](e,y)=f(y) and [Δ(f)](x,e)=f(x).

It follows logically that [Δ(f)](x,y)=f(xy).

We also need to consider the antipode map S:ℂ^G→ℂ^G. The value [S(f)](x) is determined by the Hopf property; long story short, [S(f)](x)=f(x⁻¹).

Note that the antipode map is a generalization of the "group inverse" notion.
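We can play with this concretely. Here's a quick computational sanity check of the coproduct, counit, and antipode on Hom(G,ℂ); the choice of group (ℤ/3), the sample function f, and all the names are my own, nothing canonical:

```python
# Toy check of the Hopf-algebra structure on Hom(G, C) for a small finite
# group.  Assumptions (mine): G = Z/3 under addition mod 3, and an
# arbitrary sample function f.

G = [0, 1, 2]                      # elements of Z/3
mul = lambda x, y: (x + y) % 3     # group multiplication
inv = lambda x: (-x) % 3           # group inverse
e = 0                              # identity element

f = {0: 2.0, 1: -1.5, 2: 7.0}      # some f in Hom(G, C)

# Coproduct: [Delta(f)](x, y) = f(xy)
Delta = lambda f: {(x, y): f[mul(x, y)] for x in G for y in G}
# Counit: eps(f) = f(e); Antipode: [S(f)](x) = f(x^{-1})
eps = lambda f: f[e]
S = lambda f: {x: f[inv(x)] for x in G}

Df = Delta(f)
# Fixing one argument to be the identity recovers f:
assert all(Df[(e, y)] == f[y] for y in G)
assert all(Df[(x, e)] == f[x] for x in G)
# Antipode axiom m∘(S⊗id)∘Delta = unit∘counit, checked pointwise:
# [S(f_(1))·f_(2)](x) = f(x^{-1} x) = f(e) = eps(f) for every x.
assert all(Df[(inv(x), x)] == eps(f) for x in G)
```

The last assertion is the Hopf property evaluated pointwise; it collapses to a triviality here precisely because Δ and S come from the group law.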

The rest of the algebraic structure is a triviality, so let's consider other interesting applications!

Feynman Diagrams

Now, I have written some notes [pdf] on the basic algorithm evaluating Feynman diagrams and producing a number (the "probability amplitude").

As I understand it (and I don't!!) Ebrahimi-Fard and Kreimer suggest considering the Hopf algebra of "Feynman graphs" (which are just considered as colored graphs representing physical processes).

The basic algorithm for evaluating Feynman diagrams is based on the "Feynman rules" (what we assign to each edge, vertex, etc.). So Feynman rules are linear and multiplicative maps, associating to each Feynman graph (again, seen as a collection of vertices and edges) its corresponding Feynman integral.

So these maps are the important things, which enable us to algorithmically do stuff.
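To make "multiplicative" a bit more tangible, here's a toy sketch. All the factors and names are made up by me, and a real Feynman rule produces an integral rather than a single number; the only honest point illustrated is that the value of a disjoint union of graphs is the product of the values of the pieces:

```python
# Hypothetical sketch of a "multiplicative" map on graphs: each graph is
# reduced to its vertex and edge counts, each edge contributes a made-up
# propagator factor, each vertex a made-up coupling factor.

from collections import namedtuple

Graph = namedtuple("Graph", ["vertices", "edges"])

PROPAGATOR = 0.5   # invented factor per internal edge
COUPLING = 0.1     # invented factor per vertex

def feynman_rule(g):
    """Assign a number to a graph: one factor per edge, one per vertex."""
    return (PROPAGATOR ** g.edges) * (COUPLING ** g.vertices)

def disjoint_union(g1, g2):
    """The disjoint union just adds vertex and edge counts."""
    return Graph(g1.vertices + g2.vertices, g1.edges + g2.edges)

g1, g2 = Graph(2, 1), Graph(4, 3)
# Multiplicativity: phi(g1 ⊔ g2) = phi(g1) * phi(g2)
lhs = feynman_rule(disjoint_union(g1, g2))
rhs = feynman_rule(g1) * feynman_rule(g2)
assert abs(lhs - rhs) < 1e-12
```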

Let's stop! I said "Feynman integrals" are assigned to each graph...am I drunk, or is that correct?

Yes yes, the answer is "yes" ;)

What a horrible joke...but what I mean is: the scattering amplitude for electrons, for example, is an infinite sum taking into account all the virtual processes.

Usually we only care up to a few orders.

Of course, this is my understanding of the Hopf algebra treatment of Feynman diagrams...and I openly admit: I could be completely wrong!

So to figure it out, I'll stop rambling, and continue reading.

Sunday, August 2, 2009

A First Taste of Categories

Recall we had previously introduced a scaffolding for abstract mathematical thinking in an "object oriented" way. We muddled through, barely introducing the notions of categories and groupoids; we seek to remedy this situation now by properly introducing the notion of categories.

We are inspired by several sources, which we'll cite now:

Definition 1 A "category" consists of:
  • a collection $Ob(C)$ of mathematical objects;
  • for any pair of objects $x,y$ we have a corresponding set $\hom(x,y)$ of morphisms from $x$ to $y$;
equipped with
  • for any object $x$ an identity morphism $1_{x}:x\to{x}$;
  • for any pair of morphisms $f:x\to{y}$ and $g:y\to{z}$ we have a morphism $g\circ{f}:x\to{z}$ called the composite of $f$ and $g$ (it's also denoted as $fg$ so it looks like multiplication);
such that
  • for any morphism $f:x\to{y}$ the left and right unit laws hold $1_{x}f=f=f1_{y}$
  • for any triple of morphisms $f:w\to{x}$, $g:x\to{y}$, $h:y\to{z}$, the associative law holds: $(fg)h=f(gh)$.

In other words, it's a cute collection of objects and morphisms. Note that when we read composition of morphisms $g\circ{f}$ it's read from right to left, i.e. "first do $f$, then do $g$".
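To make Definition 1 concrete, here's a minimal machine check of the unit and associativity laws on a tiny example of my own: three objects with identities, $f:a\to{b}$, $g:b\to{c}$, and their composite:

```python
# A minimal finite category, checked by machine.  The objects, morphism
# names, and composition table below are my own toy choices.

objects = ["a", "b", "c"]
# each morphism: name -> (source, target)
morphisms = {
    "1a": ("a", "a"), "1b": ("b", "b"), "1c": ("c", "c"),
    "f": ("a", "b"), "g": ("b", "c"), "gf": ("a", "c"),
}
identity = {"a": "1a", "b": "1b", "c": "1c"}

def compose(g_, f_):
    """Composite g_ after f_ ("first do f_, then do g_"); None if not composable."""
    if morphisms[f_][1] != morphisms[g_][0]:
        return None
    if f_ == identity[morphisms[f_][0]]:
        return g_
    if g_ == identity[morphisms[g_][1]]:
        return f_
    return {("g", "f"): "gf"}[(g_, f_)]   # the one non-trivial composite

# unit laws: 1_y ∘ m = m = m ∘ 1_x for every m: x -> y
for m, (x, y) in morphisms.items():
    assert compose(identity[y], m) == m == compose(m, identity[x])

# associativity on the only non-trivial composable pair:
assert compose("g", "f") == "gf"
assert compose("1c", compose("g", "f")) == compose(compose("1c", "g"), "f")
```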

In all honesty, I kind of thought of something along the lines of a category when I was studying graph theory. Consider a directed network - in other words, a diagram with dots and arrows connecting dots in a certain way. If we identify each dot with a mathematical object, and each arrow with a morphism, that's precisely a diagram (it's a functor embedding a network into a category to be precise, we'll revisit this idea again later).

We specifically require the objects in a category to be the same type of object. That is to say, we cannot "mix and match" objects as we please. All the objects in the category are of the same type: they're all groups, or all monoids, or ... etc.

Grocery List of Examples

We can best exemplify the notion of categories in a grocery list of examples, as usual. We'll begin with very basic examples that are not too exciting, but useful later on.

0 is the "empty category" with no objects and no morphisms.

1 is the "singleton category" with one object $\{*\}$ and one identity morphism. This can be intuitively thought of as an object as a category.

2 is the category with two objects $\{a,b\}$ and one non-identity morphism $a\to{b}$. This can be intuitively thought of as a morphism as a category.

3 is the category with three objects $\{a,b,c\}$ and three non-identity morphisms $a\to{b}$, $b\to{c}$, $a\to{c}$ arranged in a neat triangle (the third is forced on us: it's the composite of the first two).

$\downdownarrows$ is the category with two objects $\{a,b\}$ and two arrows $a\rightrightarrows b$.

We have more elaborate categories of interest which use more complicated mathematical objects. We can also make certain mathematical objects into a category, which we'll investigate presently.

Discrete categories. A category is called "discrete" when every morphism is an identity. This is basically just a set $X$ as a category (we just equip it with, for each $x\in X$, a corresponding morphism $1_{x}:x\to{x}$), and every discrete category is completely determined by its set of objects. So it really is just a set.

Monoid. We have a monoid considered as a category $C$ with a single object. In other words, it is completely determined by its morphisms. We can "equip" it with the composition of morphisms. By the property of left and right unit laws, and the condition of a single object, we see that there is an identity morphism corresponding to the notion of an identity element. By the associativity property, the composition of morphisms is an associative law of composition. This directly relates to the notion of a monoid as we specified in our previous entry on object oriented math.
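A quick sketch of this, with the monoid of strings under concatenation (my own toy encoding, nothing standard about it): the single object is "*", the morphisms are the strings, and composition is concatenation.

```python
# A monoid as a one-object category: morphisms = strings, composition =
# concatenation, identity morphism = the empty string.

obj = "*"
identity = ""                          # empty string = identity morphism

def compose(g, f):
    """Composite of morphisms = product in the monoid ("first f, then g")."""
    return f + g

# left and right unit laws
for f in ["ab", "c", ""]:
    assert compose(identity, f) == f == compose(f, identity)

# associativity of composition = associativity of concatenation
f, g, h = "a", "bc", "d"
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
```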

Group. We have a group as a category when we have a monoid category with the extra property that each morphism is invertible.

Topology. We have a certain topology $\mathcal{T}$ over a set $X$, which consists of subsets of $X$ with certain nice properties (namely, any intersection of finitely many elements of $\mathcal{T}$ is in $\mathcal{T}$, and any union of arbitrarily many elements of $\mathcal{T}$ is in $\mathcal{T}$). If we take the objects to be the elements of $\mathcal{T}$ and the morphisms to be inclusion mappings (or, if we "reverse the direction of the arrows", restriction mappings), then we obtain a category.

Addendum (December 15, 2011 at 9:13 PM): I've written some notes on topology which really explains the peculiarity of the morphisms, see A Rapid Introduction to Topology.

Preorders. Consider a category $P$ in which, given objects $p,p^{\prime}$, there is at most one morphism $p\to{p^{\prime}}$; such a category is called a "preorder". In any preorder, we can define a binary relation $\leq$ on the objects of $P$: we have $p\leq p^{\prime}$ if and only if there is a morphism $p\to{p^{\prime}}$. This binary relation is:

  • reflexive (there's always the identity morphism $p\to{p}$ in $P$);
  • transitive (because arrows can be composed).
So a preorder is a set of mathematical objects equipped with a reflexive and transitive binary relation.
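As a concrete example (my own choice), divisibility on a finite set of integers is such a preorder: there's an "arrow" $p\to p^{\prime}$ exactly when $p$ divides $p^{\prime}$, and reflexivity and transitivity are easy to check by machine:

```python
# The divisibility preorder on the divisors of 12.

P = [1, 2, 3, 4, 6, 12]
leq = lambda p, q: q % p == 0   # an arrow p -> q exists iff p divides q

# reflexive: an identity arrow at every object
assert all(leq(p, p) for p in P)
# transitive: arrows compose (p|q and q|r imply p|r)
assert all((not (leq(p, q) and leq(q, r))) or leq(p, r)
           for p in P for q in P for r in P)
```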

Ordinal Numbers. We can think of each ordinal number $n$ as the linearly ordered set of all preceding ordinals $n=\{0,1,\ldots,n-1\}$ so in particular $0=\emptyset$ is the empty set, while $\omega=\{0,1,2,3,\ldots\}$ is the first infinite ordinal. Each $n$ is linearly ordered and hence is a preorder (a category).

$\Delta$ is the category with objects all finite ordinals and morphisms $f:m\to{n}$ all order-preserving functions ($i\leq{j}$ in $m$ implies $f(i)\leq f(j)$ in $n$). This category is sometimes called the "simplicial category".
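As a small illustration (my own, with made-up sizes): weakly order-preserving maps from an $m$-element chain to an $n$-element chain correspond to multisets of size $m$ drawn from $n$ values, so there are $\binom{n+m-1}{m}$ of them, and they are closed under composition:

```python
# Enumerating and counting the morphisms m -> n of the simplicial
# category, and checking closure under composition.

from itertools import combinations_with_replacement
from math import comb

def order_preserving(m, n):
    """All weakly order-preserving maps {0..m-1} -> {0..n-1}, as tuples
    (f[0], ..., f[m-1]) with f[0] <= f[1] <= ... <= f[m-1]."""
    return list(combinations_with_replacement(range(n), m))

m, n = 2, 3
maps = order_preserving(m, n)
assert len(maps) == comb(n + m - 1, m)   # the multiset count C(4, 2) = 6

# closure under composition: g∘f is again order-preserving
for f in maps:
    for g in order_preserving(n, 4):
        h = tuple(g[f[i]] for i in range(m))
        assert all(h[i] <= h[j] for i in range(m) for j in range(i, m))
```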

Large Categories. When the collection of objects in a category is actually a set, we call the category "small". Otherwise it's "large". We have a list of our favorite large categories:

Set
objects: all small sets
morphisms: functions between them
remarks: in topology, $\mathbb{R}$ had all the nice properties; similarly, we'll find that Set has all the nice properties in category theory (in general).
Set*
objects: Pointed sets, i.e. all small sets with a base point
morphisms: base-point-preserving functions
Ens
objects: all sets
morphisms: functions within a variable set $V$
Cat
objects: small categories
morphisms: all functors between them (covered in next blog post)
Mon
objects: all small monoids
morphisms: all monoid homomorphisms between them
Grp
objects: all small groups
morphisms: all group homomorphisms between them
Ab
objects: all small (additive) abelian groups
morphisms: all group homomorphisms between them
Rng
objects: all small rings (with units)
morphisms: ring homomorphisms (preserving unit) between them
CRng
objects: all small commutative rings (with units)
morphisms: ring homomorphisms between them
R-Mod
objects: all small left modules over the ring R
morphisms: corresponding linear maps
Mod-R
objects: small right R-modules
morphisms: corresponding linear maps
K-Mod
objects: small modules over the commutative ring K
morphisms: linear maps between them
Top
objects: small topological spaces
morphisms: continuous maps between them
Toph
objects: small topological spaces
morphisms: homotopy classes of maps
Top*
objects: small topological spaces with selected base point
morphisms: base-point preserving continuous maps between them
VectK
objects: vector spaces over a field K
morphisms: linear transformations between the vector spaces

Object Oriented Math: Some Category Theory

All the time in math we have definitions, and we can often detect a recurring pattern in them. In fact, Dr Baez points this out in a number of places:

  • John Baez "Quantum Quandaries: A Category Theoretic Perspective", arXiv:quant-ph/0404040
  • John Baez and Derek Wise "Quantization and Categorification: Properties, Structures, 'n' Stuff!" Lecture Notes, 2004.
  • John Baez and Michael Shulman "Lectures on n-Categories and Cohomology", arXiv:math/0608420
  • John Baez and J Dolan "From Finite Sets to Feynman Diagrams", arXiv:math/0004133

(Addendum: I have been informed that James Dolan mentioned this earlier in sci.physics.research. For further reading see the following list:

Thank you.)

It is important to review the notion of "objects" for future use. My ambition right now is to use this blog as a sort of "stream of consciousness notebook" for my mathematical physics studies. Definitions come up a lot, and category theory simplifies things many times. Let's define the notion of a mathematical object:

Definition 1 A "Mathematical Object" is defined by:
  1. specifying some underlying "stuff" (e.g. a set, several sets, etc.)
  2. equipping this "stuff" with some "structure" (e.g. functions, binary operators, collections of subsets, distinguished elements, etc.)
  3. such that certain "properties" hold (e.g. equations or inequalities, etc.).

Remark As pointed out by Baez and Wise, a property is a boolean type situation: it's either true or false, it can't be both. The equation $x+y=z$ either holds or it fails to hold, there's nothing in between.

Remark We can mix up the choice of structure with properties, and vice versa. It doesn't change much in the definition ("up to natural isomorphism").

We assert that every definition in mathematics defines one of the following: a mathematical object, "stuff", "structure", or a "property". If so, we can always present a definition in the same format as we've done for the mathematical object. We will, in this blog, always try to do so when possible.

A Grocery List of Examples

Now, in math, we need to make our notion of abstract definitions more intuitive and concrete by introducing examples. This basically involves browsing through the definitions in any old abstract math book, and reformulating them as definitions in an algorithmic way. We'll produce the results of our algorithmic rewriting of random definitions.

Definition 2 A "topology" $\mathcal{T}$ on some set $X$ consists of

  • a collection of subsets of $X$
such that
  • finite intersections of elements of $\mathcal{T}$ are also contained in $\mathcal{T}$;
  • arbitrary unions of elements of $\mathcal{T}$ are also contained in $\mathcal{T}$;
  • both $X\in\mathcal{T}$ and $\emptyset\in\mathcal{T}$.
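Here's a quick machine check of these axioms on a tiny finite example of my own:

```python
# A candidate topology on the three-point set X = {1, 2, 3}, with its
# axioms checked exhaustively.  (For a finite collection, closure under
# arbitrary unions reduces to closure under pairwise unions.)

X = frozenset({1, 2, 3})
T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

# both X and the empty set belong to T
assert frozenset() in T and X in T
# closed under pairwise (hence finite) intersections and unions
for A in T:
    for B in T:
        assert A & B in T
        assert A | B in T
```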

Definition 3 A "Law of Composition" $f$ on $S$ is a mapping $f:S\times{S}\to{S}$. To put this in our algorithmic format, it consists of

  • a pair of sets $S\times{S}$ and $S$ (or $dom(f)$ and $cod(f)$ respectively)
equipped with
  • a set $f\subseteq(S\times{S})\times{S}$ (or more clearly $f\subseteq dom(f)\times cod(f)$)
such that
  • for each $(x,y)\in dom(f)$ there exists a unique corresponding $z\in cod(f)$ such that $((x,y),z)\in f$.
We usually denote the law of composition by $*,+,\times,\star$ or something similar. Because it is clumsy and hard to read $+(x,y)$ we will instead write $x+y$.

Definition 4 A "Monoid" $M$ consists of

  • a set $M$
equipped with
  • a law of composition $*:M\times{M}\to{M}$;
  • a unit element $e\in{M}$;
such that
  • the law of composition is associative, i.e. for every $x,y,z\in{M}$ we have $(x*y)*z=x*(y*z)$;
  • for every $x\in{M}$ we have $x*e=e*x=x$.
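And a quick machine check of the monoid axioms, on a toy instance of my own choosing ($\mathbb{Z}/4$ under multiplication mod 4):

```python
# Exhaustive check of the monoid axioms for (Z/4, * mod 4, unit 1).

M = [0, 1, 2, 3]
star = lambda x, y: (x * y) % 4
e = 1

# closure: the law of composition lands back in M
assert all(star(x, y) in M for x in M for y in M)
# associativity
assert all(star(star(x, y), z) == star(x, star(y, z))
           for x in M for y in M for z in M)
# left and right unit laws
assert all(star(x, e) == x == star(e, x) for x in M)
```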

Now we can also define things in a clever way. That is, we can say that a group is a monoid with the extra property that every element is invertible. We can also rewrite the definition of the monoid adding the extra property that every element is invertible. In this sense, it's sort of analogous to a "macro expansion" in C programming. So consider the following two examples, which are two different definitions of the same thing in the way we just outlined:

Definition 5 A "Group" $G$ consists of

  • a set $G$
equipped with
  • a law of composition $*:G\times{G}\to{G}$;
  • a unit element $e\in{G}$;
  • an inversion operator $(-)^{-1}:G\to{G}$;
such that
  • the law of composition is associative, i.e. for every $x,y,z\in{G}$ we have $(x*y)*z=x*(y*z)$;
  • for every $x\in{G}$ we have $x*e=e*x=x$;
  • for every $x\in{G}$ there exists a corresponding unique element denoted $x^{-1}$ such that $x*x^{-1}=x^{-1}*x=e$.

Definition 6 A "Group" $G$ consists of

  • a monoid $G$
equipped with
  • an inversion operator $(-)^{-1}:G\to{G}$;
such that
  • for every $x\in{G}$ there exists a corresponding unique element denoted $x^{-1}$ such that $x*x^{-1}=x^{-1}*x=e$.

You can see the appeal in the latter approach, it takes up less space! It has the added bonus (or nightmarish drawback, depending on your point of view) that it sets up a sort of taxonomy of objects, in the same sense that inheritance in object oriented programming sets up a taxonomy of classes.

The Choice is Yours

You can check properties (they're true or false, as we previously remarked). You can choose structures from a set of possibilities. You can also choose stuff from a category of possibilities.

But we cannot do it willy-nilly; each step depends on the ones before it. We can't demand properties hold for some structure we don't have, so once we choose our structure we are constrained to a subcategory of possibilities.

(If the reader is wondering "Gee whiz what's a category?" Don't worry, we'll get to it in the next exciting blog post.)

There is a dictionary available if the reader is a logician, namely:

Mathematical Logic → Our Category Theoretic Scheme
  • Types ([2]) → Stuff
  • Predicates ([2]) → Structure
  • Axioms → Properties

If one isn't a logician, but is still interested in this sort of categorical logic type thing, see:

Now revisiting a remark made earlier, there is some fudge factor in choosing structure and properties. For example, we have two definitions of a monoid:

Stuff
a set $M$ equipped with
Structure
a law of composition $*:M\times{M}\to{M}$ and
an element $e\in{M}$
Properties
such that for all $x,y,z\in{M}$, $(x*y)*z=x*(y*z)$ and
$e*x=x*e=x$
or...
Stuff
a set $M$ equipped with
Structure
a law of composition $*:M\times{M}\to{M}$
Properties
such that (1) for all $x,y,z\in{M}$, $(x*y)*z=x*(y*z)$; and
(2) there exists an $e\in{M}$ such that every $y\in{M}$ satisfies $y*e=e*y=y$ (which automatically implies $e$ is unique).

These definitions give the same concept of a monoid, but they give different morphisms between monoids, and hence a different category of monoids.

As this is probably gibberish to the reader, let's explain some category theory. We have defined objects. We usually have some notion of "mapping" a given type of object to the same type of object. E.g., continuous functions map topological spaces to topological spaces, monoid homomorphisms map monoids to monoids, a linear transformation maps a vector space to a vector space, and so on. Can we generalize this notion of mapping for our objects?

Definition 7 A "morphism" between mathematical objects consists of
  • a map from stuff to corresponding stuff
such that
  • the structure is preserved.

In our consideration of two definitions of a monoid, we have two different morphisms - as previously stated. For the first definition, a morphism between monoids is:

  • a function $f:M\to{M^{\prime}}$
  • preserving the law of composition and identity: $f(x*y)=f(x)\star f(y)$ and $f(e)=e^{\prime}$.
For the second definition, a morphism between monoids is:
  • a function $f:M\to{M^{\prime}}$
  • preserving multiplication $f(x*y)=f(x)\star f(y)$.
They're different but the same! Huh??

Observe the first type of morphism is contained in the second type; we just need to prove that the second type of morphism is actually the first. (Caveat: the argument below inverts $f$, so strictly speaking it only applies to invertible morphisms.) Observe for all $z\in{M}$ we have $e*z=z*e=z$ if and only if for all $y\in{M^{\prime}}$ we have $e* f^{-1}(y)=f^{-1}(y)=f^{-1}(y)*e$. This should not be too surprising, since we've identified that $f^{-1}(y)$ corresponds intuitively to $z$.

Now here's the subtle step: we apply $f$ to our second condition. That is, we have for all $y\in{M^{\prime}}$ we have $e* f^{-1}(y)=f^{-1}(y)=f^{-1}(y)*e$ if and only if for all $y\in{M^{\prime}}$ we have $f^{-1}(f(e)\star y)=f^{-1}(y)=f^{-1}(y\star f(e))$. All we did was apply $f^{-1}(f(-))$ to our current situation, which should effectively do nothing.

This means that the argument to the morphism $f^{-1}(-)$ must be the same. In other words we have for all $y\in{M^{\prime}}$ our desired situation $f(e)\star y = y = y\star f(e)$. This is true if and only if $f(e)=e^{\prime}$.

What does it all mean?! It means we have shown mathematically that the second type of morphism is the same as the first, at least when $f$ is invertible.
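For a non-invertible morphism, the two notions genuinely differ. Here is a quick machine check (the monoid and the map are toy examples of my own) of a map between monoids that preserves the law of composition but fails to preserve the unit:

```python
# The monoid ({0, 1}, *, 1) under ordinary multiplication, and the
# constant-zero map on it: multiplicative, but not unit-preserving.

M = [0, 1]
star = lambda x, y: x * y
e = 1

f = lambda x: 0            # the constant-zero map M -> M

# f preserves the law of composition...
assert all(f(star(x, y)) == star(f(x), f(y)) for x in M for y in M)
# ...but fails to preserve the unit:
assert f(e) != e
```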

Moral: whether we have something counted as structure or property doesn't affect the isomorphisms (invertible morphisms), so the groupoid (category with only isomorphisms) of mathematical objects is more robust than the category.

Similar reasoning can be made for stuff that can be reinterpreted as structure. We can always reinterpret properties as structure and structure as stuff, but not vice-versa!

Addendum (3 August 2009 at 4:44 PM)

I'd just like to remark that we have a clear cut hierarchy of properties, structure, and stuff, which can be put in terms of category theory:

Properties
{False,True} = the 0-category of all (-1)-categories.
Structure
Set = the 1-category of all (0)-categories.
Stuff
Cat = the 2-category of all categories.

Where we have a (-1)-category be a truth value, a 0-category be a set, a 1-category be a category, and a 2-category be, well, a 2-category.

The important thing is that we have a pattern, which can be extended thanks to higher category theory. This is difficult to think about (and I haven't gotten to writing higher category theory notes!) so we'll end our addendum here.

Addendum 5 August 2009 at 7:41 PM

For some more on this, one might want to see

It discusses various aspects of "extra structure" in a category theoretic manner.