I have been re-reading the following book:

Alain Connes and Matilde Marcolli,

*Noncommutative Geometry, Quantum Fields, and Motives*,

Colloquium Publications, Vol.55, American Mathematical Society, 2008.

It turns out that Dr. Marcolli taught a course on related material back in 2008! It deals mostly with the first chapter of the book.

## Hopf Algebras and Feynman Calculations

There is a nice review of Hopf algebras used in Feynman diagram calculations:

Kurusch Ebrahimi-Fard, Dirk Kreimer,

"Hopf algebra approach to Feynman diagram calculations".

Eprint arXiv:hep-th/0510202v2, 30 pages.

For another review, specifically of the noncommutative approach discussed in Connes and Marcolli's book, see:

Herintsitohaina Ratsimbarison,

"Feynman diagrams, Hopf algebras and renormalization."

Eprint arXiv:math-ph/0512012v2, 12 pages.

What is a "Hopf algebra", anyways?

Pierre Cartier,

"A primer of Hopf algebras."

Eprint `[math.osu.edu]`, 81 pages.

### Hopf Algebras

What the deuce is a "Hopf algebra"? That's a very good question, and I'm very glad you asked it.
Wikipedia has its definition, which may or may not be enlightening.

Let's consider a concrete example. Take a finite group G and the field of complex numbers ℂ. We assert that the collection Hom(G,ℂ) of functions G→ℂ is a Hopf algebra.

Recall we have multiplication of group elements. This is a mapping G×G→G.

Now, observe that functoriality (applying the contravariant functor Hom(−,ℂ) to the multiplication map) gives us a mapping ℂ^{G}→ℂ^{G×G} ≅ ℂ^{G}⊗ℂ^{G}. Let's call this thing Δ.

Great, but what does it do? Good question!

Take some f∈Hom(G,ℂ) then what is Δ(f)?

It is a function of two variables, [Δ(f)](x,y). Functoriality demands that, if we fix one of the arguments to be the identity element e∈G of the group, we recover f: that is, [Δ(f)](e,y)=f(y) and [Δ(f)](x,e)=f(x).

Indeed, since Δ is just precomposition with the group multiplication, [Δ(f)](x,y)=f(xy).

We also need to consider the antipode map S:ℂ^{G}→ℂ^{G}. The value [S(f)](x) is determined by the Hopf algebra axioms, and, long story short, [S(f)](x)=f(x^{-1}).

Note that the antipode map is a **generalization** of the "group inverse" notion.

The rest of the algebraic structure (pointwise multiplication, unit, and counit) is straightforward, so let's consider other interesting applications!
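The structure maps above can be sketched in a few lines of code. This is a minimal toy model, assuming G = ℤ/3 under addition; the names `Delta`, `S`, and `counit` are my own labels, and the final check of the antipode identity uses a character of G (a group-like element), for which the convolution identity reduces to a pointwise product.

```python
# Toy model of the Hopf algebra C^G for a small finite group,
# here G = Z/3 (integers mod 3 under addition).
# Elements of C^G are modeled as Python functions G -> complex.
import cmath

G = [0, 1, 2]                     # the group Z/3
mul = lambda x, y: (x + y) % 3    # group multiplication (addition mod 3)
inv = lambda x: (-x) % 3          # group inverse
e = 0                             # identity element

def Delta(f):
    """Comultiplication: pullback of f along the group law,
    [Delta(f)](x, y) = f(xy)."""
    return lambda x, y: f(mul(x, y))

def S(f):
    """Antipode: [S(f)](x) = f(x^{-1})."""
    return lambda x: f(inv(x))

def counit(f):
    """Counit: evaluate at the identity element."""
    return f(e)

# A sample element of C^G: a character of Z/3.
omega = cmath.exp(2j * cmath.pi / 3)
f = lambda x: omega ** x

# Fixing one slot of Delta(f) at the identity recovers f:
for y in G:
    assert abs(Delta(f)(e, y) - f(y)) < 1e-12
    assert abs(Delta(f)(y, e) - f(y)) < 1e-12

# Antipode identity for a character (where Delta(f) = f tensor f):
# [S(f)](x) * f(x) = counit(f) for every x in G.
for x in G:
    assert abs(S(f)(x) * f(x) - counit(f)) < 1e-12
```

The assertions verify the two claims in the text: restricting Δ(f) to the identity in either slot gives back f, and the antipode really does act like a "group inverse" at the level of functions.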

### Feynman Diagrams

Now, I have written some notes `[pdf]` on the basic algorithm for evaluating Feynman diagrams and producing a number (the "probability amplitude").

As I understand it (and I don't!!) Ebrahimi-Fard and Kreimer suggest considering the Hopf algebra of "Feynman graphs" (which are just considered as colored graphs representing physical processes).

The basic algorithm for evaluating Feynman diagrams is based on the "Feynman rules" (what we assign to each edge, vertex, etc.). So Feynman rules are linear, multiplicative maps, associating to each Feynman graph (again, seen as a collection of vertices and edges) its corresponding **Feynman integral**.

So these maps are the important things, which enable us to algorithmically do stuff.
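The multiplicativity of Feynman rules can be illustrated with a deliberately crude sketch. Here a "graph" is reduced to vertex and edge counts, and the rule assigns a coupling factor `g` per vertex and a propagator factor `p` per edge; these names and values are made up for illustration, standing in for the actual Feynman integral.

```python
# Toy sketch of a "Feynman rule" as a multiplicative map on graphs.
# A graph is modeled crudely as a pair (vertices, edges); the
# disjoint union of two graphs simply adds the counts.
from dataclasses import dataclass

@dataclass(frozen=True)
class Graph:
    vertices: int
    edges: int

    def disjoint_union(self, other):
        """Disjoint union of graphs: counts add."""
        return Graph(self.vertices + other.vertices,
                     self.edges + other.edges)

def feynman_rule(graph, g=0.1, p=2.0):
    """Assign a number to a graph: one factor of the coupling g
    per vertex, one propagator factor p per edge (a stand-in for
    the genuine Feynman integral)."""
    return (g ** graph.vertices) * (p ** graph.edges)

# Multiplicativity: the rule on a disjoint union is the product
# of the rules on the pieces.
g1 = Graph(vertices=2, edges=1)
g2 = Graph(vertices=4, edges=3)
lhs = feynman_rule(g1.disjoint_union(g2))
rhs = feynman_rule(g1) * feynman_rule(g2)
assert abs(lhs - rhs) < 1e-12
```

The assertion is the "multiplicative" half of the claim above: disconnected pieces of a diagram contribute independent factors to the amplitude.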

Let's stop! I said "Feynman integrals" are assigned to each graph... am I drunk, or is that correct?

Yes yes, the answer is "yes" ;)

What a horrible joke... but what I mean is: the scattering process of electrons, for example, is given by an infinite sum taking into account all the virtual processes.

Usually we only care about the first few orders of this sum.
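Truncating that infinite sum at a few orders can be sketched as a one-liner. The coefficients below are hypothetical placeholders, not values from any actual theory, just to show the shape of a perturbative expansion in a coupling g.

```python
# Sketch of truncating a perturbative series: the full amplitude
# is formally sum_n c_n * g^n over all diagram orders, but in
# practice we keep only the first few terms.

def truncated_amplitude(g, coeffs):
    """Evaluate sum of coeffs[n] * g**n for the given coefficients."""
    return sum(c * g**n for n, c in enumerate(coeffs))

coeffs = [0.0, 1.0, -0.5, 0.25]   # hypothetical orders g^0 .. g^3
up_to_g2 = truncated_amplitude(0.1, coeffs[:3])  # keep orders <= g^2
up_to_g3 = truncated_amplitude(0.1, coeffs)      # keep orders <= g^3
```

When the coupling g is small, the difference between successive truncations shrinks, which is why keeping only a few orders is usually good enough.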

Of course, this is my understanding of the Hopf algebra treatment of Feynman diagrams...and I openly admit: **I could be completely wrong!**

So to figure it out, I'll stop rambling, and continue reading.