Friday, May 25, 2012

Vertex algebras

Think about your favorite Lie algebra Lie(G). We have a mapping on it, namely, the adjoint representation:

ad:Lie(G) → End[Lie(G)]

where "End[Lie(G)]" are the endomorphisms of the Lie algebra Lie(G).

Normally this appears in the form "ad(u)v∈Lie(G)", where "ad(u)" is shorthand for the operator "[u,−]"; that is, ad(u)v = [u,v].

The Jacobi identity looks like:

ad(u)ad(v) − ad(v)ad(u) = ad(ad(u)v).

This is the most important identity. A vertex operator algebra is an algebra with a similar defining property.
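To spell out why this really is the usual Jacobi identity: apply both sides to some w∈Lie(G) and unwind the definition of ad (a quick LaTeX rendition):

\begin{align*}
  \mathrm{ad}(u)\,\mathrm{ad}(v)\,w - \mathrm{ad}(v)\,\mathrm{ad}(u)\,w
    &= [u,[v,w]] - [v,[u,w]] \\
    &= [[u,v],w] \\
    &= \mathrm{ad}(\mathrm{ad}(u)v)\,w,
\end{align*}

where the middle step is precisely the classical Jacobi identity. In other words: ad is a Lie algebra homomorphism.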

A vertex operator algebra consists of a vector space V equipped with a mapping usually denoted

Y: V → (End V)[[x, x⁻¹]].

In this form, it looks like a left-multiplication operator...or that's the intuition, anyway. So if v∈V, we should think of Y(v,x) as belonging to (End V)[[x, x⁻¹]] and acting on the left.

Really, through currying, this should be thought of as a map V⊗V → V[[x, x⁻¹]], i.e., a sort of multiplication with a formal parameter x. (This is related to the "state-operator correspondence" physicists speak of in conformal field theory.)
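Concretely, the standard convention expands Y(v,x) into its modes:

\[
  Y(v,x) = \sum_{n\in\mathbb{Z}} v_n\, x^{-n-1},
  \qquad v_n \in \mathrm{End}\,V,
\]

so Y(v,x)u = Σ (vₙu)x⁻ⁿ⁻¹ really is "multiply u by v", with the powers of x keeping track of which mode acted.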

Just like a Lie algebra, a vertex operator algebra satisfies a Jacobi identity, and it is the most important defining property of the VOA.
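For the record (in the conventions of Frenkel, Lepowsky, and Meurman, whose details I won't spell out here), the VOA Jacobi identity reads:

\[
  x_0^{-1}\,\delta\!\left(\frac{x_1 - x_2}{x_0}\right) Y(u,x_1)\,Y(v,x_2)
  - x_0^{-1}\,\delta\!\left(\frac{x_2 - x_1}{-x_0}\right) Y(v,x_2)\,Y(u,x_1)
  = x_2^{-1}\,\delta\!\left(\frac{x_1 - x_0}{x_2}\right) Y\bigl(Y(u,x_0)v,\, x_2\bigr),
\]

where δ(x) = Σ_{n∈ℤ} xⁿ is exactly the formal delta we are about to study (written δ(z−1) below), and each binomial (xᵢ−xⱼ) is expanded in non-negative powers of the second variable.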

Let's stop and look at this structure again:

Y: V → (End V)[[x, x⁻¹]].

What's the codomain, exactly? Well, its elements are formal distributions (not mere formal power series!): formal sums in which arbitrarily large positive and negative powers of x may both appear.

So what does one look like? Consider δ(z−1) = Σ zⁿ, where the summation ranges over all n∈ℤ. This series is a formal distribution, and it behaves just like the Dirac delta concentrated at z = 1. Let's prove this!

Desired Property: δ(z−1) vanishes away from z = 1 (i.e., "almost everywhere").

Consider the geometric series f(z) = Σ zⁿ, where n ranges over the non-negative integers (n = 0, 1, ...).

Observe that δ(z−1) = f(z) + z⁻¹f(z⁻¹): the first summand supplies the non-negative powers of z, the second the negative ones. Let's now substitute in the corresponding geometric series:

δ(z−1) = [1/(1−z)] + z⁻¹[1/(1−z⁻¹)]

and after some simple arithmetic we see that for z≠1 we have δ(z−1) = 0.
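The "simple arithmetic", rendered in LaTeX:

\[
  \frac{1}{1-z} + \frac{z^{-1}}{1-z^{-1}}
  = \frac{1}{1-z} + \frac{1}{z-1}
  = \frac{1}{1-z} - \frac{1}{1-z}
  = 0 \qquad (z \neq 1).
\]

(Of course, the two geometric series converge on disjoint regions, |z|<1 and |z|>1, so this is a heuristic for the formal computation rather than an honest analytic identity.)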

Desired Property: for any Laurent polynomial f(z), we have δ(z−1)f(z) = δ(z−1)f(1).

This turns out to be true, thanks to the magic of infinite series; but due to HTML formatting, I omit the full proof. It is left as an exercise to the reader (the basic sketch: first consider δ(z−1)zⁿ, then invoke linearity, and you're done).
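For what it's worth, the key step of that sketch fits in one line of LaTeX:

\[
  \delta(z-1)\,z^n
  = \sum_{m\in\mathbb{Z}} z^{m+n}
  = \sum_{k\in\mathbb{Z}} z^{k}
  = \delta(z-1)
  = \delta(z-1)\cdot 1^n,
\]

and writing f(z) = Σₙ aₙzⁿ, linearity gives δ(z−1)f(z) = Σₙ aₙ δ(z−1)·1ⁿ = δ(z−1)f(1).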

Friday, May 18, 2012

Finite Field with Four Elements

Small note to myself on notational problems when facing finite fields.

Recall the finite field with four elements is ℤ₂[x]/(1 + x + x²).

People often write ω̄ = 1 + x and ω = x. Observe then that ω² = ω̄, and ω̄² = ω. Moreover, ωω̄ = 1 and 1 + ω + ω̄ = 0.
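All four identities reduce to the relation x² = 1 + x (from 1 + x + x² = 0, remembering we are in characteristic 2); in LaTeX:

\[
  \omega^2 = x^2 = 1 + x = \bar\omega,
  \qquad
  \bar\omega^2 = (1+x)^2 = 1 + x^2 = x = \omega,
\]
\[
  \omega\bar\omega = x(1+x) = x + x^2 = x + (1+x) = 1,
  \qquad
  1 + \omega + \bar\omega = 1 + x + (1+x) = 0.
\]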

I have only seen this ω notation specified in Pless' Error Correcting Codes, Third ed., page 102 et seq.

Friday, March 16, 2012

End notes and Foot notes in LaTeX

So I was writing up notes for a reading group on Afghanistan, and it has apparently become fashionable in the humanities to use endnotes. Being fond of Edward Gibbon, I use footnotes excessively. To irritate everyone, I use both, while the superscripted numbers give no hint whether they refer to a footnote or an endnote.

How to do this in LaTeX? Quite simple:

\documentclass{article}

\usepackage{endnotes}

\makeatletter
% Make counter #1 an alias for counter #2: \let-ing \c@endnote to
% \c@footnote makes both names refer to the same count register,
% so the two counters always agree.
\newcommand*{\dupcntr}[2]{%
    \expandafter\let\csname c@#1\expandafter\endcsname\csname c@#2\endcsname
}
\dupcntr{endnote}{footnote}
% Print endnote marks exactly like footnote marks.
\renewcommand\theendnote{\thefootnote}
\makeatother

\begin{document}
Blah blah blah.\footnote{A footnote.}
And more blah.\endnote{An endnote, numbered in the same sequence.}

\theendnotes % typeset the accumulated endnotes
\end{document}

It turns out to work quite well.

I modified some macros from a TeX Stackexchange discussion on "slave counters"...so I get only partial credit.

Tuesday, February 21, 2012

Metapost and Labels

This is just a quick note to myself. When I want to write a label with a smaller font, I should use \scriptstyle...but it is tricky since it requires math mode!

So, as an example, consider this diagram describing an experiment for gravitational redshift:

numeric u;
u = 1pc;
beginfig(0);
  path earth;
  pair clock[];

  earth = fullcircle scaled u;  % disc of diameter u

  clock[0] = (0,2u);
  clock[1] = (0,4u);

  % radial reference line through both satellites
  draw (0,0)--(0,5u) dashed evenly;

  for i=0 upto 1:
    label(btex $\bullet$ etex, clock[i]);
  endfor;

  fill earth withcolor 0.75white;  % fill last, so the disc covers the dashed line
  draw earth;
  label.rt(btex ${\scriptstyle\rm Earth}$ etex, (.5u,0));
  label.rt(btex ${\scriptstyle\rm Satellite\ 1}$ etex, clock[0]);
  label.rt(btex ${\scriptstyle\rm Satellite\ 2}$ etex, clock[1]);
endfig;
end;

Just remember to use "\ " for spaces. Otherwise it will all run together horribly!

Tuesday, February 14, 2012

Feynman Diagrams and Motives

I have been re-reading the following book:

Alain Connes and Matilde Marcolli,
Noncommutative Geometry, Quantum Fields, and Motives,
Colloquium Publications, Vol. 55, American Mathematical Society, 2008.

It turns out that Dr Marcolli taught a course on related material back in 2008! It deals mostly with the first chapter of the book.

Hopf Algebras and Feynman Calculations

There is a nice review of Hopf algebras used in Feynman diagram calculations:

Kurusch Ebrahimi-Fard, Dirk Kreimer,
"Hopf algebra approach to Feynman diagram calculations".
Eprint arXiv:hep-th/0510202v2, 30 pages.

For another review, specifically of the noncommutative approach discussed in Connes and Marcolli's book, see:

Herintsitohaina Ratsimbarison,
"Feynman diagrams, Hopf algebras and renormalization."
Eprint arXiv:math-ph/0512012v2, 12 pages.

What is a "Hopf algebra", anyways?

Pierre Cartier,
"A primer of Hopf algebras."
Eprint [math.osu.edu], 81 pages.

Hopf Algebras

What the deuce is a "Hopf algebra"? That's a very good question, and I'm very glad you asked it. Wikipedia has its definition, which may or may not be enlightening.

Let's consider a concrete example. Take a finite group G and the field ℂ of complex numbers. We assert that the collection Hom(G,ℂ) = ℂᴳ of functions from G to ℂ is a Hopf algebra.

Recall we have multiplication of group elements. This is a mapping m: G×G → G.

Now, observe that functoriality of Hom(−,ℂ) applied to m gives us a mapping from ℂᴳ = Hom(G,ℂ) to Hom(G×G,ℂ) ≅ ℂᴳ⊗ℂᴳ (the isomorphism uses the finiteness of G). Let's call this map Δ.

Great, but what does it do? Good question!

Take some f∈Hom(G,ℂ); then what is Δ(f)?

It is a function of two variables, [Δ(f)](x,y). Functoriality demands that, if we fix one of the arguments to be the identity element e∈G of the group, then [Δ(f)](e,y) = f(y) and [Δ(f)](x,e) = f(x).

It follows that [Δ(f)](x,y) = f(xy): Δ is just precomposition with the group multiplication.
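Concretely, on the basis of indicator functions δ_g (where δ_g(h) = 1 if h = g and 0 otherwise), Δ reads:

\[
  \Delta(\delta_g) = \sum_{\substack{h,k \in G \\ hk = g}} \delta_h \otimes \delta_k.
\]

Indeed, evaluating the right-hand side at (x,y) gives 1 exactly when xy = g, matching δ_g(xy).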

We also need to consider the antipode map S: ℂᴳ → ℂᴳ. Here [S(f)](x) is determined by the Hopf property, and, long story short, [S(f)](x) = f(x⁻¹).

Note that the antipode map is a generalization of the "group inverse" notion.
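As a sanity check, the defining antipode axiom m∘(S⊗id)∘Δ = η∘ε holds on ℂᴳ, where ε(f) = f(e) is the counit, η sends a scalar to the corresponding constant function, and the multiplication m on ℂᴳ is pointwise (i.e., restriction of a function on G×G to the diagonal):

\[
  \bigl[(m \circ (S \otimes \mathrm{id}) \circ \Delta)(f)\bigr](x)
  = [\Delta(f)](x^{-1}, x)
  = f(x^{-1}x)
  = f(e)
  = \varepsilon(f)\,\mathbf{1}(x).
\]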

The rest of the algebraic structure (unit, counit, pointwise multiplication) is a triviality; let's consider other interesting applications!

Feynman Diagrams

Now, I have written some notes [pdf] on the basic algorithm for evaluating Feynman diagrams and producing a number (the "probability amplitude").

As I understand it (and I don't!!), Ebrahimi-Fard and Kreimer suggest considering the Hopf algebra of "Feynman graphs" (which are just colored graphs representing physical processes).
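Schematically, the coproduct on this Hopf algebra of graphs takes the Connes-Kreimer form (see the references above for the precise statement):

\[
  \Delta(\Gamma)
  = \Gamma \otimes 1 + 1 \otimes \Gamma
  + \sum_{\gamma \subsetneq \Gamma} \gamma \otimes \Gamma/\gamma,
\]

where γ runs over the divergent (one-particle-irreducible) subgraphs of Γ, and Γ/γ is Γ with each component of γ contracted to a point.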

The basic algorithm for evaluating Feynman diagrams is based on the "Feynman rules" (what we assign to each edge, vertex, etc.). So the Feynman rules are a linear and multiplicative map, associating to each Feynman graph (again, seen as a collection of vertices and edges) its corresponding Feynman integral.
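If I read the references correctly, "linear and multiplicative" means the Feynman rules form a character φ of the Hopf algebra H of graphs, valued in some commutative algebra A of (regularized) integrals; here φ, H, and A are just my labels:

\[
  \phi : H \to A, \qquad
  \phi(\Gamma_1 \sqcup \Gamma_2) = \phi(\Gamma_1)\,\phi(\Gamma_2).
\]

In the Connes-Kreimer picture, renormalization then appears as a Birkhoff factorization of φ in the group of such characters.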

So these maps are the important things: they are what enable us to do things algorithmically.

Let's stop! I said "Feynman integrals" are assigned to each graph...am I drunk, or is that correct?

Yes yes, the answer is "yes" ;)

What a horrible joke...but what I mean is: the amplitude for a scattering process of electrons, for example, is an infinite sum taking into account all the virtual processes, with one integral per diagram.

Usually we only care about the first few orders.

Of course, this is my understanding of the Hopf algebra treatment of Feynman diagrams...and I openly admit: I could be completely wrong!

So to figure it out, I'll stop rambling, and continue reading.