
Renormalisation is a way of dealing with divergences in physical theories. It is important and widely used in quantum field theory, which is today the accepted framework for describing elementary particles and their interactions. But before renormalisation was invented, physicists were far more skeptical about quantum field theory, precisely because of these divergences.

So is there an easy way to understand how this important method works in principle? I think so, and here it is. A theory in need of renormalisation typically yields infinite results for physical quantities that should obviously be finite. These infinities arise from a mathematical limit, such as an integral over intermediate momenta in certain Feynman graphs.

The first step in renormalisation, called regularisation, forcibly prevents the divergence by introducing an additional parameter, chosen so that for most of its values the divergence does not occur. This may be a momentum cutoff (cutoff regularisation), a formally non-integer spacetime dimension (dimensional regularisation), or some other parameter. Now the desired physical quantity can be computed; the result is free of divergences but depends on the regularisation parameter.
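As a toy illustration of cutoff regularisation (this is a deliberately simple made-up example, not a real Feynman integral), take a "physical" quantity defined as the difference of two logarithmically divergent integrals. Each integral blows up on its own, but after regularising with a cutoff Λ, the difference converges as Λ is removed:

```python
import math

def regulated_integral(m: float, cutoff: float) -> float:
    """Closed form of the cutoff-regularised integral
    ∫_0^Λ dk / (k + m) = ln((Λ + m) / m),
    which diverges logarithmically as Λ → ∞."""
    return math.log((cutoff + m) / m)

def physical_quantity(m: float, M: float, cutoff: float) -> float:
    """Toy 'physical' quantity: the difference of two divergent
    integrals. Each term grows without bound with the cutoff,
    but the difference stays finite."""
    return regulated_integral(m, cutoff) - regulated_integral(M, cutoff)

# Each integral alone grows like ln(Λ) as the cutoff is removed ...
for cutoff in (1e2, 1e4, 1e6):
    print(regulated_integral(1.0, cutoff))

# ... but the regularised difference approaches ln(M/m) = ln 2:
for cutoff in (1e2, 1e4, 1e6):
    print(physical_quantity(1.0, 2.0, cutoff))
```

The names `regulated_integral` and `physical_quantity` are invented for this sketch. The pattern, however, is the generic one: compute everything at finite cutoff first, and only at the end let the cutoff go to infinity.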

The end result is then obtained by taking the regularisation parameter to its physical limit, for example by letting the cutoff go to infinity. The result is still finite. How can this be? Didn't we just do the same thing as without regularisation? No, because the two limits, the one in the computation of the quantity and the one of the regularisation parameter, do not commute. The following non-commuting diagram illustrates this.
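That iterated limits need not commute is an elementary fact of analysis, and a standard textbook example makes the point concretely (again a toy analogy, not a QFT computation): for f(x, y) = x / (x + y), the value at (0, 0) depends entirely on which variable is sent to zero first.

```python
def f(x: float, y: float) -> float:
    """A function whose two iterated limits at (0, 0) disagree."""
    return x / (x + y)

# Take y → 0 first: f(x, 0) = 1 for every x > 0,
# so the subsequent limit x → 0 gives 1.
limit_y_then_x = f(1e-12, 0.0)

# Take x → 0 first: f(0, y) = 0 for every y > 0,
# so the subsequent limit y → 0 gives 0.
limit_x_then_y = f(0.0, 1e-12)

print(limit_y_then_x, limit_x_then_y)  # the two orders disagree
```

In the analogy, one order of limits corresponds to the unregularised (divergent) computation and the other to the regularised one; renormalisation exploits exactly this freedom to pick the order in which the answer is finite.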

So one could sum up the renormalisation procedure in one sentence as follows:

Renormalisation consists of interchanging a theory's divergent limit with a second limit introduced for that purpose, such that the interchanged limits yield finite results.