# Sokhatsky–Weierstrass–Plemelj theorem

Whether complex analysis is relevant to machine learning and AI depends on your point of view. Seen from a quantum computing angle it certainly is, while a classic scikit-learn pipeline has indeed little to do with Cauchy integrals. Somewhere in between sit the kernel methods, where one convolves with a kernel to transform a learning problem into another (simpler) one. Kernels as presented in machine learning (see e.g. any introduction to kernel methods) are just one example of how a neat trick in Hilbert spaces can be used to solve differential equations. It's also a classic approach to making sense of undefined things (usually divergent integrals) and singularities. So, although at first sight the theorem below might seem far removed from any ML task, it's never far off if you look at the foundations of AI and its exotic extension in the quantum realm.

In words the theorem says something like: to integrate a function across a $1/x$ singularity, integrate with the singularity shifted slightly off the real axis and add $\pm i\pi$ times the value of the function at the singularity.

In abbreviated form it’s:

$$P\frac{1}{x} = \lim_{\epsilon\rightarrow 0^+}\frac{1}{x \pm i\epsilon}\pm i\pi\,\delta(x)$$

and in full, for $a < 0 < b$, it's

$$P\int_a^b\frac{f(x)}{x}\,dx = \lim_{\epsilon\rightarrow 0^+}\;\int_a^b\frac{f(x)\,dx}{x \pm i\epsilon}\pm i\pi\,f(0)$$

The Cauchy principal value of the first integral is defined as

$$P\int_a^b\frac{f(x)}{x}\,dx = \lim_{\epsilon\rightarrow 0^+}\left[\int_a^{-\epsilon}\frac{f(x)}{x}\,dx + \int_{+\epsilon}^{b}\frac{f(x)}{x}\,dx\right]$$

Note that the two pieces are taken in a single symmetric limit: each piece diverges on its own, and only their combination has a finite limit.
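This symmetric limit is easy to check numerically. A minimal sketch in Python, assuming SciPy is available (the helper name `principal_value` is just illustrative):

```python
import numpy as np
from scipy.integrate import quad

def principal_value(f, a, b, eps=1e-6):
    """Cauchy principal value of the integral of f(x)/x over [a, b],
    with a < 0 < b: cut out a symmetric window (-eps, eps) around the
    singularity; the divergent parts of the two pieces cancel."""
    left, _ = quad(lambda x: f(x) / x, a, -eps, limit=200)
    right, _ = quad(lambda x: f(x) / x, eps, b, limit=200)
    return left + right

# P int_{-1}^{2} dx/x = ln 2: the part on [-1, 1] cancels by oddness,
# leaving the ordinary integral over [1, 2].
print(principal_value(lambda x: 1.0, -1.0, 2.0))  # ~ 0.6931 = ln 2
```

SciPy can also compute this directly via `quad(f, a, b, weight='cauchy', wvar=0)`, which evaluates the principal value of $\int f(x)/(x - \mathrm{wvar})\,dx$ with dedicated quadrature rules.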

For instance, the integral

$$\int_{-\infty}^{+\infty} \frac{e^{-x^2}}{x}\,dx$$

does not exist in the ordinary sense since it has a singularity at the origin. However, the integrand is odd, so any portion symmetric around the origin integrates to zero; the Cauchy principal value is therefore zero and the integral becomes meaningful. Note that the integrand is not Lebesgue integrable (its absolute value diverges near the origin), so the principal value is a genuine regularization rather than an ordinary integral.

Continuing with this example, you can compute the integral

$$\lim_{\epsilon\rightarrow 0^+}\int_{-\infty}^{+\infty}\frac{e^{-x^2}\,dx}{x \pm i\epsilon} = \mp i\pi$$

which together with the correction term $\pm i\pi\,f(0) = \pm i\pi$ indeed gives zero for the principal value. There are various proofs of the theorem, the one on Wikipedia being the easiest. A fully rigorous proof is harder to find and demands a fair amount of measure theory and the theory of distributions. As with many regularization tricks, one usually enjoys the benefits without asking too many questions.
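The whole statement can be verified numerically for this Gaussian example. A sketch assuming SciPy, splitting the integrand into real and imaginary parts since `quad` only handles real integrands:

```python
import numpy as np
from scipy.integrate import quad

def shifted_integral(eps, a=-10.0, b=10.0):
    """Integral of e^{-x^2} / (x + i*eps) over [a, b]."""
    f = lambda x: np.exp(-x**2) / (x + 1j * eps)
    # points=[0.0] tells quad where the sharp peak of the
    # imaginary part sits, so the adaptive rule resolves it
    re, _ = quad(lambda x: f(x).real, a, b, points=[0.0], limit=200)
    im, _ = quad(lambda x: f(x).imag, a, b, points=[0.0], limit=200)
    return re + 1j * im

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, shifted_integral(eps))
# the real part (the principal value) tends to 0 and the imaginary
# part tends to -pi: the integral approaches -i*pi, so the principal
# value -i*pi + i*pi*f(0) is indeed zero
```

The real part vanishes by the same odd symmetry as before, while the imaginary part $-\epsilon\,e^{-x^2}/(x^2+\epsilon^2)$ is a nascent delta function picking out $-\pi f(0)$.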