Path integrals and SDEs in neuroscience – part two

In the previous post we defined path integrals through a simple ‘time-slicing’ approach, and used them to compute moments of simple stochastic DEs. In this follow-up post we examine how perturbative expansions can be used to approximate moments, how the moment generating functional can be used to compute probability densities, and how these methods may be helpful in some problems in neuroscience.

Perturbative approaches

For a general, non-linear SDE the series will not terminate and must be truncated at some point. It is then necessary to determine which terms will contribute to the sum, and to include these terms up to a given order. We briefly mention two such possibilities, though we do not discuss them in any detail. One option is available when some terms in $S_{I}$ ($v_{mn}\int x^{n}\tilde{x}^{m}$ with $m\ge2$) are small. Then we can simply let each such vertex contribute a small parameter $\alpha$ and perform an expansion in orders of $\alpha$ (known as a ‘weak coupling expansion’ 1).

Another option is to perform a weak noise, or loop, expansion. Here we scale the entire exponent in the MGF by a factor $1/h$:

$$Z=\int\mathcal{D}x(t)\mathcal{D}\tilde{x}(t)e^{-\frac{1}{h}(S-\int\tilde{J}x-\int J\tilde{x})}$$

Then each vertex of $S_{I}$ gains a factor of $1/h$ and each edge of $S_{F}$ gains a factor of $h$, which implies we can expand in powers of $h$. In performing this expansion, if we let $E$ denote the number of external edges of a diagram, $I$ the number of internal edges and $V$ the number of vertices, then each connected graph carries a factor of \(h^{I+E-V}\) and, in fact, it can be shown by induction that \(L=I-V+1\), where $L$ is the number of loops the diagram contains. Thus, each graph collects a factor of $h^{E+L-1}$, which allows us to order the expansion by the number of loops in each diagram. Diagrams which contain no loops are trees, or classical diagrams. Such diagrams form the basis of the semi-classical approximation.
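As a small bookkeeping check, these counting rules are easy to encode. The following Python sketch (the naming is our own, purely illustrative) computes the loop number and the power of $h$ carried by a connected diagram:

```python
def loop_number(internal_edges: int, vertices: int) -> int:
    """L = I - V + 1 for a connected diagram."""
    return internal_edges - vertices + 1

def h_power(internal_edges: int, external_edges: int, vertices: int) -> int:
    """Each edge carries a factor h and each vertex 1/h, so the total factor
    is h**(I + E - V), which equals h**(E + L - 1) by the loop formula."""
    return internal_edges + external_edges - vertices

# A two-vertex tree with one internal and three external edges:
assert loop_number(1, 2) == 0            # no loops: a tree diagram
assert h_power(1, 3, 2) == 3 + 0 - 1     # E + L - 1
```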

This expansion is of course only valid when the contribution of diagrams with higher loop number is smaller than that of diagrams with lower loop number. The Ginzburg criterion determines when this is indeed the case.

Some other examples

We present two further examples which demonstrate how these methods are used.

Example 1

A simple extension of the OU process is one which mean-reverts to a non-zero value (unlike the previous case, which reverted to zero); consider the SDE

$$\dot{x}(t)+a(b+x(t))-\sqrt{D}\eta(t)=0.$$

This problem is obviously very similar to the original OU problem and is of course solved almost identically. This time the action of the process is

$$S=\int\left[\tilde{x}(t)(\dot{x}(t)+a(b+x(t)))+\tilde{x}(t)y\delta(t-t_{0})-\frac{D}{2}\tilde{x}^{2}(t)\right]\, dt$$

such that the free action is as before:

$$S_{F}=\int\tilde{x}(t)\left[\dot{x}(t)+ax(t)\right]\, dt$$

(the linear, homogeneous part, for which a Green’s function can be calculated) and the interacting action is:

$$S_{I}=\int\left[\tilde{x}(t)\left(y\delta(t-t_{0})+ba\right)-\frac{D}{2}\tilde{x}^{2}(t)\right]\, dt.$$

Thus the only detail that changes is that the vertex linear in $\tilde{x}(t)$ changes from

$$\int\tilde{x}(t)y\delta(t-t_{0})dt\to\int\tilde{x}(t)\left(y\delta(t-t_{0})+ba\right)dt,$$

which adds an extra term to the expression for the mean. Tracking signs carefully (the constant drift $ab$ sits on the left-hand side of the SDE and therefore acts as a source $-ab$, so the process as written reverts to $-b$), we find:

$$\langle x(t)\rangle=H(t-t_{0})\left(ye^{-a(t-t_{0})}-b\left(1-e^{-a(t-t_{0})}\right)\right).$$

Since only this vertex linear in $\tilde{x}(t)$ is affected, and the second-order vertex ($D\tilde{x}(t)^{2}/2$) is unchanged, the second-order cumulant is in fact the same as in our original example:

$$\langle x(t)x(s)\rangle_{C}=\frac{D}{2a}\left(e^{-a|t-s|}-e^{-a(t+s-2t_{0})}\right)$$
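As a sanity check on these expressions, the SDE can be integrated directly. A minimal Euler-Maruyama sketch (parameter values are arbitrary, illustrative choices) compares the empirical mean with the formula for $\langle x(t)\rangle$:

```python
import numpy as np

# Euler-Maruyama check of the mean formula above for
# dx = -a(x + b) dt + sqrt(D) dW, x(t0) = y.
# All parameter values are arbitrary, illustrative choices.
a, b, D, y, t0 = 1.5, 0.8, 0.4, 2.0, 0.0
dt, T, n_paths = 1e-3, 3.0, 20_000

rng = np.random.default_rng(0)
t = np.arange(t0, T, dt)
x = np.full(n_paths, y)
means = []
for _ in t:
    means.append(x.mean())
    x = x - a * (x + b) * dt + np.sqrt(D * dt) * rng.standard_normal(n_paths)

theory = y * np.exp(-a * (t - t0)) - b * (1 - np.exp(-a * (t - t0)))
print(np.max(np.abs(np.array(means) - theory)))  # small, up to O(dt) + MC error
```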

Example 2

Consider the harmonic oscillator with noise:

$$\ddot{x}+2\gamma\dot{x}+\omega^{2}x=\sqrt{D}\eta(t)$$

where $\eta$ is a white noise process, subject to initial conditions $x(0)=x_{0}$ and $\dot{x}(0)=v_{0}$ (taking $t_{0}=0$). Writing the initial conditions as impulse sources at $t=0$, note that the damping term acting on the step in $x$ contributes an additional $2\gamma x_{0}\delta(t)$. The action for this process is

$$S=\int dt\,\left(\tilde{x}\left[\ddot{x}+2\gamma\dot{x}+\omega^{2}x+(v_{0}+2\gamma x_{0})\delta(t)+x_{0}\delta'(t)\right]-\frac{D}{2}\tilde{x}^{2}\right)$$

which we will split into free and interacting components:

$$\begin{aligned} S_{F} & = \int dt\,\left(\tilde{x}\left[\ddot{x}+2\gamma\dot{x}+\omega^{2}x\right]\right)\\ S_{I} & = \int dt\,\left(\tilde{x}\left[(v_{0}+2\gamma x_{0})\delta(t)+x_{0}\delta'(t)\right]-\frac{D}{2}\tilde{x}^{2}\right)\end{aligned}$$

The free action gives the propagator as the Green’s function:

$$\left(\frac{d^{2}}{dt^{2}}+2\gamma\frac{d}{dt}+\omega^{2}\right)G(t,t')=\delta(t-t')$$

which can be shown to be

$$G(t,t')=\frac{1}{\omega_{1}}H(t-t')e^{-\gamma(t-t')}\sin[\omega_{1}(t-t')],$$

for $\omega_{1}=\sqrt{\omega^{2}-\gamma^{2}}$ (assuming the underdamped case $\omega>\gamma$). Once $G$ is determined, the mean and covariance can be immediately calculated through the diagrams and calculations of Figure 1.

Figure 1. Computation of the first and second cumulants of the harmonic oscillator driven by Gaussian white noise: diagrams (with internal vertices labeled adjacent to each diagram), and the equivalent integral to evaluate to obtain each cumulant.

We find, as expected, the following mean and covariance:

$$\begin{aligned} \langle x(t)\rangle & = \int\left[(v_{0}+2\gamma x_{0})\delta(t')+x_{0}\delta'(t')\right]G(t,t')dt'\\ & = (v_{0}+2\gamma x_{0})G(t,0)-x_{0}\,\partial_{t'}G(t,t')\big|_{t'=0}\\ & = e^{-\gamma t}\left(\frac{\gamma x_{0}+v_{0}}{\omega_{1}}\sin[\omega_{1}t]+x_{0}\cos[\omega_{1}t]\right)\end{aligned}$$

and (assuming $t_{1}<t_{2}$)

$$\begin{aligned} \langle x(t_{1})x(t_{2})\rangle_{C} & = D\int G(t_{1},t)G(t_{2},t)\,dt\\ & = \frac{D}{\omega_{1}^{2}}\int_{0}^{t_{1}}e^{-\gamma(t_{1}+t_{2}-2t)}\sin[\omega_{1}(t_{1}-t)]\sin[\omega_{1}(t_{2}-t)]\,dt\\ & = \frac{D\,e^{-\gamma(t_{1}+t_{2})}}{\omega_{1}^{2}}\left[\frac{\cos[\omega_{1}(t_{2}-t_{1})]}{4\gamma}\left(e^{2\gamma t_{1}}-1\right)\right.\\ & \qquad\left.-\frac{1}{4\omega^{2}}\left(e^{2\gamma t_{1}}\left\{\gamma\cos[\omega_{1}(t_{2}-t_{1})]-\omega_{1}\sin[\omega_{1}(t_{2}-t_{1})]\right\}-\gamma\cos[\omega_{1}(t_{1}+t_{2})]+\omega_{1}\sin[\omega_{1}(t_{1}+t_{2})]\right)\right]\end{aligned}$$

which, setting $t_{1}=t_{2}=t$ and rearranging, simplifies to the variance2:

$$\langle x(t)^{2}\rangle_{C}=\frac{D}{4\gamma\omega^{2}}\left[1-\exp(-2\gamma t)\left\{ 1+\frac{\gamma}{\omega_{1}}\left(\sin(2\omega_{1}t)+\frac{2\gamma}{\omega_{1}}\sin^{2}(\omega_{1}t)\right)\right\} \right].$$
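These results can likewise be checked by simulation, rewriting the oscillator as a first-order system in $(x,\dot{x})$. A hedged sketch (illustrative parameters; agreement holds only up to discretization and Monte Carlo error):

```python
import numpy as np

# Monte Carlo check of the variance formula above for the underdamped
# noise-driven oscillator (omega > gamma). Parameters are illustrative.
gamma, omega, D = 0.3, 2.0, 1.0
x0, v0 = 1.0, 0.0
dt, T, n_paths = 1e-3, 5.0, 20_000
omega1 = np.sqrt(omega**2 - gamma**2)

rng = np.random.default_rng(1)
x = np.full(n_paths, x0)
v = np.full(n_paths, v0)
for _ in np.arange(0.0, T, dt):
    # xddot = -2 gamma xdot - omega^2 x + sqrt(D) eta, as a 2D system
    x_new = x + v * dt
    v_new = v + (-2 * gamma * v - omega**2 * x) * dt \
              + np.sqrt(D * dt) * rng.standard_normal(n_paths)
    x, v = x_new, v_new

var_theory = D / (4 * gamma * omega**2) * (
    1 - np.exp(-2 * gamma * T) * (1 + gamma / omega1
        * (np.sin(2 * omega1 * T) + 2 * gamma / omega1 * np.sin(omega1 * T)**2)))
print(x.var(), var_theory)  # agree to within discretization and MC error
```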

Connection to Fokker-Planck Equation

So far we have considered the moment generating functional and the probability density functional $P[x(t)]$; often, however, it is the probability density $p(x,t)$ that is of interest. This can be computed within the above framework via the following derivation.

Let $U(x_{1},t_{1}|x_{0},t_{0})$ be the transition probability from a start point $(x_{0},t_{0})$ to $(x_{1},t_{1})$; then

$$\begin{aligned} U(x_{1},t_{1}|x_{0},t_{0}) & = \int\mathcal{D}x(t)\delta(x(t_{1})-x_{1})P[x(t)]\\ & = \frac{1}{2\pi i}\int d\lambda\int\mathcal{D}x(t)e^{\lambda(x(t_{1})-x_{1})}P[x(t)]\\ & = \frac{1}{2\pi i}\int d\lambda\, e^{-\lambda(x_{1}-x_{0})}Z_{CM}(\lambda)\end{aligned}$$

where $Z_{CM}$ gives the moments of $x(t_{1})-x_{0}$ given $x(t_{0})=x_{0}$

$$Z_{CM}=\int\mathcal{D}xe^{\lambda(x(t_{1})-x_{0})}P[x(t)]$$

Using the following two relations:

$$\begin{aligned} Z_{CM}(\lambda) & = 1+\sum_{n=1}^{\infty}\frac{\lambda^{n}}{n!}\langle(x(t_{1})-x_{0})^{n}\rangle_{x(t_{0})=x_{0}}\\ \frac{1}{2\pi i}\int d\lambda\, e^{-\lambda(x_{1}-x_{0})}\lambda^{n} & = \left(-\frac{\partial}{\partial x_{1}}\right)^{n}\delta(x_{1}-x_{0})\end{aligned}$$

then $U$ becomes

$$U(x_{1},t_{1}|x_{0},t_{0})=\left(1+\sum_{n=1}^{\infty}\frac{1}{n!}\left(-\frac{\partial}{\partial x_{1}}\right)^{n}\langle(x(t_{1})-x_{0})^{n}\rangle_{x(t_{0})=x_{0}}\right)\delta(x_{1}-x_{0}).$$

From here we can derive a relation for $p(x,t)$:

$$\begin{aligned} p(y,t+\Delta t) & = \int U(y,t+\Delta t|y',t)p(y',t)\, dy'\\ & = \int\left(1+\sum_{n=1}^{\infty}\frac{1}{n!}\left(-\frac{\partial}{\partial y}\right)^{n}\langle(x(t+\Delta t)-y')^{n}\rangle_{x(t)=y'}\right)\delta(y-y')p(y',t)\, dy'\\ & = \left(1+\sum_{n=1}^{\infty}\frac{1}{n!}\left(-\frac{\partial}{\partial y}\right)^{n}\langle(x(t+\Delta t)-y)^{n}\rangle_{x(t)=y}\right)p(y,t)\end{aligned}$$

and thus a PDE for $p(x,t)$:

$$\begin{aligned} \frac{\partial p(y,t)}{\partial t}\Delta t & = \sum_{n=1}^{\infty}\frac{1}{n!}\left(-\frac{\partial}{\partial y}\right)^{n}\langle(x(t+\Delta t)-y)^{n}\rangle_{x(t)=y}\,p(y,t)+O(\Delta t^{2})\\ \frac{\partial p(y,t)}{\partial t} & = \sum_{n=1}^{\infty}\frac{1}{n!}\left(-\frac{\partial}{\partial y}\right)^{n}D_{n}(y,t)p(y,t)\end{aligned}$$

as $\Delta t\to0$. This is the Kramers-Moyal expansion, where the $D_{n}$ are

$$D_{n}(y,t)=\lim_{\Delta t\to0}\left.\frac{\langle(x(t+\Delta t)-y)^{n}\rangle}{\Delta t}\right|_{x(t)=y}$$

and are computed from the SDE. For example, for the Ito process

$$dx=f(x,t)dt+g(x,t)dB_{t}$$

we can compute $D_{1}(y,t)=f(y,t)$ and $D_{2}(y,t)=g(y,t)^{2}$, with $D_{n}=0$ for $n>2$. Hence the PDE becomes a Fokker-Planck equation:

$$\frac{\partial p(y,t)}{\partial t}=\left(-\frac{\partial}{\partial y}D_{1}(y,t)+\frac{1}{2}\frac{\partial^{2}}{\partial y^{2}}D_{2}(y,t)\right)p(y,t)$$
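The defining limit for the $D_{n}$ also suggests a direct numerical estimate: simulate many short increments started from $x(t)=y$ and average. A minimal sketch for the OU process $dx=-ax\,dt+\sqrt{D}dB_{t}$ (parameters are illustrative):

```python
import numpy as np

# Estimate D1 and D2 numerically for the OU process dx = -a x dt + sqrt(D) dB
# by averaging many single-step increments started at x(t) = y.
# Parameters are illustrative.
a, D, y = 1.0, 0.5, 0.7
dt, n = 1e-4, 1_000_000

rng = np.random.default_rng(2)
dx = -a * y * dt + np.sqrt(D * dt) * rng.standard_normal(n)

D1 = dx.mean() / dt        # should approach f(y) = -a * y
D2 = (dx**2).mean() / dt   # should approach g(y)^2 = D as dt -> 0
print(D1, -a * y)
print(D2, D)
```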

We can then compute the density $p(x,t)=U(x,t|y,t_{0})$ as

$$\begin{aligned} p(x,t) & = \frac{1}{2\pi i}\int d\lambda\, e^{-\lambda x}Z_{CM}(\lambda)\\ & = \frac{1}{2\pi i}\int d\lambda\, e^{-\lambda x}\exp\left[\sum_{n=1}^{\infty}\frac{\lambda^{n}}{n!}\langle x(t)^{n}\rangle_{C}\right]\end{aligned}$$

For the OU process we know the cumulants (only the first two are non-zero, since the process is Gaussian), hence

$$p(x,t)=\sqrt{\frac{a}{\pi D(1-e^{-2a(t-t_{0})})}}\exp\left(\frac{-a(x-ye^{-a(t-t_{0})})^{2}}{D(1-e^{-2a(t-t_{0})})}\right)$$

Statistical mechanics of the neocortex

Having spent some time on how path integrals can be used as calculation devices for studying stochastic DEs, we now turn to some specific examples of their use in neuroscience.

Neural field models

A neural field model is a continuum approximation to neural activity (particularly in models of cortex). Such models are often expressed as integro-differential equations:

$$dU=\left[-U+\int_{-\infty}^{\infty}w(x-y)F(U(y,t))dy\right]dt$$

where $U=U(x,t)$ may be either the mean firing rate or a measure of synaptic input at position $x$ and time $t$. The function $w(x,y)=w(|x-y|)$ is a weighting function, often taken to represent the synaptic weight as a function of distance from $x$, and $F(U)$ gives the firing rate as a function of input. For tractability, $F$ is often taken to be a Heaviside function or a sigmoid. It is called a field model because each continuous point $x$ is assigned a value $U$, instead of modelling the activity of individual neurons. A number of spatio-temporal pattern-forming systems may be studied in the context of these models: the formation of ocular-dominance columns, geometric visual hallucinations, persistent ‘bumps’ of activity associated with working memory, and perceptual switching in optical illusions can all be modelled by such a theory. Refer to Bressloff 2012 for a comprehensive review.
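To make the model concrete, here is a minimal numerical sketch that integrates the deterministic field equation on a grid. The Gaussian weight function, the sigmoid $F$, and all parameter values are our own illustrative assumptions:

```python
import numpy as np

# Integrate the deterministic neural field equation
#   dU/dt = -U + \int w(x - y) F(U(y, t)) dy
# on a 1D grid, with a Gaussian weight function and a sigmoid F.
# All parameter values here are illustrative assumptions.
L_dom, n_x, dt, T = 20.0, 256, 0.01, 10.0
x = np.linspace(-L_dom / 2, L_dom / 2, n_x)
dx_grid = x[1] - x[0]

def w(d, sigma=1.0):
    return np.exp(-d**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def F(u, beta=4.0, theta=0.5):
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

W = w(x[:, None] - x[None, :])   # w(x - y) sampled on the grid
U = np.exp(-x**2)                # an initial bump of activity

for _ in np.arange(0.0, T, dt):
    U = U + dt * (-U + dx_grid * (W @ F(U)))  # Riemann sum for the integral

print(U.min(), U.max())  # persistent bump or decay, depending on parameters
```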

The addition of additive noise to the above model:

$$dU=\left[-U+\int_{-\infty}^{\infty}w(x-y)F(U(y,t))dy\right]dt+g(U)dW(x,t)$$

for $dW(x,t)$ a white noise process, has been studied by Bressloff from both a path integral approach and via a perturbation expansion of the resulting master equation more directly. We describe briefly how the path integral approach is formulated and the results that can be computed with it. More details can be found in Bressloff 2009.

As above, this is discretized in both time and space to give:

$$U_{i+1,m}-U_{i,m}=\left[-U_{i,m}+\Delta d\sum_{n}w_{mn}F(U_{i,n})\right]\Delta t+\frac{\sqrt{\Delta t}}{\sqrt{\Delta d}}g(U_{i,m})dW_{i,m}+\Phi_{m}\delta_{i,0}$$

for initial condition $\Phi(x)=U(x,0)$, where each noise increment is a zero-mean, delta-correlated process:

$$\langle dW_{i,m}\rangle=0,\quad\langle dW_{i,m}dW_{j,n}\rangle=\delta_{i,j}\delta_{m,n}.$$
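Read as an update rule, the discretization above is simply an Euler-Maruyama step in time and space. A minimal sketch of one such step (with `W` the weight matrix $w_{mn}$, `dd` the spatial step $\Delta d$, and additive noise $g\equiv1$ by default; the function name is our own):

```python
import numpy as np

def field_step(U, W, F, dt, dd, rng, g=lambda u: 1.0):
    """One Euler-Maruyama step of the discretized stochastic neural field:
    drift -U + dd * sum_n w_mn F(U_n), noise scaled by sqrt(dt / dd)."""
    drift = dt * (-U + dd * (W @ F(U)))
    noise = np.sqrt(dt / dd) * g(U) * rng.standard_normal(U.shape)
    return U + drift + noise
```

Iterating this map from $U=\Phi$ generates realizations whose statistics the path-integral construction below characterizes.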

Let $U$ and $W$ represent vectors with components $U_{i,m}$ and $W_{i,m}$ such that we can write down the probability density function conditioned on a particular realization of $W$:

$$P(U|W)=\prod_{m}\prod_{i=1}^{N}\delta\left(U_{i+1,m}-U_{i,m}+\left[U_{i,m}-\Delta d\sum_{n}w_{mn}F(U_{i,n})\right]\Delta t-\frac{\sqrt{\Delta t}}{\sqrt{\Delta d}}g(U_{i,m})dW_{i,m}-\Phi_{m}\delta_{i,0}\right)$$

where we again use the Fourier representation of the delta function:

$$P(U|W)=\int\prod_{m}\prod_{i=1}^{N}\frac{d\tilde{U}_{i,m}}{2\pi}\exp\left(-i\sum_{i,m}\tilde{U}_{i,m}\left(U_{i+1,m}-U_{i,m}+\left[U_{i,m}-\Delta d\sum_{n}w_{mn}F(U_{i,n})\right]\Delta t-\frac{\sqrt{\Delta t}}{\sqrt{\Delta d}}g(U_{i,m})dW_{i,m}-\Phi_{m}\delta_{i,0}\right)\right).$$

Knowing the density for the random vector $W$ we can write the probability of a vector $U$:

$$P(U)=\int\prod_{m}\prod_{i=1}^{N}\frac{d\tilde{U}_{i,m}}{2\pi}\exp\left(-i\sum_{i,m}\tilde{U}_{i,m}\left(U_{i+1,m}-U_{i,m}+\left[U_{i,m}-\Delta d\sum_{n}w_{mn}F(U_{i,n})\right]\Delta t+\frac{\Delta t}{2\Delta d}g^{2}(U_{i,m})\tilde{U}_{i,m}-\Phi_{m}\delta_{i,0}\right)\right).$$

Taking the continuum limit gives the density:

$$P[U]=\int\mathcal{D}\tilde{U}e^{-S[U,\tilde{U}]},$$

for action

$$S[U,\tilde{U}]=\int dx\int_{0}^{T}dt\left(\tilde{U}\left[U_{t}(x,t)+U(x,t)-\int w(x-y)F(U(y,t))dy-\Phi(x)\delta(t)\right]-\frac{1}{2}\tilde{U}^{2}g^{2}(U(x,t))\right).$$

Given the action, the moment generating functional and propagator can be defined as previously. In linear cases the moments can be computed exactly.

The weak-noise expansion

Suppose the noise term is scaled by a small parameter, $g(U)\to\sigma g(U)$ with $\sigma\ll1$. (For instance, in the case of a Langevin approximation to a master equation, one has $\sigma^{2}\approx1/N$ for $N$ the number of neurons.) Rescaling the variables $\tilde{U}\to\tilde{U}/\sigma^{2}$ and $\tilde{J}\to\tilde{J}/\sigma^{2}$, the generating functional becomes:

$$Z=\int\mathcal{D}U\mathcal{D}\tilde{U}e^{-\frac{1}{\sigma^{2}}S[U,\tilde{U}]}e^{\frac{1}{\sigma^{2}}\int dx\int_{0}^{T}dt[\tilde{U}J+\tilde{J}U]},$$

which can be treated with the loop expansion described above. Expanding in powers of $\sigma$ then yields a ‘semi-classical’ expansion. The corrections to the deterministic equations take the form

$$\frac{\partial v}{\partial t}=-v(x,t)+\int w(x-y)F(v(y,t))dy+\frac{\sigma^{2}}{2}\int w(x-y)C(x,y,t)F''(v(y,t))dy+O(\sigma^{4})$$

for $C(x,y,t)$ the second-order cumulant (covariance) function. The expression for $C(x,y,t)$ is derived and studied in more detail in Buice et al 2010.

Mean-field Wilson-Cowan equations and corrections

Another approach using path integrals has been extensively studied by Buice and Cowan (Buice and Cowan 2007; see also Bressloff 2009). Here, we envision a network of neurons which exist in one of two or three states, depending on the time scales of interest relative to the time scales of the neurons being studied. Each neuron in the network is modeled as a Markov process which transitions between active and quiescent states (and a refractory state, where relevant).

For the two state model, assume that each neuron in the network creates spikes, and that these spikes have an impact on the network dynamics for an exponentially distributed time with decay rate $\alpha$. Let $n_{i}$ denote the number of ‘active’ spikes at a given time for neuron $i$ and let $\mathbf{n}$ denote the state of all neurons at a given time. We assume that the input from the network to neuron $i$ enters through the firing-rate function

$$f\left(\sum_{j}w_{ij}n_{j}+I\right)$$

for some firing rate function $f$ and some external input $I$. Then the master equation for the state of the system is:

$$\frac{dP(\mathbf{n},t)}{dt}=\sum_{i}\left\{\alpha(n_{i}+1)P(\mathbf{n}_{i+},t)-\alpha n_{i}P(\mathbf{n},t)+f\left(\sum_{j}w_{ij}n_{j}+I\right)\left[P(\mathbf{n}_{i-},t)-P(\mathbf{n},t)\right]\right\}$$

where we denote by \(\mathbf{n}_{i\pm}\) the state of the network \(\mathbf{n}\) with one more (or one fewer) active spike at neuron $i$. The assumption is made that each neuron is identical and that the weight function satisfies \(w_{ij}=w_{|i-j|}\); that is, it depends only on the distance between the two neurons. Of interest is the mean activity of neuron $i$:

$$a_{i}(t)=\langle n_{i}(t)\rangle.$$
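Before turning to the field theory, note that the two-state dynamics can be simulated exactly with the Gillespie algorithm, which provides a useful reference point for the approximations below. A sketch under illustrative assumptions (the rate function and weights are our own choices, not those of Buice and Cowan):

```python
import numpy as np

# Exact (Gillespie) simulation of the two-state model: spikes at neuron i
# decay at rate alpha * n_i and are created at rate f(sum_j w_ij n_j + I).
# The rate function, weights, and parameters are illustrative choices.
rng = np.random.default_rng(3)
N, alpha, I = 50, 0.9, 0.1
w = 0.05 * np.exp(-np.abs(np.subtract.outer(np.arange(N), np.arange(N))) / 5.0)
f = lambda s: np.log1p(np.exp(s))  # a smooth, positive rate function

n = np.zeros(N)
t, T = 0.0, 50.0
while t < T:
    decay = alpha * n                        # rates for n_i -> n_i - 1
    create = f(w @ n + I)                    # rates for n_i -> n_i + 1
    rates = np.concatenate([decay, create])
    total = rates.sum()
    t += rng.exponential(1.0 / total)        # waiting time to the next event
    k = rng.choice(2 * N, p=rates / total)   # which event fires
    n[k % N] += -1.0 if k < N else 1.0

print(n.mean())  # a crude estimate of the mean activity at time T
```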

Using an operator representation it is possible to derive a stochastic field theory in the continuum limit ($N\to\infty$ and $n_{i}(t)\to n(x,t)$) which gives the moments of $n_{i}(t)$ in terms of the interaction between two fields $\varphi(x,t)$ and $\tilde{\varphi}(x,t)$. The details of the derivation are contained in the Appendices of Buice and Cowan 2007. These fields can be related to quantities of interest through

$$a(x,t)=\langle n(x,t)\rangle=\langle\varphi(x,t)\rangle$$

and

$$\langle n(x_{1},t_{1})n(x_{2},t_{2})\rangle=\langle\varphi(x_{1},t_{1})\varphi(x_{2},t_{2})\rangle+\langle\varphi(x_{1},t_{1})\tilde{\varphi}(x_{2},t_{2})\rangle a(x_{2},t_{2})$$

for $t_{1}>t_{2}$. The propagator, as before, is

$$G(x_{1},t_{1};x_{2},t_{2})=\langle\varphi(x_{1},t_{1})\tilde{\varphi}(x_{2},t_{2})\rangle$$

and the generating function is given by

$$Z[J,\tilde{J}]=\int\mathcal{D}\varphi\mathcal{D}\tilde{\varphi}e^{-S[\varphi,\tilde{\varphi}]+J\tilde{\varphi}+\tilde{J}\varphi}.$$

For the master equation above the action is given by:

$$S[\varphi,\tilde{\varphi}]=\int dx\int_{0}^{T}dt\left(\tilde{\varphi}\partial_{t}\varphi+\alpha\tilde{\varphi}\varphi-\tilde{\varphi}f\left(w\star[\tilde{\varphi}\varphi+\varphi]+I\right)\right)-\int dx\,\bar{n}(x)\tilde{\varphi}(x,0),$$

where $\star$ denotes spatial convolution and $\bar{n}(x)$ is the initial condition. This now allows the action to be divided into free and interacting parts and for perturbation expansions to be performed.

The loop expansion provides a useful ordering here and, as described above, amounts to ordering Feynman diagrams by the number of loops contained in them. The zeroth order, mean-field theory corresponds to Feynman diagrams containing zero loops; such diagrams are called tree diagrams. It can be shown that the dynamics at this order obey \(\partial_{t}a_{0}(x,t)+\alpha a_{0}(x,t)-f(w\star a_{0}(x,t)+I)=0\), which is a simple form of the well-known Wilson-Cowan equations. That these equations are recovered as the zeroth-order expansion of the continuum limit of the master equation gives confidence that the higher-order terms of the expansion will indeed correspond to relevant dynamics. The ‘one-loop’ correction is given by:

$$\partial_{t}a_{1}(x,t)+\alpha a_{1}(x,t)-f(w\star a_{1}(x,t)+I)+h\mathcal{N}(a_{1},\Delta)=0$$

for

$$\mathcal{N}(a,\Delta)=\int dx_{1}dx_{2}dx'dt'dx''f^{(2)}(x,t)w(x-x_{1})w(x-x_{2})f^{(1)}(x',t')w(x'-x'')\Delta(x_{1}-x',t-t')\Delta(x_{2}-x'',t-t')a(x'',t')$$

where $\Delta$ is the ‘tree-level’ propagator.
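At tree level the dynamics reduce to a rate equation that is easy to integrate numerically. A minimal sketch for a spatially homogeneous state, where the convolution reduces to multiplication by $w_{0}=\int w(x)\,dx$ (the sigmoid and all parameters are illustrative assumptions):

```python
import numpy as np

# Mean-field (tree-level) Wilson-Cowan dynamics for a homogeneous state:
#   da/dt = -alpha * a + f(w0 * a + I)
# The long-time solution should satisfy the fixed-point condition
#   alpha * a_star = f(w0 * a_star + I).
alpha, w0, I = 1.0, 2.0, 0.0
f = lambda s: 1.0 / (1.0 + np.exp(-4.0 * (s - 1.0)))  # illustrative sigmoid

a, dt = 0.5, 1e-3
for _ in range(int(50.0 / dt)):
    a += dt * (-alpha * a + f(w0 * a + I))

print(a, f(w0 * a + I) / alpha)  # equal at a stable fixed point
```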

Summary

We have described how path integrals can be used to compute moments and densities of a stochastic differential equation, and how they can be used to perform perturbation expansions around ‘mean-field’, or classical, solutions.

Path integral methods, once one is accustomed to their use, can provide a quick and intuitive way of solving particular problems. However, it is worth highlighting that there are few examples of problems which can be solved with path integral methods but not with other, perhaps more standard, methods. Thus, while they are a powerful and general tool, their utility is often countered by the fact that, for many problems, simpler solution techniques exist.

Further, it should be highlighted that the path integral as it was defined here – as a limit of finite-dimensional integration ($\int\prod_{i}^{N}dx_{i}\to\int\mathcal{D}x(t)$) – does not result in a valid measure. In some cases the Wiener measure may equivalently be used, but in other cases the path integral as formulated by Feynman remains a mathematically unjustified entity.

With these caveats in mind, their principal benefit, then, may instead come from the intuition that they bring to novel mathematical and physical problems. When unsure how to proceed, having as many different ways of approaching a problem as possible can only be beneficial. Indeed, in 1965 Feynman said in his Nobel acceptance lecture: “Theories of the known, which are described by different physical ideas may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one’s attempt to understand what is not yet understood.”

  1. e.g. in QED this coupling is related to the charge of the electron ($e$): $\alpha\approx1/137$, the fine-structure constant. 

  2. The expressions for the mean and variance I was able to verify (pp. 83-85, Gitterman 2005). The expression for the covariance I was unable to locate in another source to verify; that it reduces to the correct variance is encouraging, however. 
