Path integrals and SDEs in neuroscience

An introduction to the relation between path integrals and stochastic differential equations, and how to use Feynman diagrams.

Introduction

The path integral was first considered by Wiener in the 1920s in his study of diffusion and Brownian motion. It was later co-opted by Dirac and by Richard Feynman in Lagrangian formulations of quantum mechanics. Path integrals provide a quite general and powerful approach to tackling problems not just in quantum field theory but in stochastic differential equations more generally. There is an associated learning curve to making use of path integral methods, however, and for many problems simpler solution techniques exist.

Nonetheless, it is interesting to think about their application to neuroscience. In the following two posts I will describe how path integrals can be defined and used to solve simple SDEs, and why such ‘statistical mechanics’ tools may be useful in studying the brain. The material largely follows the paper “Path Integral Methods for Stochastic Differential Equations” by Carson Chow and Michael Buice (2012). This post assumes some familiarity with probability and stochastic processes.

Statistical mechanics in the brain

The cerebral cortex is the outermost layer of the mammalian brain. In a human brain the *neocortex* consists of approximately 30 billion neurons. Of all parts of the human brain, its neural activity is the most correlated with our *higher-order* behaviour: language, self-control, learning, attention, memory, planning. Lesion and stroke studies make clear that the cortex has significant functional localization. Despite this localization, however, individual neurons from different regions of cortex are in general difficult to distinguish without expert training – the differences in functionality appear to arise largely from differences in connectivity.

The interplay between the structural homogeneity and functional heterogeneity of different cortical regions poses significant challenges to obtaining quantitative models of the large-scale activity of the cortex. Since structured neural activity is observed on spatial scales involving thousands to billions of neurons, and since this activity is associated with particular functions and pathologies, dynamical models of large-scale cortical networks are necessary to an understanding of these functions and dysfunctions. Examples of large-scale activity include wave-like activity during development, bump attractor models of working memory, avalanches in awake and sleeping states, and the pathological oscillations responsible for epileptic seizures.

A particular challenge in building such models is noise: significant neural variability, at both the individual and the population level, is observed in response to repeated stimuli. The spike trains of individual cortical neurons are in general very noisy, such that their firing is often well approximated by a Poisson process. The primary source of cell-intrinsic noise is fluctuation in ion channel activity, which arises from the finite number of ion channels opening and closing, while the primary source of extrinsic noise is uncorrelated synaptic input – a neuron may have thousands of synapses whose inputs often carry no meaningful, correlated signal. Population responses are similarly highly variable. Models of cortical networks must account for this variability, or demonstrate that it is irrelevant to the particular questions being asked.

Methods from statistical mechanics lend themselves well to modelling both of these factors – statistical, but meaningful, connectivity, and noisy, but meaningful, neural responses to stimuli – in networks with large numbers of neurons. With these thoughts in mind, let’s see how path integrals may be used to study SDEs relevant to tackling the above issues.

A path integral representation of stochastic differential equations

We will begin by describing in some detail how path integral representations are constructed and manipulated. In general, we would like to study SDEs of the form:

$$\frac{d\mathbf{x}}{dt}=\mathbf{f}(\mathbf{x})+\mathbf{g}(\mathbf{x})\mathbf{\eta}(t)$$

for some noise process $\eta(t)$. Such a process may be characterized either by its probability density function (pdf, $p(x,t)$) or, equivalently, by its moment hierarchy

$$\langle x(t)\rangle,\quad\langle x(t)x(t')\rangle,\dots$$

A generic SDE in the above form may be studied as a Langevin equation, or can be rewritten as a Fokker-Planck equation, but perturbation methods in either of these forms may be difficult to apply. The path integral approach provides more mechanical methods for performing particular types of perturbation expansions. In the following sections we will derive a path integral formulation of the moment generating functional of an SDE, using the Ornstein-Uhlenbeck process as an example. This will be used to demonstrate perturbation techniques using Feynman diagrams. We will also derive the pdf $p(x,t)$ of such a process.

Path integrals

A path integral, loosely, is an integral in which the domain of integration is not a subset of a finite dimensional space (say $\mathbb{R}^{n}$) but instead an infinite dimensional function space. For instance, if we can define the probability density associated with a particular realization of a random trajectory according to a given SDE, then the probability that a particle travels from a point $\mathbf{a}$ to a point $\mathbf{b}$ can be computed by marginalizing (summing) over all paths connecting these two points, subject to a suitable normalization. Before taking this further it is useful to review some relevant concepts.

Moment generating functions

The moment generating function (MGF) forms a crucial component of this framework. Recall that for a single random variable $X$, the moments ($\langle X^{n}\rangle=\int x^{n}P(x)\, dx$) are obtained from the MGF

$$Z(\lambda)=\langle e^{\lambda x}\rangle=\int e^{\lambda x}P(x)\, dx$$

by taking derivatives

$$\langle X^{n}\rangle=\left.\frac{1}{Z(0)}\frac{d^{n}}{d\lambda^{n}}Z(\lambda)\right|_{\lambda=0},$$

and that the MGF contains all information about RV $X$, as an alternative to studying the pdf directly.

In a similar fashion we can define $W(\lambda)=\log Z(\lambda)$, so that

$$\langle X^{n}\rangle_{C}=\frac{d^{n}}{d\lambda^{n}}\left.W(\lambda)\right|_{\lambda=0}$$

are the cumulants of RV $X$.
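
As a concrete illustration, here is a small sympy sketch (my own, not from the paper) recovering moments and cumulants by differentiating an MGF. I use a Poisson random variable with rate $\mu$, whose MGF is $Z(\lambda)=\exp(\mu(e^{\lambda}-1))$ and whose cumulants all equal $\mu$:

```python
# A minimal sympy check of the moment/cumulant relations, using the
# Poisson MGF Z(lambda) = exp(mu (e^lambda - 1)) as an example.
import sympy as sp

lam, mu = sp.symbols("lam mu", real=True)
Z = sp.exp(mu * (sp.exp(lam) - 1))   # already normalized: Z(0) = 1
W = sp.log(Z)                        # cumulant generating function

for n in range(1, 5):
    moment = sp.diff(Z, lam, n).subs(lam, 0)
    cumulant = sp.diff(W, lam, n).subs(lam, 0)
    print(n, sp.expand(moment), sp.simplify(cumulant))
# every cumulant of a Poisson RV equals mu; the moments do not
```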

For an $n$-dimensional random variable $\mathbf{x}=(x_{1},\dots,x_{n})$, the generating function is

$$Z(\mathbf{\lambda})=\langle e^{\mathbf{\lambda}\cdot\mathbf{x}}\rangle=\int\prod_{i=1}^{n}dx_{i}e^{\mathbf{\lambda}\cdot\mathbf{x}}P(\mathbf{x})$$

for $\lambda=(\lambda_{1},\dots,\lambda_{n})$. Here, the $k$-th order moments are obtained via

$$\left\langle \prod_{i=1}^{k}x_{(i)}\right\rangle =\left.\frac{1}{Z(0)}\prod_{i=1}^{k}\frac{\partial}{\partial\lambda_{(i)}}Z(\lambda)\right|_{\lambda=0}.$$

And, as before, the cumulant generating function is $W(\lambda)=\log Z(\lambda)$.

Stochastic processes

Instead of considering random variables in $n$ dimensions, we can consider ‘infinite dimensional’ random variables through a time-slicing limiting process. That is, we identify with each $x_{i}$ in $\mathbf{x}$ a time $t=ih$ such that $x_{i}=x(ih)$, and we let the total time be $T=nh$, thereby splitting the interval $[0,T]$ into $n$ segments of length $h$. From here, leaving any questions of convergence aside for the time being, we can take the limit $n\to\infty$ (with $h=T/n$) such that $x_{i}\to x(ih)=x(t)$, $\lambda_{i}\to\lambda(t)$ and $P(\mathbf{x})\to P[x(t)]=\exp(-S[x(t)])$ for some functional $S[x]$ that we will call the action. Thus, to compute the MGF, instead of summing over all points in $\mathbb{R}^{n}$ $\left(\int\prod_{i=1}^{n}dx_{i}\right)$, we sum over all paths, using a measure denoted $\int\mathcal{D}x(t)$:

$$Z[\lambda]=\int\mathcal{D}x(t)\, e^{-S[x]+\int\lambda(t)x(t)\, dt}.$$

From this formula, moments can now be obtained via

$$\left\langle \prod_{i=1}^{k}x(t_{(i)})\right\rangle =\frac{1}{Z[0]}\left.\prod_{i=1}^{k}\frac{\delta}{\delta\lambda(t_{(i)})}Z[\lambda]\right|_{\lambda(t)=0},$$

with the cumulant generating functional again being

$$W[\lambda]=\log(Z[\lambda]).$$
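
To make the time-slicing picture concrete, here is a small numerical sketch (again my own, not from the paper): a discretized Brownian motion is just an $n$-dimensional Gaussian random variable, and its second moments approach the Wiener kernel $K(s,t)=\min(s,t)$ as the slicing becomes fine:

```python
# Discretized Brownian motion as an n-dimensional Gaussian whose
# covariance matrix approximates the Wiener kernel K(s,t) = min(s,t).
import numpy as np

rng = np.random.default_rng(2)
T, n, n_paths = 1.0, 100, 50_000
h = T / n
w = rng.standard_normal((n_paths, n)) * np.sqrt(h)  # i.i.d. increments
x = np.cumsum(w, axis=1)                            # x[:, i] = x((i + 1) h)

s_idx, t_idx = n // 4 - 1, n // 2 - 1               # times s = 0.25, t = 0.5
print(np.mean(x[:, s_idx] * x[:, t_idx]))           # ~ min(s, t) = 0.25
```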

Generic Gaussian processes

The most important random process we consider is the Gaussian. Recall that in one dimension the RV $X\sim N(a,\sigma^{2})$ has MGF

$$Z(\lambda)=\int_{-\infty}^{\infty}\exp\left[\frac{-(x-a)^{2}}{2\sigma^{2}}+\lambda x\right]\, dx=\sqrt{2\pi}\sigma\exp(\lambda a+\lambda^{2}\sigma^{2}/2),$$

which is obtained by a ‘completing the square’ manipulation, and has cumulant GF

$$W(\lambda)=\lambda a+\frac{1}{2}\lambda^{2}\sigma^{2}+\log(Z(0)),$$

so that the cumulants are $\langle x\rangle_{C}=a$, $\langle x^{2}\rangle_{C}=\text{var}(X)=\sigma^{2}$ and $\langle x^{k}\rangle_{C}=0$ for all $k>2$.

The $n$ dimensional Gaussian RV $X\sim N(0,K)$, with covariance matrix $K$, has MGF

$$Z(\lambda)=\int_{\mathbb{R}^{n}}e^{-\frac{1}{2}\sum_{jk}x_{j}K_{jk}^{-1}x_{k}+\sum_{j}\lambda_{j}x_{j}}\, d^{n}x$$

This integral can also be evaluated exactly. Indeed, since $K$ is symmetric positive definite (and so is $K^{-1}$), we can diagonalise in orthonormal coordinates, making each dimension independent and allowing the integration to be performed one dimension at a time. This gives $Z(\lambda)=(2\pi)^{n/2}\det(K)^{1/2}e^{\frac{1}{2}\sum_{jk}\lambda_{j}K_{jk}\lambda_{k}}.$ In an analogous fashion, through the same limiting process described above, the infinite dimensional case is

$$Z[\lambda]=\int\mathcal{D}x(t)e^{-\frac{1}{2}\int x(s)K^{-1}(s,t)x(t)dsdt+\int\lambda(t)x(t)dt}=Z[0]e^{\frac{1}{2}\int\lambda(s)K(s,t)\lambda(t)dsdt}.$$
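
The finite-dimensional formula is easy to check numerically. Below is a quick Monte Carlo sketch (my own) comparing the normalized MGF $\langle e^{\lambda\cdot x}\rangle$ of samples from $N(0,K)$ with the closed form $e^{\frac{1}{2}\lambda^{\top}K\lambda}$, for an arbitrarily chosen $K$ and $\lambda$:

```python
# Monte Carlo check that Z(lambda)/Z(0) = exp(lambda^T K lambda / 2)
# for a two-dimensional zero-mean Gaussian with covariance K.
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[1.0, 0.3], [0.3, 2.0]])
lam = np.array([0.2, -0.1])

x = rng.multivariate_normal(np.zeros(2), K, size=1_000_000)
print(np.mean(np.exp(x @ lam)))     # Monte Carlo estimate of <exp(lambda . x)>
print(np.exp(0.5 * lam @ K @ lam))  # closed form, ~1.0243
```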

Importantly for perturbation techniques, higher order (centered) moments of multivariate Gaussian random variables can be expressed simply as a sum of products of their second moments. This result is known as Wick’s theorem:

$$ \left\langle \prod_{i=1}^{k} x_{(i)} \right\rangle = \begin{cases} 0, & k\text{ odd}\\ \sum_{\sigma\in A}K_{\sigma(1)\sigma(2)}K_{\sigma(3)\sigma(4)}\cdots K_{\sigma(k-1)\sigma(k)}, & k\text{ even}\end{cases} $$

for $A=\{\text{all pairings of the }x_{(i)}\}$; only even moments are non-zero. Note that this means the covariance matrix $K$ determines all higher order moments. Wick’s theorem lies at the heart of calculations utilizing Feynman diagrams.
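
Wick’s theorem can also be implemented directly by enumerating pairings. The helper functions below are my own sketch (not from the paper); they sum the products of covariances over all pairings and compare the result with a Monte Carlo estimate of a fourth moment:

```python
# Wick's theorem for a zero-mean Gaussian vector: sum the product of
# covariances over all pairings of the requested indices.
import numpy as np

def pairings(idx):
    """Yield every way of partitioning the list idx into unordered pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def wick(idx, K):
    if len(idx) % 2:        # odd moments of a zero-mean Gaussian vanish
        return 0.0
    return sum(np.prod([K[a, b] for a, b in p]) for p in pairings(list(idx)))

rng = np.random.default_rng(0)
K = np.array([[2.0, 0.5], [0.5, 1.0]])
x = rng.multivariate_normal([0, 0], K, size=1_000_000)

# fourth moment <x0 x0 x1 x1>: Wick gives K00 K11 + 2 K01^2 = 2.5
print(wick((0, 0, 1, 1), K))                 # exact pairing sum: 2.5
print(np.mean(x[:, 0] ** 2 * x[:, 1] ** 2))  # Monte Carlo, ~2.5
```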

Applications to SDEs

The preceding construction for generic Gaussian processes can be adapted to obtain a moment generating functional for generic SDEs of the form

$$\frac{dx}{dt}=f(x,t)+g(x)\eta(t)+y\delta(t-t_{0}),$$

for $t\in[0,T]$, where the term $y\delta(t-t_{0})$ sets the initial condition. The process involves the same time-slicing approach, in which the above SDE is discretized in time steps $h$

$$x_{i+1}-x_{i}=f_{i}(x_{i})h+g_{i}(x_{i})w_{i}\sqrt{h}+y\delta_{i,0}$$

under the Ito interpretation. We assume that each $w_{i}$ is Gaussian with $\langle w_{i}\rangle=0$ and $\langle w_{i}w_{j}\rangle=\delta_{ij}$, such that the $w_{i}$ describe a Gaussian white noise process. Then the pdf of $\mathbf{x}$ given a particular instantiation of the random walk $\{w_{i}\}$ is

$$P[x|w;y]=\prod_{i=0}^{N}\delta[x_{i+1}-x_{i}-f_{i}(x_{i})h-g_{i}(x_{i})w_{i}\sqrt{h}-y\delta_{i,0}].$$
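
Note that the time-sliced equation above is precisely the Euler-Maruyama scheme for simulating the SDE. Here is a minimal sketch (the function name and parameters are my own choices), which we will reuse later to check results for the OU process:

```python
# The time-sliced SDE is the Euler-Maruyama scheme: simulate many
# realizations of dx = f(x) dt + g(x) dW with an initial kick y.
import numpy as np

def euler_maruyama(f, g, y, T=5.0, h=0.01, n_paths=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / h)
    x = np.full(n_paths, y, dtype=float)   # the y delta(t - t0) kick, t0 = 0
    traj = np.empty((n + 1, n_paths))
    traj[0] = x
    for i in range(n):
        w = rng.standard_normal(n_paths)   # <w_i> = 0, <w_i w_j> = delta_ij
        x = x + f(x) * h + g(x) * w * np.sqrt(h)
        traj[i + 1] = x
    return traj
```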

Writing each delta function in terms of its Fourier representation, the pdf becomes:

$$P[x|w;y]=\int\prod_{j=0}^{N}\frac{dk_{j}}{2\pi}e^{-i\sum_{j}k_{j}(x_{j+1}-x_{j}-f_{j}(x_{j})h-g_{j}(x_{j})w_{j}\sqrt{h}-y\delta_{j,0})}$$

where we’ve used the integral representation of the Dirac delta function:

$$\delta(z)=\int_{-\infty}^{\infty}\frac{dk}{2\pi}\, e^{-ikz}.$$

Marginalizing over the random variables $\{w_{j}\}$ and evaluating the resulting Gaussian integral gives:

$$P[x|y]=\int\prod_{j=0}^{N}\frac{dk_{j}}{2\pi}e^{-\sum_{j}(ik_{j})\left(\frac{x_{j+1}-x_{j}}{h}-f_{j}(x_{j})-y\delta_{j,0}/h\right)h+\sum_{j}\frac{1}{2}g_{j}^{2}(x_{j})(ik_{j})^{2}h}$$

Again we take the continuum limit by letting $h\to0$ with $N=T/h$, and by replacing $ik_{j}$ with $\tilde{x}(t)$ and ${\displaystyle \frac{x_{j+1}-x_{j}}{h}}$ with $\dot{x}(t)$:

$$P[x(t)|y,t_{0}]=\int\mathcal{D}\tilde{x}(t)e^{-\int[\tilde{x}(t)(\dot{x}(t)-f(x(t),t)-y\delta(t-t_{0}))-\frac{1}{2}\tilde{x}^{2}g^{2}(x(t),t)]dt}.$$

The function $\tilde{x}(t)$ is the continuum limit of the wave numbers $k_{j}$, and is often called the response or auxiliary field. We can thus write down a moment generating functional with source terms for both $x$ and its conjugate field $\tilde{x}$:

$$Z[J,\tilde{J}]=\int\mathcal{D}x(t)\mathcal{D}\tilde{x}(t)e^{-S[x,\tilde{x}]+\int\tilde{J}x\, dt+\int J\tilde{x}\, dt}$$

More generally, instead of $g(x)\eta(t)$ with $\eta(t)$ a white noise process, an SDE driven by a noise process with cumulant generating functional $W[\lambda(t)]$ will have the pdf:

$$\begin{aligned} P[x(t)|y,t_{0}] & = \int\mathcal{D}\eta(t)\,\delta[\dot{x}(t)-f(x,t)-\eta(t)-y\delta(t-t_{0})]e^{-S[\eta(t)]}\\ & = \int\mathcal{D}\tilde{x}(t)\, e^{-\int\tilde{x}(t)(\dot{x}(t)-f(x,t)-y\delta(t-t_{0}))\, dt+W[\tilde{x}(t)]}\end{aligned}$$

If $\eta(t)$ is delta-correlated ($\langle\eta(t)\eta(t')\rangle=\delta(t-t')$) then $W[\tilde{x}(t)]$ can be Taylor expanded in both $x(t)$ and $\tilde{x}(t)$:

$$W[\tilde{x}(t)]=\sum_{n=1,m=0}^{\infty}\frac{v_{nm}}{n!}\int\tilde{x}^{n}(t)x^{m}(t)\, dt.$$

Note that the summation over $n$ starts at one because $W[0]=\log(Z[0])=0$.

The Ornstein-Uhlenbeck process

As an example, consider the Ornstein-Uhlenbeck process

$$\dot{x}(t)+ax(t)-\sqrt{D}\eta(t)=0$$

which has the action

$$S[x,\tilde{x}]=\int\left[\tilde{x}(t)(\dot{x}(t)+ax(t)-y\delta(t-t_{0}))-\frac{D}{2}\tilde{x}^{2}(t)\right]\, dt.$$

The moments can be found immediately, since the action is quadratic in $\tilde{x}(t)$[^1]; however, we instead demonstrate how to study the problem through a perturbation expansion. In this case the expansion truncates, recovering the exact, already known, solution. The idea is to break the action into a ‘free’ and an ‘interacting’ component. The terminology comes from quantum field theory, in which free terms typically represent a particle without any interaction with a field or potential, and have a quadratic action. The free action can therefore be evaluated exactly, and the interaction term can be expressed as an ‘asymptotic series’ around this solution. Let the action be written

$$\begin{aligned} S & = S_{F}+S_{I}\\ & = \int\tilde{x}(t)\left[\dot{x}(t)+ax(t)\right]\, dt-\int\left[\tilde{x}(t)y\delta(t-t_{0})+\frac{D}{2}\tilde{x}^{2}(t)\right]\, dt\end{aligned}$$

We define the function $G$, known as the linear response function, correlator, or propagator, to be the Green’s function of the linear differential operator corresponding to the free action:

$$\left(\frac{d}{dt}+a\right)G(t,t')=\delta(t-t')$$

Note that $G(t,t')$ plays exactly the role of $K(t,t')$ from the generic Gaussian stochastic process derived previously. Note also that, in general, the ‘inverse’ of a Green’s function $G(t,t')$ is a kernel $G^{-1}(t,t')$ satisfying:

$$\int dt''\,G^{-1}(t,t'')G(t'',t')=\delta(t-t')=\left(\frac{d}{dt}+a\right)G(t,t'),$$

and comparing the two expressions for $\delta(t-t')$ shows that this holds for the choice:

$$G^{-1}(t,t')=\left(\frac{d}{dt}+a\right)\delta(t-t').$$

The free generating functional, then, is

$$Z_{F}[J,\tilde{J}]=\int\mathcal{D}x(t)\mathcal{D}\tilde{x}(t)e^{-\int dtdt'\tilde{x}(t)G^{-1}(t,t')x(t')+\int\tilde{x}(t)J(t)\, dt+\int x(t)\tilde{J}(t)\, dt}.$$

So, analogous to the multivariate Gaussian case, we can evaluate this integral exactly to obtain:

$$Z_{F}[J,\tilde{J}]=e^{\int\tilde{J}(t)G(t,t')J(t')\, dtdt'}.$$

For the OU process we can solve the linear differential equation for the Green’s function explicitly: $G(t,t')=H(t-t')e^{-a(t-t')}$, where $H$ is the Heaviside step function. The free moments are then given by

$$\left\langle \prod_{ij}x(t_{i})\tilde{x}(t_{j})\right\rangle _{F}=\left.\prod_{ij}\frac{\delta}{\delta\tilde{J}(t_{i})}\frac{\delta}{\delta J(t_{j})}e^{\int\tilde{J}(t)G(t,t')J(t')\, dtdt'}\right|_{J=\tilde{J}=0}.$$

Importantly, note that

$$\left\langle x(t_{1})\tilde{x}(t_{2})\right\rangle _{F}=\left.\frac{\delta}{\delta\tilde{J}(t_{1})}\frac{\delta}{\delta J(t_{2})}e^{\int\tilde{J}(t)G(t,t')J(t')\, dtdt'}\right|_{J=\tilde{J}=0}=G(t_{1},t_{2})$$

and $\langle\tilde{x}(t_{1})\tilde{x}(t_{2})\rangle_{F}=\langle x(t_{1})x(t_{2})\rangle_{F}=0$.

Since the only non-zero second order moments are those in which an $x(t)$ is paired with an $\tilde{x}(t')$, Wick’s theorem means that all non-zero higher order free moments must contain equal numbers of $x$’s and $\tilde{x}$’s. This is important in performing the expansions below.
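
As a numerical aside (my own sketch, not part of the formal development), we can check that $G(t,t')=H(t-t')e^{-a(t-t')}$ really does invert $\frac{d}{dt}+a$: convolving $G$ against an arbitrary drive $u(t)$ reproduces the solution of $\dot{x}+ax=u$ with $x(0)=0$, up to discretization error.

```python
# Check that G(t,t') = H(t-t') exp(-a (t-t')) inverts d/dt + a:
# the convolution of G with a drive u matches the ODE solution.
import numpy as np

a, h, T = 1.0, 1e-3, 5.0
ts = np.arange(0, T, h)
u = np.sin(ts)                            # an arbitrary drive

x = np.zeros_like(ts)                     # solve x' + a x = u by forward Euler
for i in range(len(ts) - 1):
    x[i + 1] = x[i] + h * (-a * x[i] + u[i])

G_row = np.exp(-a * ts)                   # G(t, t - s) for s >= 0
x_G = h * np.array([np.sum(G_row[: i + 1][::-1] * u[: i + 1])
                    for i in range(len(ts))])

print(np.max(np.abs(x - x_G)))            # small, O(h) discretization error
```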

Using Feynman diagrams

We have split the action into, loosely, linear and non-linear parts[^2] $S=S_{F}+S_{I}$ so that the MGF can be written:

$$\begin{aligned} Z[J,\tilde{J}] & = \int\mathcal{D}x(t)\mathcal{D}\tilde{x}(t)e^{-S_{F}-S_{I}+\int\tilde{J}x+\int J\tilde{x}}\\ & = \int\mathcal{D}x(t)\mathcal{D}\tilde{x}(t)P_{F}[x(t),\tilde{x}(t)]e^{-S_{I}+\int\tilde{J}x+\int J\tilde{x}}\\ & = \int\mathcal{D}x(t)\mathcal{D}\tilde{x}(t)P_{F}[x(t),\tilde{x}(t)]\sum_{n=0}^{\infty}\frac{1}{n!}\left(-S_{I}+\int\tilde{J}x+\int J\tilde{x}\right)^{n}\\ & = \sum_{n=0}^{\infty}\frac{1}{n!}\left\langle \mu^{n}\right\rangle _{F}\end{aligned}$$

with $\mu=-S_{I}+\int\tilde{J}x\, dt+\int J\tilde{x}\, dt$.

We have now expressed the MGF in terms of a sum of free moments, which we know how to evaluate. To proceed, expand $-S_{I}$ as a sum of vertex terms:

$$-S_{I}=\sum_{n\ge1,m\ge0}V_{nm}=\sum_{n\ge1,m\ge0}v_{nm}\int x^{m}\tilde{x}^{n}\, dt.$$

In evaluating the expression for $Z$, there exists a diagrammatic way to visualize each term that we need to consider for a desired moment. Recall that the only free moments that are non-zero are those containing equal numbers of $x(t)$ and $\tilde{x}(t)$ terms. Wick’s theorem then expresses these moments as the sum of the product of all possible pairings between the $x(t)$ and $\tilde{x}(t)$ terms. Thus each term of the multinomial expansion

$$\left\langle \left(\sum_{n\ge1,m\ge0}v_{nm}\int x^{m}\tilde{x}^{n}\, dt+\int\tilde{J}xdt+\int J\tilde{x}dt\right)^{k}\right\rangle _{F}$$

can be thought of in terms of these pairings. The idea is that with each $V_{nm}$ in $-S_{I}$ we associate an internal vertex having $m$ entering edges and $n$ exiting edges. The $\int J\tilde{x}$ and $\int\tilde{J}x$ terms contribute, respectively, entering and exiting external vertices. Edges connecting vertices then correspond to a pairing between an $x(t)$ and an $\tilde{x}(t)$. Finally, since

$$\left\langle \prod_{i=1}^{N}\prod_{j=1}^{M}x(t_{i})\tilde{x}(t_{j})\right\rangle =\frac{1}{Z[0,0]}\left.\prod_{i=1}^{N}\prod_{j=1}^{M}\frac{\delta}{\delta\tilde{J}(t_{i})}\frac{\delta}{\delta J(t_{j})}Z\right|_{J=\tilde{J}=0}$$

then only the terms in the expansion for $Z$ having $N$ exiting and $M$ entering external vertices (and thus $N$ factors of $\tilde{J}$ and $M$ factors of $J$) will contribute to that moment. These terms are represented by *Feynman diagrams*: graphs composed of combinations of these vertices, in which each external vertex is connected to another vertex, possibly through a number of the internal vertices, with every edge pairing an $x$ with an $\tilde{x}$. Moments can then be computed simply by writing down all possible diagrams with the requisite number of external vertices.

As an example, the coupling between external vertex $\int\tilde{J}x\, dt$ and internal vertex $\int\delta(t-t_{0})y\tilde{x}(t)\, dt$ in $Z$ can be evaluated as:

$$\begin{aligned} Z & = \left\langle \int dtdt'\,\tilde{J}(t)x(t)y\delta(t'-t_{0})\tilde{x}(t')\right\rangle _{F}+\text{all other terms}\\ & = \int dtdt'\,\tilde{J}(t)y\delta(t'-t_{0})\left\langle x(t)\tilde{x}(t')\right\rangle _{F}+\text{all other terms}\\ & = \int dt\, y\tilde{J}(t)G(t,t_{0})+\text{all other terms}.\end{aligned}$$

But this is best explained diagrammatically. In our case we have:

$$-S_{I}=\int dt\, y\delta(t-t_{0})\tilde{x}(t)+\int dt\,\frac{D}{2}\tilde{x}^{2}(t),$$

and the relevant vertices are illustrated in Figure 1. The process of computing the first and second moments of the OU process is illustrated in Figure 2. We can see that each term is written as an integral involving the auxiliary functions $J$, $\tilde{J}$ and the propagator $G$. In general, each vertex in each diagram is assigned a temporal index $t_{k}$, which is integrated over.

Figure 1. Vertices involved in evaluating moments of the example OU process. The first two vertices are internal vertices, part of the interacting action $S_{I}$; the next two are external vertices associated with the auxiliary variables $J$, $\tilde{J}$. Each edge of a Feynman diagram contributes a propagator $G(t,t')$.
Figure 2. Computation of the first and second cumulant using Feynman diagrams. The mean is given by a functional derivative with respect to one auxiliary function $\tilde{J}$, evaluated at zero. The only non-zero term is represented by a diagram containing one exiting vertex and no entering vertex; the only diagram possible is composed of the internal vertex representing the initial condition paired with the exiting vertex. Evaluating the free moment and taking the functional derivative of this term gives the mean in terms of $G(t,t')$. The second cumulant is calculated in a similar fashion.

For the OU process, in fact, only a finite number of diagrams need be considered, and the exact mean and covariance can be determined. This is a result of the linearity of the SDE: a linear SDE can be written so as to have no $x$ terms in $S_{I}$, which means all internal vertices have no entering edges, and thus any given moment in $x$ corresponds to a finite number of diagrams (in contrast to internal vertices with both entering and exiting edges, which can be combined in an infinite number of ways). In this case, from Figure 2, the mean and covariance are given by:

$$\langle x(t)\rangle=yH(t-t_{0})e^{-a(t-t_{0})}$$

and

$$\langle x(t)x(s)\rangle_{C}=D\frac{e^{-a|t-s|}-e^{-a(t+s-2t_{0})}}{2a},$$

for $t,s\ge t_{0}$.
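
These diagrammatic results can be sanity-checked against the euler_maruyama sketch introduced earlier (again my own code, with arbitrary parameter choices):

```python
# Sanity check of the OU mean and covariance using the earlier
# euler_maruyama sketch.
import numpy as np

a, D, y, t0, h = 1.0, 0.5, 2.0, 0.0, 0.01
traj = euler_maruyama(f=lambda x: -a * x,
                      g=lambda x: np.sqrt(D) * np.ones_like(x),
                      y=y, T=5.0, h=h)

t_idx, s_idx = int(2.0 / h), int(1.0 / h)      # t = 2.0, s = 1.0
t, s = t_idx * h, s_idx * h

print(traj[t_idx].mean(),                      # Monte Carlo, ~0.27
      y * np.exp(-a * (t - t0)))               # y H(t-t0) e^{-a(t-t0)}

print(np.cov(traj[t_idx], traj[s_idx])[0, 1],  # Monte Carlo, ~0.08
      D * (np.exp(-a * abs(t - s))
           - np.exp(-a * (t + s - 2 * t0))) / (2 * a))
```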

In summary, we’ve seen how to construct a path integral formulation of a generic SDE, and how to use Feynman diagrams to perform perturbation expansions for its solution. In a follow-up post we will consider more examples of how these methods can be used.
