All published worksheets from http://sagenb.org
Embedding Flows in a Competition on a Manifold
Introduction
Different dynamics are produced, seemingly by the same underlying system, under different conditions (Huys and others). This motivates the question, posed here for neural systems, of how a homogeneous substrate, such as a sheet of firing-rate neurons, can be switched between different dynamics.
Example of Phase Flow Competition
For a first concrete example, we take the first four equations of 3.5 (and the competition component from 3.4) from Pillai 2008, corresponding to a double FitzHugh-Nagumo excitator system in which the phase flow of one excitator pair competes with that of the other:
in which the parameters produce different phase flows, i.e. some combination of monostable, bistable, or limit-cycle behavior.
Fig 1: Controlling Flows
Two parameters control the competition between the two sets of dynamics. As one is raised (lowered), the corresponding functional subnetwork's activity is inhibited (enhanced).
Formalities of Functional and Neural Systems
In terms of order parameters, we have
and in terms of firing-rate nodes we have
or
where the coupling is given by a connectivity matrix. Based on the coordinate transform between functional and network coordinates, the inverse relation it induces, and an adjoint system, we write
and expand to obtain:
At this point, let the neural interaction function (taken to be a sigmoid) be approximated by its low-order expansion (vector multiplication is element-wise unless otherwise denoted) and expand the form:
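As a quick numerical sanity check of this kind of approximation (assuming, purely for illustration, a tanh sigmoid and its cubic Taylor expansion about the origin — the worksheet's exact sigmoid and expansion are not shown here):

```python
import numpy as np

# Hypothetical sigmoid: tanh, truncated at third order about the origin,
# tanh(x) ~ x - x**3/3. This stands in for the worksheet's hidden expansion.
xs = np.linspace(-0.5, 0.5, 101)
exact = np.tanh(xs)
approx = xs - xs**3 / 3
err = np.max(np.abs(exact - approx))
```

Near the origin the cubic truncation tracks the sigmoid closely, which is what justifies expanding the interaction function before fitting.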
In the formulation of functional system dynamics above, flows are a static property of the vector field, and each flow has a corresponding coefficient that determines its role in competition with other flows. In the neural network formulation, asymmetries in the connectivity matrix produce flows. In order to recruit multiple flows in a neural network, we first assume that, as in the functional form, the capability to produce a particular flow or set of flows is a static property of the vector field, i.e. the connectivity matrix does not change. The neural-network analog of the biased competition in the equations above is a linear combination of per-neuron baseline excitabilities, one corresponding to each phase flow:
which after the derivation above takes the form
where the new coefficients collect the expansion terms. Comparing this form to that of the functional dynamics
we should be able to fit the network formulation to the functional form: one set of parameters creates the desired manifold, and the remaining parameters produce the flows and flow competition. In the fitting procedure, we want to minimize the angle between the functional vector field and the network vector field,
(the angle is evaluated element-wise) over some region of the relevant phase space, by manipulating the parameters of the network vector field. In particular, the manifold parameters will be set when fitting the manifold, while the remaining parameters will be set in order to produce the flows and their competition.
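To make the network side of this setup concrete, here is a minimal simulation sketch, assuming the standard rate equation du/dt = -u + W S(u) + mu with a tanh sigmoid; the matrix W, the input mu, and all values below are illustrative placeholders, not the worksheet's fitted parameters.

```python
import numpy as np

def S(x):
    # Placeholder sigmoid interaction function.
    return np.tanh(x)

def simulate(W, u0, mu=0.0, dt=0.01, steps=1000):
    """Euler-integrate the rate equation du/dt = -u + W @ S(u) + mu."""
    u = np.asarray(u0, dtype=float).copy()
    traj = [u.copy()]
    for _ in range(steps):
        u = u + dt * (-u + W @ S(u) + mu)
        traj.append(u.copy())
    return np.array(traj)

# An asymmetric connectivity matrix: the asymmetry is what lets the
# network produce nontrivial (e.g. rotational) flows.
W = np.array([[0.0, 1.5],
              [-1.5, 0.0]])
traj = simulate(W, u0=[0.1, 0.0])
```

The baseline input mu is the knob the text calls baseline excitability: in this sketch it simply shifts each node's operating point, which is how a fixed connectivity matrix can be biased toward one flow or another.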
Embedding a Single Flow
First, we'll attempt to embed a simpler excitator limit-cycle flow into a network of five firing rate neurons. From the above, we have
whose behavior we want to map into
by fitting the form shown above. In the following, subscripts denote the form from which an expression is taken, i.e. functional vs. network.
First, we select a coordinate transform and one of its possible adjoints:
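As an illustration of what such a transform/adjoint pair can look like (purely hypothetical choices, not the worksheet's: a random linear map A from the two functional coordinates into five neurons, with the Moore-Penrose pseudoinverse as one possible adjoint):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear coordinate transform: psi (2-d functional space)
# -> u (5 firing-rate neurons), u = A @ psi.
A = rng.standard_normal((5, 2))

# One possible adjoint: the Moore-Penrose pseudoinverse, which satisfies
# A_dag @ A = I on the functional space when A has full column rank.
A_dag = np.linalg.pinv(A)
```

Any left inverse of A would serve, which is why the text speaks of selecting "one of its possible adjoints".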
Creating the Manifold
We describe the functional manifold as
and the neural network manifold as
Next, we take the angles between the corresponding components of the two vector fields and sum the squares of these errors,
where the sum runs over a relevant region of the phase space. Elsewhere (Pillai 2008, Huys), fitting has been done by numerically evaluating the vector fields and finding a numerical angle between high-dimensional vectors. However, because the expressions for the vector fields used here are known, we can integrate to obtain an expression for the sum-squared angle in terms of the fitting parameters.
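For comparison, the purely numerical approach mentioned above (sample both fields on a grid and measure one angle between the flattened high-dimensional vectors) can be sketched as follows; the two example fields here are arbitrary placeholders:

```python
import numpy as np

def field_angle(F, G):
    """Angle between two sampled vector fields F, G of shape
    (n_points, dim), flattened into single high-dimensional vectors."""
    f, g = F.ravel(), G.ravel()
    c = f @ g / (np.linalg.norm(f) * np.linalg.norm(g))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Placeholder fields sampled on a grid: a rotation field vs. scaled
# and sign-flipped copies of itself.
pts = np.array([[x, y] for x in np.linspace(-1, 1, 5)
                       for y in np.linspace(-1, 1, 5)])
F = np.stack([-pts[:, 1], pts[:, 0]], axis=1)
angle_same = field_angle(F, 3.0 * F)   # positive rescaling leaves angle ~ 0
angle_opp = field_angle(F, -F)         # antiparallel fields give ~ pi
```

The angle is invariant to an overall positive rescaling of either field, which is exactly why an angle (rather than a pointwise difference) makes a good objective here: the network field only needs to point the right way, not match magnitudes.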
Next, we minimize the expression for the sum-squared angle to a value near zero (using an initial guess):
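The worksheet's symbolic minimization is not reproduced here, but the same idea can be sketched numerically under stated assumptions: a hypothetical target rotation field stands in for the functional flow, a linear candidate field whose four matrix entries are the free parameters stands in for the network form, and the summed squared pointwise angles are minimized with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

# Target ("functional") field sampled away from the origin: a rotation.
pts = np.array([[x, y] for x in np.linspace(-1, 1, 5)
                       for y in np.linspace(-1, 1, 5)
                       if (x, y) != (0.0, 0.0)])
F = np.stack([-pts[:, 1], pts[:, 0]], axis=1)

def sum_sq_angle(params):
    """Sum of squared pointwise angles between F and the linear
    candidate field G(x) = M @ x, with M built from the parameters."""
    M = np.asarray(params).reshape(2, 2)
    G = pts @ M.T
    num = np.sum(F * G, axis=1)
    den = np.linalg.norm(F, axis=1) * np.linalg.norm(G, axis=1) + 1e-12
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.sum(ang**2)

start = [1.0, 0.0, 0.0, 1.0]  # identity field as the initial guess
res = minimize(sum_sq_angle, start, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10})
```

Because the objective is angle-based, the minimizer is free to land on any positive multiple of the target rotation matrix; the magnitude is left unconstrained, just as in the fitting procedure described above.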
Just to check, we evaluate the expression at the parameters found:
Now, we verify that the foregoing method works by sampling the two vector fields over a region of space and computing the angle between the resulting high-dimensional vectors.
Close enough.
Creating the Flow
The next step is to fit
to
As before, we create the functional flow
and the neural flow
substitute our previously fitted manifold values and set the flow parameters in the functional flow,
compute the sum-squared angle,
and minimize it over the remaining free variables.
Most of that was wrong, so now I'm going to try again, translating the neural form to the functional form.