
Bistability in apoptosis by receptor clustering

 

Kenneth L. Ho

[email protected]

Courant Institute of Mathematical Sciences and Program in Computational Biology

New York University, New York, NY, USA

 

Heather A. Harrington

[email protected]

Department of Mathematics and Centre for Integrative Systems Biology at Imperial College

Imperial College London, London, UK

 

About this worksheet

This worksheet contains all computations referenced in:

Ho KL, Harrington HA (2010 Oct) Bistability in apoptosis by receptor clustering. PLoS Comput Biol 6(10): e1000956.

which were performed using Sage version:

version()
'Sage Version 4.5, Release Date: 2010-07-16'

This worksheet takes advantage of the symbolic capabilities of Sage, and additionally uses PyLab (NumPy/SciPy + matplotlib) for numerical computation and plotting. Some worksheet cells make use of previously computed results; cells are therefore meant to be evaluated in order of their appearance.

import pylab as p
from mpl_toolkits.mplot3d import Axes3D
from numpy.random import lognormal
from scipy.optimize import fmin_slsqp, leastsq
from scipy.stats import linregress

As a preliminary, we first set some default options for all plots.

golden_ratio = (1 + p.sqrt(5)) / 2
p.rc('axes', labelsize='x-small')
p.rc('legend', fontsize='xx-small')
p.rc('savefig', dpi=150)
p.rc('xtick', labelsize='xx-small')
p.rc('ytick', labelsize='xx-small')

Model formulation and analysis

Label each cluster by the tuple $(L, X, Y, Z)$, where $L$ is FasL, and $X$, $Y$, and $Z$ are three posited forms of Fas, denoting closed, open and unstable, and open and stable receptors, respectively. Within each cluster, define the reactions:
\begin{align*}
X &\mathop{\longleftrightarrow}^{k_{o}}_{k_{c}} Y,\\
Z &\xrightarrow{k_{u}} Y,\\
jY + \left( i - j \right) Z &\xrightarrow{k_{s}^{\left( i \right)}} \left( j - k \right) Y + \left( i - j + k \right) Z, \quad \begin{cases} i = 2, \dots, m,\\ j = 1, \dots, i,\\ k = 1, \dots, j, \end{cases}\\
L + jY + \left( i - j \right) Z &\xrightarrow{k_{l}^{\left( i \right)}} L + \left( j - k \right) Y + \left( i - j + k \right) Z, \quad \begin{cases} i = 2, \dots, n,\\ j = 1, \dots, i,\\ k = 1, \dots, j. \end{cases}
\end{align*}

With the nondimensionalizations
$$\xi = \frac{x}{s}, \quad \eta = \frac{y}{s}, \quad \zeta = \frac{z}{s}, \quad \lambda = \frac{l}{s}, \quad \tau = k_{c} t,$$
following the convention that lowercase letters denote the concentrations of their uppercase counterparts, and where $s$ is a characteristic concentration and $t$ is time, and
$$\kappa_{o} = \frac{k_{o}}{k_{c}}, \quad \kappa_{u} = \frac{k_{u}}{k_{c}}, \quad \kappa_{s}^{\left( i \right)} = \frac{k_{s}^{\left( i \right)} s^{i - 1}}{k_{c}}, \quad \kappa_{l}^{\left( i \right)} = \frac{k_{l}^{\left( i \right)} s^{i}}{k_{c}},$$
the mass-action dynamics are
\begin{align*}
\frac{d \xi}{d \tau} &= -\nu_{o},\\
\frac{d \eta}{d \tau} &= \nu_{o} + \nu_{u} - \nu_{s} - \nu_{l},\\
\frac{d \zeta}{d \tau} &= -\nu_{u} + \nu_{s} + \nu_{l},
\end{align*}
where
\begin{align*}
\nu_{o} &= \kappa_{o} \xi - \eta,\\
\nu_{u} &= \kappa_{u} \zeta,\\
\nu_{s} &= \sum_{i = 2}^{m} \kappa_{s}^{\left( i \right)} \sum_{j = 1}^{i} \eta^{j} \zeta^{i - j} \sum_{k = 1}^{j} k,\\
\nu_{l} &= \lambda \sum_{i = 2}^{n} \kappa_{l}^{\left( i \right)} \sum_{j = 1}^{i} \eta^{j} \zeta^{i - j} \sum_{k = 1}^{j} k.
\end{align*}
Therefore, at steady state,
$$\xi_{\infty} = \frac{\sigma - \zeta_{\infty}}{1 + \kappa_{o}}, \quad \eta_{\infty} = \kappa_{o} \xi_{\infty},$$
where $\sigma = \xi + \eta + \zeta$, and $\zeta_{\infty}$ is given by solving $d \zeta / d \tau = 0$ with $(\xi, \eta) \mapsto (\xi_{\infty}, \eta_{\infty})$, a polynomial in $\zeta_{\infty}$ of degree $\max \{ m, n \}$.

We first define some basic functions for working with the model.

sigma = var('sigma')
kappa_o = var('kappa_o')
kappa_u = var('kappa_u')
xi = var('xi')
eta = var('eta')
zeta = var('zeta')
lmbda = var('lambda')
rho = var('rho')

def zeta_tau(m, n):
    kappa_s, kappa_l = {}, {}
    for i in [2..m]: kappa_s[i] = var('kappa_s%i' %i)
    for i in [2..n]: kappa_l[i] = var('kappa_l%i' %i)
    xi = (sigma - zeta) / (1 + kappa_o)
    eta = kappa_o * xi
    nu_u = kappa_u * zeta
    v = {}
    for i in [2..max(m,n)]:
        v[i] = sum([eta^j*zeta^(i-j)*k for j in [1..i] for k in [1..j]])
    nu_s = sum([kappa_s[i]*v[i] for i in [2..m]])
    nu_l = lmbda * sum([kappa_l[i]*v[i] for i in [2..n]])
    return -nu_u + nu_s + nu_l

def base_params():
    return {var('sigma')   : 1,
            var('kappa_o') : 0.002,
            var('kappa_u') : 0.001,
            var('kappa_s2'): 0.1,
            var('kappa_s3'): 0.5,
            var('kappa_l2'): 1,
            var('kappa_l3'): 5}

def valid_roots(rts, sigma):
    r = p.intersect1d(p.find(rts.imag == 0),
                      p.intersect1d(p.find(rts.real >= 0),
                                    p.find(rts.real <= sigma)))
    return p.array(map(lambda x: x.real, rts[r]))

def ss_plot(x, y, s, sc='b', uc='b', sls='-', uls='--', scale='linear', **kwargs):
    n = p.size(x)
    i = 0
    while (i < n):
        r = p.find(s[i+1:] != s[i])
        if p.size(r) == 0: j = n
        else:              j = r[0] + i + 1
        if s[i]: c, l = sc, sls
        else:    c, l = uc, uls
        if scale == 'linear':
            p.plot(x[i:j], y[i:j], color=c, ls=l, **kwargs)
        elif scale == 'log':
            p.semilogy(x[i:j], y[i:j], color=c, ls=l, **kwargs)
        i = j

def ss_plot3d(ax, x, z, s, y, xmin, xmax, sc='b', uc='b', sls='-', uls='--', **kwargs):
    X, Z = [], []
    n = p.size(x)
    i = 0
    r = p.intersect1d(p.find(x >= xmin), p.find(x <= xmax))
    if p.size(r) == 0: return
    i = r[0]
    if i > 0:
        if x[i-1] < xmin:
            x0 = xmin
            r = p.index_exp[i-1:i+1]
        else:
            x0 = xmax
            r = p.index_exp[i+1:i-1:-1]
        X += [x0]
        Z += list(p.interp([x0], x[r], z[r]))
    while (i < n):
        rs = p.find(s[i+1:] != s[i])
        if p.size(rs) == 0: js = n
        else:               js = rs[0] + i + 1
        rv = p.union1d(p.find(x[i+1:] < xmin), p.find(x[i+1:] > xmax))
        if p.size(rv) == 0: jv = n
        else:               jv = rv[0] + i + 1
        if js < jv:
            X += list(x[i:js])
            Z += list(z[i:js])
        else:
            X += list(x[i:jv])
            Z += list(z[i:jv])
            if x[jv] < xmin:
                x0 = xmin
                r = p.index_exp[jv+1:jv-1:-1]
            else:
                x0 = xmax
                r = p.index_exp[jv-1:jv+1]
            X += [x0]
            Z += list(p.interp([x0], x[r], z[r]))
        if s[i]: ax.plot(X, [y]*len(X), Z, color=sc, ls=sls, **kwargs)
        else:    ax.plot(X, [y]*len(X), Z, color=uc, ls=uls, **kwargs)
        X, Z = [], []
        i = min(js, jv)
        if i == jv:
            r = p.intersect1d(p.find(x[i+1:] >= xmin), p.find(x[i+1:] <= xmax))
            if p.size(r) == 0: return
            i += r[0] + 1
            if x[i-1] < xmin:
                x0 = xmin
                r = p.index_exp[i-1:i+1]
            else:
                x0 = xmax
                r = p.index_exp[i+1:i-1:-1]
            X += [x0]
            Z += list(p.interp([x0], x[r], z[r]))

We then set baseline parameter values, and use Sage to perform various symbolic computations that will be useful later.

m, n = 3, 3
params = base_params()
zt = zeta_tau(m, n).simplify_full()
dzt = zt.diff(zeta).simplify_full()
ztl = zt.solve(lmbda)[0].rhs().simplify_full()
dztln = ztl.diff(zeta).simplify_rational().numerator().simplify_full()
zt_poly = map(lambda x: x[0].simplify_full(), zt.coeffs(zeta)[::-1])
dztln_poly = map(lambda x: x[0].simplify_full(), dztln.coeffs(zeta)[::-1])
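As a quick sanity check of the degree claim above (a sketch, not part of the original worksheet; it uses the zt and zt_poly objects just computed), the steady-state equation should be a polynomial in $\zeta$ of degree $\max \{ m, n \} = 3$.

# Sketch: confirm that d(zeta)/d(tau), after the steady-state substitutions, has degree 3 in zeta.
print(zt.degree(zeta))     # expected: 3
print(len(zt_poly) - 1)    # zt_poly lists coefficients from the highest degree down; also 3 here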

Our first figure shows the behavior of the model with $\lambda$ at various values of $\sigma$. The steady-state curves show bistability and hysteresis, with irreversible bistability at high $\sigma$. The stability of each steady state is computed by evaluating $(\partial / \partial \zeta) \, d \zeta / d \tau$ at $\zeta = \zeta_{\infty}$ (stable, solid lines; unstable, dashed lines).

var_s = [(0.5 , 'b', 0.80, 0.20),
         (0.75, 'g', 0.65, 0.49),
         (1   , 'r', 0.30, 0.57),
         (2   , 'c', 0.10, 0.89)]
lambda_min, lambda_max = 0, 0.5
nzeta = 200
dzts = dzt.subs(sigma=rho).subs(params).subs(rho=sigma)
ztls = ztl.subs(sigma=rho).subs(params).subs(rho=sigma)
poly = map(lambda x: x.subs(sigma=rho).subs(params).subs(rho=sigma), zt_poly)
figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.95)
p.figure()
p.subplot(111)
for (s, c, x, y) in var_s:
    rts = p.roots(map(lambda x: x.subs({sigma: s, lmbda: 0}), poly))
    zeta_min = min(valid_roots(rts, s))
    zeta_span = p.linspace(zeta_min, s, nzeta+1)[:-1]
    lambda_z = [ztls.subs({sigma: s, zeta: z}) for z in zeta_span]
    stable_z = [bool(dzts.subs({sigma: s, lmbda: lambda_z[i], zeta: zeta_span[i]}) < 0)
                for i in range(nzeta)]
    ss = p.array([(lambda_z[i], zeta_span[i], stable_z[i]) for i in range(nzeta)],
                 dtype=[('lambda', float), ('zeta', float), ('stable', bool)])
    ss_plot(ss['lambda'], ss['zeta'], ss['stable'], sc=c, uc=c, scale='log')
    p.annotate(r'$\sigma = %g$' % s, (x, y), xycoords='axes fraction',
               size='xx-small', color=c)
p.xlabel(r'FasL ($\lambda$)')
p.ylabel(r'Active Fas ($\zeta_{\infty}$)')
p.xlim([lambda_min, lambda_max])
p.savefig('sage.png')
p.close()

The steady-state surface given as a parameterization over $(\lambda, \sigma)$ is shown below (blue, stable; red, unstable).

sigma_min, sigma_max = 0, 3
nsigma = 50
sigma_span = p.linspace(sigma_min, sigma_max, nsigma+1)[1:]
figw = 3.27
figh = figw / 1.2
p.rc('figure', figsize=(figw, figh))
ax = Axes3D(p.figure())
for s in sigma_span:
    rts = p.roots(map(lambda x: x.subs({sigma: s, lmbda: 0}), poly))
    zeta_min = min(valid_roots(rts, s))
    zeta_span = p.linspace(zeta_min, s, nzeta+1)[:-1]
    lambda_z = [ztls.subs({sigma: s, zeta: z}) for z in zeta_span]
    stable_z = [bool(dzts.subs({sigma: s, lmbda: lambda_z[i], zeta: zeta_span[i]}) < 0)
                for i in range(nzeta)]
    ss = p.array([(lambda_z[i], zeta_span[i], stable_z[i]) for i in range(nzeta)],
                 dtype=[('lambda', float), ('zeta', float), ('stable', bool)])
    ss_plot3d(ax, ss['lambda'], ss['zeta'], ss['stable'], s, lambda_min, lambda_max,
              sc='b', uc='r', sls='-', uls='-')
ax.set_xlabel('\n\n' + r'FasL ($\lambda$)')
ax.set_ylabel('\n\n' + r'Total Fas ($\sigma$)')
ax.set_zlabel('\n\n' + r'Active Fas ($\zeta_{\infty}$)')
ax.axis([-0.08, 0.1, -0.1, 0.08])
p.savefig('sage.png')
p.close()

To further study the qualitative structure of the model, we look for the boundaries of the bistable regime in $(\lambda, \sigma)$-space. We approach this by considering the boundaries in $\sigma$ at a given value of $\lambda$, which we compute using a binary search algorithm. The two monostable regimes are characterized by the value of $\partial^{2} \lambda / \partial \zeta_{\infty}^{2}$, which is negative on the lower (life) branch and positive on the upper (death) branch.

def sigma_thresh(params, sigma_min, sigma_max, lambda_span, tol):
    def lower_thresh(sigma_min, sigma_max, lmbda):
        s = (sigma_max + sigma_min) / 2
        if sigma_max - sigma_min < tol: return s
        rts = p.roots(map(lambda x: x.subs({sigma: s, var('lambda'): lmbda}), poly))
        rts = valid_roots(rts, s)
        nrts = p.size(rts)
        if nrts > 1 or (nrts == 1 and lz2.subs(sigma=s).subs(zeta=rts[0]) > 0):
            return lower_thresh(sigma_min, s, lmbda)
        else:
            return lower_thresh(s, sigma_max, lmbda)
    def upper_thresh(sigma_min, sigma_max, lmbda):
        s = (sigma_max + sigma_min) / 2
        if sigma_max - sigma_min < tol: return s
        rts = p.roots(map(lambda x: x.subs({sigma: s, var('lambda'): lmbda}), poly))
        rts = valid_roots(rts, s)
        nrts = p.size(rts)
        if nrts > 1 or (nrts == 1 and lz2.subs(sigma=s).subs(zeta=rts[0]) < 0):
            return upper_thresh(s, sigma_max, lmbda)
        else:
            return upper_thresh(sigma_min, s, lmbda)
    nlambda = p.size(lambda_span)
    bistable = p.recarray(nlambda, dtype=[('upper', float), ('lower', float)])
    lz2 = ztl.diff(zeta, 2).subs(sigma=rho).subs(params).subs(rho=sigma)
    for i in range(nlambda):
        bistable['upper'][i] = upper_thresh(sigma_min, sigma_max, lambda_span[i])
        bistable['lower'][i] = lower_thresh(sigma_min, sigma_max, lambda_span[i])
    return bistable

We use this function to compute the $\sigma$-boundaries below.

sigma_min, sigma_max = 0, 4
nlambda = 100
tol = 1e-2
lambda_span = p.linspace(lambda_min, lambda_max, nlambda)
bistable = sigma_thresh(params, sigma_min, sigma_max, lambda_span, tol)

The monostable data are then filled in with the values of $\zeta_{\infty}$.

nsigma = 100
sigma_span = p.linspace(sigma_min, sigma_max, nsigma)
[lambda_mesh, sigma_mesh] = p.meshgrid(lambda_span, sigma_span)
zeta_mesh = p.empty((nsigma, nlambda))
b = mean([bistable['upper'], bistable['lower']])
for i in range(nsigma):
    s = sigma_span[i]
    for j in range(nlambda):
        l = lambda_span[j]
        rts = p.roots(map(lambda x: x.subs({sigma: s, lmbda: l}), poly))
        rts = valid_roots(rts, s)
        if p.size(rts) == 1:
            zeta_mesh[i,j] = rts[0]
        else:
            if s >= b[j]: zeta_mesh[i,j] = max(rts)
            else:         zeta_mesh[i,j] = min(rts)

This gives the following steady-state diagram, where the monostable values are shown as a heat map on $\zeta_{\infty}$.

figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.95)
p.figure()
p.subplot(111)
p.pcolor(lambda_mesh, sigma_mesh, zeta_mesh)
p.fill_between(lambda_span, bistable['upper'], bistable['lower'], facecolor=[0.75]*3)
p.annotate('bistable', xy=(0.15, 0.37), xytext=(0.25, 0.45),
           xycoords='axes fraction', textcoords='axes fraction',
           arrowprops=dict(arrowstyle="-"), size='xx-small')
p.annotate('monostable high', (0.40, 0.65), xycoords='axes fraction', size='xx-small')
p.annotate('monostable low', (0.10, 0.08), xycoords='axes fraction', color='w',
           size='xx-small')
p.xlabel(r'FasL ($\lambda$)')
p.ylabel(r'Total Fas ($\sigma$)')
p.savefig('sage.png')
p.close()

Sensitivity and robustness analyses

We now turn to the bistability thresholds $\lambda_{\pm}$ defining the bistable regime. These are computed numerically using the following function.

def thresholds(params):
    rts = p.roots(map(lambda x: x.subs(params), dztln_poly))
    rts = valid_roots(rts, params[sigma])
    nrts = p.size(rts)
    if nrts == 0:
        return p.nan, p.nan
    elif nrts == 1:
        thresh = ztl.subs(params).subs(zeta=rts[0]).n()
        return thresh, thresh
    elif nrts == 2:
        zeta_on, zeta_off = min(rts), max(rts)
        thresh_on  = ztl.subs(params).subs(zeta=zeta_on ).n()
        thresh_off = ztl.subs(params).subs(zeta=zeta_off).n()
        return thresh_on, thresh_off

This next figure summarizes the definition and significance of $\lambda_{\pm}$.

sigma_min, sigma_max = 0, 0.8
lambda_min, lambda_max = 0.1, 0.35
dzts = dzt.subs(params)
ztls = ztl.subs(params)
rts = p.roots(map(lambda x: x.subs(params).subs({lmbda: 0}), zt_poly))
zeta_min = min(valid_roots(rts, params[sigma]))
zeta_span = p.linspace(zeta_min, params[sigma], nzeta+1)[:-1]
lambda_z = [ztls.subs(zeta=z) for z in zeta_span]
stable_z = [bool(dzts.subs({lmbda: lambda_z[i], zeta: zeta_span[i]}) < 0)
            for i in range(nzeta)]
ss = p.array([(lambda_z[i], zeta_span[i], stable_z[i]) for i in range(nzeta)],
             dtype=[('lambda', float), ('zeta', float), ('stable', bool)])
thresh = {}
thresh['lambda_on'], thresh['lambda_off'] = thresholds(params)
figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.1, right=0.95, bottom=0.1, top=0.95)
p.figure()
p.subplot(111)
ss_plot(ss['lambda'], ss['zeta'], ss['stable'], sc='k', uc='k', lw=2)
p.axvspan(thresh['lambda_on'], thresh['lambda_off'], color=[0.5, 0.75, 0.5], zorder=0)
p.axvline(thresh['lambda_on' ], color='r', lw=5, zorder=0)
p.axvline(thresh['lambda_off'], color='b', lw=5, zorder=0)
p.annotate(r'activation threshold ($\lambda_{+}$)', (0.76, 0.5),
           xycoords='axes fraction', size='xx-small', rotation='vertical', va='center')
p.annotate(r'deactivation threshold ($\lambda_{-}$)', (0.22, 0.5),
           xycoords='axes fraction', size='xx-small', rotation='vertical', va='center')
p.annotate(r'bistable regime',
           (mean([thresh['lambda_on'], thresh['lambda_off']]), 0.35),
           size='xx-small', ha='center')
p.arrow(0.83, 0.2, 0, 0.6, ec='r', fc='r', lw=2, head_length=0.02, head_width=0.02,
        transform=p.gca().transAxes)
p.arrow(0.19, 0.8, 0, -0.6, ec='b', fc='b', lw=2, head_length=0.02, head_width=0.02,
        transform=p.gca().transAxes)
p.xlabel(r'FasL ($\lambda$)')
p.ylabel(r'Active Fas ($\zeta_{\infty}$)')
p.xlim([lambda_min, lambda_max])
p.ylim([sigma_min, sigma_max])
p.xticks([])
p.yticks([])
p.savefig('sage.png')
p.close()

For the following analyses, we need to generate synthetic data resulting from variability in the model parameters. Thus, we need to discuss parameter sampling. The method that we will use involves drawing parameters from log-normal distributions. Each distribution is characterized by a variation coefficient $D$, equal to the ratio of the standard deviation to the median of the distribution. Mathematically, for a specified variation $D$, the corresponding standard deviation is
$$s = \sqrt{\log \left( \frac{1 + \sqrt{1 + 4 D^{2}}}{2} \right)}.$$
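As a quick numerical check of this relation (a sketch, not part of the original worksheet), we can sample a log-normal distribution with median $1$ and the value of $s$ derived from a chosen $D$, and confirm that the sample standard deviation divided by the sample median is close to $D$; the snippet assumes only the PyLab and numpy.random imports made above.

# Sketch: check that the log-normal parameter s derived from D = 0.25 reproduces
# a standard-deviation-to-median ratio of about 0.25.
D_check = 0.25
s_check = p.sqrt(p.log((1 + p.sqrt(1 + 4*D_check**2)) / 2))
samples = lognormal(0.0, s_check, 10**6)   # median = exp(0) = 1
print(p.std(samples) / p.median(samples))  # should be close to 0.25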

The cell below defines the various functions that we will need to perform the desired analyses.

def lognorm_params(med, D):
    m = p.log(med)
    s = p.sqrt(p.log((1 + p.sqrt(1 + 4*D**2)) / 2))
    return m, s

def generate_params(n, mu, D):
    params = p.recarray(n, dtype=[('sigma'   , float),
                                  ('kappa_o' , float),
                                  ('kappa_u' , float),
                                  ('kappa_s2', float),
                                  ('kappa_s3', float),
                                  ('kappa_l2', float),
                                  ('kappa_l3', float)])
    for param in ['sigma', 'kappa_o', 'kappa_u', 'kappa_s2', 'kappa_s3',
                  'kappa_l2', 'kappa_l3']:
        m, s = lognorm_params(mu[var(param)], D)
        params[param] = lognormal(m, s, n)
    return params

def compute_thresholds(params, tol):
    n = p.size(params)
    thresh = p.recarray(n, dtype=[('lambda_on' , float),
                                  ('lambda_off', float),
                                  ('zeta_on'   , float),
                                  ('zeta_off'  , float)])
    for i in range(n):
        d = {sigma   : params[i]['sigma'   ],
             kappa_o : params[i]['kappa_o' ],
             kappa_u : params[i]['kappa_u' ],
             kappa_s2: params[i]['kappa_s2'],
             kappa_s3: params[i]['kappa_s3'],
             kappa_l2: params[i]['kappa_l2'],
             kappa_l3: params[i]['kappa_l3']}
        thresh[i]['lambda_on'], thresh[i]['lambda_off'] = thresholds(d)
        for type in ['on', 'off']:
            if   type == 'on' : offset = -tol
            elif type == 'off': offset =  tol
            rts = p.roots(map(lambda x: x.subs(d)
                                         .subs({lmbda: thresh[i]['lambda_on'] + offset}),
                              zt_poly))
            rts = valid_roots(rts, d[sigma])
            if p.size(rts) > 0:
                if   type == 'on' : thresh[i]['zeta_%s' % type] = min(rts)
                elif type == 'off': thresh[i]['zeta_%s' % type] = max(rts)
            else:
                thresh[i]['zeta_%s' % type] = p.nan
    return thresh

def is_bistable(thresh):
    return not p.isnan(thresh['lambda_on']) and thresh['lambda_on'] >= 0

def fraction_bistable(thresh):
    return float(p.size(p.find(map(lambda x: is_bistable(x), thresh)))) / p.size(thresh)

def compute_sensitivities(ref_params, ref_thresh, params, thresh):
    sens = {}
    for param in ['sigma', 'kappa_o', 'kappa_u', 'kappa_s2', 'kappa_s3',
                  'kappa_l2', 'kappa_l3']:
        sens[param] = {}
        for type in ['lambda_on', 'lambda_off', 'zeta_on', 'zeta_off']:
            r = p.find(~p.isnan(thresh[type]))
            sens[param][type] = \
                p.array(linregress(params[param][r] / ref_params[var(param)],
                                   thresh[type][r] / ref_thresh[type]),
                        dtype=[('slope'    , float),
                               ('intercept', float),
                               ('r'        , float),
                               ('2tailprob', float),
                               ('stderr'   , float)])
    return sens

We then generate $500$ parameter sets at $D = 0.25$ about baseline median values, on which we conduct the sensitivity analysis.

tol = 1e-6
nparams = 500
D = 0.25
rts = p.roots(map(lambda x: x.subs(params)
                             .subs({lmbda: thresh['lambda_on'] - tol}), zt_poly))
rts = valid_roots(rts, params[sigma])
thresh['zeta_on'] = min(rts)
rts = p.roots(map(lambda x: x.subs(params)
                             .subs({lmbda: thresh['lambda_on'] + tol}), zt_poly))
rts = valid_roots(rts, params[sigma])
thresh['zeta_off'] = max(rts)
data_params = generate_params(nparams, params, D)
data_thresh = compute_thresholds(data_params, tol)
sens = compute_sensitivities(params, thresh, data_params, data_thresh)

The result is shown below.

nvars = len(['sigma', 'kappa_o', 'kappa_u', 'kappa_s2', 'kappa_s3', 'kappa_l2', 'kappa_l3'])
locs = p.arange(nvars)
w = 0.3
figw = 3.27
figh = 1.75 * figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.95, hspace=0.1)
p.figure()
ax = p.subplot(111)
p.subplot(211)
for (i, v) in enumerate(zip(['lambda_on', 'lambda_off'],
                            [r'FasL activation ($\lambda_{+}$)',
                             r'FasL deactivation ($\lambda_{-}$)'],
                            ['r', 'b'])):
    type = v[0]
    l    = v[1]
    c    = v[2]
    s, err = [], []
    for param in ['sigma', 'kappa_o', 'kappa_u', 'kappa_s2', 'kappa_s3',
                  'kappa_l2', 'kappa_l3']:
        s   += [sens[param][type]['slope' ]]
        err += [sens[param][type]['stderr']]
    p.bar(locs + i*w, s, width=w, yerr=err, color=c, ecolor='k', label=l)
p.axhline(0, c='k')
p.xticks(locs + w)
p.gca().set_xticklabels([])
p.xlim([-w, nvars-1+3*w])
p.legend(loc='best').draw_frame(False)
p.subplot(212)
for (i, v) in enumerate(zip(['zeta_on', 'zeta_off'],
                            [r'Fas activation ($\zeta_{+}$)',
                             r'Fas deactivation ($\zeta_{-}$)'],
                            [[1, 0.5, 0], 'g'])):
    type = v[0]
    l    = v[1]
    c    = v[2]
    s, err = [], []
    for param in ['sigma', 'kappa_o', 'kappa_u', 'kappa_s2', 'kappa_s3',
                  'kappa_l2', 'kappa_l3']:
        s   += [sens[param][type]['slope' ]]
        err += [sens[param][type]['stderr']]
    p.bar(locs + i*w, s, width=w, yerr=err, color=c, ecolor='k', label=l)
p.axhline(0, c='k')
p.xticks(locs + w, [r'$\sigma$', r'$\kappa_{o}$', r'$\kappa_{u}$', r'$\kappa_{s}^{(2)}$',
                    r'$\kappa_{s}^{(3)}$', r'$\kappa_{l}^{(2)}$', r'$\kappa_{l}^{(3)}$'],
         size='x-small')
p.xlim([-w, nvars-1+3*w])
p.legend(loc='best').draw_frame(False)
p.xlabel('Parameters')
p.text(-0.15, 0.5, 'Sensitivity ($\Sigma$)', va='center', rotation='vertical',
       size='x-small', transform=ax.transAxes)
p.savefig('sage.png')
p.close()

The $\lambda_{\pm}$ do not appear particularly robust, but there does seem to be some sense of robustness of bistability:

html('fraction of data bistable:')
pretty_print(fraction_bistable(data_thresh))
fraction of data bistable:
0.994

This suggests the following robustness analysis, where we compute the bistable fraction $f$ of parameters drawn widely over varying $D$. The data fit the exponential form
$$\hat{f} = f_{\infty} + \left( 1 - f_{\infty} \right) e^{-D / D_{0}},$$
where $D_{0}$ is the characteristic decay length in $D$. The fitted value of $f_{\infty}$ provides evidence for a nontrivial asymptotic bistable fraction.

nparams = 250
D_min, D_max = 0, 5
nD = 25
D = p.linspace(D_min, D_max, nD)
bistable = p.empty(nD)
for i in range(nD):
    if D[i] == 0:
        data_thresh = [thresh]
    else:
        data_params = generate_params(nparams, params, D[i])
        data_thresh = compute_thresholds(data_params, tol)
    bistable[i] = fraction_bistable(data_thresh)
bistable_fit = lambda x: x[0] + (1 - x[0])*p.exp(-D/x[1])
fit_params = leastsq(lambda x: bistable_fit(x) - bistable, p.rand(2))[0]
html(r'asymptotic bistable fraction ($f_{\infty}$):')
pretty_print(fit_params[0])
asymptotic bistable fraction ($f_{\infty}$):
0.427825851848

The data are plotted below.

figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.95)
p.figure()
p.subplot(111)
p.plot(D, bistable, '.-', label=r'data ($f$)')
p.plot(D, bistable_fit(fit_params), 'r--', label=r'fit ($\hat{f}$)')
p.legend(loc='best').draw_frame(False)
p.xlabel(r'Variation coefficient ($D$)')
p.ylabel(r'Bistable fraction')
p.savefig('sage.png')
p.close()

Cell-level cluster integration

For the cell-level integration, we draw parameters for $100$ clusters at $D = 0.25$ about baseline median values. These define heterogeneous clusters, which we integrate into a normalized measure of cell activation as
$$\zeta_{\infty}^{\mathrm{cell}} = \frac{\sum_{i} \zeta_{\infty, i}}{\sum_{i} \sigma_{i}},$$
where the subscript $i$ denotes reference to cluster $i$.

nparams = 100
D = 0.25
lambda_min, lambda_max = 0, 0.8
data_params = generate_params(nparams, params, D)
data_thresh = compute_thresholds(data_params, tol)
lambda_span = p.linspace(lambda_min, lambda_max, nlambda)
sigma_tot = p.sum(data_params['sigma'])
signal = p.recarray(nlambda, dtype=[('upper', float), ('lower', float)])
for i in range(nlambda):
    for type in ['upper', 'lower']:
        signal[type][i] = 0
    l = lambda_span[i]
    for j in range(nparams):
        d = {sigma   : data_params[j]['sigma'   ],
             kappa_o : data_params[j]['kappa_o' ],
             kappa_u : data_params[j]['kappa_u' ],
             kappa_s2: data_params[j]['kappa_s2'],
             kappa_s3: data_params[j]['kappa_s3'],
             kappa_l2: data_params[j]['kappa_l2'],
             kappa_l3: data_params[j]['kappa_l3']}
        rts = p.roots(map(lambda x: x.subs(d).subs({lmbda: l}), zt_poly))
        rts = valid_roots(rts, data_params[j]['sigma'])
        if is_bistable(data_thresh[j]) and l > data_thresh[j]['lambda_on']:
            for type in ['upper', 'lower']:
                signal[type][i] += max(rts)
        elif l < data_thresh[j]['lambda_off']:
            for type in ['upper', 'lower']:
                signal[type][i] += min(rts)
        else:
            signal['upper'][i] += max(rts)
            signal['lower'][i] += min(rts)
    for type in ['upper', 'lower']:
        signal[type][i] /= sigma_tot

The cell-level hysteresis curve is shown as follows, from which we see a graded response, despite the threshold properties of individual clusters as seen earlier. Furthermore, we plot the thresholds $\lambda_{\pm}$ on the threshold plane, which shows a significant linear dependence between the $\lambda_{\pm}$ (the valid region $\lambda_{+} > \lambda_{-}$ is shown in green).

figw = 3.27
figh = 2.1 * figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.1, top=0.95, hspace=0.25)
p.figure()
p.subplot(211)
p.plot(lambda_span, signal['upper'], 'b', lambda_span, signal['lower'], 'b')
p.xlim(lambda_min, lambda_max)
p.xlabel(r'FasL ($\lambda$)')
p.ylabel(r'Normalized cell activation ($\zeta_{\infty}^{\mathrm{cell}}$)')
p.subplot(212)
p.scatter(data_thresh['lambda_off'], data_thresh['lambda_on'])
p.xlim(xmin=0)
p.ylim(ymin=0)
xmin, xmax = p.xlim()
ymin, ymax = p.ylim()
xymax = max(xmax, ymax)
p.fill((0, 0, xymax), (0, xymax, xymax), ec='k', fc=[0.5, 0.75, 0.5], zorder=0)
p.xlim([0, xmax])
p.ylim([0, ymax])
p.xlabel(r'Deactivation threshold ($\lambda_{-}$)')
p.ylabel(r'Activation threshold ($\lambda_{+}$)')
p.savefig('sage.png')
p.close()
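To put a number on the linear dependence noted above, one could append a cell like the following (a sketch, not part of the original worksheet; it assumes the data_thresh record array from the cluster-integration cell is still in scope and reuses the linregress import from the top of the worksheet).

# Sketch: regress the activation threshold on the deactivation threshold over
# the parameter sets for which the thresholds are defined.
r = p.find(~p.isnan(data_thresh['lambda_on']))
slope, intercept, corr, pval, stderr = linregress(data_thresh['lambda_off'][r],
                                                  data_thresh['lambda_on' ][r])
print(corr, pval)  # correlation coefficient and two-tailed p-value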

Model discrimination

In the following, we demonstrate model discrimination between our model, hereafter termed the cluster model, and the prevailing crosslinking model using steady-state invariants. To do this, we first need to describe the crosslinking model.

The crosslinking model that we consider has the reactions
$$L + R \mathop{\longleftrightarrow}^{3 k_{+}}_{k_{-}} C_{1}, \quad C_{1} + R \mathop{\longleftrightarrow}^{2 k_{+}}_{2 k_{-}} C_{2}, \quad C_{2} + R \mathop{\longleftrightarrow}^{k_{+}}_{3 k_{-}} C_{3},$$
where $L$ is FasL, $R$ is Fas, and $C_{i}$ denotes the complex of one FasL bound to $i$ receptors. With the nondimensionalizations
$$\lambda = \frac{l}{s}, \quad \rho = \frac{r}{s}, \quad \gamma_{i} = \frac{c_{i}}{s}, \quad \kappa = \frac{k_{-}}{k_{+} s}, \quad \tau = k_{+} s t,$$
the mass-action dynamics are
$$\frac{d \lambda}{d \tau} = -\nu_{1}, \quad \frac{d \rho}{d \tau} = -\nu_{1} - \nu_{2} - \nu_{3}, \quad \frac{d \gamma_{1}}{d \tau} = \nu_{1} - \nu_{2}, \quad \frac{d \gamma_{2}}{d \tau} = \nu_{2} - \nu_{3}, \quad \frac{d \gamma_{3}}{d \tau} = \nu_{3},$$
where
$$\nu_{1} = 3 \lambda \rho - \kappa \gamma_{1}, \quad \nu_{2} = 2 \rho \gamma_{1} - 2 \kappa \gamma_{2}, \quad \nu_{3} = \rho \gamma_{2} - 3 \kappa \gamma_{3}.$$
Therefore, at steady state,
$$\gamma_{1, \infty} = 3 \lambda_{\infty} \left( \frac{\rho_{\infty}}{\kappa} \right), \quad \gamma_{2, \infty} = 3 \lambda_{\infty} \left( \frac{\rho_{\infty}}{\kappa} \right)^{2}, \quad \gamma_{3, \infty} = \lambda_{\infty} \left( \frac{\rho_{\infty}}{\kappa} \right)^{3},$$
where
$$\lambda_{\infty} = \frac{\Lambda}{1 + 3 \left( \rho_{\infty} / \kappa \right) + 3 \left( \rho_{\infty} / \kappa \right)^{2} + \left( \rho_{\infty} / \kappa \right)^{3}}$$
and $\rho_{\infty}$ satisfies
$$\rho_{\infty}^{4} + \left( 3 \Lambda + 3 \kappa - \sigma \right) \rho_{\infty}^{3} + 3 \kappa \left( 2 \Lambda + \kappa - \sigma \right) \rho_{\infty}^{2} + \kappa^{2} \left( 3 \Lambda + \kappa - 3 \sigma \right) \rho_{\infty} - \kappa^{3} \sigma = 0,$$
where
$$\Lambda = \lambda + \gamma_{1} + \gamma_{2} + \gamma_{3}, \quad \sigma = \rho + \gamma_{1} + 2 \gamma_{2} + 3 \gamma_{3}.$$
It is easy to show that only one nonnegative root exists, namely,
$$\rho_{\infty} = \frac{1}{2} \left[ \sqrt{\left( 3 \Lambda + \kappa - \sigma \right)^{2} + 4 \kappa \sigma} - \left( 3 \Lambda + \kappa - \sigma \right) \right].$$

Lambda = var('Lambda')
kappa = var('kappa')
rho_eq = rho^4 + \
         (3*Lambda + 3*kappa - sigma)*rho^3 + \
         3*kappa*(2*Lambda + kappa - sigma)*rho^2 + \
         kappa^2*(3*Lambda + kappa - 3*sigma)*rho - \
         kappa^3*sigma
rho_infty = solve(rho_eq, rho)
rho_infty = rho_infty[1].rhs()
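The claim that only one nonnegative root exists can be checked symbolically (a sketch, not part of the original worksheet; it uses rho_eq and the symbols defined in the preceding cell): the quartic factors into $(\rho + \kappa)^{2}$ times a quadratic whose product of roots is $-\kappa\sigma < 0$, so exactly one root is nonnegative.

# Sketch: check the factorization
#   rho_eq = (rho^2 + (3*Lambda + kappa - sigma)*rho - kappa*sigma) * (rho + kappa)^2,
# whose only nonnegative root is the closed form quoted above.
factored = (rho^2 + (3*Lambda + kappa - sigma)*rho - kappa*sigma) * (rho + kappa)^2
print(bool((rho_eq - factored).expand() == 0))  # expected: True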

As baseline parameters for the crosslinking model, we set $\sigma = 1$ and $\kappa = 0.1$. The following figure indicates that the resulting behavior is consistent with that for the cluster model, where we set
$$\zeta \equiv \gamma_{1} + 2 \gamma_{2} + 3 \gamma_{3} = \sigma - \rho.$$

params = {}
params['cluster'] = base_params()
params['crosslink'] = {sigma: 1, kappa: 0.1}
lambda_min, lambda_max = 0, 0.5
lambda_span = p.linspace(lambda_min, lambda_max, nlambda)
ss = rho_infty.subs(params['crosslink'])
ss = params['crosslink'][sigma] - p.array([ss.subs(Lambda=l).n() for l in lambda_span])
figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.95)
p.figure()
p.subplot(111)
p.plot(lambda_span, ss)
p.xlabel(r'Total FasL ($\Lambda$)')
p.ylabel(r'Active Fas ($\zeta_{\infty}$)')
p.savefig('sage.png')
p.close()

Hyperactive mutants

Denote hyperactive mutant Fas by $Z_{\Delta}$. To amend the cluster model for interaction with $Z_{\Delta}$, we replace the ligand-independent and -dependent cluster-stabilization reactions with
\begin{align*}
jY + kZ + \left( i - j - k \right) Z_{\Delta} &\xrightarrow{k_{s}^{\left( i \right)}} \left( j - k' \right) Y + \left( k + k' \right) Z + \left( i - j - k \right) Z_{\Delta}, \quad \begin{cases} i = 2, \dots, m,\\ j = 1, \dots, i,\\ k = 0, \dots, i - j,\\ k' = 1, \dots, j, \end{cases}\\
L + jY + kZ + \left( i - j - k \right) Z_{\Delta} &\xrightarrow{k_{l}^{\left( i \right)}} L + \left( j - k' \right) Y + \left( k + k' \right) Z + \left( i - j - k \right) Z_{\Delta}, \quad \begin{cases} i = 2, \dots, n,\\ j = 1, \dots, i,\\ k = 0, \dots, i - j,\\ k' = 1, \dots, j, \end{cases}
\end{align*}
respectively. The corresponding velocities are now
\begin{align*}
\nu_{s} &= \sum_{i = 2}^{m} \kappa_{s}^{\left( i \right)} \sum_{j = 1}^{i} \eta^{j} \sum_{k = 0}^{i - j} \zeta^{k} \zeta_{\Delta}^{i - j - k} \sum_{k' = 1}^{j} k',\\
\nu_{l} &= \lambda \sum_{i = 2}^{n} \kappa_{l}^{\left( i \right)} \sum_{j = 1}^{i} \eta^{j} \sum_{k = 0}^{i - j} \zeta^{k} \zeta_{\Delta}^{i - j - k} \sum_{k' = 1}^{j} k'.
\end{align*}

We compute the updated form of $d \zeta / d \tau$ below, where we set $\zeta_{\Delta} \equiv z_{\Delta} / s = \Delta \overline{\sigma}$ for $\overline{\sigma} = \sigma + \zeta_{\Delta}$ the total receptor (wildtype and mutant) concentration, and $\Delta$ the mutant population fraction.

Delta = var('Delta')
sigma_bar = var('sigma_bar')

def zeta_tau_mutant(m, n):
    kappa_s, kappa_l = {}, {}
    for i in [2..m]: kappa_s[i] = var('kappa_s%i' %i)
    for i in [2..n]: kappa_l[i] = var('kappa_l%i' %i)
    zeta_Delta = Delta*sigma_bar
    sigma_wt = sigma_bar - zeta_Delta
    xi = (sigma_wt - zeta) / (1 + kappa_o)
    eta = kappa_o * xi
    nu_u = kappa_u * zeta
    v = {}
    for i in [2..max(m,n)]:
        v[i] = sum([eta^j*zeta^k*zeta_Delta^(i-j-k)*kp
                    for j in [1..i] for k in [0..i-j] for kp in [1..j]])
    nu_s = sum([kappa_s[i]*v[i] for i in [2..m]])
    nu_l = lmbda * sum([kappa_l[i]*v[i] for i in [2..n]])
    return -nu_u + nu_s + nu_l

We use Sage to perform some preliminary symbolic computations on the modified dynamics function.

m, n = 3, 3
ztm = zeta_tau_mutant(m, n).simplify_full()
dztm = ztm.diff(zeta).simplify_full()
ztml = ztm.solve(lmbda)[0].rhs().simplify_full()
ztm_poly = map(lambda x: x[0].simplify_full(), ztm.coeffs(zeta)[::-1])

The following plot shows the response curve of the active wildtype fraction $\varphi = \zeta_{\infty} / \sigma$ for various levels of $\Delta$ at $\overline{\sigma} = 1$ (stable, solid lines; unstable, dashed lines). That the response curve is sensitive to $\Delta$ under the cluster model is in contrast to that under the crosslinking model, which predicts an invariant $\varphi$-response curve, due to the absence of receptor interactions.

lambda_min, lambda_max = 0, 0.3
nzeta, nlambda = 200, 100
lambda_span = p.linspace(lambda_min, lambda_max, nlambda)
figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.95)
p.figure()
var_d = [(0.00, 'b', 0.80, 0.32),
         (0.01, 'g', 0.57, 0.39),
         (0.05, 'r', 0.33, 0.50),
         (0.10, 'c', 0.08, 0.69)]
dztms = dztm.subs(params['cluster']).subs(sigma_bar=params['cluster'][sigma])
ztmls = ztml.subs(params['cluster']).subs(sigma_bar=params['cluster'][sigma])
poly = map(lambda x: x.subs(params['cluster']).subs(sigma_bar=params['cluster'][sigma]),
           ztm_poly)
p.subplot(111)
for (d, c, x, y) in var_d:
    sigma_wt = (1 - d)*params['cluster'][sigma]
    rts = p.roots(map(lambda x: x.subs({Delta: d, lmbda: 0}), poly))
    zeta_min = min(valid_roots(rts, sigma_wt))
    zeta_span = p.linspace(zeta_min, sigma_wt, nzeta+1)[:-1]
    lambda_z = [ztmls.subs({Delta: d, zeta: z}) for z in zeta_span]
    stable_z = [bool(dztms.subs({lmbda: lambda_z[i], Delta: d, zeta: zeta_span[i]}) < 0)
                for i in range(nzeta)]
    ss = p.array([(lambda_z[i],
                   zeta_span[i] / sigma_wt,# + d*params['cluster'][sigma],
                   stable_z[i]) for i in range(nzeta)],
                 dtype=[('lambda', float), ('zeta', float), ('stable', bool)])
    ss_plot(ss['lambda'], ss['zeta'], ss['stable'], sc=c, uc=c, scale='log')
    p.annotate(r'$\Delta = %g$' % d, (x, y), xycoords='axes fraction',
               size='xx-small', color=c)
p.xlabel(r'FasL ($\lambda$)')
p.ylabel(r'Active wildtype Fas fraction ($\varphi$)')
p.xlim([lambda_min, lambda_max])
p.savefig('sage.png')
p.close()

Steady-state invariants

The steady-state invariant for each model is defined as $d \zeta / d \tau$. However, the invariants must be expressed only in terms of experimentally accessible parameters, which we take as $(\Lambda, \sigma, \zeta_{\infty})$. Thus, for the cluster invariant, we make the identification $\lambda \mapsto \Lambda$, and for the crosslinking invariant, we identify $\rho_{\infty} \mapsto \sigma - \zeta_{\infty}$. The following cell defines the model invariants.

rho_kappa = rho_infty / kappa
lambda_infty = Lambda / (1 + 3*rho_kappa + 3*rho_kappa^2 + rho_kappa^3)
gamma_1_infty = 3 * lambda_infty * rho_kappa
gamma_2_infty = 3 * lambda_infty * rho_kappa^2
gamma_3_infty =     lambda_infty * rho_kappa^3
inv = {}
inv['cluster'] = zt.subs({lmbda: Lambda})
inv['crosslink'] = 3*lambda_infty*rho -   kappa*gamma_1_infty + \
                   2*rho*gamma_1_infty - 2*kappa*gamma_2_infty + \
                     rho*gamma_2_infty - 3*kappa*gamma_3_infty
inv['crosslink'] = inv['crosslink'].subs(rho=sigma - zeta)

We implement a function for generating the experimentally accessible values $(\Lambda, \sigma, \zeta_{\infty})$ below.

def generate_ss(model, n, params, Lambda_med, Lambda_D, sigma_med, sigma_D):
    m, s = lognorm_params(Lambda_med, Lambda_D)
    Lambda_exp = lognormal(m, s, n)
    m, s = lognorm_params(sigma_med, sigma_D)
    sigma_exp = lognormal(m, s, n)
    if model == 'cluster':
        poly = map(lambda x: x.subs(sigma=rho).subs(params).subs(rho=sigma), zt_poly)
        zeta_exp = p.empty(n)
        for i in range(n):
            rts = p.roots(map(lambda x: x.subs({sigma: sigma_exp[i],
                                                lmbda: Lambda_exp[i]}), poly))
            rts = valid_roots(rts, sigma_exp[i])
            if p.size(rts) > 1:
                if p.rand() < 0.5: zeta_exp[i] = min(rts)
                else:              zeta_exp[i] = max(rts)
            else:
                zeta_exp[i] = rts[0]
    elif model == 'crosslink':
        rho_exp = rho_infty.subs(kappa=params[kappa])
        rho_exp = p.array([rho_exp.subs({Lambda: Lambda_exp[i],
                                         sigma : sigma_exp[i]}) for i in range(n)])
        zeta_exp = sigma_exp - rho_exp
    return p.array(zip(Lambda_exp, sigma_exp, zeta_exp),
                   dtype=[('Lambda', float), ('sigma', float), ('zeta', float)])

As a demonstration that model discrimination using steady-state invariants is practical, we generate synthetic data for $\Lambda$ and $\sigma$; $\zeta_{\infty}$ is then calculated using the above function. For the cluster model, if bistability is encountered, one of the two stable values of $\zeta_{\infty}$ is chosen at random.

ndata = 100
Lambda_med, Lambda_D = 0.2, 0.5
sigma_med, sigma_D = 1, 0.5
data_ss = {}
for model in ['cluster', 'crosslink']:
    data_ss[model] = generate_ss(model, ndata, params[model],
                                 Lambda_med, Lambda_D, sigma_med, sigma_D)

We then compute the invariants given the experimental data.

inv_data = {}
for model in ['cluster', 'crosslink']:
    inv_data[model] = {}
    for data in ['cluster', 'crosslink']:
        inv_data[model][data] = [inv[model].subs({Lambda: data_ss[data]['Lambda'][i],
                                                  sigma : data_ss[data]['sigma' ][i],
                                                  zeta  : data_ss[data]['zeta'  ][i]})
                                 for i in range(ndata)]

For each model-data pair, we fit the model invariant to the data by minimizing over the model parameter space using SLSQP. This gives the best possible fit for a given model to the data.

tol, acc = 1e-3, 1e-9
inv_min = {}
for model in ['cluster', 'crosslink']:
    inv_min[model] = {}
f_ieqcons = {}
f_ieqcons['cluster']   = lambda x: p.array([x[i] - tol for i in [0..5]])
f_ieqcons['crosslink'] = lambda x: p.array([x[0] - tol])
for data in ['cluster', 'crosslink']:
    inv_min['cluster'][data] = \
        fmin_slsqp(lambda x: p.norm(map(lambda y: y.subs({kappa_o : x[0],
                                                          kappa_u : x[1],
                                                          kappa_s2: x[2],
                                                          kappa_s3: x[3],
                                                          kappa_l2: x[4],
                                                          kappa_l3: x[5]}),
                                        inv_data['cluster'][data])) / p.sqrt(ndata),
                   p.ones(6), f_ieqcons=f_ieqcons['cluster'], full_output=True, acc=acc)
    inv_min['crosslink'][data] = \
        fmin_slsqp(lambda x: p.norm(map(lambda y: y.subs(kappa=x[0]),
                                        inv_data['crosslink'][data])) / p.sqrt(ndata),
                   p.ones(1), f_ieqcons=f_ieqcons['crosslink'], full_output=True, acc=acc)
Optimization terminated successfully. (Exit mode 0)
  Current function value: 2.53465893094e-06
  Iterations: 43
  Function evaluations: 361
  Gradient evaluations: 43
Optimization terminated successfully. (Exit mode 0)
  Current function value: 0.0625788982552
  Iterations: 4
  Function evaluations: 12
  Gradient evaluations: 4
Optimization terminated successfully. (Exit mode 0)
  Current function value: 0.000316194660318
  Iterations: 29
  Function evaluations: 235
  Gradient evaluations: 29
Optimization terminated successfully. (Exit mode 0)
  Current function value: 2.13658254836e-11
  Iterations: 7
  Function evaluations: 37
  Gradient evaluations: 7

The results are summarized below, from which we see that the model can be correctly identified from the data.

w = 0.3
figw = 3.27
figh = figw / golden_ratio
p.rc('figure', figsize=(figw, figh))
p.rc('figure.subplot', left=0.15, right=0.95, bottom=0.1, top=0.95)
p.figure()
p.subplot(111)
p.bar(0,     inv_min['cluster'  ]['cluster'  ][1], width=w, log=True, label='cluster data')
p.bar(w,     inv_min['cluster'  ]['crosslink'][1], width=w, color='g', log=True,
      label='crosslinking data')
p.bar(1,     inv_min['crosslink']['cluster'  ][1], width=w, log=True)
p.bar(1 + w, inv_min['crosslink']['crosslink'][1], width=w, color='g', log=True)
p.xlim([-w, 1+3*w])
p.legend(loc='best').draw_frame(False)
p.xticks([w, 1+w], ['cluster model', 'crosslinking model'])
p.ylabel(r'Invariant error ($\epsilon$)')
p.savefig('sage.png')
p.close()