CoCalc Public Files: Exp_fit_GL3_inversion.sagews
Author: Greg Hammett

## Study of a method to use exponential basis functions to handle positivity issues.

Study how to reconstruct an exponential representation of the solution, given a set of moments of the solution.

GWH 6/13/2017

# initializations
%auto
typeset_mode(True) # typeset all output in the worksheet by default
reset() # reset SageMath to its initial state (clears all variables)

var('f_av fx_av m g A')

($\displaystyle f_{\mathit{av}}$, $\displaystyle \mathit{fx}_{\mathit{av}}$, $\displaystyle m$, $\displaystyle g$, $\displaystyle A$)
A=Matrix([[1/2,1/2],[-1/2,1/2]])

A

$\displaystyle \left(\begin{array}{rr} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{array}\right)$
Ainv = A^(-1); Ainv

$\displaystyle \left(\begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array}\right)$
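The same averaging matrix and its inverse can be cross-checked numerically outside of Sage. This is a small sketch using numpy (assumed available), with the matrix entries taken from the worksheet above:

```python
import numpy as np

# A maps the endpoint values (gL, gR) to ((gL+gR)/2, (gR-gL)/2),
# i.e. the cell average and half the jump across the cell.
A = np.array([[0.5, 0.5],
              [-0.5, 0.5]])
Ainv = np.linalg.inv(A)
# Ainv = [[1, -1], [1, 1]], matching the symbolic result above.
```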
# m is a vector of moments, [<f> = f_0, <xf> = f_1/3]:
m = vector([1.0,0.1])

# map the exponential function to the moments, using 3rd order Gauss-Lobatto quadrature:
def F_exp_to_mom(g):
    g0 = g[0]
    g1 = g[1]
    gavg = (g0+g1)/2
    tol = 1.e-6
    g0 = max(g0, tol*gavg)
    g1 = max(g1, tol*gavg)
    return vector( [ g0/6+sqrt(g0*g1)*4/6 + g1/6,  -g0/6+g1/6 ] )

# map the exponential function to the moments, using 3rd order Gauss-Lobatto
# quadrature; here g holds the logs of the endpoint values:
def F_exp2_to_mom(g):
    g0 = exp(g[0])
    g1 = exp(g[1])
    return vector( [ g0/6+sqrt(g0*g1)*4/6 + g1/6,  -g0/6+g1/6 ] )

# g=(gL,gR), where gL and gR are the values of the exponential function
# at the left and right boundary.  This then specifies the exponential representation.
g = vector([1.0,1.0])

# test the exponential to moments map:
F_exp_to_mom(g)

$\displaystyle \left(1.00000000000000,\,0.000000000000000\right)$
# The exact inverse function from moments to exponential representation,
# for 3rd order Gauss-Lobatto quadrature.
# (can only be worked out analytically in 1D for 1st and 3rd order Gauss-Lobatto quadrature)
def F_mom_to_exp(m):
    r = m[1]/m[0]
    beta = (2*r+sqrt(3*r^2+1))/(1-r)
    gL = m[0]*6/(1+4*beta+beta^2)
    gR = gL*beta^2
    return vector([gL,gR])

# Can perhaps rearrange the equations in F_mom_to_exp to be in a simpler or more robust form.  Examples:
%var r
eq1 = F_mom_to_exp(vector([1,r])); eq1

$\displaystyle \left(\frac{6}{\frac{{\left(2 \, r + \sqrt{3 \, r^{2} + 1}\right)}^{2}}{{\left(r - 1\right)}^{2}} - \frac{4 \, {\left(2 \, r + \sqrt{3 \, r^{2} + 1}\right)}}{r - 1} + 1},\,\frac{6 \, {\left(2 \, r + \sqrt{3 \, r^{2} + 1}\right)}^{2}}{{\left(r - 1\right)}^{2} {\left(\frac{{\left(2 \, r + \sqrt{3 \, r^{2} + 1}\right)}^{2}}{{\left(r - 1\right)}^{2}} - \frac{4 \, {\left(2 \, r + \sqrt{3 \, r^{2} + 1}\right)}}{r - 1} + 1\right)}}\right)$
eq00 = factor(expand(eq1[0])); eq00

$\displaystyle \frac{3 \, {\left(r - 1\right)}^{2}}{3 \, r + 2 \, \sqrt{3 \, r^{2} + 1} + 1}$
eq10 = factor(expand(eq1[1])); eq10

$\displaystyle \frac{3 \, {\left(7 \, r^{2} + 4 \, \sqrt{3 \, r^{2} + 1} r + 1\right)}}{3 \, r + 2 \, \sqrt{3 \, r^{2} + 1} + 1}$


# verify solution:
m = vector([1.0,0.1])
F_mom_to_exp(m)
F_exp_to_mom(F_mom_to_exp(m))

$\displaystyle \left(0.729778313018444,\,1.32977831301844\right)$
$\displaystyle \left(1.00000000000000,\,0.100000000000000\right)$
m = vector([1.0,0.99999])
F_mom_to_exp(m)
F_exp_to_mom(F_mom_to_exp(m))

$\displaystyle \left(3.75002812515923 \times 10^{-11},\,5.99994000003750\right)$
$\displaystyle \left(1.00281889884175,\,0.999989500011250\right)$
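The round trip F_exp_to_mom(F_mom_to_exp(m)) = m can also be confirmed in plain Python; this sketch simply transcribes the two formulas above:

```python
import math

def F_exp_to_mom(g0, g1):
    # forward map: 3rd-order Gauss-Lobatto average of the exponential
    mid = math.sqrt(g0 * g1)
    return (g0/6 + 4*mid/6 + g1/6, (g1 - g0)/6)

def F_mom_to_exp(m0, m1):
    # exact inverse for 3rd-order Gauss-Lobatto quadrature
    r = m1 / m0
    beta = (2*r + math.sqrt(3*r**2 + 1)) / (1 - r)
    gL = m0 * 6 / (1 + 4*beta + beta**2)
    return (gL, gL * beta**2)

gL, gR = F_mom_to_exp(1.0, 0.1)
# round trip recovers the moments (1.0, 0.1), as in the worksheet output
```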
# A very approximate function from moments to exponential representation,
# from the 1st order Gauss-Lobatto quadrature.
#
def F_mom_to_exp_GL1(m):
    return vector([m[0]-m[1],m[0]+m[1]])

def F_x3(x,m):
    f0 = m[0]
    xavg = m[1]
    # the normalization in the denominator is to ensure the average f
    # is correct when evaluated with 3rd order Gauss-Lobatto quadrature
    #return f0*(1+xavg*x)^3/((1/6)*(1-xavg)^3 + 4/6 + (1/6)*(1+xavg)^3)
    # However, that doesn't give enough enhancement of f at the boundary,
    # so try the simpler expression:
    return f0*(1+xavg*x)^3

def F_x3rt(m):
    f0 = m[0]
    xavg = m[1]
    return f0*(1+xavg)^3 / ( (1/6)*(1-xavg)^3 + (4/6)*(1-xavg)^1.5 * (1+xavg)^1.5 + (1/6)*(1+xavg)^3 )

def F_mom_to_exp_x3(m):
    return F_x3(1,m)

# Make a plot
eq1 = F_mom_to_exp(vector([1,x]))[1]
eq2 = F_mom_to_exp_GL1(vector([1,x]))[1]
eq_x3 = F_x3(1,vector([1,x]))
eq_x3rt = F_x3rt(vector([1,x]))
P = plot(eq1,legend_label='exponential g @x=1')
P = P + plot(1+3*x,color="green",legend_label='linear g @x=1')
P = P + plot(eq2,color="gray",legend_label='rough g @x=1')
P = P + plot(eq_x3, color='red', legend_label='(1+<x>)^3')
P = P + plot(eq_x3rt, color='orange', legend_label='(1+<x>)^3 w exp interp')
P.axes_labels(['<x> = <xf>/<f>','g_R'])
P.show()


The above plot demonstrates that the approximation for g_R($\langle x \rangle$) based on 1st order Gauss-Lobatto quadrature is not 1st order accurate. This is because calculating $\langle x f(x) \rangle$ involves $\langle x^2 \rangle$ terms, which need more than 1st order quadrature to capture. At least it is a positive definite approximation.
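To make the accuracy claim concrete, here is a plain-Python comparison of the exact inverse against the GL1 approximation at small slope: the exact g_R behaves like 1 + 3⟨x⟩, while the GL1 map gives 1 + ⟨x⟩, so its error is first order in ⟨x⟩:

```python
import math

def gR_exact(r):
    # exact inverse (from F_mom_to_exp above) evaluated at m = (1, r)
    beta = (2*r + math.sqrt(3*r**2 + 1)) / (1 - r)
    gL = 6 / (1 + 4*beta + beta**2)
    return gL * beta**2

def gR_GL1(r):
    # rough inverse from 1st-order Gauss-Lobatto: (gL, gR) = (1-r, 1+r)
    return 1 + r

r = 1e-3
# exact map ~ 1 + 3r for small r, so the GL1 error (1+r vs 1+3r) is O(r)
err_GL1 = abs(gR_GL1(r) - gR_exact(r))
```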

# choose a set of moments:
m = vector([1.0,0.7])

g_exact = F_mom_to_exp(m)
print "g_exact =", g_exact

# initial guess
g = Ainv * m
print "raw g_init =", g

# rescale solution to exactly give the correct density:
g = g/F_exp_to_mom(g)[0]

# Note: I wonder if rescaling the solution alters the
# convergence rate of Newton in some way.  I think
# it should be okay, but perhaps think a little more.

%var err_est, s, frac
rel_frac = 0.5

print "g_init  =", g

g_exact = (0.0432467291003422, 4.24324672910034)
raw g_init = (0.300000000000000, 1.70000000000000)
g_init  = (0.370631843401647, 2.10024711260933)
# Repeatedly run this cell to do iterations of a preconditioned iterative solver:

g_old = g
r = F_exp_to_mom(g) - m
print "previous g=", g
print "residual =", r, "|res| =",abs(r)
print "error =", g-g_exact

err_est = Ainv*r

print "err_est =", err_est

# Do enough back tracking to ensure that no element of g
# drops by more than "frac" in a single iteration
# (this keeps g from going negative):
s = 1
if (err_est[0] > rel_frac*g[0]): s = rel_frac*g[0]/err_est[0]
if (err_est[1] > rel_frac*g[1]): s = min(s,rel_frac*g[1]/err_est[1])

g = g - s*err_est

# The positivity backtracking is not enough, also have to check the error
# to see if we have to backtrack more:
#

res_new = abs(F_exp_to_mom(g) - m)

if(res_new > abs(r)):
    print "s, res, res_new=", s, res_new, abs(r)
    s = s/2

g = g - s*err_est
res_new = abs(F_exp_to_mom(g) - m)

if(res_new > abs(r)):
    print "s, res, res_new=", s, abs(r), res_new
    s = s/2

g = g - s*err_est
res_new = abs(F_exp_to_mom(g) - m)

# older method of limiting:

# print "raw g = ", g
#epsilon=1.e-6
#g[0] = max(epsilon, g[0])
#g[1] = max(epsilon, g[1])

print "s =", s
print "g before rescaling:", g
# rescale solution to exactly correct density:
g = g / ( F_exp_to_mom(g)[0] )
print "g                 :", g, "res = ", F_exp_to_mom(g) - m

previous g= (0.370631843401647, 2.10024711260933)
residual = (2.22044604925031e-16, -0.411730788465385) |res| = 0.411730788465385
error = (0.327385114301305, -2.14299961649101)
err_est = (0.411730788465386, -0.411730788465385)
s, res, res_new= 0.450090027009003 0.411730788465385 0.653946807970084
s = 0.225045013504502
g before rescaling: (-0.0926579608504118, 2.56353691686139)
g                 : (-0.216266827161853, 5.98338221814460) res =  (2.22044604925031e-16, 0.297229889097817)
# The above fails to converge for a slope of 0.7.  Might be able to improve the backtracking or the estimated Jacobian.
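For a milder slope the same preconditioned iteration does converge. The sketch below transcribes the scheme in plain Python (with the positivity limiter and density rescaling from above, but without the extra residual backtracking, and a fixed iteration count in place of a convergence test):

```python
import math

def F_exp_to_mom(g):
    g0, g1 = g
    mid = math.sqrt(g0 * g1)
    return [g0/6 + 4*mid/6 + g1/6, (g1 - g0)/6]

def solve_moments(m, rel_frac=0.5, n_iter=60):
    # initial guess from the GL1 (linear) inverse, Ainv * m:
    g = [m[0] - m[1], m[0] + m[1]]
    for _ in range(n_iter):
        r = [F_exp_to_mom(g)[i] - m[i] for i in range(2)]
        err = [r[0] - r[1], r[0] + r[1]]      # err_est = Ainv * r
        # limit the step so no component of g drops by more than rel_frac
        # in one iteration (this keeps g from going negative):
        s = 1.0
        for i in range(2):
            if err[i] > rel_frac * g[i]:
                s = min(s, rel_frac * g[i] / err[i])
        g = [g[i] - s * err[i] for i in range(2)]
        # rescale so the density moment is exact
        # (F_exp_to_mom is homogeneous of degree 1 in g):
        fac = m[0] / F_exp_to_mom(g)[0]
        g = [fac * gi for gi in g]
    return g

g = solve_moments([1.0, 0.1])   # the mild-slope case verified earlier
```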

g_old
"old res", abs(F_exp_to_mom(g_old) - m)
g_new = g_old-0.5*err_est
g_new = g_new / ( F_exp_to_mom(g_new)[0] )
abs(F_exp_to_mom(g_new) - m)

$\displaystyle \left(0.0405996888408913,\,4.29003378752084\right)$
(old res, $\displaystyle 0.00823901644665892$)
$\displaystyle 0.00264206777375164$



#### try SciPy numerical nonlinear solvers

def fun(g):
    return F_exp2_to_mom(g)-m

from scipy import optimize

m = vector([1.0,0.99])
g_exact = F_mom_to_exp(m)
g_init = vector([log(m[0]-m[1]), log(m[0]+m[1])])
# sol = optimize.root(fun, g_init, jac=false, method='hybr', tol=1.e-1)
sol = optimize.root(fun, g_init, jac=false, method='hybr', options={'xtol':1.e-3})

sol

    fjac: array([[ -9.99999851e-01, 5.45450257e-04], [ -5.45450257e-04, -9.99999851e-01]])
     fun: array([ 2.87526888e-07, 6.24956084e-08])
 message: 'The solution converged.'
    nfev: 14
     qtf: array([ -2.54857935e-05, -2.58130665e-06])
       r: array([-0.00508839, -0.96073911, -1.00869966])
  status: 1
 success: True
       x: array([-10.18360123, 1.78171556])
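A plain-Python/SciPy transcription of this experiment (assuming numpy and scipy are available), with the GL1 initial guess taken in log space:

```python
import numpy as np
from scipy import optimize

def F_exp2_to_mom(g):
    # g holds the logs of the endpoint values, which keeps the
    # reconstructed function positive for any iterate
    g0, g1 = np.exp(g)
    mid = np.sqrt(g0 * g1)
    return np.array([g0/6 + 4*mid/6 + g1/6, (g1 - g0)/6])

m = np.array([1.0, 0.99])
g_init = np.log([m[0] - m[1], m[0] + m[1]])   # GL1 initial guess
sol = optimize.root(lambda g: F_exp2_to_mom(g) - m, g_init, method='hybr')
```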
# number of iterations for various values of the slope (<x>), with GL1 initial guess:
# tol=1.e-3 (residual --> 1.e-17)
#
# slope = 0.1,    nfev = 7
# slope = 0.8,    nfev = 12 res ~ 1.e-6
# slope = 0.9999, nfev = 17 res ~ 1.e-7
# slope = 0.9999, nfev = 9 if tol = 0.1, then res jumps up to ~ 1.e-3

# number of iterations for various values of the slope (<x>), with GL1 initial guess:
# tol=default (residual --> 1.e-17)
#
# slope = 0.1,    nfev = 10
# slope = 0.2,    nfev = 11
# slope = 0.7,    nfev = 15
# slope = 0.8,    nfev = 15
# slope = 0.9,    nfev = 16
# slope = 0.99,   nfev = 17
# slope = 0.999,  nfev = 18
# slope = 0.9999, nfev = 20

# number of iterations for various values of the slope (<x>), simplest initial guess
# g_init = (1,1)
#
# slope = 0.1,    nfev = 10
# slope = 0.7,    nfev = 18
# slope = 0.8,    nfev = 18
# slope = 0.9,    nfev = 23
# slope = 0.99,   nfev = 31
# slope = 0.999,  nfev = 33
# slope = 0.9999, nfev = 39

# Could use the Jacobian matrix, which isn't that hard to work out:
moms = F_exp_to_mom(vector([g0,g1])); moms

$\displaystyle \left(\frac{1}{6} \, g_{0} + \frac{1}{6} \, g_{1} + \frac{2}{3} \, \sqrt{g_{0} g_{1}},\,-\frac{1}{6} \, g_{0} + \frac{1}{6} \, g_{1}\right)$
diff(moms[0],g0)
diff(moms[0],g1)
diff(moms[1],g0)
diff(moms[1],g1)

$\displaystyle \frac{g_{1}}{3 \, \sqrt{g_{0} g_{1}}} + \frac{1}{6}$
$\displaystyle \frac{g_{0}}{3 \, \sqrt{g_{0} g_{1}}} + \frac{1}{6}$
$\displaystyle -\frac{1}{6}$
$\displaystyle \frac{1}{6}$
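These derivatives can be sanity-checked against central finite differences (a plain-Python sketch):

```python
import math

def F(g0, g1):
    mid = math.sqrt(g0 * g1)
    return (g0/6 + 4*mid/6 + g1/6, (g1 - g0)/6)

def jacobian(g0, g1):
    # analytic Jacobian of the moment map, matching the derivatives above
    mid = math.sqrt(g0 * g1)
    return ((g1/(3*mid) + 1/6, g0/(3*mid) + 1/6),
            (-1/6, 1/6))

g0, g1 = 0.7, 1.3
J = jacobian(g0, g1)
h = 1e-7
# central difference of the first moment with respect to g0:
fd00 = (F(g0 + h, g1)[0] - F(g0 - h, g1)[0]) / (2*h)
```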
var('r, m0, m1, g0, g1, beta')

($\displaystyle r$, $\displaystyle m_{0}$, $\displaystyle m_{1}$, $\displaystyle g_{0}$, $\displaystyle g_{1}$, $\displaystyle \beta$)
var('feq(x,y)')

Error in lines 1-1: ValueError: The name "feq(x" is not a valid Python identifier.
def int2d_GL3(n,m,f):
    '''Function to calculate <x^n*y^m*f(x,y)> with 3rd order Gauss-Lobatto quadrature.
    '''
    # Tensor product of the 1D rule with nodes (-1, 0, 1) and average-weights
    # (1/6, 4/6, 1/6).  (Note, this works because 0^0 is defined as 1.)
    int = (  (1/6)*((-1)^n*(-1)^m*f(-1,-1)/6 + (0)^n*(-1)^m*f(0,-1)*4/6 + (1)^n*(-1)^m*f(1,-1)/6)
           + (4/6)*((-1)^n*(0)^m *f(-1,0)/6  + (0)^n*(0)^m *f(0,0)*4/6  + (1)^n*(0)^m *f(1,0)/6)
           + (1/6)*((-1)^n*(1)^m *f(-1,1)/6  + (0)^n*(1)^m *f(0,1)*4/6  + (1)^n*(1)^m *f(1,1)/6))
    return int
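The intended tensor-product rule can also be written compactly in plain Python and checked on monomials it should average exactly (degree up to 3 in each direction):

```python
from itertools import product

nodes   = (-1.0, 0.0, 1.0)
weights = (1.0/6.0, 4.0/6.0, 1.0/6.0)  # weights for the *average* over [-1,1]

def avg2d_GL3(f):
    # cell average <f(x,y)> over [-1,1]^2 by tensor-product Gauss-Lobatto
    return sum(wx * wy * f(x, y)
               for (x, wx), (y, wy) in product(zip(nodes, weights), repeat=2))
```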

_ = var("g00 g10 g01 g11")
fexp(x,y) = g00^((1-x)/2*(1-y)/2) * g10^((1+x)/2*(1-y)/2) * g01^((1-x)/2*(1+y)/2) * g11^((1+x)/2*(1+y)/2)

# verify this gives all 9 values for Gauss-Lobatto integration:
fexp(-1,-1)
fexp(0,-1)
fexp(1,-1)
fexp(-1,0)
fexp(0,0)
fexp(1,0)
fexp(-1,1)
fexp(0,1)
fexp(1,1)


g00
sqrt(g00)*sqrt(g10)
g10
sqrt(g00)*sqrt(g01)
g00^(1/4)*g01^(1/4)*g10^(1/4)*g11^(1/4)
sqrt(g10)*sqrt(g11)
g01
sqrt(g01)*sqrt(g11)
g11
var('f00 f10 f01 f11')

($\displaystyle f_{00}$, $\displaystyle f_{10}$, $\displaystyle f_{01}$, $\displaystyle f_{11}$)

eq20 = f00 == int2d_GL3(0,0,fexp)

eq21 = f10 == int2d_GL3(1,0,fexp)

eq22 = f01 == int2d_GL3(0,1,fexp)

eq23 = f11 == int2d_GL3(1,1,fexp)

eq20
eq21
eq22
eq23

f00 == 1/36*g00 + 1/36*g10 + 1/36*g01 + 1/36*g11 + 1/9*sqrt(g00)*sqrt(g10) + 1/9*sqrt(g00)*sqrt(g01) + 1/9*sqrt(g10)*sqrt(g11) + 1/9*sqrt(g01)*sqrt(g11) + 4/9*g00^(1/4)*g01^(1/4)*g10^(1/4)*g11^(1/4)
f10 == -1/36*g00 + 1/36*g10 - 1/36*g01 + 1/36*g11 - 1/9*sqrt(g00)*sqrt(g01) + 1/9*sqrt(g10)*sqrt(g11)
f01 == -1/36*g00 - 1/36*g10 + 1/36*g01 + 1/36*g11 - 1/9*sqrt(g00)*sqrt(g10) + 1/9*sqrt(g01)*sqrt(g11)
f11 == 1/36*g00 - 1/36*g10 - 1/36*g01 + 1/36*g11
diff(eq20,g00) ; diff(eq20, g10) ; diff(eq20,g01) ; diff(eq20, g11)

0 == 1/18*sqrt(g10)/sqrt(g00) + 1/18*sqrt(g01)/sqrt(g00) + 1/9*g01^(1/4)*g10^(1/4)*g11^(1/4)/g00^(3/4) + 1/36
0 == 1/18*sqrt(g00)/sqrt(g10) + 1/18*sqrt(g11)/sqrt(g10) + 1/9*g00^(1/4)*g01^(1/4)*g11^(1/4)/g10^(3/4) + 1/36
0 == 1/18*sqrt(g00)/sqrt(g01) + 1/18*sqrt(g11)/sqrt(g01) + 1/9*g00^(1/4)*g10^(1/4)*g11^(1/4)/g01^(3/4) + 1/36
0 == 1/18*sqrt(g10)/sqrt(g11) + 1/18*sqrt(g01)/sqrt(g11) + 1/9*g00^(1/4)*g01^(1/4)*g10^(1/4)/g11^(3/4) + 1/36
var('fL')
fL^((1-x)/2*(1-y)/2)

fL^(1/4*(x - 1)*(y - 1))
# 1st order Gauss-Lobatto moments:

eq10 = f00 == (g00+g10+g01+g11)/4
eq11 = f10 == (-g00+g10-g01+g11)/4
eq12 = f01 == (-g00-g10+g01+g11)/4
eq13 = f11 == (g00-g10-g01+g11)/4

corner_GL1 = solve([eq10,eq11,eq12,eq13], g00, g10, g01, g11)

(16+2*4*4+16+2*(-8)+2*4*(-2)+4)

$\displaystyle 36$
corner_GL1

[[$\displaystyle g_{00} = f_{00} - f_{01} - f_{10} + f_{11}$, $\displaystyle g_{10} = f_{00} - f_{01} + f_{10} - f_{11}$, $\displaystyle g_{01} = f_{00} + f_{01} - f_{10} - f_{11}$, $\displaystyle g_{11} = f_{00} + f_{01} + f_{10} + f_{11}$]]
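This inverse has a simple structure that is easy to verify numerically: the GL1 moment map is (up to scaling) an orthogonal 4×4 matrix, so its inverse is just 4 times its transpose (a numpy sketch):

```python
import numpy as np

# rows give (f00, f10, f01, f11) in terms of (g00, g10, g01, g11)
M = 0.25 * np.array([[ 1,  1,  1,  1],
                     [-1,  1, -1,  1],
                     [-1, -1,  1,  1],
                     [ 1, -1, -1,  1]])
Minv = np.linalg.inv(M)
# first row of Minv: g00 = f00 - f10 - f01 + f11, as in the solve() above
```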
corner_GL1[0][1].substitute(f00=1, f01=1, f10=0, f11=0)

$\displaystyle g_{10} = 0$
for j in range(4):
    show(corner_GL1[0][j].substitute(f00=2, f01=0, f10=0, f11=2))

$\displaystyle g_{00} = 4$
$\displaystyle g_{10} = 0$
$\displaystyle g_{01} = 0$
$\displaystyle g_{11} = 4$

Test out iteration with limiters

%var xn, xnp1, lam

lam = 0.2
xn = 0

xnp1 = (1 - (1 - xn - 2 * lam) -lam*(1+xn)^3 ) / (1 - lam*(1+xn)^3)
xn = xnp1; x

$\displaystyle x$

Here are the full iteration equations, for 1D passive advection, for a cell that has no flux into it:

reset() # reset all variables

%var f_0_n, F_R, lam, f_0_np1
eq30 = f_0_np1 == f_0_n - lam * F_R
eq30

$\displaystyle f_{0_{\mathit{np}_{1}}} = -F_{R} \mathit{lam} + f_{0_{n}}$
%var f_1_np1, f_1_n, lam, beta
eq31 = f_1_np1 == f_1_n - 3*lam*F_R + 6*lam*beta*f_0_n
eq31

$\displaystyle f_{1_{\mathit{np}_{1}}} = 6 \, \beta f_{0_{n}} \mathit{lam} - 3 \, F_{R} \mathit{lam} + f_{1_{n}}$
%var r_np1, r_n, R_R
eq32 = r_np1 == eq31.rhs() / eq30.rhs()
eq32

$\displaystyle r_{\mathit{np}_{1}} = -\frac{6 \, \beta f_{0_{n}} \mathit{lam} - 3 \, F_{R} \mathit{lam} + f_{1_{n}}}{F_{R} \mathit{lam} - f_{0_{n}}}$
%var theta, x_n, x_np1
eq33 = eq32.substitute(f_1_n=f_0_n*3*x_n, F_R = theta*f_0_n*(1+3*x_n))
eq33

$\displaystyle r_{\mathit{np}_{1}} = \frac{3 \, {\left(f_{0_{n}} \mathit{lam} \theta {\left(3 \, x_{n} + 1\right)} - 2 \, \beta f_{0_{n}} \mathit{lam} - f_{0_{n}} x_{n}\right)}}{f_{0_{n}} \mathit{lam} \theta {\left(3 \, x_{n} + 1\right)} - f_{0_{n}}}$
eq34 = x_np1 == (eq33.rhs())/3
eq34

$\displaystyle x_{\mathit{np}_{1}} = \frac{f_{0_{n}} \mathit{lam} \theta {\left(3 \, x_{n} + 1\right)} - 2 \, \beta f_{0_{n}} \mathit{lam} - f_{0_{n}} x_{n}}{f_{0_{n}} \mathit{lam} \theta {\left(3 \, x_{n} + 1\right)} - f_{0_{n}}}$
eq34 = eq34.lhs() == factor(eq34.rhs())
eq34

$\displaystyle x_{\mathit{np}_{1}} = \frac{3 \, \mathit{lam} \theta x_{n} - 2 \, \beta \mathit{lam} + \mathit{lam} \theta - x_{n}}{3 \, \mathit{lam} \theta x_{n} + \mathit{lam} \theta - 1}$
eq35 = eq34.substitute(x_np1 = x_n)
eq35

$\displaystyle x_{n} = \frac{3 \, \mathit{lam} \theta x_{n} - 2 \, \beta \mathit{lam} + \mathit{lam} \theta - x_{n}}{3 \, \mathit{lam} \theta x_{n} + \mathit{lam} \theta - 1}$
# search for fixed points:
eq36 = solve(eq35, x_n)
eq36

[$\displaystyle x_{n} = \frac{\theta - \sqrt{-6 \, \beta \theta + 4 \, \theta^{2}}}{3 \, \theta}$, $\displaystyle x_{n} = \frac{\theta + \sqrt{-6 \, \beta \theta + 4 \, \theta^{2}}}{3 \, \theta}$]
eq37 = eq36[1].rhs()
eq37

$\displaystyle \frac{\theta + \sqrt{-6 \, \beta \theta + 4 \, \theta^{2}}}{3 \, \theta}$
eq37 = eq37.substitute(beta=1)
eq37

$\displaystyle \frac{\theta + \sqrt{4 \, \theta^{2} - 6 \, \theta}}{3 \, \theta}$
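A quick numerical spot check of the fixed point (plain Python): at theta = 2, beta = 1 the formula gives x = 2/3, and the map from eq34 leaves that point unchanged, for any lam (the fixed-point condition is independent of lam):

```python
import math

def x_map(x, lam, theta, beta=1.0):
    # one step of the <x> update (eq34 with F_R = theta*f0*(1+3x))
    num = 3*lam*theta*x - 2*beta*lam + lam*theta - x
    den = 3*lam*theta*x + lam*theta - 1
    return num / den

def x_fixed(theta, beta=1.0):
    # the '+' root of the fixed-point condition (eq36)
    return (theta + math.sqrt(4*theta**2 - 6*beta*theta)) / (3*theta)

xf = x_fixed(2.0)   # = 2/3 for theta = 2, beta = 1
```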
P = plot(eq37,(theta,1.5,5))
P.axes_labels(["theta","x_fixed"])
show(P)

# The above is a plot of the fixed point as a function of theta, but it doesn't really make much sense, because theta should be a function of x and needs to reduce to theta=1 when |x|<<1.

eq32

$\displaystyle r_{\mathit{np}_{1}} = -\frac{6 \, \beta f_{0_{n}} \mathit{lam} - 3 \, F_{R} \mathit{lam} + f_{1_{n}}}{F_{R} \mathit{lam} - f_{0_{n}}}$
%var F_R_rel
eq40 = factor(eq32.substitute(f_1_n=f_0_n*3*x_n, F_R = f_0_n*F_R_rel)); eq40

$\displaystyle r_{\mathit{np}_{1}} = \frac{3 \, {\left(F_{R_{\mathit{rel}}} \mathit{lam} - 2 \, \beta \mathit{lam} - x_{n}\right)}}{F_{R_{\mathit{rel}}} \mathit{lam} - 1}$
eq41 = x_np1 == factor((eq40.rhs()/3))
eq41

$\displaystyle x_{\mathit{np}_{1}} = \frac{F_{R_{\mathit{rel}}} \mathit{lam} - 2 \, \beta \mathit{lam} - x_{n}}{F_{R_{\mathit{rel}}} \mathit{lam} - 1}$
eq42 = eq41.substitute(F_R_rel = (1+x_n/3)^(3*3)); eq42

$\displaystyle x_{\mathit{np}_{1}} = \frac{\mathit{lam} {\left(x_{n} + 3\right)}^{9} - 39366 \, \beta \mathit{lam} - 19683 \, x_{n}}{\mathit{lam} {\left(x_{n} + 3\right)}^{9} - 19683}$
eq42_lin = eq41.substitute(F_R_rel = (1+3*x_n)); eq42_lin

$\displaystyle x_{\mathit{np}_{1}} = -\frac{2 \, \beta \mathit{lam} - \mathit{lam} {\left(3 \, x_{n} + 1\right)} + x_{n}}{\mathit{lam} {\left(3 \, x_{n} + 1\right)} - 1}$
eq42_blim =  eq41.substitute(F_R_rel = (1+x_n)^3,beta=min_symbolic(1,(1-x_n)/(2*lam))); eq42_blim
eq42_blim = eq42_blim.substitute(lam=0.05); eq42_blim

$\displaystyle x_{\mathit{np}_{1}} = \frac{\mathit{lam} {\left(x_{n} + 1\right)}^{3} - 2 \, \mathit{lam} \min\left(1, -\frac{x_{n} - 1}{2 \, \mathit{lam}}\right) - x_{n}}{\mathit{lam} {\left(x_{n} + 1\right)}^{3} - 1}$
$\displaystyle x_{\mathit{np}_{1}} = \frac{0.0500000000000000 \, {\left(x_{n} + 1\right)}^{3} - x_{n} - 0.100000000000000 \, \min\left(1, -10.0000000000000 \, x_{n} + 10.0000000000000\right)}{0.0500000000000000 \, {\left(x_{n} + 1\right)}^{3} - 1}$
eq42 = eq42.substitute(beta=1,lam=0.05)
eq42_lin = eq42_lin.substitute(beta=1,lam=0.05)
eq42
eq42_lin

$\displaystyle x_{\mathit{np}_{1}} = \frac{0.0500000000000000 \, {\left(x_{n} + 3\right)}^{9} - 19683 \, x_{n} - 1968.30000000000}{0.0500000000000000 \, {\left(x_{n} + 3\right)}^{9} - 19683}$
$\displaystyle x_{\mathit{np}_{1}} = -\frac{0.850000000000000 \, x_{n} + 0.0500000000000000}{0.150000000000000 \, x_{n} - 0.950000000000000}$
# Plot everything together here.  Can plot individual plots below.
x_min=-1
x_max=1
P1 = plot(eq42.rhs(),(x_n,x_min,x_max),legend_label="extrap (1+<x>/3)^(3*3)")
P2 = plot(eq42_lin.rhs(),(x_n,x_min,x_max),color="red",legend_label="linear extrap")
P3 = plot(eq42_blim.rhs(),(x_n,x_min,x_max),color="purple",legend_label="cubic+beta_lim")
P4 = plot(x,(x_min,x_max),color="green",legend_label="fixed point")
P=P1+P2+P3+P4
P.axes_labels(["$<x>_{n}$","$<x>_{n+1}$"])
# \langle and \rangle apparently not implemented in the LaTeX labels
# P.axes_labels(["$\langle x \rangle_{n}$","$\langle x \rangle_{n+1}$"])
show(P)
P.save("x_avg_map.png")
P.save("x_avg_map.pdf")

%md(hide=true)
Conclusions:

This is a plot of $\langle x \rangle$ on the next time step as a function of $\langle x \rangle$ on the previous time step, for lambda = v*dt/dx = 0.05.

The linear extrapolation to the boundary flux has the problem that $\langle x \rangle$ increases every time step and will eventually exceed the bound $\langle x \rangle \le 1$.

Using the cubic extrapolation to the boundary (which is a kind of Padé approximation to the exponential extrapolation) does slow down the rate per step at which $\langle x \rangle$ increases, but it still eventually exceeds the limit.

The only way to rigorously satisfy the limit on $\langle x \rangle$ is to limit beta to be less than 1, where beta is the multiplier on the interior volume average term $\langle v(x) f(x) \rangle$.  I think the way to interpret this is that when $\langle x_n \rangle$ approaches 1, the Euler time step is breaking down, because it takes very little time for the whole solution to advect out of the right boundary.  I think it is okay to apply a limiter there, as if we were reverting to a 2nd order centered method, because the order of accuracy for the time integration only holds in smooth regions, where $|\langle x_n \rangle| \ll 1$, and this limiter on beta preserves an important feature of the solution while retaining at least first-order accuracy in time.
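The first of these conclusions can be demonstrated by iterating the linear-extrapolation map directly (a plain-Python sketch of eq42_lin with lam = 0.05, beta = 1): starting from $\langle x \rangle = 0$, the iterates cross the bound $\langle x \rangle = 1$ within a few dozen steps.

```python
def step_linear(x, lam=0.05, beta=1.0):
    # eq42_lin: boundary flux from linear extrapolation, F_R_rel = 1 + 3*x
    FR = 1.0 + 3.0*x
    return (FR*lam - 2.0*beta*lam - x) / (FR*lam - 1.0)

x = 0.0
steps_to_exceed = None
for n in range(1, 41):
    x = step_linear(x)
    if x > 1.0:
        steps_to_exceed = n
        break
# steps_to_exceed ends up set: the bound <x> <= 1 is eventually violated
```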




%var x, y, mx, my, mxy
e100 = integrate((1+mx*x+mx*y+mxy*x*y)^3*(1+mxy*x*y)^6,x,-1,1)

e101 = simplify(e100)
e101

$\displaystyle \frac{1}{840} \, {\left(120 \, \mathit{mx}^{3} \mathit{mxy}^{6} + 315 \, \mathit{mx}^{2} \mathit{mxy}^{7} + 280 \, \mathit{mx} \mathit{mxy}^{8} + 84 \, \mathit{mxy}^{9}\right)} y^{9} + \frac{1}{840} \, {\left(120 \, \mathit{mx}^{3} \mathit{mxy}^{6} - 315 \, \mathit{mx}^{2} \mathit{mxy}^{7} + 280 \, \mathit{mx} \mathit{mxy}^{8} - 84 \, \mathit{mxy}^{9}\right)} y^{9} + \frac{1}{120} \, {\left(12 \, {\left(3 \, \mathit{mx} + 10\right)} \mathit{mxy}^{8} + 120 \, \mathit{mx}^{3} \mathit{mxy}^{5} + 40 \, {\left(2 \, \mathit{mx}^{2} + 9 \, \mathit{mx}\right)} \mathit{mxy}^{7} + 45 \, {\left(\mathit{mx}^{3} + 8 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{6}\right)} y^{8} - \frac{1}{120} \, {\left(12 \, {\left(3 \, \mathit{mx} - 10\right)} \mathit{mxy}^{8} + 120 \, \mathit{mx}^{3} \mathit{mxy}^{5} - 40 \, {\left(2 \, \mathit{mx}^{2} - 9 \, \mathit{mx}\right)} \mathit{mxy}^{7} + 45 \, {\left(\mathit{mx}^{3} - 8 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{6}\right)} y^{8} + \frac{1}{420} \, {\left(14 \, {\left(9 \, \mathit{mx}^{2} + 80 \, \mathit{mx} + 135\right)} \mathit{mxy}^{7} + 1260 \, \mathit{mx}^{3} \mathit{mxy}^{4} + 35 \, {\left(4 \, \mathit{mx}^{3} + 63 \, \mathit{mx}^{2} + 144 \, \mathit{mx}\right)} \mathit{mxy}^{6} + 90 \, {\left(12 \, \mathit{mx}^{3} + 49 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{5}\right)} y^{7} - \frac{1}{420} \, {\left(14 \, {\left(9 \, \mathit{mx}^{2} - 80 \, \mathit{mx} + 135\right)} \mathit{mxy}^{7} - 1260 \, \mathit{mx}^{3} \mathit{mxy}^{4} - 35 \, {\left(4 \, \mathit{mx}^{3} - 63 \, \mathit{mx}^{2} + 144 \, \mathit{mx}\right)} \mathit{mxy}^{6} - 90 \, {\left(12 \, \mathit{mx}^{3} - 49 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{5}\right)} y^{7} + \frac{1}{60} \, {\left(2 \, {\left(3 \, \mathit{mx}^{3} + 70 \, \mathit{mx}^{2} + 315 \, \mathit{mx} + 360\right)} \mathit{mxy}^{6} + 300 \, \mathit{mx}^{3} \mathit{mxy}^{3} + 15 \, {\left(9 \, \mathit{mx}^{3} + 72 \, \mathit{mx}^{2} + 112 \, \mathit{mx}\right)} \mathit{mxy}^{5} + 90 \, 
{\left(5 \, \mathit{mx}^{3} + 14 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{4}\right)} y^{6} - \frac{1}{60} \, {\left(2 \, {\left(3 \, \mathit{mx}^{3} - 70 \, \mathit{mx}^{2} + 315 \, \mathit{mx} - 360\right)} \mathit{mxy}^{6} + 300 \, \mathit{mx}^{3} \mathit{mxy}^{3} + 15 \, {\left(9 \, \mathit{mx}^{3} - 72 \, \mathit{mx}^{2} + 112 \, \mathit{mx}\right)} \mathit{mxy}^{5} + 90 \, {\left(5 \, \mathit{mx}^{3} - 14 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{4}\right)} y^{6} + \frac{1}{168} \, {\left(7 \, {\left(16 \, \mathit{mx}^{3} + 189 \, \mathit{mx}^{2} + 576 \, \mathit{mx} + 504\right)} \mathit{mxy}^{5} + 840 \, \mathit{mx}^{3} \mathit{mxy}^{2} + 24 \, {\left(45 \, \mathit{mx}^{3} + 245 \, \mathit{mx}^{2} + 294 \, \mathit{mx}\right)} \mathit{mxy}^{4} + 126 \, {\left(16 \, \mathit{mx}^{3} + 35 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{3}\right)} y^{5} + \frac{1}{168} \, {\left(7 \, {\left(16 \, \mathit{mx}^{3} - 189 \, \mathit{mx}^{2} + 576 \, \mathit{mx} - 504\right)} \mathit{mxy}^{5} + 840 \, \mathit{mx}^{3} \mathit{mxy}^{2} + 24 \, {\left(45 \, \mathit{mx}^{3} - 245 \, \mathit{mx}^{2} + 294 \, \mathit{mx}\right)} \mathit{mxy}^{4} + 126 \, {\left(16 \, \mathit{mx}^{3} - 35 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{3}\right)} y^{5} + \frac{1}{40} \, {\left({\left(75 \, \mathit{mx}^{3} + 600 \, \mathit{mx}^{2} + 1400 \, \mathit{mx} + 1008\right)} \mathit{mxy}^{4} + 120 \, \mathit{mx}^{3} \mathit{mxy} + 80 \, {\left(5 \, \mathit{mx}^{3} + 21 \, \mathit{mx}^{2} + 21 \, \mathit{mx}\right)} \mathit{mxy}^{3} + 30 \, {\left(15 \, \mathit{mx}^{3} + 28 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{2}\right)} y^{4} - \frac{1}{40} \, {\left({\left(75 \, \mathit{mx}^{3} - 600 \, \mathit{mx}^{2} + 1400 \, \mathit{mx} - 1008\right)} \mathit{mxy}^{4} + 120 \, \mathit{mx}^{3} \mathit{mxy} + 80 \, {\left(5 \, \mathit{mx}^{3} - 21 \, \mathit{mx}^{2} + 21 \, \mathit{mx}\right)} \mathit{mxy}^{3} + 30 \, {\left(15 \, \mathit{mx}^{3} - 28 \, \mathit{mx}^{2}\right)} \mathit{mxy}^{2}\right)} y^{4} 
+ \frac{1}{70} \, {\left({\left(200 \, \mathit{mx}^{3} + 1225 \, \mathit{mx}^{2} + 2352 \, \mathit{mx} + 1470\right)} \mathit{mxy}^{3} + 70 \, \mathit{mx}^{3} + 35 \, {\left(18 \, \mathit{mx}^{3} + 63 \, \mathit{mx}^{2} + 56 \, \mathit{mx}\right)} \mathit{mxy}^{2} + 105 \, {\left(4 \, \mathit{mx}^{3} + 7 \, \mathit{mx}^{2}\right)} \mathit{mxy}\right)} y^{3} + \frac{1}{70} \, {\left({\left(200 \, \mathit{mx}^{3} - 1225 \, \mathit{mx}^{2} + 2352 \, \mathit{mx} - 1470\right)} \mathit{mxy}^{3} + 70 \, \mathit{mx}^{3} + 35 \, {\left(18 \, \mathit{mx}^{3} - 63 \, \mathit{mx}^{2} + 56 \, \mathit{mx}\right)} \mathit{mxy}^{2} + 105 \, {\left(4 \, \mathit{mx}^{3} - 7 \, \mathit{mx}^{2}\right)} \mathit{mxy}\right)} y^{3} + \frac{1}{10} \, {\left(15 \, \mathit{mx}^{3} + {\left(25 \, \mathit{mx}^{3} + 126 \, \mathit{mx}^{2} + 210 \, \mathit{mx} + 120\right)} \mathit{mxy}^{2} + 30 \, \mathit{mx}^{2} + 5 \, {\left(9 \, \mathit{mx}^{3} + 28 \, \mathit{mx}^{2} + 24 \, \mathit{mx}\right)} \mathit{mxy}\right)} y^{2} - \frac{1}{10} \, {\left(15 \, \mathit{mx}^{3} + {\left(25 \, \mathit{mx}^{3} - 126 \, \mathit{mx}^{2} + 210 \, \mathit{mx} - 120\right)} \mathit{mxy}^{2} - 30 \, \mathit{mx}^{2} + 5 \, {\left(9 \, \mathit{mx}^{3} - 28 \, \mathit{mx}^{2} + 24 \, \mathit{mx}\right)} \mathit{mxy}\right)} y^{2} + 2 \, \mathit{mx}^{2} + \frac{1}{20} \, {\left(20 \, \mathit{mx}^{3} + 60 \, \mathit{mx}^{2} + {\left(24 \, \mathit{mx}^{3} + 105 \, \mathit{mx}^{2} + 160 \, \mathit{mx} + 90\right)} \mathit{mxy} + 60 \, \mathit{mx}\right)} y + \frac{1}{20} \, {\left(20 \, \mathit{mx}^{3} - 60 \, \mathit{mx}^{2} + {\left(24 \, \mathit{mx}^{3} - 105 \, \mathit{mx}^{2} + 160 \, \mathit{mx} - 90\right)} \mathit{mxy} + 60 \, \mathit{mx}\right)} y + 2$
e102 = integrate(e101,y,-1,1)/4
e102

$\displaystyle \frac{1}{180} \, {\left(3 \, \mathit{mx} + 10\right)} \mathit{mxy}^{8} - \frac{1}{180} \, {\left(3 \, \mathit{mx} - 10\right)} \mathit{mxy}^{8} + \frac{1}{54} \, {\left(2 \, \mathit{mx}^{2} + 9 \, \mathit{mx}\right)} \mathit{mxy}^{7} + \frac{1}{54} \, {\left(2 \, \mathit{mx}^{2} - 9 \, \mathit{mx}\right)} \mathit{mxy}^{7} + \frac{1}{1680} \, {\left(47 \, \mathit{mx}^{3} + 560 \, \mathit{mx}^{2} + 1260 \, \mathit{mx} + 1440\right)} \mathit{mxy}^{6} - \frac{1}{1680} \, {\left(47 \, \mathit{mx}^{3} - 560 \, \mathit{mx}^{2} + 1260 \, \mathit{mx} - 1440\right)} \mathit{mxy}^{6} + \frac{1}{504} \, {\left(109 \, \mathit{mx}^{3} + 648 \, \mathit{mx}^{2} + 1008 \, \mathit{mx}\right)} \mathit{mxy}^{5} - \frac{1}{504} \, {\left(109 \, \mathit{mx}^{3} - 648 \, \mathit{mx}^{2} + 1008 \, \mathit{mx}\right)} \mathit{mxy}^{5} + \frac{1}{2800} \, {\left(2025 \, \mathit{mx}^{3} + 8400 \, \mathit{mx}^{2} + 9800 \, \mathit{mx} + 7056\right)} \mathit{mxy}^{4} - \frac{1}{2800} \, {\left(2025 \, \mathit{mx}^{3} - 8400 \, \mathit{mx}^{2} + 9800 \, \mathit{mx} - 7056\right)} \mathit{mxy}^{4} + \frac{1}{70} \, {\left(95 \, \mathit{mx}^{3} + 294 \, \mathit{mx}^{2} + 294 \, \mathit{mx}\right)} \mathit{mxy}^{3} - \frac{1}{70} \, {\left(95 \, \mathit{mx}^{3} - 294 \, \mathit{mx}^{2} + 294 \, \mathit{mx}\right)} \mathit{mxy}^{3} + \frac{1}{120} \, {\left(185 \, \mathit{mx}^{3} + 504 \, \mathit{mx}^{2} + 420 \, \mathit{mx} + 240\right)} \mathit{mxy}^{2} - \frac{1}{120} \, {\left(185 \, \mathit{mx}^{3} - 504 \, \mathit{mx}^{2} + 420 \, \mathit{mx} - 240\right)} \mathit{mxy}^{2} + 2 \, \mathit{mx}^{2} + \frac{1}{60} \, {\left(63 \, \mathit{mx}^{3} + 140 \, \mathit{mx}^{2} + 120 \, \mathit{mx}\right)} \mathit{mxy} - \frac{1}{60} \, {\left(63 \, \mathit{mx}^{3} - 140 \, \mathit{mx}^{2} + 120 \, \mathit{mx}\right)} \mathit{mxy} + 1$
e103 = simplify(e102)

e104 = e103.substitute(mx=1,my=1,mxy=1)

float(e104)

$\displaystyle 44.6442328042$
2048/float(104)

$\displaystyle 19.6923076923$
integrate((1+mx*x)^3,x,-1,1)

$\displaystyle 2 \, \mathit{mx}^{2} + 2$
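A numerical cross-check of this last integral at a sample value (mx = 0.5 is an arbitrary choice), using scipy.integrate.quad:

```python
from scipy.integrate import quad

mx = 0.5
val, _ = quad(lambda x: (1 + mx*x)**3, -1, 1)
# symbolic result above: 2*mx^2 + 2, i.e. 2.5 at mx = 0.5
```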