smc-build / pystan.ipynb
Author: Harald Schilly
Description: Testing PyStan on CoCalc
Compute Environment: Ubuntu 18.04 (Deprecated)

PyStan in CoCalc – Python 3 (Ubuntu Linux) Kernel

ATTN: compiling uses ~2.5 GB of RAM

In [1]:
import pystan

In [2]:
pystan.__version__

'2.18.1.0'
In [3]:
schools_code = """
data {
    int<lower=0> J;            // number of schools
    vector[J] y;               // estimated treatment effects
    vector<lower=0>[J] sigma;  // s.e. of effect estimates
}
parameters {
    real mu;
    real<lower=0> tau;
    vector[J] eta;
}
transformed parameters {
    vector[J] theta;
    theta = mu + tau * eta;
}
model {
    eta ~ normal(0, 1);
    y ~ normal(theta, sigma);
}
"""
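The transformed parameters block uses the non-centered parameterization: rather than sampling theta directly, the model draws standard-normal eta and builds theta = mu + tau * eta, which gives the sampler a better-conditioned geometry in hierarchical models. A quick numpy sketch (not part of the original notebook; the values of mu and tau are illustrative) showing that the shifted-and-scaled draws have the intended normal(mu, tau) distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau, n = 8.0, 6.5, 100_000  # illustrative values, not posterior estimates

# Non-centered form: draw standard-normal eta, then shift and scale.
eta = rng.standard_normal(n)
theta = mu + tau * eta

# theta has mean ~mu and standard deviation ~tau, matching the
# centered form theta ~ normal(mu, tau).
print(theta.mean(), theta.std())
```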

schools_dat = {
    'J': 8,
    'y': [28, 8, -3, 7, -1, 1, 18, 12],
    'sigma': [15, 10, 16, 11, 9, 11, 10, 18]
}
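As a quick sanity check (not in the original notebook), the complete-pooling estimate of the mean effect — the inverse-variance weighted average of the observed y — can be computed directly from the data; the posterior mean of mu reported by Stan below should land in the same neighborhood:

```python
# Complete-pooling estimate: weight each school's observed effect by
# the inverse of its squared standard error.
y = [28, 8, -3, 7, -1, 1, 18, 12]
sigma = [15, 10, 16, 11, 9, 11, 10, 18]

weights = [1 / s**2 for s in sigma]
pooled = sum(w * yi for w, yi in zip(weights, y)) / sum(weights)
print(round(pooled, 2))  # ≈ 7.69
```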

In [6]:
sm = pystan.StanModel(model_code=schools_code)
sm

INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_19a09b474d1901f191444eaf8a6b8ce2 NOW.
<bound method StanModel.show of <pystan.model.StanModel object at 0x7f315b4a5ba8>>
In [5]:
fit = sm.sampling(data=schools_dat, iter=1000, chains=1, n_jobs=1)
fit

WARNING:pystan:1 of 500 iterations ended with a divergence (0.2 %).
WARNING:pystan:Try running with adapt_delta larger than 0.8 to remove the divergences.
Inference for Stan model: anon_model_19a09b474d1901f191444eaf8a6b8ce2.
1 chains, each with iter=1000; warmup=500; thin=1;
post-warmup draws per chain=500, total post-warmup draws=500.

            mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
mu          8.43    0.73   5.73  -1.14   4.65   8.07  11.33  24.13     61    1.0
tau         6.55    0.41   5.22   0.32   2.67   5.22   9.23  18.99    164   1.02
eta[1]      0.38    0.06   0.92   -1.5  -0.22    0.4   1.06   2.07    279    1.0
eta[2]    6.8e-3    0.04   0.94   -1.7  -0.62  -0.05    0.6   2.04    440    1.0
eta[3]      -0.2    0.04   0.94  -1.92  -0.92  -0.21   0.51   1.66    504    1.0
eta[4]     -0.05    0.05   0.89  -1.63  -0.76  -0.03   0.57    1.8    286   1.01
eta[5]     -0.45    0.06   0.96  -2.29  -1.05   -0.4   0.13   1.44    253    1.0
eta[6]     -0.32    0.05   0.92   -2.1  -0.86  -0.32   0.22   1.51    296    1.0
eta[7]       0.3    0.04   0.86  -1.47  -0.24   0.29   0.88    2.1    442    1.0
eta[8]   -5.5e-3    0.05   0.99  -1.95  -0.65  -0.02   0.63   1.95    397    1.0
theta[1]   11.81    0.69   8.47  -0.71   6.24  10.16  15.66  32.93    152    1.0
theta[2]    8.06    0.27   6.32  -4.13    4.2   8.24  11.78   21.4    547    1.0
theta[3]    6.56    0.35   7.72 -10.19   1.91   6.84  11.29   21.5    478    1.0
theta[4]    7.67    0.26   6.09  -3.56   3.84   7.62  11.19  20.66    533    1.0
theta[5]    4.47    0.28   6.07  -9.01   0.82   4.92   8.41  16.15    461   1.01
theta[6]     5.9    0.34   6.35  -8.13   2.03   5.63  10.02  18.89    345    1.0
theta[7]   10.92    0.53    7.4  -0.04    5.8   9.89  14.88  30.73    197    1.0
theta[8]    8.74    0.49   8.13  -5.33   3.94   8.21  12.92  27.23    276    1.0
lp__       -4.93     0.2   2.52 -10.29  -6.39  -4.77  -3.15  -0.51    158    1.0

Samples were drawn using NUTS at Sat Jan 12 12:10:45 2019.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
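The divergence warning can usually be addressed as the message suggests. A minimal sketch (reusing sm and schools_dat from the cells above; not run in this notebook) that raises adapt_delta via the control argument of sampling and then re-checks the HMC diagnostics:

```python
# Re-run sampling with a higher adapt_delta target (the default is 0.8).
# A higher target forces smaller step sizes during adaptation, which
# reduces divergent transitions at some cost in sampling speed.
fit2 = sm.sampling(data=schools_dat, iter=1000, chains=1, n_jobs=1,
                   control={'adapt_delta': 0.95})

# Summarize the standard HMC diagnostics for the new fit.
print(pystan.check_hmc_diagnostics(fit2))
```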
In [ ]: