
RStan in CoCalc (Kernel: "R-Project")

https://mc-stan.org/users/interfaces/rstan

This notebook works through https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started, with slight changes for CoCalc.

Be aware that the initial compilation takes about 2.5 GB of RAM!

# switch to PNG images, because the default of SVG produces very large files
options(jupyter.plot_mimetypes = c('image/png'))
pkgbuild::has_build_tools(debug = TRUE)
Trying to compile a simple C file
Running /usr/lib/R/bin/R CMD SHLIB foo.c
gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -g -O2 -fdebug-prefix-map=/build/r-base-AitvI6/r-base-3.4.4=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g -c foo.c -o foo.o
g++ -shared -L/usr/lib/R/lib -Wl,-Bsymbolic-functions -Wl,-z,relro -o foo.so foo.o -L/usr/lib/R/lib -lR
packageDescription("rstan")$Version
require(rstan)
Loading required package: rstan
Loading required package: ggplot2
Loading required package: StanHeaders
rstan (Version 2.18.2, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
# make sure to only use one core!
options(mc.cores = 1)
rstan_options(auto_write = TRUE)
schools8 <- "
data {
  int<lower=0> J;          // number of schools
  real y[J];               // estimated treatment effects
  real<lower=0> sigma[J];  // standard error of effect estimates
}
parameters {
  real mu;                 // population treatment effect
  real<lower=0> tau;       // standard deviation in treatment effects
  vector[J] eta;           // unscaled deviation from mu by school
}
transformed parameters {
  vector[J] theta = mu + tau * eta;  // school treatment effects
}
model {
  target += normal_lpdf(eta | 0, 1);        // prior log-density
  target += normal_lpdf(y | theta, sigma);  // log-likelihood
}
"
schools_dat <- list(J = 8,
                    y = c(28, 8, -3, 7, -1, 1, 18, 12),
                    sigma = c(15, 10, 16, 11, 9, 11, 10, 18))
fit <- stan(model_code = schools8, data = schools_dat)
SAMPLING FOR MODEL '77851864eaef144f4a34884224755b9c' NOW (CHAIN 1).
Chain 1: Gradient evaluation took 1.3e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: Elapsed Time: 0.039216 seconds (Warm-up)
Chain 1:               0.038589 seconds (Sampling)
Chain 1:               0.077805 seconds (Total)

SAMPLING FOR MODEL '77851864eaef144f4a34884224755b9c' NOW (CHAIN 2).
Chain 2: Gradient evaluation took 9e-06 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: Elapsed Time: 0.042113 seconds (Warm-up)
Chain 2:               0.043538 seconds (Sampling)
Chain 2:               0.085651 seconds (Total)

SAMPLING FOR MODEL '77851864eaef144f4a34884224755b9c' NOW (CHAIN 3).
Chain 3: Gradient evaluation took 8e-06 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: Elapsed Time: 0.048881 seconds (Warm-up)
Chain 3:               0.038686 seconds (Sampling)
Chain 3:               0.087567 seconds (Total)

SAMPLING FOR MODEL '77851864eaef144f4a34884224755b9c' NOW (CHAIN 4).
Chain 4: Gradient evaluation took 7e-06 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: Elapsed Time: 0.043248 seconds (Warm-up)
Chain 4:               0.054335 seconds (Sampling)
Chain 4:               0.097583 seconds (Total)
Warning message:
“There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup”
Warning message:
“Examine the pairs() plot to diagnose sampling problems”
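As the warning suggests, divergent transitions can often be removed by raising adapt_delta above its default of 0.8, at the cost of smaller step sizes and slower sampling. A minimal re-run might look like this (the value 0.95 is an arbitrary choice for illustration, not from the original notebook):

# re-fit with a higher target acceptance rate to reduce divergences
# (adapt_delta = 0.95 is an assumed example value)
fit2 <- stan(model_code = schools8, data = schools_dat,
             control = list(adapt_delta = 0.95))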
print(fit)
Inference for Stan model: 77851864eaef144f4a34884224755b9c.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.

           mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
mu         7.86    0.12 5.11  -2.45   4.61   7.90  11.15  17.70  1857    1
tau        6.54    0.13 5.38   0.29   2.44   5.31   9.20  20.25  1599    1
eta[1]     0.40    0.01 0.95  -1.51  -0.24   0.43   1.05   2.18  4156    1
eta[2]     0.01    0.01 0.86  -1.69  -0.57   0.00   0.59   1.77  4138    1
eta[3]    -0.21    0.01 0.92  -2.05  -0.82  -0.21   0.40   1.62  5337    1
eta[4]    -0.01    0.01 0.90  -1.78  -0.59   0.00   0.56   1.78  4212    1
eta[5]    -0.35    0.01 0.88  -2.07  -0.94  -0.35   0.23   1.41  4105    1
eta[6]    -0.20    0.01 0.87  -1.84  -0.78  -0.21   0.37   1.58  4185    1
eta[7]     0.33    0.01 0.87  -1.46  -0.22   0.34   0.92   2.00  4145    1
eta[8]     0.06    0.01 0.92  -1.79  -0.56   0.07   0.69   1.81  4672    1
theta[1]  11.53    0.15 8.29  -2.12   6.08  10.50  15.77  31.26  3107    1
theta[2]   7.91    0.09 6.24  -4.38   4.03   7.78  11.76  20.43  4724    1
theta[3]   6.14    0.12 7.77 -11.13   2.17   6.57  10.92  20.15  3935    1
theta[4]   7.68    0.09 6.57  -5.30   3.71   7.67  11.84  20.92  5068    1
theta[5]   5.18    0.10 6.48  -8.82   1.20   5.63   9.52  16.46  3825    1
theta[6]   6.16    0.10 6.54  -7.89   2.27   6.53  10.52  18.08  4642    1
theta[7]  10.59    0.11 6.74  -1.56   6.09   9.93  14.54  25.98  3714    1
theta[8]   8.37    0.13 7.94  -7.34   3.71   8.28  12.72  25.05  3846    1
lp__     -39.49    0.08 2.61 -45.37 -41.08 -39.26 -37.61 -35.10  1208    1

Samples were drawn using NUTS(diag_e) at Thu Jan 10 17:15:29 2019.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
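To work with these statistics programmatically rather than reading the printed table, summary() on a stanfit object returns the same quantities as a matrix. A short sketch (column names as printed above):

# pull the summary table as a matrix and inspect convergence diagnostics
s <- summary(fit)$summary
s[, c("n_eff", "Rhat")]   # effective sample sizes and split-chain Rhat
s["mu", "mean"]           # posterior mean of mu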
plot(fit)
'pars' not specified. Showing first 10 parameters by default.
ci_level: 0.8 (80% intervals)
outer_level: 0.95 (95% intervals)
[Plot: 80% and 95% posterior intervals for the first 10 parameters]
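To plot a chosen subset of parameters instead of the default first 10, pass pars explicitly; for example:

# plot only the hyperparameters and the school-level effects
plot(fit, pars = c("mu", "tau", "theta"))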
pairs(fit, pars = c("mu", "tau", "lp__"))
[Pairs plot of mu, tau, and lp__]
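Besides eyeballing the pairs() plot, the divergent transitions reported in the warning can be counted directly from the sampler diagnostics (get_sampler_params is part of rstan; divergent__ is a standard NUTS diagnostic column):

# count divergent transitions across all post-warmup iterations
sp <- get_sampler_params(fit, inc_warmup = FALSE)
sum(sapply(sp, function(x) sum(x[, "divergent__"])))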
### return an array of three dimensions: iterations, chains, parameters
a <- extract(fit, permuted = FALSE)
a[1:10]
### use S3 functions on stanfit objects
a2 <- as.array(fit)
m <- as.matrix(fit)
d <- as.data.frame(fit)
d[1:10]
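With the draws in matrix form, ordinary R code suffices for posterior summaries. A small sketch using the parameter column names that as.matrix(fit) produces:

# posterior mean of mu, and the posterior probability that the
# population treatment effect is positive
mean(m[, "mu"])
mean(m[, "mu"] > 0)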