R version 4.3.1 (2023-06-16) -- "Beagle Scouts"
Copyright (C) 2023 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> pdf("g13.pdf");options(width=64)
> #setwd("C:\\Users\\kolassa\\Class553")
> setwd("~/Taught1/960-553/Data")
> #***********************************************************
> #Bayesian methods                                          *
> #***********************************************************
> #*************************************************************/
> #*  Prostate data                                            */
> #* From Andrews and Herzberg (1985), Data: A Collection of   */
> #* Problems from Many Fields for the Student and Research    */
> #* Worker, Table 46.  Observations represent subjects in a   */
> #* prostate cohort study, randomized to one of four dose     */
> #* levels of diethylstilbestrol.  Rx records dose in four    */
> #* ordered categories, with 1 being placebo.  Disease stage  */
> #* is 3 or 4.  monfol is months of followup.  surv is 0 if   */
> #* alive after 50 mo, and codes cause of death otherwise.    */
> #* http://lib.stat.cmu.edu/datasets/Andrews/T46.1            */
> #* The on-line version of the data set adds 3 fields before  */
> #* the first field in the book.  Variables of interest are   */
> #* stage, rx, monfol, and surv in fields 5, 6, 10, 11 of the */
> #* online version, resp.  Causes of death are given by var-  */
> #* ious positive integers in surv; I recode these to 1.  The */
> #* data file has more columns than we need; only the fields   */
> #* of interest are retained.  Data were previously pub-       */
> #* lished by Byar and Corle (1977 Chronic Disease) and Byar  */
> #* and Green (1980 Bull. Cancer).  Lower value of dichoto-   */
> #* mized dose begins with blank to make it alphabetized      */
> #* before high.                                              */
> #*************************************************************/
> # read.table reads every column; the subscript below keeps only
> # fields 5, 6, 10, and 11, which hold the variables of interest.
> prostate<-read.table("T46.1")[,c(5,6,10,11)]
> dimnames(prostate)[[2]]<-c("stage","rx","monfol","surv")
> prostate$alive<-(prostate$surv==0)+0                      
> prostate$dose<-c(" low","high")[1+(prostate$rx>1)]         
> stage4<-prostate[prostate$stage==4,]
> library(Hmisc)#For binconf; Gives Wilson and standard binomial confidence intervals.
> #Wilson and standard intervals are given by binconf.  For review.
> (x<-sum(prostate$alive))
[1] 150
> (n<-length(prostate$alive))
[1] 506
> binconf(x,n, method="wilson")
  PointEst     Lower     Upper
 0.2964427 0.2583052 0.3376476
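> # As a review aid, a minimal sketch of the Wilson interval computed
> # directly from its formula.  wilson is assigned rather than
> # printed; it should reproduce binconf's Lower and Upper above.
> z<-qnorm(0.975); phat<-x/n
> wilson<-(phat+z^2/(2*n)+c(-1,1)*z*
+    sqrt(phat*(1-phat)/n+z^2/(4*n^2)))/(1+z^2/n)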
> #Posterior mean with a flat (Beta(1,1)) prior.
> (x+1)/(n+2)
[1] 0.2972441
> #Posterior with flat prior, equal tail interval.
> (eqtail<-qbeta(c(0.025,0.975),1+x, 1+n-x))
[1] 0.2583168 0.3376835
> #HPD interval: its endpoints have equal posterior density, and
> #it contains posterior probability 0.95.  f returns these two
> #conditions as quantities to be driven to zero.
> f<-function(ends,x,n) return(c(diff(dbeta(ends,1+x,1+n-x)),
+    diff(pbeta(ends,1+x,1+n-x))-0.95))
> library(nleqslv)#For nleqslv; Solves a nonlinear set of equations.
> #Use a nonlinear equation solver, starting from the equal
> #tail interval.
> (temp<-nleqslv(eqtail,f,NULL,x,n))
$x
[1] 0.2577981 0.3371374

$fvec
[1]  1.549063e-10 -3.537293e-11

$termcd
[1] 1

$message
[1] "Function criterion near zero"

$scalex
[1] 1 1

$nfcnt
[1] 4

$njcnt
[1] 2

$iter
[1] 4

> hpdout<-temp$x
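> # Quick check (assigned, not printed): at the HPD endpoints the
> # density difference and the coverage discrepancy should both be
> # near zero, i.e., f is near c(0,0) at the solution.
> chk<-f(hpdout,x,n)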
> plot(pivec<-(1:999)/1000,den<-dbeta(pivec,1+x,1+n-x),
+    main="Beta posterior", ylab="Posterior Density",
+    xlab="Probability",
+    sub=paste("Flat prior, n=",n,"x=",x),type="l")
> segments(eqtail,0,eqtail,dbeta(eqtail,x+1,n-x+1))
> segments(hpdout,0,hpdout,dbeta(hpdout,x+1,n-x+1),lty=2)
> lines(hpdout,dbeta(hpdout,x+1,n-x+1),lty=2)
> legend(.5,max(den),legend=c("Equal Tail","HPD"),lty=1:2)
> # Repeat for a smaller sample
> (x<-sum(stage4$alive[stage4$dose==" low"]))
[1] 9
> (n<-length(stage4$alive[stage4$dose==" low"]))
[1] 53
> binconf(x,n, method="wilson")
  PointEst      Lower     Upper
 0.1698113 0.09199945 0.2922528
> #Posterior mean with a flat (Beta(1,1)) prior.
> (x+1)/(n+2)
[1] 0.1818182
> #Posterior with flat prior, equal tail interval.
> (eqtail<-qbeta(c(0.025,0.975),1+x, 1+n-x))
[1] 0.09254549 0.29294124
> #HPD interval.
> temp<-nleqslv(eqtail,f,NULL,x,n)
> (hpdout<-if(temp$termcd==1) temp$x else rep(NA,2))
[1] 0.08608517 0.28414160
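> # With this smaller sample the posterior is noticeably right-
> # skewed, so the HPD interval sits to the left of (and is shorter
> # than) the equal-tail interval.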
> plot(pivec<-(1:999)/1000,den<-dbeta(pivec,1+x,1+n-x),
+    main="Beta posterior", ylab="Posterior Density",
+    xlab="Probability",
+    sub=paste("Flat prior, n=",n,"x=",x),type="l")
> segments(eqtail,0,eqtail,dbeta(eqtail,x+1,n-x+1))
> segments(hpdout,0,hpdout,dbeta(hpdout,x+1,n-x+1),lty=2)
> lines(hpdout,dbeta(hpdout,x+1,n-x+1),lty=2)
> legend(.5,max(den),legend=c("Equal Tail","HPD"),lty=1:2)
> # Apply independent beta priors to two proportions.  Re-
> # parameterize in terms of one proportion and the odds
> # ratio, and integrate out the remaining proportion to
> # get an interval for the odds ratio.
> library(PropCIs)#For orci.bayes; Performs Bayesian inference on the odds ratio.
> x<-sum(prostate$alive[prostate$dose==" low"])
> y<-sum(prostate$alive[prostate$dose=="high"])
> m<-sum(prostate$dose==" low")
> n<-sum(prostate$dose=="high")
> orci.bayes(x,m,y,n,1,1,1,1,0.95)
[1] 0.4946238 1.2164387
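> # A rough Monte Carlo check under the same independent Beta(1,1)
> # priors: simulate each posterior and take quantiles of the
> # resulting odds ratios.  orq is assigned rather than printed;
> # it should land close to the orci.bayes interval above.
> set.seed(1)
> p1<-rbeta(1e5,x+1,m-x+1); p2<-rbeta(1e5,y+1,n-y+1)
> orq<-quantile((p1/(1-p1))/(p2/(1-p2)),c(0.025,0.975))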
> # Get the Fisher exact interval for comparison purposes.
> fisher.test(cbind(c(x,y),c(m-x,n-y)))

	Fisher's Exact Test for Count Data

data:  cbind(c(x, y), c(m - x, n - y))
p-value = 0.3137
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.4766066 1.2403883
sample estimates:
odds ratio 
 0.7752801 

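> # The Bayesian interval (0.495, 1.216) is slightly narrower than
> # the exact interval (0.477, 1.240); both include an odds ratio
> # of 1.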
> #***********************************************************
> #The program Stan applies Bayesian analyses to a variety of*
> #statistical models.  The package rstan is an R front end; *
> #applied regression modeling is in rstanarm.               *
> #***********************************************************
> library(rstanarm)#For stan_glm; Fits the generalized linear model with a prior.
> #**************************************************************/
> #* Shipping Data: McCullagh and Nelder (1989) Generalized     */
> #* Linear Models provide data on losses by a shipping insurer.*/
> #* Data are grouped into putatively homogeneous classes, based*/
> #* on ship type, start of construction 5-year period, start of*/
> #* observation 5-year period, ship months at risk, and count  */
> #* of losses.  Empty categories are removed.                  */
> #**************************************************************/
> ships<-read.table("ships.dat",
+    col.names=c("type","built", "period","smar","cases"))
> #The syntax for fitting a Bayesian generalized linear model
> #mirrors that of glm, but with a call to stan_glm.  You can
> #modify the prior from the default with some extra options.
> glmo<-stan_glm(cases~type+built+offset(log(smar)),
+    family=poisson,data=ships)

SAMPLING FOR MODEL 'count' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 1.1e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.035 seconds (Warm-up)
Chain 1:                0.034 seconds (Sampling)
Chain 1:                0.069 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 5e-06 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.034 seconds (Warm-up)
Chain 2:                0.03 seconds (Sampling)
Chain 2:                0.064 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 5e-06 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.031 seconds (Warm-up)
Chain 3:                0.034 seconds (Sampling)
Chain 3:                0.065 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 5e-06 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.032 seconds (Warm-up)
Chain 4:                0.033 seconds (Sampling)
Chain 4:                0.065 seconds (Total)
Chain 4: 
> prior_summary(glmo)
Priors for model 'glmo' 
------
Intercept (after predictors centered)
 ~ normal(location = 0, scale = 2.5)

Coefficients
  Specified prior:
    ~ normal(location = [0,0,0,...], scale = [2.5,2.5,2.5,...])
  Adjusted prior:
    ~ normal(location = [0,0,0,...], scale = [6.09,6.09,6.09,...])
------
See help('prior_summary.stanreg') for more details
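> # The "Adjusted prior" reflects rstanarm's autoscaling, which by
> # default rescales the prior by each predictor's sample standard
> # deviation (here the default scale 2.5 becomes about 6.09).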
> # fixef extracts posterior medians.
> fixef(glmo)
(Intercept)       typeB       typeC       typeD       typeE 
-9.44840154 -0.55640517 -0.63901394 -0.22360361  0.38247890 
      built 
 0.05737883 
> # Plotting the fit displays posterior point estimates and
> # uncertainty intervals for the coefficients.
> plot(glmo)
> print(glmo,digits=4)
stan_glm
 family:       poisson [log]
 formula:      cases ~ type + built + offset(log(smar))
 observations: 34
 predictors:   6
------
            Median  MAD_SD 
(Intercept) -9.4484  0.8838
typeB       -0.5564  0.1779
typeC       -0.6390  0.3375
typeD       -0.2236  0.2845
typeE        0.3825  0.2323
built        0.0574  0.0124

------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
> posterior_interval(glmo)
                      5%         95%
(Intercept) -10.89279409 -8.10036420
typeB        -0.83883994 -0.25237256
typeC        -1.21499478 -0.12760426
typeD        -0.71863175  0.23996511
typeE        -0.01150767  0.74724157
built         0.03825572  0.07748218
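> # posterior_interval returns a central 90% credible interval by
> # default (hence the 5% and 95% columns); a 95% interval is
> # available via posterior_interval(glmo, prob=0.95).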
> glm2<-stan_glm(cases~type+built+offset(log(smar)),
+    family=poisson,data=ships, prior=cauchy(0,.2))

SAMPLING FOR MODEL 'count' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 1.5e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.104 seconds (Warm-up)
Chain 1:                0.133 seconds (Sampling)
Chain 1:                0.237 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 7e-06 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.11 seconds (Warm-up)
Chain 2:                0.092 seconds (Sampling)
Chain 2:                0.202 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 7e-06 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.096 seconds (Warm-up)
Chain 3:                0.115 seconds (Sampling)
Chain 3:                0.211 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 7e-06 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.104 seconds (Warm-up)
Chain 4:                0.079 seconds (Sampling)
Chain 4:                0.183 seconds (Total)
Chain 4: 
> prior_summary(glm2)
Priors for model 'glm2' 
------
Intercept (after predictors centered)
 ~ normal(location = 0, scale = 2.5)

Coefficients
 ~ cauchy(location = [0,0,0,...], scale = [0.2,0.2,0.2,...])
------
See help('prior_summary.stanreg') for more details
> posterior_interval(glm2)
                       5%         95%
(Intercept) -11.221921059 -8.41789638
typeB        -0.708111416 -0.13661719
typeC        -0.814290277  0.06842354
typeD        -0.395255569  0.23487886
typeE        -0.001539924  0.78207951
built         0.041423291  0.08035100
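> # The tighter Cauchy(0,0.2) prior pulls the type coefficients
> # toward zero; compare the typeC interval here with the wider
> # one under the default prior above.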
> glm3<-stan_glm(cases~type+built+offset(log(smar)),
+    family=poisson,data=ships, prior=NULL)

SAMPLING FOR MODEL 'count' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 1.4e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.032 seconds (Warm-up)
Chain 1:                0.03 seconds (Sampling)
Chain 1:                0.062 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 4e-06 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.032 seconds (Warm-up)
Chain 2:                0.029 seconds (Sampling)
Chain 2:                0.061 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 3e-06 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.035 seconds (Warm-up)
Chain 3:                0.028 seconds (Sampling)
Chain 3:                0.063 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'count' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 4e-06 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.033 seconds (Warm-up)
Chain 4:                0.031 seconds (Sampling)
Chain 4:                0.064 seconds (Total)
Chain 4: 
> prior_summary(glm3)
Priors for model 'glm3' 
------
Intercept (after predictors centered)
 ~ normal(location = 0, scale = 2.5)

Coefficients
 ~ flat
------
See help('prior_summary.stanreg') for more details
> posterior_interval(glm3)
                       5%         95%
(Intercept) -10.852846371 -8.07467655
typeB        -0.847388349 -0.26399381
typeC        -1.250758225 -0.12046409
typeD        -0.733844842  0.22302227
typeE        -0.007796473  0.76101123
built         0.038358883  0.07674914
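> # With a flat prior on the coefficients the posterior is driven
> # by the likelihood alone, so the estimates should be close to
> # maximum likelihood.  As a check, one might compare with glm
> # (assigned, not printed):
> mlefit<-glm(cases~type+built+offset(log(smar)),
+    family=poisson,data=ships)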
>