This notebook demonstrates how to set up a representative and reproducible sampling frame that provides complete coverage of Uganda’s croplands. Croplands are the primary Region of Interest (ROI) and the target for various land management interventions by the Government of Uganda and private sector entities. Based on a recent high-resolution remote sensing GeoSurvey (2020), croplands occupy approximately 112,404 km² of Uganda’s overall land area (refer to Figure 1). Selecting an appropriate sampling frame for ground observations, measurements and/or experiments is a critical planning step, because it determines both the initial and the recurrent costs of any mapping and/or monitoring activities. The method proposed here is Spatially Balanced Sampling, which offers several advantages over conventional approaches:
Representation of spatial variability: In agriculture, spatial variability is significant due to differences in soils, topography, microclimates, land use practices, and infrastructure. A spatially balanced sampling frame ensures that all of these varied areas are represented in the sample, providing a more accurate picture of agricultural landscapes.
Reduction of sampling bias: Traditional random sampling can lead to clusters of sample points in certain areas, especially if these areas are more accessible or have higher concentrations of agricultural activity. This can result in biased estimates. Spatially Balanced Sampling minimizes this risk by ensuring samples are spread across the entire region of interest.
Improved precision and accuracy: By covering a more diverse range of conditions, spatially balanced sampling enhances the precision and accuracy of survey estimates. This is particularly important for detecting and monitoring subtle but crucial changes in agricultural practices, yields, and environmental impacts.
Efficiency in data collection: Spatially Balanced Sampling can be more efficient. By strategically selecting sample locations across the landscape, the method reduces the need for excessive travel between points, which is a significant consideration in large, diverse agricultural areas.
Adaptability to different scales and purposes: This approach is flexible and can be adapted to various scales (local, regional, national) and purposes, e.g., crop yield estimation, soil quality assessment, and the impact of agricultural practices on the environment, among others. This versatility makes it a valuable tool in diverse survey and experimental contexts.
Enhanced capability for trend analysis: In monitoring programs, where detecting changes over time is crucial, Spatially Balanced Sampling ensures that the entire area of interest is consistently represented over different survey iterations. This enhances the ability to accurately detect and analyze temporal trends.
Compliance with best practices and standards: Using a spatially balanced sampling frame aligns with best practices in environmental and agricultural research. It ensures that the data collected is robust, credible and reproducible, which are essential for informing policy decisions and scientific research.
This notebook is a companion to one describing methods for mapping the occurrence of irrigation and small area estimation, which you can find here. It is maintained on OSF here, from where you can download and alter it as you see fit. As an example use case, I demonstrate a novel method for assessing scale dependence in sampled irrigation detection probabilities on a Discrete Global Grid (DGG), using a Bayesian multilevel model.
To actually run this notebook, you will need to install and load the R packages indicated in the chunk directly below.
# Package names
packages <- c("osfr", "tidyverse", "rgdal", "raster", "sp", "BalancedSampling", "leaflet",
              "htmlwidgets", "brms", "ggthemes", "tidybayes", "DT")

# Install packages
installed_packages <- packages %in% rownames(installed.packages())
if (any(installed_packages == FALSE)) {
  install.packages(packages[!installed_packages])
}

# Load packages
invisible(lapply(packages, library, character.only = TRUE))
The following chunk downloads all of the data files needed for this notebook.
# create a ./sampling/ folder in your current working directory
dir.create("sampling", showWarnings = FALSE)

# download the data files from OSF
osf_retrieve_node("sa3uf") %>%
  osf_ls_files() %>%
  osf_download(path = "./sampling", conflicts = "overwrite")

# load GADM shapefile (courtesy https://gadm.org/)
shape <- shapefile("./sampling/gadm36_UGA_4.shp")

# load AfSIS grids
glist <- list.files(path = "./sampling", pattern = "tif", full.names = TRUE)
grids <- stack(glist)
This next chunk sets up the spatially balanced survey location samples from the ROI. I use the overlay function from the raster package (to create the ROI) and the cube function from the BalancedSampling package (to sample the ROI). Note that apart from the cropland ROI layer, we also use a distance-to-nearest-road layer to assist with field logistics. For this example, the distance (dr) has been set at 20 km, which encompasses all of the croplands. You could change that to e.g., 1 km for roadside sampling.
# create a ROI image based on cropland mask and distance to nearest road
cp <- 1 ## set cropland mask to 1 (present)
dr <- 20 ## set maximum distance to the nearest road (in km)
roi <- overlay(grids, fun=function(x) {return(ifelse(x[1] == cp && x[2] <= dr, 1, 0))})
# extract ROI coordinates
coord <- coordinates(roi)
index <- extract(roi, coord)
index <- as.data.frame(cbind(coord, index))
rmask <- index[which(index$index == 1), ]
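As a quick check on the resulting ROI, the number of ROI pixels and the implied area can be tallied; a minimal sketch, assuming the 250 m grid resolution of the AfSIS stacks:

# count ROI pixels; each 250 m pixel covers 0.0625 km^2
npix <- cellStats(roi, sum, na.rm = TRUE)
npix * 0.0625 ## approximate ROI area in km^2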
The sampling method uses the cube approach of Deville and Tillé (2004), as implemented in the BalancedSampling package (Grafström, Lisic and Prentius, 2022). This allows sampling based on the relevant inclusion probabilities while aiming for balance and spread with respect to specified covariates and/or constraints.
# set sampling parameters
N <- nrow(rmask) ## ROI size (in 250 m pixels)
n <- round(N*0.015) ## set sample size (change this when needed)
p <- rep(n/N,N) ## calculate the inclusion probabilities
# draw spatially balanced sample
set.seed(85321) ## sets a randomization seed
B <- cbind(p, rmask[,1], rmask[,2]) ## specifies spatial balancing variables
rsamp <- cube(p, B) ## samples from the ROI
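As a quick, hedged sanity check, the mean coordinates of the drawn sample can be compared with the mean coordinates of the full sampling frame; they should be close if the sample is spatially balanced:

# sanity check: compare sample vs. frame mean coordinates
colMeans(rmask[rsamp, 1:2]) ## mean x, y of the sampled locations
colMeans(rmask[ , 1:2]) ## mean x, y of the entire sampling frame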
In this case, potential survey sites falling within the ROI were selected purely for spatial balance. This entails that the mean coordinates of the sample sites should be close to the mean coordinates of all points in the sampling frame, while maintaining adequate spatial spread. This ensures that the observations are not clustered with respect to the spatial coordinates; see Grafström and Schelin (2014). The next chunk generates an initial output file and a map of the proposed sampling frame.
# extract sample coordinates
x <- rmask[rsamp, 1]
y <- rmask[rsamp, 2]
xy <- data.frame(cbind(x, y))

# attach GADM-L4 administrative unit names from shape
coordinates(xy) <- ~x+y
crs(xy) <- "+proj=laea +ellps=WGS84 +lon_0=20 +lat_0=5 +units=m +no_defs"
sloc <- spTransform(xy, CRS(proj4string(shape)))
gadm <- sloc %over% shape
sloc <- as.data.frame(sloc)
samp <- cbind(gadm[ , c(4, 6, 8, 10)], sloc)
colnames(samp) <- c('district', 'county', 'ward', 'parish', 'lon', 'lat')
The leaflet map below shows the proposed sampling locations in Uganda.
# Sample locations
w <- leaflet() %>%
  setView(lng = mean(samp$lon), lat = mean(samp$lat), zoom = 7) %>%
  addProviderTiles(providers$OpenStreetMap.Mapnik) %>%
  addCircleMarkers(samp$lon, samp$lat,
                   color = "tomato",
                   stroke = FALSE,
                   fillOpacity = 0.8,
                   radius = 5,
                   clusterOptions = markerClusterOptions(maxClusterRadius = 30))

saveWidget(w, 'UG_sampling_locs.html', selfcontained = TRUE) ## save leaflet map
w ## plot widget
The main problem with using administrative divisions (ADs) such as districts, wards and parishes (as in the case of Uganda) for sampling, reporting and communicating survey statistics is that their boundaries can and do change because of redistricting and gerrymandering. ADs also differ in size, shape and complexity from country to country. Hence, land surveyors tend to use Discrete Global Grids (DGGs) and the associated geocodes for small area estimation, mapping, monitoring, and land assessment.
There are a number of ISO-compliant DGGs in worldwide use, e.g., Google’s Open Location Codes (Plus Codes), the Military Grid Reference System (MGRS), and What3Words, among others. Because our work focuses on Africa, we generally use our own custom Lambert Azimuthal Equal Area based grid (i.e., CRS = +proj=laea +ellps=WGS84 +lon_0=20 +lat_0=5 +units=m +no_defs), which corresponds to the Coordinate Reference System used for all of the standard AfSIS raster stacks. Note that while this works well in Africa, it is not really suitable for use in other parts of the world without reprojection, e.g., to MGRS.
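As an illustration of this CRS, the short sketch below projects an arbitrary lon/lat point into the AfSIS LAEA grid using sp/rgdal (both loaded above); the example point is purely illustrative.

# minimal sketch: project an arbitrary lon/lat point into the AfSIS LAEA CRS
pt <- data.frame(lon = 32.58, lat = 0.32) ## illustrative point (near Kampala)
coordinates(pt) <- ~lon+lat
proj4string(pt) <- CRS("+proj=longlat +datum=WGS84")
pt_laea <- spTransform(pt, CRS("+proj=laea +ellps=WGS84 +lon_0=20 +lat_0=5 +units=m +no_defs"))
coordinates(pt_laea) ## LAEA easting/northing in meters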
The first thing to do is to specify geocodes (GIDs) at different levels of resolution of the DGG. The next chunk sets these up as hierarchically nested levels. The resulting GIDs are geocodes on one of many possible DGGs and are readily converted to any other rectangular DGG.
# specify vector of resolution values (in meters) for each DGG level
resolution <- c(100000, 50000, 25000, 12500)

# initialize an empty list to store the results
gids <- list()

# loop over each resolution value
for (i in seq_along(resolution)) {
  res.pixel <- resolution[i]
  xgid <- ceiling(abs(xy$x) / res.pixel)
  ygid <- ceiling(abs(xy$y) / res.pixel)
  gidx <- ifelse(xy$x < 0, paste("W", xgid, sep=""), paste("E", xgid, sep=""))
  gidy <- ifelse(xy$y < 0, paste("S", ygid, sep=""), paste("N", ygid, sep=""))
  gids[[i]] <- paste(gidx, gidy, sep="")
}
# append GIDs and write out the `samp` dataframe
gids <- data.frame(gids)
names(gids) <- c("L1", "L2", "L3", "L4")
xy <- as.data.frame(xy)
samp <- cbind(samp, xy, gids)
write.csv(samp, "./sampling/UG_cropland_sample.csv")
str(samp) ## inspect the result
## 'data.frame': 24386 obs. of 12 variables:
## $ district: chr "Luwero" "Gulu" "Masaka" "Moroto" ...
## $ county : chr "Katikamu" "Kilak" "Kalungu" "Bokora" ...
## $ ward : chr "Katikamu" "Lamogi" "Bukulula" "Matany" ...
## $ parish : chr "Kikoma" "Palema" "Kiti" "Lokupoi" ...
## $ lon : num 32.5 32.2 31.9 34.5 30.2 ...
## $ lat : num 0.7075 2.7961 -0.0972 2.443 0.3226 ...
## $ x : num 1394375 1349375 1318625 1603875 1132875 ...
## $ y : num -464125 -232625 -554625 -267375 -510375 ...
## $ L1 : chr "E14S5" "E14S3" "E14S6" "E17S3" ...
## $ L2 : chr "E28S10" "E27S5" "E27S12" "E33S6" ...
## $ L3 : chr "E56S19" "E54S10" "E53S23" "E65S11" ...
## $ L4 : chr "E112S38" "E108S19" "E106S45" "E129S22" ...
The main output dataframe, samp, contains 24,386 proposed sampling locations with their associated GADM administrative divisions, GIDs at the different DGG resolutions, and lon/lat and LAEA coordinates. It is written out as a .csv file, UG_cropland_sample.csv, which you can use as waypoint input to a GPS (see e.g., GPSBabel), tablet or smartphone for in-the-field navigation. The next step is to link a map of areas predicted to have smallholder irrigation (see Figure 2) to the samp dataframe.

The next chunk extracts the irrigation presence/absence values to the samp dataframe.
# convert samp to a SpatialPointsDataFrame in the LAEA CRS of the grids
coordinates(samp) <- ~x+y
projection(samp) <- projection(grids)

# extract irrigation raster values at sample locations
irgrid <- extract(grids, samp)
samp <- as.data.frame(cbind(samp, irgrid))

# drop the unneeded raster columns and rename the retained irrigation layer
samp$CP_mask.PERMANENT <- NULL
samp$DOR2.PERMANENT <- NULL
samp$irrigation_preds.4.PERMANENT <- NULL
names(samp)[11] <- "ir_mask"
write.csv(samp, "./sampling/UG_ir_mask_sample.csv")
Scale dependence refers to the idea that processes and patterns can vary when observed at different spatial or temporal scales. This concept is essential for land use mapping and monitoring because it acknowledges that the behavior and relationships of agricultural systems can change depending on the scale of observation. DGGs are, by definition, hierarchical and nested, just like administrative divisions, or students nested in classrooms within schools. A bare-bones nested model structure with 4 GID levels is as follows:
\[ \Pr(y_{ijklm} = 1) = \mathrm{logit}^{-1}(\mu + L_{i} + L_{ij} + L_{ijk} + L_{ijkl}) \]

Where: \(y_{ijklm}\) is the observed irrigation presence/absence at sampling location \(m\); \(\mu\) is the overall (population-level) intercept on the log-odds scale; and \(L_{i}\), \(L_{ij}\), \(L_{ijk}\) and \(L_{ijkl}\) are the nested random intercepts for DGG levels L1 through L4, respectively.
The next chunk fits this structure as a Bayesian multilevel regression model using the brms package (Bürkner et al., 2023). Note that in the specification below, the priors have been commented out. This produces an initial fit with the package’s default priors. You can change that if you would like.
priors = c(prior(normal(-2, 2), class = Intercept),
           prior(cauchy(0, 2), class = sd, group = L1),
           prior(cauchy(0, 1), class = sd, group = L2),
           prior(cauchy(0, 1), class = sd, group = L3),
           prior(cauchy(0, 1), class = sd, group = L4))

# fit random intercept model
sam0 <- brm(data = samp,
            family = bernoulli,
            formula = ir_mask ~ 1 + (1 | L1/L2/L3/L4),
            # prior = priors,
            iter = 4000, warmup = 1000, chains = 4, cores = 6,
            control = list(adapt_delta = .975, max_treedepth = 20),
            seed = 1235813)

saveRDS(sam0, "./sampling/dgg_irrigation_detection.rds") # save model object
## Family: bernoulli
## Links: mu = logit
## Formula: ir_mask ~ 1 + (1 | L1/L2/L3/L4)
## Data: samp (Number of observations: 24386)
## Draws: 4 chains, each with iter = 4000; warmup = 1000; thin = 1;
## total post-warmup draws = 12000
##
## Group-Level Effects:
## ~L1 (Number of levels: 32)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 2.88 0.45 2.12 3.89 1.00 4145 6281
##
## ~L1:L2 (Number of levels: 105)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.27 0.17 0.97 1.64 1.00 4450 6741
##
## ~L1:L2:L3 (Number of levels: 341)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.81 0.07 0.67 0.96 1.00 4180 6601
##
## ~L1:L2:L3:L4 (Number of levels: 1140)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.74 0.04 0.66 0.82 1.00 4771 8021
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -1.84 0.54 -2.90 -0.78 1.00 1962 4050
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Posterior predictive distributions are a fundamental concept in Bayesian statistics, serving as the foundation for making predictions and for understanding the uncertainty in those predictions. The posterior predictive distribution is a probability distribution that represents what we can expect for new, unseen data points, given the data we have already observed. In Bayesian terms, it combines our prior beliefs (the prior distribution) and the evidence from the observed data (the likelihood) to give us a refined view (the posterior distribution). The posterior predictive distribution takes this one step further by integrating over all possible parameter values, weighted by how likely each parameter value is according to the posterior distribution. Its main purposes and uses are:
Making predictions: It is used to make predictions about future observations. By considering the range of plausible values for the model parameters, the posterior predictive distribution can provide predictions that incorporate uncertainty.
Model checking: It helps in assessing the model’s fit. By comparing the predictions of the posterior predictive distribution to actual observed data, we can check how well the model captures the underlying process (see the sketch after this list).
Understanding uncertainty: It provides a fuller picture of uncertainty. Unlike point estimates, this distribution gives a range of plausible values for new observations, highlighting the uncertainty inherent in any predictions.
In essence, posterior predictive distributions allow us to make informed predictions and understand the limits of predictions in a Bayesian framework.
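As a quick illustration of the model-checking use, brms provides the pp_check() function for visual posterior predictive checks; a minimal sketch for the binary response fitted above:

# visual posterior predictive check: observed vs. simulated outcome frequencies
# (type = "bars" suits a binary response; ndraws limits the simulation cost)
pp_check(sam0, type = "bars", ndraws = 100)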
The next chunk draws 3,000 random samples from the posterior distribution of the sam0 model parameters.

# draw posterior parameter samples (3,000 in this case)
sam0_draws <- as.data.frame(as_draws(sam0))
# function to simplify the sam0_draws column names
simplify_variable_name <- function(name) {
  # split the name by periods or underscores
  parts <- unlist(strsplit(name, split = "[._]"))
  # define the repetitive parts to be removed
  common_parts <- c("X1", "r", "L1", "L2", "L3", "L4", "Intercept")
  # remove the common parts and keep the unique parts
  unique_parts <- parts[!parts %in% common_parts]
  # join the unique parts back together
  simplified_name <- paste(unique_parts, collapse = "-")
  return(simplified_name)
}

variable_names <- colnames(sam0_draws)
simplified_names <- sapply(variable_names, simplify_variable_name)

# rename the columns in the dataframe
colnames(sam0_draws) <- simplified_names
names(sam0_draws)[c(1:5)] <- c('Intercept', 'L1', 'L2', 'L3', 'L4')
# reshape data to long format
sam0_draws <- sam0_draws[ , c(1:5)]
sam0_draws_long <- gather(sam0_draws, key = "GID_level", value = "value",
                          Intercept, L1, L2, L3, L4)
This is what the posterior parameter distributions look like …
# plot the posterior model parameter distributions
ggplot(sam0_draws_long, aes(x = value, fill = GID_level)) +
  geom_density(alpha = 0.8) +
  scale_fill_brewer(palette = "Set2") +
  labs(x = "Parameter values", y = "Density", fill = "Parameter") +
  theme_few()
Figure 3 illustrates pronounced scale dependence in the spatial variation of the irrigation detection model’s parameters. This helps in understanding how much of the variability in the data is attributable to each nested level of the DGG hierarchy, which the sketch below also quantifies.
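A hedged way to quantify this: the group-level sd estimates from the model summary above can be turned into an approximate variance partition on the latent (logit) scale, where the logistic residual variance is fixed at \(\pi^2/3\):

# approximate variance partition across GID levels (latent logit scale)
sds <- c(L1 = 2.88, L2 = 1.27, L3 = 0.81, L4 = 0.74) ## posterior sd estimates from summary(sam0)
v <- sds^2
round(v / (sum(v) + pi^2 / 3), 2) ## share of total latent variance per level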
# calculate posterior log odds at the 4 GID levels
L1 <- sam0_draws$L1 + sam0_draws$Intercept
L2 <- sam0_draws$L2 + sam0_draws$Intercept
L3 <- sam0_draws$L3 + sam0_draws$Intercept
L4 <- sam0_draws$L4 + sam0_draws$Intercept
GID_preds <- as.data.frame(cbind(L1, L2, L3, L4))
GID_preds_long <- gather(GID_preds, key = "GID_level", value = "prev")
This is what the posterior predictive distributions for irrigation prevalence look like (on a log odds scale) at the four GID levels …
# plot the posterior predictive distributions at the 4 GID levels
ggplot(GID_preds_long, aes(x = prev, fill = GID_level)) +
  geom_density(alpha = 0.8) +
  scale_fill_brewer(palette = "Set2") +
  labs(x = "Prevalence (log odds)", y = "Density", fill = "GID level") +
  theme_few()
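Because log odds are hard to interpret directly, the same draws can also be summarized on the probability scale; a minimal sketch using plogis(), the inverse logit in base R:

# summarize the GID-level draws as prevalence probabilities
GID_preds_long %>%
  group_by(GID_level) %>%
  summarize(median = median(plogis(prev)),
            Q5 = quantile(plogis(prev), 0.05),
            Q95 = quantile(plogis(prev), 0.95))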
As previously mentioned, brms models are also used for prediction, and there are three main methods to consider:

predict() provides point estimates along with uncertainty measures, such as credible intervals.

posterior_predict() is useful for exploring a range of potential outcomes based on the observed data and the model. It aligns well with the expectations for future surveys.

posterior_epred() assists in determining the expected value of the response variable for each observation by integrating over the posterior distribution of the parameters, excluding residual variability.

In this particular case, I use posterior_linpred(), which computes posterior draws of the linear predictor, that is, draws before applying any link functions or other transformations. My choice was based on the method’s effectiveness in representing the uncertainty of survey outcomes.
# set up new dataframe of the observed GID combinations
gid <- samp %>%
  group_by(L1, L2, L3, L4) %>%
  summarize(n = n())
gid <- as.data.frame(gid)

# generate posterior predictions
pred <- as.data.frame(posterior_linpred(sam0, newdata = gid, ndraws = 100))

# calculate median and credible intervals of the posterior distribution
quantiles <- pred %>%
  reframe(across(everything(), ~quantile(.x, probs = c(0.05, 0.5, 0.95), na.rm = TRUE)))

# transpose quantiles dataframe to long and append to gid
quant_long <- t(quantiles)
colnames(quant_long) <- c("Q5", "median", "Q95")
gid_pred <- cbind(gid, quant_long)

# write out the prediction file
write.csv(gid_pred, "./sampling/sam0_post_pred.csv", row.names = FALSE)
This next chunk creates a data table designed for reference and application in small area predictions. Note that the predicted prevalence estimates are provided in log odds (logit) format. Therefore, it is essential to convert both the estimates and their corresponding credible intervals back to the probability scale before use (see the sketch after the next chunk).
# helper to round numeric columns for display
round_if_numeric <- function(x) {
  if (is.numeric(x)) {
    return(round(x, digits = 2))
  } else {
    return(x)
  }
}

tab <- gid_pred
tab <- as.data.frame(lapply(tab, round_if_numeric))
datatable(tab) ## display as an interactive table (assumed call; DT is loaded above)
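As a sketch of the back-transformation mentioned above, plogis() maps the log-odds columns to probabilities:

# convert the log-odds estimates and credible intervals to probabilities
gid_prob <- gid_pred
gid_prob[ , c("Q5", "median", "Q95")] <- plogis(as.matrix(gid_pred[ , c("Q5", "median", "Q95")]))
head(gid_prob)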
The main practical applications of these results in advance of fieldwork are:
Survey planning and dispatch management: You can use the geocodes and the georeferenced survey locations for survey planning and for monitoring implementation progress. Proprietary dispatch and route management software packages such as OptimoRoute and Locus Dispatcher are available to do this, or you can have someone do it for you. There is also open-source route planning software, such as GraphHopper, which requires some geocode customization for survey work.
Site selection for experiments: Since Randomized Control Trials (RCTs) are costly to conduct over large areas, choosing the right sites is a critical initial step in their design. The goal is to achieve the most reliable population inference possible for the least amount of money or effort expended. This process also involves prediction. If the objective is to validate a model (such as the sam0 model) in relation to specific GIDs used during its fitting, this represents one scenario. However, generating predictions for new GIDs, not part of the original sample, presents a different challenge, which I have not discussed in detail here; a minimal sketch follows below.
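For completeness, a hedged sketch of the latter case: brms can draw group effects for unsampled GIDs via the allow_new_levels argument (the geocodes below are hypothetical):

# predictions for GIDs that were not in the training sample; new group
# effects are drawn from their estimated level distributions
new_gid <- data.frame(L1 = "E15S4", L2 = "E30S8", L3 = "E60S16", L4 = "E120S32") ## hypothetical GIDs
pred_new <- posterior_linpred(sam0, newdata = new_gid, allow_new_levels = TRUE, ndraws = 100)
quantile(plogis(pred_new), probs = c(0.05, 0.5, 0.95)) ## on the prevalence scale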
Ex ante analysis: This forward-looking approach is used in various fields, such as economics, finance, environmental impact assessment, climate change studies, and decision support, to evaluate potential outcomes and impacts before a decision or survey takes place. It is a predictive technique that involves making assumptions and modeling scenarios to understand the probable effects of a decision or action. This approach is distinct from ex post analysis, which examines outcomes after they have occurred. In agriculture, ex ante analysis is extensively applied in planning, policymaking, and risk management.
Planning for monitoring and change detection: Both monitoring and change detection are more effective when spatially balanced sampling with Discrete Global Grids (DGGs) is employed. This combination is ideal for establishing benchmarks (baselines) to detect spatial and temporal changes, such as the evolution of smallholder irrigation practices and their economic and environmental impacts. When executed properly, the underlying models can pinpoint significant changes and differentiate actual changes from random fluctuations, which is crucial for informed decision-making in areas with potentially substantial economic and/or environmental effects.
## R version 4.3.1 (2023-06-16)
## Platform: x86_64-apple-darwin20 (64-bit)
## Running under: macOS Big Sur 11.7.10
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## time zone: America/Denver
## tzcode source: internal
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] DT_0.30 tidybayes_3.0.6 ggthemes_4.2.4
## [4] brms_2.20.4 Rcpp_1.0.11 htmlwidgets_1.6.2
## [7] leaflet_2.2.0 BalancedSampling_1.6.3 raster_3.6-23
## [10] rgdal_1.6-7 sp_2.1-0 lubridate_1.9.3
## [13] forcats_1.0.0 stringr_1.5.0 dplyr_1.1.3
## [16] purrr_1.0.2 readr_2.1.4 tidyr_1.3.0
## [19] tibble_3.2.1 ggplot2_3.4.3 tidyverse_2.0.0
## [22] osfr_0.2.9
##
## loaded via a namespace (and not attached):
## [1] RColorBrewer_1.1-3 tensorA_0.36.2 rstudioapi_0.15.0
## [4] jsonlite_1.8.7 magrittr_2.0.3 farver_2.1.1
## [7] rmarkdown_2.25 fs_1.6.3 vctrs_0.6.3
## [10] memoise_2.0.1 base64enc_0.1-3 terra_1.7-46
## [13] htmltools_0.5.6 distributional_0.3.2 curl_5.1.0
## [16] sass_0.4.7 StanHeaders_2.26.28 KernSmooth_2.23-21
## [19] bslib_0.5.1 plyr_1.8.9 zoo_1.8-12
## [22] cachem_1.0.8 igraph_1.5.1 mime_0.12
## [25] lifecycle_1.0.3 pkgconfig_2.0.3 colourpicker_1.3.0
## [28] Matrix_1.5-4.1 R6_2.5.1 fastmap_1.1.1
## [31] shiny_1.7.5.1 digest_0.6.33 colorspace_2.1-0
## [34] ps_1.7.5 crosstalk_1.2.0 labeling_0.4.3
## [37] urltools_1.7.3 fansi_1.0.5 timechange_0.2.0
## [40] httr_1.4.7 abind_1.4-5 compiler_4.3.1
## [43] proxy_0.4-27 withr_2.5.1 backports_1.4.1
## [46] inline_0.3.19 shinystan_2.6.0 DBI_1.1.3
## [49] QuickJSR_1.0.7 pkgbuild_1.4.2 classInt_0.4-10
## [52] gtools_3.9.4 loo_2.6.0 units_0.8-4
## [55] tools_4.3.1 httpuv_1.6.11 threejs_0.3.3
## [58] glue_1.6.2 callr_3.7.3 nlme_3.1-162
## [61] promises_1.2.1 sf_1.0-14 grid_4.3.1
## [64] checkmate_2.3.0 reshape2_1.4.4 generics_0.1.3
## [67] leaflet.providers_1.13.0 gtable_0.3.4 tzdb_0.4.0
## [70] class_7.3-22 hms_1.1.3 utf8_1.2.3
## [73] ggdist_3.3.0 pillar_1.9.0 markdown_1.9
## [76] posterior_1.5.0 later_1.3.1 lattice_0.21-8
## [79] tidyselect_1.2.0 miniUI_0.1.1.1 knitr_1.44
## [82] SamplingBigData_1.0.0 arrayhelpers_1.1-0 gridExtra_2.3
## [85] V8_4.4.0 crul_1.4.0 stats4_4.3.1
## [88] xfun_0.40 bridgesampling_1.1-2 matrixStats_1.0.0
## [91] rstan_2.32.3 stringi_1.7.12 yaml_2.3.7
## [94] evaluate_0.22 codetools_0.2-19 httpcode_0.3.0
## [97] cli_3.6.1 RcppParallel_5.1.7 shinythemes_1.2.0
## [100] xtable_1.8-4 munsell_0.5.0 processx_3.8.2
## [103] jquerylib_0.1.4 triebeard_0.4.1 coda_0.19-4
## [106] svUnit_1.0.6 parallel_4.3.1 rstantools_2.3.1.1
## [109] ellipsis_0.3.2 prettyunits_1.2.0 dygraphs_1.1.1.6
## [112] bayesplot_1.10.0 Brobdingnag_1.2-9 mvtnorm_1.2-3
## [115] e1071_1.7-13 scales_1.2.1 xts_0.13.1
## [118] crayon_1.5.2 rlang_1.1.1 shinyjs_2.1.0