library(tidyverse)
options(dplyr.summarise.inform = FALSE)

For this lab you will use multivariate autoregressive state-space (MARSS) models to analyze multivariate salmon data from the Columbia River. These data are noisy and gappy (many missing years). They are estimates of total spawner abundance and might include hatchery spawners.

Teams

  1. Lower Columbia River Chinook: Zoe Rand (QERM), Emma Timmins-Schiffman (Genome Sci), Maria Kuruvilla (QERM)
  2. Lower Columbia River Steelhead: Eric French (Civil), Liz Elmstrom (SAFS), Terrance Wang (SAFS)
  3. Lower Columbia River Coho: Nick Chambers (SAFS), Karl Veggerby (SAFS), Miranda Mudge (Molecular & Cellular)
  4. Middle Columbia River Steelhead: Madison Shipley (SAFS), Dylan Hubl (Env & Forest Sci)

Lower Columbia River salmon spawner data

These data are from the Coordinated Assessments Partnership (CAP) and downloaded using the rCAX R client for the CAX (the CAP database) API. The data are saved in Lab-2/Data_Images/columbia-river.rda.

load(file.path("Data_Images", "columbia-river.rda"))

The data set has data for five endangered and threatened ESUs (Evolutionarily Significant Units) and DPSs (Distinct Population Segments) in the Columbia River.

esu <- unique(columbia.river$esu_dps)
esu
## [1] "Steelhead (Middle Columbia River DPS)"     
## [2] "Steelhead (Upper Columbia River DPS)"      
## [3] "Steelhead (Lower Columbia River DPS)"      
## [4] "Salmon, coho (Lower Columbia River ESU)"   
## [5] "Salmon, Chinook (Lower Columbia River ESU)"
Figure from ESA recovery plan for Lower Columbia River Coho salmon, Lower Columbia River Chinook salmon, Columbia River Chum salmon, and Lower Columbia River steelhead. 2013. NMFS NW Region. https://repository.library.noaa.gov/view/noaa/16002

Data structure

The dataset has the following columns (a quick way to inspect them is shown after the list):

colnames(columbia.river)
## [1] "species"       "esu_dps"       "majorpopgroup" "esapopname"   
## [5] "commonpopname" "run"           "spawningyear"  "value"        
## [9] "value_type"
  • species: Chinook, Coho, Steelhead
  • esu_dps: name of the ESU
  • majorpopgroup: biological major population group
  • esapopname: ESA population name; the DPS or ESU name plus the stream or river
  • commonpopname: common population name, generally a stream or river
  • run: run-timing
  • spawningyear: the year that the spawners were counted on the spawning grounds
  • value: total (natural-born and hatchery-born) spawners on the spawning ground. Generally some type of redd-count expansion or some other stream count of spawners. Redd = a gravel nest.
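A quick way to inspect the columns and count the ESA populations in each ESU (using dplyr, which loads with the tidyverse):

dplyr::glimpse(columbia.river)
columbia.river %>%
  dplyr::distinct(esu_dps, esapopname) %>%
  dplyr::count(esu_dps)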

Data plots

Let’s plot one ESU at a time. First, create a plotting function.

plotesu <- function(esuname){
  df <- columbia.river %>% subset(esu_dps %in% esuname)
  ggplot(df, aes(x=spawningyear, y=log(value), color=majorpopgroup)) +
    geom_point(size=0.2, na.rm = TRUE) +
    theme(strip.text.x = element_text(size = 3)) +
    theme(axis.text.x = element_text(size = 5, angle = 90)) +
    facet_wrap(~esapopname) +
    ggtitle(paste0(esuname, collapse="\n"))
}
plotesu(esu[3])

plotesu(esu[4])

plotesu(esu[5])

plotesu(esu[1])

df <- columbia.river %>% subset(species == "Chinook salmon")
ggplot(df, aes(x=spawningyear, y=log(value), color=run)) + 
  geom_point(size=0.2, na.rm = TRUE) +
  theme(strip.text.x = element_text(size = 3)) +
  theme(axis.text.x = element_text(size = 5, angle = 90)) + 
  facet_wrap(~esapopname)

Tasks for each group

  1. Create estimates of spawner abundance for all missing years and provide estimates of the decline from the historical abundance.

  2. Evaluate support for the major population groups. Are the populations in the groups more correlated than outside the groups?

  3. Evaluate the evidence of cycling in the data. We will talk about how to do this on the Tuesday after lab.

Tips

Simplify

If your ESU has many populations, start with a smaller set of 4-7 populations.
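For example, here is a sketch of how you might keep just the first few populations; the choice of four and the use of esuname (defined as in the sample code below) are purely illustrative.

df.esu <- columbia.river %>% subset(esu_dps == esuname) # just your ESU
keep <- head(unique(df.esu$esapopname), 4) # first four populations (illustrative)
df.small <- df.esu %>% subset(esapopname %in% keep)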

Assumptions

You can assume that R="diagonal and equal" and A="scaling". Assume that “historical” means the earliest years available for your group.
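Under those assumptions, a minimal model list might look like the sketch below. The U and Q choices are only examples; comparing alternatives is part of the tasks.

mod.list <- list(
  R = "diagonal and equal",  # assumed observation error structure
  A = "scaling",             # assumed scaling of the y's relative to the x's
  U = "unequal",             # example choice: each x has its own trend
  Q = "diagonal and unequal" # example choice: independent process errors
)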

States

Your abundance estimates are the “x” or “state” estimates. You can get these from

fit$states

or

tsSmooth(fit)

where fit is from fit <- MARSS()
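Here is a sketch of turning the smoothed states into an estimate of the decline from historical abundance. It assumes “historical” means the first year in your data set (per the assumption above) and that the states are on the log scale.

states <- fit$states # one row per state, one column per year
decline <- 1 - exp(states[, ncol(states)] - states[, 1]) # fraction lost since the first year
round(100 * decline, 1) # percent decline for each state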

Plotting

To plot the estimates of the mean of the spawner counts based on your x model:

autoplot(fit, plot.type="fitted.ytT")
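If you prefer to plot the states themselves, here is a base-R sketch using fit$states; it assumes your data matrix is called dat with years as the column names, as in the sample code later in the lab.

yrs <- as.numeric(colnames(dat))
matplot(yrs, t(fit$states), type = "l", lty = 1, col = 1:nrow(fit$states),
        xlab = "Spawning year", ylab = "log spawners")
legend("topleft", legend = rownames(fit$states), col = 1:nrow(fit$states),
       lty = 1, cex = 0.7)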

Diagnostics

autoplot(fit, plot.type="residuals")
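Beyond the residuals plot, here is a quick sketch for checking autocorrelation in the one-step-ahead model residuals; it assumes MARSSresiduals() returns the residuals matrix in its model.residuals element.

resids <- MARSSresiduals(fit, type = "tt1")$model.residuals # one row per time series
apply(resids, 1, function(x) acf(x, plot = FALSE, na.action = na.pass)$acf[2]) # lag-1 autocorrelation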

Address the following in your methods

  • Describe your assumptions about the x and how the data time series are related to x.

    • How are the x and y (data) related? Will you assume one x for each y, one x for all y, or one x for each major population group? How will you choose?
    • What will you assume about the U for the x’s?
    • What will you assume about the Q matrix?
  • Write out your assumptions as different models in matrix form, fit each, and then compare them with AIC or AICc (see the sketch after this list).

  • Do your estimates differ depending on the assumptions you make about the structure of the data, i.e., your assumptions about the x’s, Q, and U?
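Here is a sketch (not a prescription) of fitting two alternative structures and comparing them with AICc. The Z grouping in the second model is hypothetical; the factor needs one entry per row (population) of your data matrix dat.

mod.A <- list(U = "unequal", Q = "diagonal and unequal", R = "diagonal and equal")
mod.B <- list(U = "unequal", Q = "unconstrained", R = "diagonal and equal",
              Z = factor(c("g1", "g1", "g2", "g2"))) # hypothetical 2-group structure
fit.A <- MARSS(dat, model = mod.A, silent = TRUE)
fit.B <- MARSS(dat, model = mod.B, silent = TRUE)
c(A = fit.A$AICc, B = fit.B$AICc) # smaller AICc = more support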

Sample code

Here I show how I might analyze the Upper Columbia Steelhead data.

Figure from 2022 5-Year Review: Summary & Evaluation of Upper Columbia River Spring-run Chinook Salmon and Upper Columbia River Steelhead. NMFS. West Coast Region. https://doi.org/10.25923/p4w5-dp31

Set up the data. We need the time series in a matrix with time across the columns.

library(dplyr)
esuname <- esu[2]
dat <- columbia.river %>% 
  subset(esu_dps == esuname) %>% # get only this ESU
  mutate(log.spawner = log(value)) %>% # create a column called log.spawner
  select(esapopname, spawningyear, log.spawner) %>% # get just the columns that I need
  pivot_wider(names_from = "esapopname", values_from = "log.spawner") %>% 
  column_to_rownames(var = "spawningyear") %>% # make the years rownames
  as.matrix() %>% # turn into a matrix with year down the rows
  t() # make time across the columns
# convert any NaN to NA (is.na() is TRUE for NaN); MARSS complains if I don't do this
dat[is.na(dat)] <- NA

Clean up the row names

tmp <- rownames(dat)
tmp <- stringr::str_replace(tmp, "Steelhead [(]Upper Columbia River DPS[)]", "")
tmp <- stringr::str_replace(tmp, "River - summer", "")
tmp <- stringr::str_trim(tmp)
rownames(dat) <- tmp

Specify a model

mod.list1 <- list(
  U = "unequal",            # each state gets its own trend (drift)
  R = "diagonal and equal", # one observation error variance shared by all time series
  Q = "unconstrained"       # all process variances and covariances estimated
)

Fit the model. In this case, the BFGS algorithm is faster.

library(MARSS)
fit1 <- MARSS(dat, model=mod.list1, method="BFGS")
## Success! Converged in 235 iterations.
## Function MARSSkfas used for likelihood calculation.
## 
## MARSS fit is
## Estimation method: BFGS 
## Estimation converged in 235 iterations. 
## Log-likelihood: -109.4078 
## AIC: 256.8155   AICc: 262.1676   
##  
##                Estimate
## R.diag          0.00997
## U.X.Entiat      0.02182
## U.X.Methow      0.01852
## U.X.Okanogan    0.00140
## U.X.Wenatchee  -0.02222
## Q.(1,1)         0.28016
## Q.(2,1)         0.12303
## Q.(3,1)         0.14275
## Q.(4,1)         0.23415
## Q.(2,2)         0.31642
## Q.(3,2)         0.30806
## Q.(4,2)         0.19061
## Q.(3,3)         0.31031
## Q.(4,3)         0.18852
## Q.(4,4)         0.52813
## x0.X.Entiat     4.61647
## x0.X.Methow     6.43401
## x0.X.Okanogan   6.47217
## x0.X.Wenatchee  8.04868
## Initial states (x0) defined at t=0
## 
## Standard errors have not been calculated. 
## Use MARSSparamCIs to compute CIs and bias estimates.

Hmmm, the Q variance is so high that the states track the data almost perfectly. That doesn’t seem right.

autoplot(fit1, plot.type="fitted.ytT")
## MARSSresiduals.tT reported warnings. See msg element or attribute of returned residuals object.

## plot.type = fitted.ytT

## Finished plots.

Let’s look at the corrplot. Interesting. The Methow and Okanogan are almost perfectly correlated, while the Entiat and Wenatchee are moderately correlated. That makes sense if you look at a map.

library(corrplot)
## corrplot 0.92 loaded
Q <- coef(fit1, type="matrix")$Q
corrmat <- diag(1/sqrt(diag(Q))) %*% Q %*% diag(1/sqrt(diag(Q)))
corrplot(corrmat)

I need to use the EM algorithm (remove method="BFGS") because the BFGS algorithm doesn’t allow constraints on the Q matrix.

mod.list2 <- list(
  U = "unequal",
  R = "diagonal and equal",
  Q = "equalvarcov" # one shared process variance and one shared covariance
)
fit2 <- MARSS(dat, model=mod.list2, control = list(maxit=1000))
## Success! abstol and log-log tests passed at 794 iterations.
## Alert: conv.test.slope.tol is 0.5.
## Test with smaller values (<0.1) to ensure convergence.
## 
## MARSS fit is
## Estimation method: kem 
## Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
## Estimation converged in 794 iterations. 
## Log-likelihood: -120.6028 
## AIC: 263.2057   AICc: 264.9657   
##  
##                Estimate
## R.diag           0.1290
## U.X.Entiat       0.0257
## U.X.Methow       0.0311
## U.X.Okanogan     0.0166
## U.X.Wenatchee   -0.0282
## Q.diag           0.2632
## Q.offdiag        0.2631
## x0.X.Entiat      4.2026
## x0.X.Methow      5.9042
## x0.X.Okanogan    5.8359
## x0.X.Wenatchee   8.0703
## Initial states (x0) defined at t=0
## 
## Standard errors have not been calculated. 
## Use MARSSparamCIs to compute CIs and bias estimates.
autoplot(fit2, plot.type="fitted.ytT")
## MARSSresiduals.tT reported warnings. See msg element or attribute of returned residuals object.

## plot.type = fitted.ytT

## Finished plots.

Now I want to try something different. I will treat the Methow-Okanogan as one state and the Entiat-Wenatchee as another, and let the two states be correlated. Interesting: these two states are estimated to be perfectly correlated.

mod.list3 <- mod.list1
mod.list3$Q <- "unconstrained"
mod.list3$Z <- factor(c("ew", "mo", "mo", "ew")) # Entiat & Wenatchee -> "ew", Methow & Okanogan -> "mo"
fit3 <- MARSS(dat, model = mod.list3)
## Warning! Reached maxit before parameters converged. Maxit was 500.
##  neither abstol nor log-log convergence tests were passed.
## 
## MARSS fit is
## Estimation method: kem 
## Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
## WARNING: maxit reached at  500  iter before convergence.
##  Neither abstol nor log-log convergence test were passed.
##  The likelihood and params are not at the ML values.
##  Try setting control$maxit higher.
## Log-likelihood: -137.532 
## AIC: 295.064   AICc: 296.5209   
##  
##             Estimate
## A.Okanogan  -0.68779
## A.Wenatchee  1.54127
## R.diag       0.18062
## U.ew        -0.02175
## U.mo         0.00374
## Q.(1,1)      0.22050
## Q.(2,1)      0.22103
## Q.(2,2)      0.22164
## x0.ew        6.51468
## x0.mo        7.33795
## Initial states (x0) defined at t=0
## 
## Standard errors have not been calculated. 
## Use MARSSparamCIs to compute CIs and bias estimates.
## 
## Convergence warnings
##  Warning: the  logLik  parameter value has not converged.
##  Type MARSSinfo("convergence") for more info on this warning.
autoplot(fit3, plot.type="fitted.ytT")

## plot.type = fitted.ytT

## Finished plots.

Finally, let’s look at the AICc values. Fit 1 was very flexible and could put a line through the data, so I know I have at least one model in the set that can fit the data. Not surprisingly, the most flexible model is the best by AICc. At this point, I’d like to look at just the data after 1980 or so. I don’t like the big dip in the Wenatchee River data. I’d want to talk to the biologists to find out what happened, especially because I know that there might be hatchery releases in this system.

aic <- c(fit1$AICc, fit2$AICc, fit3$AICc)
aic-min(aic)
## [1]  0.00000  2.79807 34.35331
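If you want a single summary of relative support, the AICc differences can be converted to Akaike weights:

delta <- aic - min(aic)
round(exp(-0.5 * delta) / sum(exp(-0.5 * delta)), 3) # Akaike weights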

Resources

Chapter 7 MARSS models. ATSA Lab Book. https://atsa-es.github.io/atsa-labs/chap-mss.html

Chapter 8 MARSS models with covariates. ATSA Lab Book. https://atsa-es.github.io/atsa-labs/chap-msscov.html

Chapter 16 Modeling cyclic sockeye. ATSA Lab Book. https://atsa-es.github.io/atsa-labs/chap-cyclic-sockeye.html