8.4 Process-error only model

Now let’s model the data as an autoregressive process observed without error, and incorporate the covariates into the process model. Note that this is quite different from a typical linear regression model. The \(\mathbf{x}\) part represents our model of the data themselves (in this case, the plankton species). How is this different from the model with autoregressive observation errors? There, the autoregression was in the error term; here, we model the data as autoregressive, so the abundance at \(t-1\) directly affects the abundance at \(t\). Population abundances are inherently autoregressive, so this model is a bit closer to the underlying mechanism generating the data. Here is our new process model for plankton abundance. \[\begin{equation} \mathbf{x}_t = \mathbf{x}_{t-1} + \mathbf{C}\mathbf{c}_t + \mathbf{w}_t, \text{ where } \mathbf{w}_t \sim \text{MVN}(0,\mathbf{Q}) \tag{8.4} \end{equation}\] We can fit this as follows:

R <- A <- U <- "zero"  # no observation error and no offsets
B <- Z <- "identity"   # random-walk states, observed directly
Q <- "equalvarcov"     # equal variances, one shared covariance
C <- "unconstrained"   # estimate all covariate effects
model.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R, 
    C = C, c = covariates)
kem <- MARSS(dat, model = model.list)
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.

MARSS fit is
Estimation method: kem 
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached. 
Log-likelihood: -285.0732 
AIC: 586.1465   AICc: 586.8225   
 
                      Estimate
Q.diag                  0.7269
Q.offdiag              -0.0210
x0.X.Greens            -0.5189
x0.X.Bluegreens        -0.2431
C.(X.Greens,Temp)      -0.0434
C.(X.Bluegreens,Temp)   0.0988
C.(X.Greens,TP)        -0.0589
C.(X.Bluegreens,TP)     0.0104
Initial states (x0) defined at t=0

Standard errors have not been calculated. 
Use MARSSparamCIs to compute CIs and bias estimates.
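
The fit object stores its likelihood-based statistics, which makes comparing the models in this chapter straightforward. A minimal sketch, where `kem.obs` is a hypothetical name for a saved fit from the previous observation-error section (marssMLE objects expose `$logLik` and `$AICc`):

```r
# Hypothetical comparison: `kem.obs` stands in for a saved fit from the
# observation-error section; both objects carry $logLik and $AICc.
rbind(proc.err = c(logLik = kem$logLik, AICc = kem$AICc),
      obs.err  = c(logLik = kem.obs$logLik, AICc = kem.obs$AICc))
```

Lower AICc indicates better support, penalizing the extra parameters a model carries.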

Now it looks like temperature has a negative effect on green algae? Also, our log-likelihood dropped a lot. Well, the data do not look at all like a random walk (a model where \(\mathbf{B}\) is the identity matrix), which we can see from the plot of the data (Figure 8.1). The data fluctuate about some mean, so let’s switch to a better autoregressive model: a mean-reverting model. To do this, we will allow the diagonal elements of \(\mathbf{B}\) to be something other than 1.

model.list$B <- "diagonal and unequal"
kem <- MARSS(dat, model = model.list)
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.

MARSS fit is
Estimation method: kem 
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached. 
Log-likelihood: -236.6106 
AIC: 493.2211   AICc: 494.2638   
 
                              Estimate
B.(X.Greens,X.Greens)           0.1981
B.(X.Bluegreens,X.Bluegreens)   0.7672
Q.diag                          0.4899
Q.offdiag                      -0.0221
x0.X.Greens                    -1.2915
x0.X.Bluegreens                -0.4179
C.(X.Greens,Temp)               0.2844
C.(X.Bluegreens,Temp)           0.1655
C.(X.Greens,TP)                 0.0332
C.(X.Bluegreens,TP)             0.1340
Initial states (x0) defined at t=0

Standard errors have not been calculated. 
Use MARSSparamCIs to compute CIs and bias estimates.

Notice that the log-likelihood goes up quite a bit (and AICc drops from 586.8 to 494.3), which means that the mean-reverting model fits the data much better.
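
Why does letting the diagonal of \(\mathbf{B}\) drop below 1 produce mean reversion? A univariate base-R sketch (illustrative values, not from the fits above): with \(|b|<1\), the AR(1) process \(x_t = b x_{t-1} + c + w_t\) fluctuates around a stationary mean \(c/(1-b)\), whereas \(b=1\) gives a random walk with no mean to revert to.

```r
# Simulate a univariate mean-reverting AR(1): x_t = b*x_{t-1} + c + w_t.
# With |b| < 1 the stationary mean is c / (1 - b); here 0.6 / 0.3 = 2.
set.seed(1)
b <- 0.7; c_ <- 0.6; q <- 0.25  # illustrative values only
n <- 5000
x <- numeric(n)
for (t in 2:n) x[t] <- b * x[t - 1] + c_ + rnorm(1, 0, sqrt(q))
mean(x[-(1:100)])  # close to c_ / (1 - b) = 2 after discarding burn-in
```

Setting `b <- 1` in the same simulation produces a wandering random walk with no tendency to return to any level.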

With this model, we are estimating \(\mathbf{x}_0\). If we set model.list$tinitx = 1 without also fixing x0, we will get an error message that the \(\mathbf{R}\) diagonal elements are equal to 0 and we need to fix x0. Because \(\mathbf{R}=0\), if we define the initial states at \(t=1\), they are fully determined by the data, so we fix x0 to the data at \(t=1\):

x0 <- dat[, 1, drop = FALSE]  # data at t = 1, kept as a column matrix
model.list$tinitx <- 1        # define the initial states at t = 1
model.list$x0 <- x0           # fix x0 at the t = 1 data
kem <- MARSS(dat, model = model.list)
Success! algorithm run for 15 iterations. abstol and log-log tests passed.
Alert: conv.test.slope.tol is 0.5.
Test with smaller values (<0.1) to ensure convergence.

MARSS fit is
Estimation method: kem 
Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
Algorithm ran 15 (=minit) iterations and convergence was reached. 
Log-likelihood: -235.4827 
AIC: 486.9653   AICc: 487.6414   
 
                              Estimate
B.(X.Greens,X.Greens)           0.1980
B.(X.Bluegreens,X.Bluegreens)   0.7671
Q.diag                          0.4944
Q.offdiag                      -0.0223
C.(X.Greens,Temp)               0.2844
C.(X.Bluegreens,Temp)           0.1655
C.(X.Greens,TP)                 0.0332
C.(X.Bluegreens,TP)             0.1340
Initial states (x0) defined at t=1

Standard errors have not been calculated. 
Use MARSSparamCIs to compute CIs and bias estimates.
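
As the output notes, standard errors are not computed by default. A sketch of adding them with `MARSSparamCIs()`, assuming `kem` is the marssMLE fit object returned by the `MARSS()` call above:

```r
# Add approximate standard errors and CIs to the fit; `kem` is assumed
# to be the marssMLE object returned by MARSS() above.
kem.CIs <- MARSSparamCIs(kem)  # default method is Hessian-based
print(kem.CIs)                 # estimates now printed with SEs and CIs
```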