Return accuracy metrics
This is a method for the generic accuracy function in the generics package. It is written to mimic the output from the accuracy function in the forecast package. See that package for details.
The measures calculated are:
ME: Mean Error
RMSE: Root Mean Squared Error
MAE: Mean Absolute Error
MPE: Mean Percentage Error
MAPE: Mean Absolute Percentage Error
MASE: Mean Absolute Scaled Error
ACF1: Autocorrelation of errors at lag 1.
The MASE calculation is scaled using the MAE of the training-set naive forecasts, which are simply \(\mathbf{y}_{t-1}\).
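As a rough sketch (not code from the package) of this scaling for a single series, the MASE can be computed by hand; y and pred below are hypothetical training data and one-step-ahead predictions:

y    <- c(6.1, 6.3, 6.2, 6.5, 6.7)   # hypothetical training series
pred <- c(6.0, 6.2, 6.3, 6.4, 6.6)   # hypothetical one-step-ahead predictions
mae.pred  <- mean(abs(y - pred), na.rm = TRUE)  # MAE of the predictions
mae.naive <- mean(abs(diff(y)), na.rm = TRUE)   # MAE of naive forecasts y[t-1]
mase <- mae.pred / mae.naive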
For the training data, the metrics are shown for the one-step-ahead predictions by default (type="ytt1"). This is the prediction of \(\mathbf{y}_t\) conditioned on the data up to \(t-1\) (and the model estimated from all the data). With type="ytT", you can compute the metrics for the fitted ytT, which is the expected value of new data at \(t\) conditioned on all the data. type does not affect test data (forecasts are past the end of the training data).
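For example, assuming fit is a fitted marssMLE object (such as the one created in the Examples below), the two sets of training metrics can be compared:

accuracy(fit)                 # default: one-step-ahead (type = "ytt1") metrics
accuracy(fit, type = "ytT")   # metrics based on the fitted ytT values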
Usage
# S3 method for marssPredict
accuracy(object, x, test = NULL, type = "ytt1", verbose = FALSE, ...)
# S3 method for marssMLE
accuracy(object, x, test = NULL, type = "ytt1", verbose = FALSE, ...)
Arguments
- object
A marssMLE or marssPredict object.
- x
A matrix or data frame with data to test against the h steps of a forecast.
- test
Which time steps in the training data (the data the model was fit to) to compute accuracy for.
- type
type="ytt1" is the one-step-ahead predictions. type="ytT" is the fitted ytT predictions. The former are standardly used for training data prediction metrics.
- verbose
If TRUE, show metrics separately for each time series in the data.
- ...
Not used.
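A brief sketch of the optional arguments in use, again assuming fit is a fitted marssMLE object; that test accepts a vector of time-step indices as shown is an assumption (output not shown):

accuracy(fit, verbose = TRUE)   # report metrics separately for each time series
accuracy(fit, test = 1:6)       # assumed usage: metrics for training time steps 1 to 6 only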
References
Hyndman, R.J. and Koehler, A.B. (2006) "Another look at measures of forecast accuracy". International Journal of Forecasting, 22(4), 679-688.
Hyndman, R.J. and Athanasopoulos, G. (2018) "Forecasting: principles and practice", 2nd ed., OTexts, Melbourne, Australia. Section 3.4 "Evaluating forecast accuracy". https://otexts.com/fpp2/accuracy.html.
Examples
# Fit a MARSS model to 3 harbor seal time series, using the first 12
# time steps as training data; Z groups the sites into WA and OR
dat <- t(harborSeal)
dat <- dat[c(2, 11, 12),]
train.dat <- dat[,1:12]
fit <- MARSS(train.dat, model = list(Z = factor(c("WA", "OR", "OR"))))
#> Success! abstol and log-log tests passed at 79 iterations.
#> Alert: conv.test.slope.tol is 0.5.
#> Test with smaller values (<0.1) to ensure convergence.
#>
#> MARSS fit is
#> Estimation method: kem
#> Convergence test: conv.test.slope.tol = 0.5, abstol = 0.001
#> Estimation converged in 79 iterations.
#> Log-likelihood: 14.98234
#> AIC: -13.96467 AICc: -2.887748
#>
#> Estimate
#> A.OR.SouthCoast 0.8952
#> R.diag 0.0087
#> U.WA 0.1043
#> U.OR 0.0569
#> Q.(WA,WA) 0.0112
#> Q.(OR,OR) 0.0000
#> x0.WA 7.3171
#> x0.OR 6.2970
#> Initial states (x0) defined at t=0
#>
#> Standard errors have not been calculated.
#> Use MARSSparamCIs to compute CIs and bias estimates.
#>
# Training-set metrics (one-step-ahead predictions by default)
accuracy(fit)
#> ME RMSE MAE MPE MAPE MASE
#> Training set 0.000250726 0.1387191 0.1151958 -0.006414212 1.460056 0.216267
#> ACF1
#> Training set -0.1999844
# Compare to test data set
fr <- predict(fit, n.ahead=10)
test.dat <- dat[,13:22]
accuracy(fr, x=test.dat)
#> ME RMSE MAE MPE MAPE MASE
#> Training set 0.000250726 0.1387191 0.1151958 -0.006414212 1.460056 0.2162670
#> Test set -0.086565133 0.3712881 0.2964253 -0.940733987 3.662688 0.6105122
#> ACF1
#> Training set -0.1999844
#> Test set 0.7480767