Characteristics of time series
- Expectation, mean & variance
- Covariance & correlation
- Stationarity
- Autocovariance & autocorrelation
- Correlograms
White noise
Random walks
Backshift & difference operators
The expectation (\(E\)) of a variable is its mean value in the population
\(\text{E}(x) \equiv\) mean of \(x = \mu\)
We can estimate \(\mu\) from a sample as
\[ m = \frac{\sum_{i=1}^N{x_i}}{N} \]
\(\text{E}([x - \mu]^2) \equiv\) expected squared deviation of \(x\) about \(\mu\)
\(\text{E}([x - \mu]^2) \equiv\) variance of \(x = \sigma^2\)
We can estimate \(\sigma^2\) from a sample as
\[ s^2 = \frac{1}{N-1}\sum_{i=1}^N{(x_i - m)^2} \]
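As a quick numerical check of these estimators, here is a minimal sketch in Python (using NumPy; the simulated data and parameter values are assumptions made only for this example) that computes \(m\) and \(s^2\) by hand and compares them with the built-in functions.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=3.0, scale=2.0, size=200)  # sample from a population with mu = 3, sigma = 2
N = len(x)

m = x.sum() / N                        # sample mean, an estimate of mu
s2 = ((x - m) ** 2).sum() / (N - 1)    # sample variance, an estimate of sigma^2

# NumPy's built-ins agree (ddof=1 gives the N-1 denominator)
assert np.isclose(m, x.mean())
assert np.isclose(s2, x.var(ddof=1))
print(round(m, 3), round(s2, 3))
```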
If we have two variables, \(x\) and \(y\), we can generalize variance
\[ \sigma^2 = \text{E}([x_i - \mu][x_i - \mu]) \]
into covariance
\[ \gamma_{x,y} = \text{E}([x_i - \mu_x][y_i - \mu_y]) \]
We can estimate \(\gamma_{x,y}\) from a sample as
\[ \text{Cov}(x,y) = \frac{1}{N-1}\sum_{i=1}^N{(x_i - m_x)(y_i - m_y)} \]
Correlation is a dimensionless measure of the linear association between 2 variables, \(x\) & \(y\)
It is simply the covariance standardized by the standard deviations
\[ \rho_{x,y} = \frac{\gamma_{x,y}}{\sigma_x \sigma_y} \]
\[ -1 \leq \rho_{x,y} \leq 1 \]
We can estimate \(\rho_{x,y}\) from a sample as
\[ \text{Cor}(x,y) = \frac{\text{Cov}(x,y)}{s_x s_y} \]
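The same idea extends directly to two variables. This minimal sketch (NumPy again; the simulated relationship between \(x\) and \(y\) is just for illustration) computes \(\text{Cov}(x,y)\) and \(\text{Cor}(x,y)\) from the formulas above and checks them against `np.cov` and `np.corrcoef`.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(scale=0.8, size=100)      # y is linearly related to x
N = len(x)

m_x, m_y = x.mean(), y.mean()
cov_xy = ((x - m_x) * (y - m_y)).sum() / (N - 1)   # sample covariance
cor_xy = cov_xy / (x.std(ddof=1) * y.std(ddof=1))  # standardize -> correlation

assert np.isclose(cov_xy, np.cov(x, y)[0, 1])
assert np.isclose(cor_xy, np.corrcoef(x, y)[0, 1])
print(round(cov_xy, 3), round(cor_xy, 3))          # cor_xy always lies in [-1, 1]
```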
Consider a single value, \(x_t\)
\(\text{E}(x_t)\) is taken across an ensemble of all possible time series
If \(\text{E}(x_t)\) is constant across time, we say the time series is stationary in the mean
Stationarity is a convenient assumption that allows us to describe the statistical properties of a time series.
In general, a time series is said to be stationary if there is no systematic change in the mean or variance, no systematic trend, and no periodic variations or seasonality
Our eyes are really bad at identifying stationarity, so we will learn some tools to help us
For stationary ts, we define the autocovariance function (\(\gamma_k\)) as
\[ \gamma_k = \text{E}([x_t - \mu][x_{t+k} - \mu]) \]
which means that
\[ \gamma_0 = \text{E}([x_t - \mu][x_{t} - \mu]) = \sigma^2 \]
"Smooth" series have large ACVF for large \(k\)
"Choppy" series have ACVF near 0 for small \(k\)
We can estimate \(\gamma_k\) from a sample as
\[ c_k = \frac{1}{N}\sum_{t=1}^{N-k}{(x_t - m)(x_{t+k} - m)} \]
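Written directly from the formula above, the sample autocovariance takes only a few lines. This sketch (NumPy; the white-noise test series is an assumption for the example) keeps the \(1/N\) divisor used in the definition of \(c_k\).

```python
import numpy as np

def sample_acvf(x, k):
    """Sample autocovariance c_k, using the 1/N divisor shown above."""
    x = np.asarray(x, dtype=float)
    N, m = len(x), np.mean(x)
    return np.sum((x[: N - k] - m) * (x[k:] - m)) / N

rng = np.random.default_rng(7)
x = rng.normal(size=500)
print([round(sample_acvf(x, k), 3) for k in range(5)])  # c_0 approximates sigma^2 = 1
```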
The autocorrelation function (ACF) is simply the ACVF normalized by the variance
\[ \rho_k = \frac{\gamma_k}{\sigma^2} = \frac{\gamma_k}{\gamma_0} \]
The ACF measures the correlation of a time series against a time-shifted version of itself
We can estimate ACF from a sample as
\[ r_k = \frac{c_k}{c_0} \]
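Normalizing by \(c_0\) gives the sample ACF. As a sanity check, this sketch compares the hand-rolled estimate with `acf()` from statsmodels (using statsmodels here is an assumption of the example, not part of the notes).

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def sample_acf(x, max_lag):
    """Sample ACF r_k = c_k / c_0 for k = 0, ..., max_lag (1/N divisor in c_k)."""
    x = np.asarray(x, dtype=float)
    N, m = len(x), np.mean(x)
    c = np.array([np.sum((x[: N - k] - m) * (x[k:] - m)) / N for k in range(max_lag + 1)])
    return c / c[0]

rng = np.random.default_rng(7)
x = rng.normal(size=500)
r = sample_acf(x, 10)
assert np.allclose(r, acf(x, nlags=10, fft=False))  # statsmodels uses the same estimator
print(np.round(r, 3))  # r_0 = 1; the remaining lags are near 0 for this white-noise sample
```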
The ACF has several important properties:
- \(-1 \leq r_k \leq 1\)
- \(r_k = r_{-k}\), so the ACF is symmetric about lag 0
- \(r_0 = 1\) by definition
Recall the transitive property, whereby
If \(A = B\) and \(B = C\), then \(A = C\)
which suggests that
If \(x \propto y\) and \(y \propto z\), then \(x \propto z\)
and thus
If \(x_t \propto x_{t+1}\) and \(x_{t+1} \propto x_{t+2}\), then \(x_t \propto x_{t+2}\)
The partial autocorrelation function (\(\phi_k\)) measures the correlation between \(x_t\) and \(x_{t+k}\) with the linear dependence of the intervening values \(\{x_{t+1},x_{t+2},\dots,x_{t+k-1}\}\) removed
We can estimate \(\phi_k\) from a sample as
\[ \phi_k = \begin{cases} \text{Cor}(x_1,x_0) = \rho_1 & \text{if } k = 1 \\ \text{Cor}(x_k-x_k^{k-1}, x_0-x_0^{k-1}) & \text{if } k \geq 2 \end{cases} \]
\[ x_k^{k-1} = \beta_1 x_{k-1} + \beta_2 x_{k-2} + \dots + \beta_{k-1} x_1 \]
\[ x_0^{k-1} = \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_{k-1} x_{k-1} \]
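To make the regression-based definition concrete, this sketch (an illustration only; the AR(1)-style test series and the comparison against statsmodels' `pacf()` are assumptions of the example) estimates \(\phi_k\) by regressing the two endpoints on the intervening values and correlating the residuals.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

def sample_pacf(x, max_lag):
    """PACF via the regression definition: correlate the residuals of x_t and
    x_{t-k} after regressing each on the k-1 values in between."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    out = [1.0]                                  # lag 0 is 1 by convention
    for k in range(1, max_lag + 1):
        late, early = x[k:], x[: N - k]          # x_t and x_{t-k}
        if k == 1:
            resid_late, resid_early = late, early
        else:
            # columns are the intervening values x_{t-1}, ..., x_{t-k+1}
            Z = np.column_stack([x[k - j : N - j] for j in range(1, k)])
            resid_late = late - Z @ np.linalg.lstsq(Z, late, rcond=None)[0]
            resid_early = early - Z @ np.linalg.lstsq(Z, early, rcond=None)[0]
        out.append(np.corrcoef(resid_late, resid_early)[0, 1])
    return np.array(out)

rng = np.random.default_rng(3)
x = np.zeros(500)
for t in range(1, 500):                          # AR(1)-like series: clear spike at lag 1
    x[t] = 0.7 * x[t - 1] + rng.normal()

print(np.round(sample_pacf(x, 5), 2))            # the two estimates should agree closely
print(np.round(pacf(x, nlags=5, method="ols"), 2))
```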
The ACF & PACF will be very useful for identifying the orders of ARMA models
Often we want to look for relationships between 2 different time series
We can extend the notion of covariance to cross-covariance
\[ \gamma^{x,y}_k = \text{E}([x_t - \mu_x][y_{t+k} - \mu_y]) \]
We can estimate \(\gamma^{x,y}_k\) from a sample as
\[ g^{x,y}_k = \frac{1}{N}\sum_{t=1}^{N-k}{(x_t - m_x)(y_{t+k} - m_y)} \]
The cross-correlation function is the CCVF normalized by the standard deviations of \(x\) & \(y\)
\[ r^{x,y}_k = \frac{g^{x,y}_k}{s_x s_y} \]
Just as with other measures of correlation
\[ -1 \leq r^{x,y}_k \leq 1 \]
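A brief sketch of the sample CCVF/CCF from the formulas above (NumPy; the lag-2 relationship between the two simulated series is an assumption chosen so the peak is easy to see). With the sign convention used here, a positive \(k\) compares \(x_t\) with the later value \(y_{t+k}\).

```python
import numpy as np

def sample_ccf(x, y, max_lag):
    """Sample cross-correlation r^{x,y}_k for k = 0, ..., max_lag."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x)
    m_x, m_y = x.mean(), y.mean()
    g = np.array([np.sum((x[: N - k] - m_x) * (y[k:] - m_y)) / N
                  for k in range(max_lag + 1)])
    return g / (x.std(ddof=1) * y.std(ddof=1))

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.roll(x, 2) + rng.normal(scale=0.3, size=300)  # y echoes x two steps later
print(np.round(sample_ccf(x, y, 4), 2))              # largest value at k = 2
```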
SOME SIMPLE MODELS
A time series \(\{w_t\}\) is discrete white noise if its values are
- independent
- identically distributed with a mean of zero
Note that the distributional form for \(\{w_t\}\) is flexible
We often assume so-called Gaussian white noise, whereby
\[ w_t \sim \text{N}(0,\sigma^2) \]
and the following apply as well
autocovariance: \(\gamma_k = \begin{cases} \sigma^2 & \text{if } k = 0 \\ 0 & \text{if } k \geq 1 \end{cases}\)
autocorrelation: \(\rho_k = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{if } k \geq 1 \end{cases}\)
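A short simulation makes these properties concrete: for Gaussian white noise the sample autocovariance at lag 0 is close to \(\sigma^2\) and the nonzero lags are close to 0 (only approximately so, since this is a finite sample; the NumPy simulation and parameter values are assumptions of the example).

```python
import numpy as np

rng = np.random.default_rng(123)
sigma = 1.5
w = rng.normal(loc=0.0, scale=sigma, size=1000)    # Gaussian white noise w_t ~ N(0, sigma^2)

m = w.mean()
c0 = np.sum((w - m) ** 2) / len(w)                 # sample gamma_0, close to sigma^2 = 2.25
c1 = np.sum((w[:-1] - m) * (w[1:] - m)) / len(w)   # sample gamma_1, close to 0

print(round(m, 3), round(c0, 3), round(c1, 3), round(c1 / c0, 3))  # last value is rho_1
```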
A time series \(\{x_t\}\) is a random walk if
- \(x_t = x_{t-1} + w_t\)
- \(w_t\) is white noise
The following apply to random walks
mean: \(\mu_x = 0\)
autocovariance: \(\gamma_k(t) = t \sigma^2\)
autocorrelation: \(\rho_k(t) = \frac{t \sigma^2}{\sqrt{t \sigma^2(t + k) \sigma^2}}\)
Note: Random walks are not stationary
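Because \(x_t\) is just the cumulative sum of the shocks, a random walk is easy to simulate. This sketch (NumPy, with arbitrary parameter choices) also illustrates the non-stationarity noted above: the variance across many simulated walks grows roughly like \(t\sigma^2\) rather than staying constant.

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_walks, sigma = 200, 1000, 1.0

w = rng.normal(scale=sigma, size=(n_walks, n_steps))  # shocks; each row is one walk
x = np.cumsum(w, axis=1)                              # x_t = x_{t-1} + w_t with x_0 = 0

# the across-walk variance at time t is not constant -- it grows like t * sigma^2,
# which is why a random walk is not stationary
for t in (10, 50, 200):
    print(t, round(x[:, t - 1].var(), 1))             # compare with t * sigma^2 = t
```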
SOME IMPORTANT OPERATORS
The backshift operator (\(\mathbf{B}\)) is an important function in time series analysis, which we define as
\[ \mathbf{B} x_t = x_{t-1} \]
or more generally as
\[ \mathbf{B}^k x_t = x_{t-k} \]
For example, a random walk with
\[ x_t = x_{t-1} + w_t \]
can be written as
\[ \begin{align} x_t &= \mathbf{B} x_t + w_t \\ x_t - \mathbf{B} x_t &= w_t \\ (1 - \mathbf{B}) x_t &= w_t \\ x_t &= (1 - \mathbf{B})^{-1} w_t \end{align} \]
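In code, the backshift operator is simply a one-step lag. This sketch uses pandas (an assumption made here because its `shift()` makes the lag explicit) to show \(\mathbf{B}\), \(\mathbf{B}^2\), and \((1 - \mathbf{B})\) applied to a short simulated walk.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
x = pd.Series(np.cumsum(rng.normal(size=10)))   # a short random walk

Bx = x.shift(1)      # backshift: (B x)_t = x_{t-1}
B2x = x.shift(2)     # B^2 x_t = x_{t-2}
w = x - Bx           # (1 - B) x_t, i.e. the shocks driving the walk

print(pd.DataFrame({"x_t": x, "B x_t": Bx, "B^2 x_t": B2x, "(1-B) x_t": w}).round(2))
```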
The difference operator (\(\nabla\)) is another important function in time series analysis, which we define as
\[ \nabla x_t = x_t - x_{t-1} \]
For example, first-differencing a random walk yields white noise
\[ \begin{align} x_t &= x_{t-1} + w_t \\ x_t - x_{t-1} &= x_{t-1} + w_t - x_{t-1} \\ \nabla x_t &= w_t \end{align} \]
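This is easy to verify by simulation: differencing a simulated random walk recovers the white-noise shocks (a minimal NumPy sketch; `np.diff()` computes \(x_t - x_{t-1}\)).

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=500)       # white-noise shocks
x = np.cumsum(w)               # random walk: x_t = x_{t-1} + w_t, with x_1 = w_1

dx = np.diff(x)                # nabla x_t = x_t - x_{t-1}
assert np.allclose(dx, w[1:])  # the differences recover the shocks w_2, ..., w_N
```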
The difference operator and the backshift operator are related
\[ \nabla^k = (1 - \mathbf{B})^k \]
For example
\[ \begin{align} \nabla x_t &= (1 - \mathbf{B})x_t \\ x_t - x_{t-1} &= x_t - \mathbf{B} x_t \\ x_t - x_{t-1} &= x_t - x_{t-1} \end{align} \]
Differencing is a simple means for removing a trend
The 1st-difference removes a linear trend; a 2nd-difference would remove a quadratic trend, etc.
Differencing is a simple means for removing a seasonal effect
Using a 1st-order difference at lag \(k\) equal to the seasonal period removes both trend & seasonal effects
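A small illustration of both points (the trend, the period-12 cycle, and the noise level below are assumptions made only for this example): a 1st-difference strips the linear trend, while a lag-12 difference removes both the trend and the monthly seasonal cycle.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(240)                                   # 20 "years" of monthly data
x = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) \
    + rng.normal(scale=0.3, size=t.size)             # trend + seasonal cycle + noise

d1 = np.diff(x)        # 1st-difference: removes the linear trend (seasonality remains)
d12 = x[12:] - x[:-12] # lag-12 difference: removes both the seasonal cycle and the trend

# the variance drops sharply once the systematic parts have been differenced away
print(round(np.var(x), 2), round(np.var(d1), 2), round(np.var(d12), 2))
```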