# Theoretical questions regarding forecast and fable

Hi

Some theoretical questions:

Do the functions

```r
forecast::Acf
forecast::Pacf
feasts::ACF
feasts::PACF
```

use the Yule-Walker method or Burg's method?

Burg, J. (1967), "Maximum Entropy Spectral Analysis", Proceedings of the 37th Meeting of the Society of Exploration Geophysicists.

Also, for the following process:

`y(t) = mu + phi1*y(t-1) + beta*trend + eps(t)`

`y` = series
`mu` = constant
`phi1` = coefficient
`beta` = coefficient
`trend` = time trend
`eps` = white noise process

Using recursive back-substitution I can calculate `E[y(t)]` and `Var[y(t)]` for `phi1 = 1`. What are `E[y(t)]` and `Var[y(t)]` for `|phi1| < 1`?

thanks,
Amarjit

All four functions call `stats::acf()` or `stats::pacf()` to do the calculations. They are simply wrappers to provide more convenient output or improved plots.

The help file for `stats::acf()` and `stats::pacf()` gives a brief explanation. For ACF:

For `type = "correlation"` and `"covariance"`, the estimates are based on the sample covariance. (The lag 0 autocorrelation is fixed at 1 by convention.)

and for PACF:

The partial correlation coefficient is estimated by fitting autoregressive models of successively higher orders up to `lag.max`.

Later it gives the reference:

Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth Edition. Springer-Verlag.
(This contains the exact definitions used.)

If you look at the reference, you will find the standard covariance estimates of the ACF (p390), and the PACF defined using the Yule-Walker equations (on p398).
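Those two definitions can be checked numerically in base R. The following sketch (with an arbitrary simulated AR(1) series, `phi = 0.6`, and lags 1 to 5) reproduces the sample ACF from the sample covariances directly, and recovers the lag-`k` PACF as the last coefficient of a Yule-Walker AR(`k`) fit:

```r
set.seed(42)
n <- 500
x <- arima.sim(model = list(ar = 0.6), n = n)  # simulated AR(1), phi = 0.6

# Sample ACF at lag k: (1/n) * sum_t (x_t - xbar)(x_{t+k} - xbar), scaled by lag 0
xbar <- mean(x)
acov <- function(k) sum((x[1:(n - k)] - xbar) * (x[(k + 1):n] - xbar)) / n
manual_acf <- sapply(1:5, acov) / acov(0)
r_acf <- as.numeric(acf(x, lag.max = 5, plot = FALSE)$acf[-1])  # drop lag 0
stopifnot(all.equal(manual_acf, r_acf))

# Lag-k PACF: last coefficient of a Yule-Walker AR(k) fit
manual_pacf <- sapply(1:5, function(k)
  ar(x, aic = FALSE, order.max = k, method = "yule-walker")$ar[k])
r_pacf <- as.numeric(pacf(x, lag.max = 5, plot = FALSE)$acf)
stopifnot(all.equal(manual_pacf, r_pacf, tolerance = 1e-6))
```

The `stopifnot()` calls pass silently, confirming the estimators match the textbook definitions.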

If you want to see precisely how it is done, you can always look at the code. After all, R is open source. The code for `stats::acf()` includes

```r
acf <- .Call(C_acf, x, lag.max, type == "correlation")
```

which tells you that it calls C to do the calculation. If you look at the C sources, you will find the relevant function here:

Similarly, the PACF uses the C code here:

The last question looks like a homework problem. To solve moment problems like this, backshift notation is helpful for rearranging the equation into a suitable form: you want `y(t)` on the left and only deterministic terms and lagged errors on the right. Then you need a couple of standard results for infinite power series.
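To sketch what that manipulation looks like (a generic outline, not a full solution; `B` denotes the backshift operator, `B y_t = y_{t-1}`):

```latex
% Write the model as (1 - \phi_1 B) y_t = \mu + \beta t + \varepsilon_t.
% For |\phi_1| < 1, invert (1 - \phi_1 B) as a geometric series in B:
\begin{aligned}
y_t &= (1 - \phi_1 B)^{-1}\,(\mu + \beta t + \varepsilon_t) \\
    &= \sum_{i=0}^{\infty} \phi_1^i B^i \,(\mu + \beta t + \varepsilon_t) \\
    &= \sum_{i=0}^{\infty} \phi_1^i \,\bigl(\mu + \beta (t-i) + \varepsilon_{t-i}\bigr).
\end{aligned}
% E[y_t] and Var[y_t] then follow by taking moments term by term, using
% \sum_{i \ge 0} \phi_1^i = 1/(1-\phi_1) and \sum_{i \ge 0} i\,\phi_1^i = \phi_1/(1-\phi_1)^2.
```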

Thanks.

The reason I ask about the autocorrelations is that ESTIMA uses Burg's method as the default: https://estima.com/webhelp/index.html?context=1290 (see the Correlate instruction, "method used to compute correlations", under "Yule-Walker vs Burg Methods").

Regarding my second question, the process is `y(t) = mu + phi1*y(t-1) + beta*t + eps(t)`, i.e. the third model form from the ADF test.

And I have

```
E[y(t)] = mu*(1 + phi1 + phi1^2 + ... + phi1^(n-1)) +
          (phi1^n)*E[y(t-n)] +
          beta*(t + phi1*(t-1) + phi1^2*(t-2) + ... + phi1^(n-1)*(t-n+1))
```

The `eps` terms drop out, as `E[eps(t)] = 0` by definition.

With `|phi1| < 1`, as `n -> infinity`, the middle term vanishes (since `phi1^n -> 0`).

So the first term becomes

`mu/(1-phi1)`

via the geometric series result `sum_{i=0}^{infinity} phi1^i = 1/(1-phi1)` for `|phi1| < 1`, as per 'textbook(s)'.

The same reasoning applies to the `beta` term, and therefore,

`E[y(t)] = (mu + beta*(t))/(1-phi1)`

Correct?

I can then use `Var[y(t)] = (E[y(t)^2] - (E[y(t)])^2)`.

I am not certain how backshift notation makes things easier, though.
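For reference, the variance part can be checked numerically. Since `mu` and `beta*t` are deterministic they contribute nothing to `Var[y(t)]`, and back-substitution leaves `Var[y(t)] = sigma^2*(1 + phi1^2 + phi1^4 + ...) = sigma^2/(1 - phi1^2)` for `|phi1| < 1`. A minimal Monte-Carlo sketch (the choices `phi1 = 0.5`, `sigma = 1`, and the sample size are arbitrary):

```r
set.seed(123)
phi1 <- 0.5; sigma <- 1; n <- 2e5

# The deterministic part (mu + beta*t) does not affect the variance,
# so it suffices to simulate the stochastic AR(1) component alone.
y <- arima.sim(model = list(ar = phi1), n = n, sd = sigma)

theoretical <- sigma^2 / (1 - phi1^2)   # = 4/3 here
empirical <- var(as.numeric(y))
stopifnot(abs(empirical - theoretical) < 0.05)
```

The sample variance lands within the stated tolerance of `sigma^2/(1-phi1^2)`, consistent with the geometric-series argument above.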