I'm working with a two-state process with $x_t$ in $\{1, -1\}$ for $t = 1, 2, \ldots$
The autocorrelation function is indicative of a long-memory process, i.e. it displays power-law decay with an exponent less than 1. You can simulate a similar series in R with:
> library(fArma)
> x <- fgnSim(10000, H = 0.8)
> x <- sign(x)
> acf(x)
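Incidentally, the lag-$k$ autocorrelation of the sign series need not be estimated from a simulation: fGn has a closed-form autocorrelation, and for jointly Gaussian variables the correlation of the signs follows the arcsine law $\operatorname{corr}(\operatorname{sign} X, \operatorname{sign} Y) = \tfrac{2}{\pi}\arcsin(\operatorname{corr}(X, Y))$. A sketch (the function names `fgn_acf` and `sign_acf` are mine, not from any package):

```r
# Theoretical autocorrelation of fractional Gaussian noise at lag k:
# rho_k = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})
fgn_acf <- function(k, H) {
  0.5 * (abs(k + 1)^(2 * H) - 2 * abs(k)^(2 * H) + abs(k - 1)^(2 * H))
}

# Implied autocorrelation of the sign series, via the Gaussian arcsine law
sign_acf <- function(k, H) {
  (2 / pi) * asin(fgn_acf(k, H))
}
```

For example, `sign_acf(1, 0.8)` gives the theoretical $\rho_1$ to plug into the classification rate below.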
My question: is there a canonical way to optimally predict the next value in the series given just the autocorrelation function? One way to predict is simply to use
$\hat{x}(t) = x(t-1)$
which has a classification rate of $(1 + \rho_1) / 2$, where $\rho_1$ is the lag-1 autocorrelation, but I feel it must be possible to do better by taking the long-memory structure into account.