Autocorrelation is useful for finding repeating patterns in a signal, such as detecting a periodic signal buried under noise, or identifying the fundamental frequency of a signal that does not actually contain that frequency component but implies it through its many harmonics (Wikipedia 2006).
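As a concrete illustration, here is a minimal sketch in Python with NumPy showing how the autocorrelation of a noisy sinusoid peaks at the signal's period. The 50 Hz tone, 1 kHz sampling rate, noise level, and the simple peak-picking heuristic are all assumptions of this example, not part of any definition below:

```python
import numpy as np

fs = 1000                                  # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)              # one second of samples
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(0.0, 0.5, t.size)  # tone + noise

# Biased sample autocorrelation for non-negative lags, normalised to acf[0] == 1.
d = x - x.mean()
acf = np.correlate(d, d, mode="full")[d.size - 1:]
acf /= acf[0]

# Peaks repeat at every multiple of the period, so take the first local
# maximum that rises above half the largest non-zero-lag value.
threshold = 0.5 * acf[1:].max()
for k in range(1, acf.size - 1):
    if acf[k] > threshold and acf[k - 1] <= acf[k] >= acf[k + 1]:
        print(f"estimated period: {k / fs:.3f} s (true period: 0.020 s)")
        break
```

Even though the noise here has comparable power to the tone, the periodic structure survives in the autocorrelation, which is the point of the technique.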
Different definitions of autocorrelation are in use depending on the field of study, and not all of them are equivalent (Wikipedia 2006). In some fields, the term is used interchangeably with autocovariance (Wikipedia 2006).
In statistics, the autocorrelation of a discrete time series or process Xt is simply the correlation of the process against a time-shifted version of itself (Wikipedia 2006). If Xt is second-order stationary with mean μ and variance σ², this definition is

R(k) = E[(X_t − μ)(X_{t+k} − μ)] / σ²
where E is the expected value and k is the time shift being considered (usually referred to as the lag) (Wikipedia 2006). This function has the attractive property of being in the range [−1, 1], with 1 indicating perfect correlation (the signals exactly overlap when time-shifted by k) and −1 indicating perfect anti-correlation (Wikipedia 2006). It is common practice in many disciplines to drop the normalisation by σ² and use the term autocorrelation interchangeably with autocovariance (Wikipedia 2006).
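A direct sample estimate of this normalised R(k) might look like the following sketch (the function name and the biased 1/n estimator are choices of this example, not part of the definition); dividing by the lag-0 autocovariance is exactly the σ² normalisation discussed above:

```python
import numpy as np

def sample_autocorrelation(x, k):
    """Biased sample estimate of R(k) = E[(X_t - mu)(X_{t+k} - mu)] / sigma^2."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    cov_k = np.sum(d[:n - k] * d[k:]) / n   # sample autocovariance at lag k
    return cov_k / (np.sum(d * d) / n)      # divide by sigma^2, so R(0) == 1

rng = np.random.default_rng(0)
white = rng.normal(size=10_000)
print(sample_autocorrelation(white, 0))     # exactly 1.0 by construction
print(sample_autocorrelation(white, 5))     # near 0: white noise is uncorrelated
```

Using the same lag-0 denominator at every lag (the biased estimator) keeps the result inside [−1, 1], matching the range noted above.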
In signal processing, given a signal f(t), the continuous autocorrelation R_f(τ) is the continuous cross-correlation of f(t) with itself at lag τ, and is defined as:

R_f(τ) = ∫ f(t + τ) f*(t) dt

where the integral runs over all t and f* denotes the complex conjugate of f.
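For sampled data the integral becomes a sum over lags. A sketch of that approximation with NumPy (the function name and the dt scaling are this example's conventions); conveniently, np.correlate already conjugates its second argument, matching the f(t + τ)·f*(t) form of the definition:

```python
import numpy as np

def autocorrelation(f, dt=1.0):
    """Approximate R_f(tau) = integral of f(t + tau) * conj(f(t)) dt.

    Returns values for lags -(N-1)*dt .. (N-1)*dt, scaled by the sample
    spacing dt so the sum approximates the continuous integral.
    """
    f = np.asarray(f)
    # In 'full' mode, np.correlate(a, v)[k] = sum_n a[n + k] * conj(v[n]).
    return np.correlate(f, f, mode="full") * dt

# Sanity check: the autocorrelation of a rectangular pulse is a triangle.
print(autocorrelation(np.ones(5)))   # [1. 2. 3. 4. 5. 4. 3. 2. 1.]
```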