Time series analysis is a form of regression analysis concerned with statistical inference on, and prediction of, trends (trend extrapolation). A time series is a sequence of observations ordered chronologically. The values themselves need not follow any numerical order; what defines the series is the ordering of the observations, typically by date. Examples include share prices, stock-market indices, and population figures.
For statistical treatment, time is usually discretized into a set of observation times, so that exactly one measurement corresponds to each point in time. Time series occur in many different fields of study.
The covariances between values at different times are called autocovariances, because they are covariances of the same process. In the special case where the stochastic process follows a multivariate normal distribution, it is completely determined by its first- and second-order moments. For statistical inference with time series, assumptions must be made, since in practice there is usually only a single realization of the process generating the series. A common assumption is that the samples are drawn from an ergodic process, which means that moments estimated from a finite time series are representative of the underlying process. Time series appear in many areas, such as financial mathematics and finance (stock prices, liquidity developments) and econometrics (gross national product, unemployment rate). In meteorology, a forecaster predicting rain examines series of temperature, wind speed, and wind direction.
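As an illustration, the sample autocovariance at a given lag can be estimated from a single realization. The following is a minimal sketch in plain Python; the function name `autocovariance` is ours, not from any particular library, and it uses the common biased (1/n) normalization:

```python
def autocovariance(x, lag):
    """Sample autocovariance of series x at the given lag,
    using the common 1/n normalization."""
    n = len(x)
    mean = sum(x) / n
    return sum((x[t] - mean) * (x[t + lag] - mean)
               for t in range(n - lag)) / n

# At lag 0 the autocovariance equals the (biased) sample variance.
series = [2.0, 4.0, 6.0, 8.0]
print(autocovariance(series, 0))  # 5.0
print(autocovariance(series, 1))  # 1.25
```

Dividing each autocovariance by the lag-0 value yields the autocorrelation function, which is the usual starting point for model identification.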
Complex data situations arise with time-related microdata, i.e. personal or household data recorded at different points in time. Depending on its time structure, such information is referred to not as time series data but as trend, panel, or event data.
A time series analysis seeks to answer a question about the underlying process. The goal varies: it may be to predict the value of interest at the next step of the process, or to model features such as trends, periodic fluctuations, and outliers. Time series methods are also well suited to detecting changes, for example in EEG and ECG monitoring in medicine to confirm that a surgical intervention went as planned, or in global vegetation phenology as an indicator of human-induced climate change. A time series analysis can be divided into the following steps:
- Identification phase: A suitable model for the time series is identified, and the parameters required by the chosen model are estimated.
- Diagnostic phase: Diagnosis and evaluation of the estimated model
- Deployment phase: Once approved, the model is adopted for use.
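The three phases can be sketched in miniature. In this toy example, which is our own illustration rather than a standard procedure, the candidate models are a constant-mean model and a linear trend model, both fitted by least squares, and the diagnostic is simply the residual sum of squares:

```python
def fit_mean(y):
    """Candidate 1: constant model y_t = c."""
    c = sum(y) / len(y)
    return lambda t: c

def fit_linear_trend(y):
    """Candidate 2: linear trend y_t = a + b*t, fitted by least squares."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = (sum((t - t_mean) * (y[t] - y_mean) for t in range(n))
         / sum((t - t_mean) ** 2 for t in range(n)))
    a = y_mean - b * t_mean
    return lambda t: a + b * t

def sse(y, model):
    """Diagnostic: sum of squared residuals."""
    return sum((y[t] - model(t)) ** 2 for t in range(len(y)))

y = [1.0, 2.1, 2.9, 4.2, 5.0]                        # roughly linear data
candidates = [fit_mean(y), fit_linear_trend(y)]       # identification + estimation
deployed = min(candidates, key=lambda m: sse(y, m))   # diagnosis, then deployment
```

In practice the diagnostic phase involves richer checks (residual autocorrelation, information criteria), but the loop of proposing, estimating, evaluating, and finally adopting a model is the same.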
When adjusting for a trend in a time series, the key question is whether the trend should be modeled deterministically or stochastically, since each choice implies a different adjustment method: a deterministic trend is removed by regression, while a stochastic trend is removed by differencing.
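The two adjustment methods can be contrasted directly. The following sketch (function names are ours) removes a deterministic linear trend via least-squares regression, and a stochastic trend via first differencing; on an exactly linear series, both approaches leave a trend-free remainder:

```python
def detrend_deterministic(y):
    """Remove a deterministic linear trend fitted by least squares."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = (sum((t - t_mean) * (y[t] - y_mean) for t in range(n))
         / sum((t - t_mean) ** 2 for t in range(n)))
    a = y_mean - b * t_mean
    return [y[t] - (a + b * t) for t in range(n)]

def detrend_stochastic(y):
    """Remove a stochastic trend by first differencing."""
    return [y[t] - y[t - 1] for t in range(1, len(y))]

y = [3.0, 5.0, 7.0, 9.0]          # exact linear trend y_t = 3 + 2t
print(detrend_deterministic(y))   # residuals all (near) zero
print(detrend_stochastic(y))      # constant differences [2.0, 2.0, 2.0]
```

Note that differencing shortens the series by one observation, and for a deterministic trend it turns the trend into a constant rather than removing it entirely; which adjustment is appropriate depends on the assumed nature of the trend.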
When first creating the model, various techniques are used to estimate the parameters and coefficients. For trend models, least squares estimation is suitable. For models within the Box-Jenkins framework, moment methods and nonlinear least squares estimation can be used. Maximum likelihood estimation also appears, for example in generalized linear mixed models or semi-parametric mixed effects models, but it requires care: the associated likelihood-based test statistics need not follow their standard chi-square distribution in these settings.
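As a small example of a moment method, the coefficient of an AR(1) process can be estimated from the ratio of the lag-1 and lag-0 sample autocovariances (the Yule-Walker estimate). This sketch, with names of our own choosing, recovers the coefficient from simulated data:

```python
import random

def ar1_moment_estimate(x):
    """Method-of-moments (Yule-Walker) estimate of the AR(1)
    coefficient: phi_hat = gamma(1) / gamma(0)."""
    n = len(x)
    mean = sum(x) / n
    gamma0 = sum((v - mean) ** 2 for v in x) / n
    gamma1 = sum((x[t] - mean) * (x[t + 1] - mean)
                 for t in range(n - 1)) / n
    return gamma1 / gamma0

# Simulate an AR(1) process x_t = phi * x_{t-1} + e_t with phi = 0.7.
random.seed(0)
x, phi = [0.0], 0.7
for _ in range(5000):
    x.append(phi * x[-1] + random.gauss(0, 1))
print(ar1_moment_estimate(x))  # close to 0.7
```

With 5000 simulated observations the estimate typically lands within a few hundredths of the true coefficient.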
To predict future values, a forecasting equation is derived from the model equation that was selected in the identification phase and confirmed to be accurate in the diagnostic phase. An optimality criterion must be chosen in advance to judge the forecast; a common choice is the minimum mean squared error (MMSE).
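For a concrete case, consider a zero-mean AR(1) model: its MMSE forecast h steps ahead is simply the coefficient raised to the h-th power times the last observed value, since the future noise terms have expectation zero. A minimal sketch (the function name is ours):

```python
def ar1_mmse_forecast(last_value, phi, horizon):
    """For a zero-mean AR(1) process x_t = phi * x_{t-1} + e_t,
    the minimum-mean-squared-error forecast h steps ahead is
    phi**h times the last observed value."""
    return phi ** horizon * last_value

print(ar1_mmse_forecast(2.0, 0.5, 1))  # 1.0
print(ar1_mmse_forecast(2.0, 0.5, 3))  # 0.25
```

The forecast decays geometrically toward the process mean as the horizon grows, reflecting the fact that distant future values are increasingly dominated by unpredictable noise.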