The problem of predicting points in a time series is used here to introduce the concept of a Model Foundry: the user identifies the likely characteristics of the process behind the time series, assembles a model of appropriate complexity from components, and the system then estimates the parameters of the composite model.
A whole industry exists for predicting future values of a time series. One branch of the industry uses some form of Box-Jenkins methodology: an AutoRegressive (AR) or Moving Average (MA) structure, or some combination (ARMA, ARIMA), is sought in the time series, the parameters of the chosen model are estimated, and those parameters are used for prediction. The technique requires considerable skill to apply (most of the skill lies in making sure it is not applied in inappropriate cases), and it is frequently abused by applying it to nonlinear processes.
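As a minimal sketch of the autoregressive side of this methodology, the following fits AR(p) coefficients by ordinary least squares and predicts one step ahead. The function names are illustrative; a real Box-Jenkins workflow would also difference the series and check residuals, and libraries such as statsmodels provide full ARIMA estimation.

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by ordinary least squares:
    y[t] = c + a1*y[t-1] + ... + ap*y[t-p] + noise."""
    y = np.asarray(series, dtype=float)
    # Column k holds y[t-1-k] for every target index t = p..n-1.
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # intercept term
    coeffs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coeffs  # [c, a1, ..., ap]

def predict_next(series, coeffs):
    """One-step-ahead prediction from fitted AR coefficients."""
    p = len(coeffs) - 1
    lags = np.asarray(series[-p:], dtype=float)[::-1]  # most recent first
    return coeffs[0] + coeffs[1:] @ lags
```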
The other branch attempts to train a neural network to reproduce the time series. Much art goes into building into the network the properties that may exist in the series, such as memory of the last value (or the last but one, or...) modifying the next. The resulting model lacks identifiability: there is no partial correspondence, no interior point in the model that is an analogue of an interior point in the process. That makes the result hard to explain or extend (or to defend the whole concept when the output is wrong).
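The usual way that memory of the last value (or the last but one) gets built in is to unroll the series into fixed windows of lagged inputs, which then feed the network; the window length is the modeller's guess about how much memory the process has. A sketch, with illustrative names:

```python
import numpy as np

def make_windows(series, lags):
    """Unroll a series into (lagged inputs, target) training pairs.
    Each row of X is the `lags` most recent values; the target is
    the value that follows."""
    y = np.asarray(series, dtype=float)
    X = np.array([y[t - lags : t] for t in range(lags, len(y))])
    return X, y[lags:]

# X, targets = make_windows(data, lags=3)
# ...then fit any regressor or network on (X, targets).
```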
The process producing the time series may have characteristics that need to be modelled using components available in the Model Foundry: nonlinearity, memory of previous values, capacity constraints, growth, energy levels that respond to what is being found, and so on.
The benefit of the Model Foundry approach is that a reasonable composite model can be constructed quickly from components and its parameters estimated. The model can have connections that allow the characteristics of the process to vary with time: the capacity constraints can vary with growth, or the energy levels can tune themselves to what is being found.
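As a sketch of what assembling and estimating such a composite might look like (the component names and functional forms here are invented for illustration): logistic growth toward a capacity limit, plus a seasonal term, with every parameter of the assembled model estimated in one pass by scipy's curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical components such as a foundry catalogue might supply.
def capacity_growth(t, K, r, y0):
    """Logistic growth toward a capacity limit K."""
    return K / (1 + ((K - y0) / y0) * np.exp(-r * t))

def seasonal(t, amp, period, phase):
    """A simple periodic component."""
    return amp * np.sin(2 * np.pi * t / period + phase)

def composite(t, K, r, y0, amp, period, phase):
    """The assembled model: growth plus seasonality."""
    return capacity_growth(t, K, r, y0) + seasonal(t, amp, period, phase)

# Synthetic data standing in for an observed series.
rng = np.random.default_rng(0)
t = np.arange(120.0)
observed = composite(t, 100, 0.08, 5, 3, 12, 0.5) + rng.normal(0, 1, t.size)

# Estimate every parameter of the composite model in one pass.
params, _ = curve_fit(composite, t, observed,
                      p0=[observed.max(), 0.1, 5.0, 1.0, 12.0, 0.0])
```

The functional forms live in the catalogue components rather than in hand-written model code, which is what makes the assembly quick.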
The model can keep changing its parameters as new values in the time series arrive, rather than using fixed parameters found in a one-time identification of the series so far.
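One standard way to achieve this is recursive least squares with a forgetting factor, which down-weights old observations so the coefficients track a drifting process. A minimal sketch (the lagged regressors from the AR sketch above could serve as x):

```python
import numpy as np

class RecursiveLS:
    """Recursive least squares with exponential forgetting.
    Older observations are down-weighted by `lam` (0 < lam <= 1),
    so the coefficients follow a slowly changing process instead
    of being frozen by a one-time fit."""
    def __init__(self, dim, lam=0.99):
        self.w = np.zeros(dim)      # current coefficient estimate
        self.P = np.eye(dim) * 1e3  # inverse-covariance proxy
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        err = y - self.w @ x              # prediction error on the new point
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)   # Kalman-style gain
        self.w = self.w + gain * err
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.w
```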
The knowledge network that provides the substructure for the model makes the model extensible: you can tack new bits on as they become necessary, and you can adjust it in a non-parametric way by making new connections in the existing model that change its behaviour, perhaps radically.
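A small sketch of how that extensibility might look in code, with all names invented: if the model is a collection of named components that can read each other's outputs, tacking a new bit on, or making a new connection, is an edit to the structure rather than a rewrite.

```python
class ComposableModel:
    """A model as an editable collection of named components.
    Each component is a callable of (t, state); components can read
    values that earlier components wrote into `state`, which is the
    'new connection' mechanism."""
    def __init__(self):
        self.components = {}  # name -> callable(t, state) -> float

    def add(self, name, fn):
        self.components[name] = fn  # tack a new bit on

    def evaluate(self, t):
        state, total = {}, 0.0
        for name, fn in self.components.items():
            state[name] = fn(t, state)
            total += state[name]
        return total

# m = ComposableModel()
# m.add("growth", lambda t, s: 0.5 * t)
# m.add("season", lambda t, s: 2.0 * (t % 12 < 6))
# m.add("damping", lambda t, s: -0.1 * s["growth"])  # a new connection
```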
So what happens when the user has no idea of the underlying process? An assumption can be made by allowing the properties of the time series to index into a catalogue of models, with the system putting out of range the parameters it cannot estimate. If the time series then hits a limit, the limit range can be adjusted to suit.
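What indexing into a catalogue could mean, as a speculative sketch: compute a few cheap diagnostics of the series and use them as a lookup key. The diagnostics, thresholds, and catalogue entries below are all invented.

```python
import numpy as np

def series_signature(y):
    """Cheap diagnostics used as a key into the catalogue."""
    y = np.asarray(y, dtype=float)
    d = np.diff(y)
    trending = abs(d.mean()) > d.std() / np.sqrt(len(d))  # drift vs noise
    yc = y - y.mean()
    ac1 = (yc[1:] @ yc[:-1]) / (yc @ yc)  # lag-1 autocorrelation
    return ("trend" if trending else "level",
            "memory" if ac1 > 0.5 else "noisy")

# Invented catalogue: signature -> candidate component assembly.
CATALOGUE = {
    ("trend", "memory"): "growth + AR component",
    ("trend", "noisy"): "growth + white noise",
    ("level", "memory"): "AR component",
    ("level", "noisy"): "constant + white noise",
}

# CATALOGUE[series_signature(data)] suggests a starting composite model.
```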
See Model Building