... continuing a ramble prompted by the Normal / Levy thread:

This deceptively simple question is at the heart of many modelling issues.

Interpreted at a shallow level, this could be a question about whether the values of some variables still sit within the ranges they have moved in historically.
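To make that shallow reading concrete, here is a minimal Python sketch (the data and variable names are made up purely for illustration) that simply asks whether today's value sits inside the range the chosen history has covered:

```python
import numpy as np

# Purely illustrative: "history" is whatever window we have chosen to trust.
history = np.array([-1.2, 0.4, 0.9, -0.7, 1.1, 0.3])  # past observations
current = 1.8                                          # today's value

lo, hi = history.min(), history.max()
inside_historical_range = lo <= current <= hi
print(f"historical range [{lo:.2f}, {hi:.2f}]; current {current:.2f}; "
      f"inside = {inside_historical_range}")
```

Of course, even this trivial check depends entirely on which window we call “history”, which points back to the deeper question.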

At a deeper level, the question is whether models built on history will continue to be fit for purpose in future application.

Models cover a wide range, from the deterministic ones of physics, which embody a precise understanding of the mechanisms, through to empirical models that merely claim to capture useful patterns. It could be an interesting thread to fill in this continuum with examples; financial models, and especially econometric models, would be up the “empirical” end. There may be various nomenclatures for this kind of discussion – noting, for example, the von Mises (/Austrian) link provided earlier, where “time invariant” is used as a descriptor of certain models.

So, sure, there is wide confidence that the models of physics will still work the same way in the future, but only qualified confidence in those models that have anything to do with human behaviour (especially), or with other complex systems.

Empirical models tend to depend heavily on the choice of the data that “calibrates” them (enough for another thread on this topic). Also, to the extent that they rely on patterns without understanding the drivers of those patterns, there may come a time when they unexpectedly perform much worse than before – perhaps even catastrophically so.
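A toy Python sketch of that calibration dependence (the numbers and the “regime shift” are entirely fabricated, just to show the shape of the problem): a simple empirical fit looks fine over the calibration period, then degrades when an unobserved driver changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibration period: y responds to x with slope 1 (the observed "pattern").
x_cal = rng.normal(size=200)
y_cal = 1.0 * x_cal + rng.normal(scale=0.1, size=200)

# Fit a simple empirical model: a least-squares slope, with no view on the driver.
slope = np.polyfit(x_cal, y_cal, 1)[0]

# Future period: the unobserved driver shifts and the true slope halves.
x_new = rng.normal(size=200)
y_new = 0.5 * x_new + rng.normal(scale=0.1, size=200)

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

print("in-sample RMSE     :", rmse(y_cal, slope * x_cal))
print("out-of-sample RMSE :", rmse(y_new, slope * x_new))
```

Nothing in the calibration data warns that the pattern is about to break; the degradation only shows up after the fact.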

Footnote: I like Andrew’s comment “models are good tools but bad masters” and another well known one “all models are wrong, but some are useful”. Perhaps this latter one should have an addendum “…some of the time”.