The classical theory of conditioning was founded by Alan Turing and John von Neumann. Its main idea is that the condition number of a problem measures the difficulty of computing approximate solutions to it. As I plan to explain, this can be used to derive estimates on the accuracy of computations performed by a ``good'' algorithm.
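To make the notion concrete, here is a minimal, self-contained Python sketch (the particular 2x2 matrix and perturbation are illustrative choices of mine, not taken from the talk): it shows how the condition number of a linear system governs how much a tiny perturbation of the data can move the solution.

```python
# Illustrative sketch: conditioning of a 2x2 linear system Ax = b.
# The matrix below is nearly singular, hence ill-conditioned.

def solve2x2(A, b):
    """Solve a 2x2 system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [x1, x2]

def cond_inf(A):
    """Condition number in the infinity norm: ||A|| * ||A^-1||."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[ A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det,  A[0][0] / det]]
    norm = lambda M: max(abs(M[0][0]) + abs(M[0][1]),
                         abs(M[1][0]) + abs(M[1][1]))
    return norm(A) * norm(inv)

A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular
b = [2.0, 2.0001]                 # exact solution is x = [1, 1]
b_pert = [2.0, 2.0002]            # perturb b by 1e-4 in one entry

x = solve2x2(A, b)
x_pert = solve2x2(A, b_pert)

print("cond(A) ~", cond_inf(A))   # roughly 4e4: ill-conditioned
print("x      =", x)              # close to [1, 1]
print("x_pert =", x_pert)         # close to [0, 2]: an O(1) shift
```

The observed amplification, from a 1e-4 change in b to an O(1) change in x, is of the order of cond(A), which is exactly what the worst-case theory predicts.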
The von Neumann--Turing theory is based on worst-case analysis. This has the advantage of guaranteeing rigorous bounds, but it is sometimes overly pessimistic, failing to predict the practical reliability of an algorithm's output. To overcome this shortcoming, many researchers have proposed pursuing a stochastic approach, studying expected values of approximation errors. This statistical approach has in certain cases provided more insight. Still, there exist (classes of) problems for which even stochastic conditioning fails to predict the typical behaviour of ``good'' algorithms. For this reason, as well as because of the technical difficulties of carrying out rigorous probabilistic analyses, this (good!) idea never became very popular.
In this talk I will present a case study of one such class of problems. I will then propose some novel ideas and definitions, in particular what we call a weak-stochastic condition number, thus building a new theory that
(1) has greater predictive power than the classical and stochastic theories;
(2) is in some respects an ``easier'' tool than stochastic conditioning;
(3) can still deliver mathematically rigorous statements.
This will be demonstrated by applying the theory to the example. (I also plan to explain what the heck a bus has to do with all this.)