The very scope of reliability analysis is to support decisions about engineering systems. This calls for radical changes in the probabilistic and statistical techniques used in reliability practice. Since the decisions to be taken have engineering and economic relevance (one may incur a loss if the system fails before mission time), the superimposed cost structure plays a central role in any reliability problem. Only the Bayesian predictive approach allows costs and probabilities to be combined harmoniously. Using a predictive approach means assessing one's own probability distribution over the observable random quantities of interest (for example, the failure time of a system), possibly on the basis of statistical evidence. In this view, the parameters of probability distributions are just a computational aid for deriving the probability of interest.

From the Bayesian predictive standpoint, any reliability problem (even a system reliability problem) becomes a decision problem under uncertainty. Uncertainty can be mitigated by forthcoming statistical evidence, which in general has a dynamic character, and one can often exercise some control over that forthcoming information. The first decision to be taken when facing a reliability problem is therefore: what course of action makes the control of the forthcoming information optimal (in some suitable sense)? The need to answer this question, together with the dynamic character of the statistical evidence, requires the theory of point processes, the theory of stochastic control, and the theory of stochastic filtering.

The volume aims to explain the reasons that motivated these changes in the mathematical technology of reliability theory, and to give readers an insight into the disciplines that have recently begun to play an important role in the reliability field.
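The predictive approach described above can be sketched in a few lines. The following is a minimal illustration, not taken from the volume: it assumes exponential lifetimes with an unknown rate, a conjugate Gamma prior on that rate, and a simple two-action cost structure. The rate parameter appears only as a computational aid; the decision rests on the predictive probability of the observable failure time, weighted by the costs.

```python
# Hypothetical sketch of a Bayesian predictive reliability decision.
# Model assumptions (illustrative, not from the text): exponential
# lifetimes, Gamma(a, b) prior on the failure rate, fixed loss/gain.

def predictive_survival(t, a, b, failure_times=()):
    """P(next failure time > t), with the rate integrated out.

    With a Gamma(a, b) prior on the exponential rate and observed
    failure times, the posterior is Gamma(a + n, b + sum(times)) and
    the predictive survival function is Lomax: (b'/(b' + t))**a'.
    """
    n = len(failure_times)
    a_post = a + n
    b_post = b + sum(failure_times)
    return (b_post / (b_post + t)) ** a_post

def expected_loss_of_deploying(mission_time, loss_on_failure,
                               gain_on_success, a, b, failure_times=()):
    """Expected loss of deploying the system for the mission.

    The superimposed cost structure enters directly: incur
    `loss_on_failure` if the system fails before mission time,
    earn `gain_on_success` otherwise.
    """
    p_survive = predictive_survival(mission_time, a, b, failure_times)
    return (1.0 - p_survive) * loss_on_failure - p_survive * gain_on_success

# Usage: a weakly informative prior updated with three observed
# failure times (hypothetical test data, in hours).
evidence = (120.0, 95.0, 210.0)
p = predictive_survival(100.0, a=1.0, b=100.0, failure_times=evidence)
risk = expected_loss_of_deploying(100.0, loss_on_failure=50.0,
                                  gain_on_success=10.0,
                                  a=1.0, b=100.0, failure_times=evidence)
# Deploy only if the expected loss is negative (an expected gain).
```

Note how the failure rate never appears in the decision itself: it is marginalized away, and the verdict depends only on the predictive probability of the observable event and on the costs attached to it.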
This volume should be of interest to both reliability theoreticians and practitioners.