email: lvu@utkin.usr.etu.spb.ru, utkin@stat.uni-muenchen.de
The main aim of this paper is to define what imprecise reliability is and what problems can be solved within the framework of imprecise reliability. From this point of view, various branches of reliability analysis are considered, including the analysis of monotone systems, repairable systems, multi-state systems, structural reliability, software reliability, human reliability, and fault tree analysis. The various types of initial information used in imprecise reliability are overviewed. Some open problems are given in the conclusion.
Many methods and models of the classical reliability theory assume that all probabilities are precise, that is, that every probability involved is perfectly determinable. Moreover, it is usually assumed that there exists complete probabilistic information about the system and component reliability behavior. Completeness of the probabilistic information means that two conditions must be fulfilled:
The precise system reliability measures can always (at least theoretically) be computed if both conditions are satisfied (it is assumed here that the system structure is defined precisely and that there exists a function linking the system time to failure (TTF) and the TTFs of components). If at least one of the conditions is violated, then only interval reliability measures can be obtained. In reality, it is difficult to expect that the first condition is fulfilled. If the information we have about the functioning of components and systems is based on a statistical analysis, then a probabilistic uncertainty model should be used in order to mathematically represent and manipulate that information. However, the reliability assessments that are combined to describe systems and components may come from various sources. Some of them may be objective measures based on relative frequencies or on well-established statistical models. A part of the reliability assessments may be supplied by experts. If a system is new or exists only as a project, then in many cases there are not sufficient statistical data. Even if such data exist, we do not always observe their stability from the statistical point of view. Moreover, the failure time may not be accurately observed, or may even be missed. Sometimes a failure does not occur, or occurs only partially, and we get a censored observation of the failure time. As a result, only partial information about the reliability of the system components may be available, for example, the mean time to failure (MTTF) or bounds for the probability of failure before a given time. Of course, one can always assume that the TTF has a certain distribution, for example, exponential or normal. However, how can one trust the obtained results of reliability analysis if this assumption is based only on our own or an expert's experience?
It is difficult to expect that the components of many systems are independent. Let us consider two programs functioning in parallel (two-version programming). If these programs were developed by means of the same programming language, then possible errors in a language library of typical functions produce dependent faults in both programs. Several experimental studies show that the assumption of independence of failures between independently developed programs does not hold. Moreover, the main difficulty here is that the degree of dependence is unknown. Similar examples can be given for various applications. This implies that the second condition for complete information is also violated and it is impossible to obtain precise reliability measures for a system.
One of the tools for coping with imprecision of available information in reliability analysis is the fuzzy reliability theory [11,12,21,92,94]. However, the framework of this theory does not cover the large variety of possible judgements in reliability. Moreover, it requires assuming a certain type of possibility distribution of TTF or time to repair, which may be unreasonable in a wide range of cases. Another approach to reliability analysis under incomplete information, based on the random set and evidence theories [37,62], has been proposed in [3,40,65]. The random set theory provides us with an appropriate mathematical model of uncertainty when the information is not complete or when the result of each observation is not point-valued but set-valued, so that it is not possible to assume the existence of a unique probability measure. However, this approach also does not cover all possible judgements in reliability.
To overcome these difficulties, Gert de Cooman proposed to use the theory of imprecise probabilities (also called the theory of lower previsions [109,110], the theory of interval statistical models [52], and the theory of interval probabilities [112,113]), which may be the most powerful and promising tool for reliability analysis and whose general framework is provided by upper and lower previsions.
It is necessary to note that the idea of using some aspects of the imprecise probability theory in reliability analysis has been considered in the literature. For example, Barlow and Proschan [5,6] considered the case of a lack of information about independence of components, and nonparametric interval reliability analysis of ageing classes of TTF distributions. Barzilovich and Kashtanov [8] solved some tasks of optimal preventive maintenance under incomplete information. Coolen and Newby [17,18,19] have shown how the commonly used concepts in reliability theory can be extended in a sensible way and combined with prior knowledge through the use of imprecise probabilities. However, they provide a study of methods to develop parametric models for lifetimes. Some examples of the successful application of imprecise probabilities to reliability analysis can be found in [38,98].
Let us consider the following example. Suppose that the following information is available about the components of a two-component series system: the MTTF of the first component is 10 hours, and the probability that the second component fails before 2 hours is 0.01. The reliability of the system cannot be determined by the methods of conventional reliability theory because the probability distributions of the TTFs are unknown. Any assumption about a certain probability distribution of TTF may lead to incorrect results. However, this problem can be solved by using imprecise probabilities.
Suppose that we analyze a system in which n−1 components are described by precise probability distributions of TTFs with precisely known parameters, but information about the remaining component, say the nth one, is partial; for example, we know only the probability of its failure before time t_{n}. If the probability of the system failure before time t_{0} has to be found, then, according to [88], the precision of the desired solution does not benefit from the precise information about the n−1 components and is mainly determined by the information about the nth "imprecise" component. Hence, the precise distributions are useless in this case, and the imprecision of information about one component may cancel out complete information about the others. The imprecise probability theory allows us to explain this example and to avoid possible errors in reliability analysis.
The following virtues of the imprecise probability theory can be pointed out:
The structure of the proposed review is shown in Fig. 1. The author does not claim to give an exhaustive and comprehensive state of the art. The main aim of the review is to show briefly that imprecise reliability exists by this time and is being developed successfully. I apologize to those authors whose related work is not addressed here or is not comprehended properly here.
Consider a system consisting of n components. Suppose that partial information about the reliability of components is represented as a set of lower and upper expectations E^{L}f_{ij} and E^{U}f_{ij}, i=1,...,n, j=1,...,m_{i}, of functions f_{ij}. Here m_{i} is the number of judgements that are related to the ith component; f_{ij}(X_{i}) is a function of the random TTF X_{i} of the ith component, or of some other random variable describing the ith component reliability, corresponding to the jth judgement about this component. For example, an interval-valued probability that a failure is in the interval [a,b] can be represented by expectations of the indicator function I_{[a,b]}(X_{i}) such that I_{[a,b]}(X_{i})=1 if X_{i} ∈ [a,b] and I_{[a,b]}(X_{i})=0 if X_{i} ∉ [a,b]. The lower and upper MTTFs are expectations of the function f(X_{i})=X_{i}.
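As a small illustration (the helper names below are ours, not from the literature), such judgements can be encoded as gambles paired with expectation bounds:

```python
# A gamble is a real-valued function of the random TTF; a judgement is a
# pair of bounds on its expectation (lower and upper previsions).

def indicator(a, b):
    """I_[a,b](x): gamble whose expectation is Pr{a <= X <= b}."""
    return lambda x: 1.0 if a <= x <= b else 0.0

def identity(x):
    """f(x) = x: gamble whose expectation is the MTTF."""
    return float(x)

# "The probability of failure in [0, 10] is at most 0.01":
judgement_1 = (indicator(0.0, 10.0), 0.0, 0.01)
# "The MTTF lies between 50 and 60 hours":
judgement_2 = (identity, 50.0, 60.0)
```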
Denote x=(x_{1},...,x_{n}) and X=(X_{1},...,X_{n}). Here x_{1},...,x_{n} are values of the random variables X_{1},...,X_{n}, respectively. It is assumed that the random variable X_{i} is defined on a sample space W, and the random vector X is defined on the sample space W^{n}=W×...×W. If X_{i} is the TTF, then W = R_{+}. If X_{i} is a random state of a multi-state system [7], then W = {1,...,L}, where L is the number of states of the multi-state system. In the case of discrete TTF, W = {1,2,...}, i.e., W = Z_{+}. According to [6], the system TTF can be uniquely determined by the component TTFs. Then there exists a function g(X) of the component lifetimes characterizing the system reliability behavior.
In terms of the imprecise probability theory, the lower and upper expectations can be regarded as lower and upper previsions. The functions f_{ij} and g can be regarded as gambles (the case of unbounded gambles is studied in [68,69]). The lower and upper previsions E^{L}f_{ij} and E^{U}f_{ij} can also be viewed as bounds for an unknown precise prevision Ef_{ij}, which will be called a linear prevision. Since the function g is the system TTF, computing a reliability measure (probability of failure, MTTF, kth moment of TTF) requires finding the lower and upper previsions of a gamble h(g), where the function h is defined by the system reliability measure which has to be found. For example, if this measure is the probability of failure before time t, then h(g)=I_{[0,t]}(g). In this case, the optimization problem (natural extension) for computing the lower prevision E^{L}h(g) of h(g) is [38,98]

E^{L}h(g) = sup { c + ∑_{i=1}^{n} ∑_{j=1}^{m_i} (c_{ij}E^{L}f_{ij} − d_{ij}E^{U}f_{ij}) },

subject to c_{ij},d_{ij} ∈ R_{+}, i=1,...,n, j=1,...,m_{i}, c ∈ R, and, for all X ∈ W^{n},

c + ∑_{i=1}^{n} ∑_{j=1}^{m_i} (c_{ij} − d_{ij}) f_{ij}(x_{i}) ≤ h(g(X)).
The optimization problem for computing the upper prevision E^{U}h(g) of the system function h(g) is

E^{U}h(g) = inf { c + ∑_{i=1}^{n} ∑_{j=1}^{m_i} (c_{ij}E^{U}f_{ij} − d_{ij}E^{L}f_{ij}) },

subject to c_{ij},d_{ij} ∈ R_{+}, i=1,...,n, j=1,...,m_{i}, c ∈ R, and, for all X ∈ W^{n},

c + ∑_{i=1}^{n} ∑_{j=1}^{m_i} (c_{ij} − d_{ij}) f_{ij}(x_{i}) ≥ h(g(X)).
If we assume that the TTFs are governed by some unknown joint density r(X), then E^{L}h(g) and E^{U}h(g) can be computed as

E^{L}h(g) = inf_{P} ∫_{W^n} h(g(X))r(X)dX,   E^{U}h(g) = sup_{P} ∫_{W^n} h(g(X))r(X)dX,

subject to

r(X) ≥ 0,   ∫_{W^n} r(X)dX = 1,

E^{L}f_{ij} ≤ ∫_{W^n} f_{ij}(x_{i})r(X)dX ≤ E^{U}f_{ij},   i ≤ n, j ≤ m_{i}.

Here the infimum and supremum are taken over the set P of all possible density functions {r(X)} satisfying the above constraints, i.e., solutions to the problems are defined on the set P of densities that are consistent with the partial information expressed in the form of the constraints. The optimization problems mean that we can find only the smallest and largest possible values of Eh(g) over all densities from the set P.
It should be noted that only joint densities are used in the above optimization problems because, in the general case, we may not know whether the variables X_{1},...,X_{n} are dependent or not. If it is known that the components are independent, then r(X)=r_{1}(x_{1})···r_{n}(x_{n}). In this case, the set P is reduced and consists only of the densities that can be represented as a product of marginal densities. This results in more precise reliability assessments. However, it is difficult to forecast how much the condition of independence improves the precision of the assessments. In any case, for most types of initial information, adding the independence condition can only reduce imprecision, never increase it.
If the set P is empty, this means that the set of available evidence is conflicting and it is impossible to get any solution to the optimization problems. There are two ways to cope with conflicting evidence so as to be able to construct a prevision of interest. The first is to localize the conflicting evidence and discard it. The second is to combine the conflicting evidence, making it non-conflicting [74], and then apply the above optimization problems.
Most reliability measures (probabilities of failure, MTTFs, failure rates, moments of TTF, etc.) can be represented in the form of lower and upper previsions or expectations. Each measure is defined by a gamble f_{ij}. Precise reliability information is a special case of imprecise information in which the lower and upper previsions of the gamble f_{ij} coincide, i.e., E^{L}f_{ij}=E^{U}f_{ij}. For example, let us consider a series system consisting of two components and suppose that the following information about the reliability of the components is available: the probability of the first component failure before 10 hours is 0.01, and the MTTF of the second component is between 50 and 60 hours. It can be seen from the example that the available information is heterogeneous, and it is impossible to find the system reliability measures on the basis of conventional reliability models without additional assumptions about probability distributions. At the same time, this information can be formalized as follows:

E^{L}I_{[0,10]}(X_{1}) = E^{U}I_{[0,10]}(X_{1}) = 0.01,   E^{L}X_{2} = 50,   E^{U}X_{2} = 60,

or

0.01 ≤ ∫_{R_{+}^{2}} I_{[0,10]}(x_{1})r(x_{1},x_{2})dx_{1}dx_{2} ≤ 0.01,

50 ≤ ∫_{R_{+}^{2}} x_{2}r(x_{1},x_{2})dx_{1}dx_{2} ≤ 60.

If it is known that the components are statistically independent, then the constraint r(x_{1},x_{2})=r_{1}(x_{1})r_{2}(x_{2}) is added. The above constraints form a set P of possible joint densities r. Suppose that we want to find the probability of the system failure after time 100 hours. This measure can be regarded as the prevision of the gamble I_{[100,∞)}(min(X_{1},X_{2})), i.e., g(X)=min(X_{1},X_{2}) and h(g)=I_{[100,∞)}(g). Then the objective functions are of the form:

E^{L}h(g) = inf_{P} ∫_{R_{+}^{2}} I_{[100,∞)}(min(x_{1},x_{2}))r(x_{1},x_{2})dx_{1}dx_{2},

E^{U}h(g) = sup_{P} ∫_{R_{+}^{2}} I_{[100,∞)}(min(x_{1},x_{2}))r(x_{1},x_{2})dx_{1}dx_{2}.

Solutions to the problems are E^{L}h(g)=0 and E^{U}h(g)=0.59. These bounds for the probability of the system failure after time 100 hours are the best possible given the available information. If there is no information about independence, then the optimization problems for computing E^{L}h(g) and E^{U}h(g) can be written as

E^{L}h(g) = sup{ c + 0.01c_{11} − 0.01d_{11} + 50c_{21} − 60d_{21} },

subject to c_{11},d_{11},c_{21},d_{21} ∈ R_{+}, c ∈ R, and, for all (x_{1},x_{2}) ∈ R_{+}^{2},

c + (c_{11}−d_{11})I_{[0,10]}(x_{1}) + (c_{21}−d_{21})x_{2} ≤ I_{[100,∞)}(min(x_{1},x_{2})),

and E^{U}h(g) = −E^{L}(−h(g)).
If the considered random variables are discrete and the sample space W^{n} is finite, then the integrals and densities in the optimization problems are replaced by sums and probability mass functions, respectively.
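In the discrete case the natural extension is therefore an ordinary linear program. The following sketch (ours, with the sample space truncated to a bounded grid, which is an assumption of the illustration) bounds the two-component example above numerically with scipy; without any independence assumption it returns the bounds [0, 0.6] for the probability of system failure after 100 hours, consistent with the bounds discussed above:

```python
import numpy as np
from scipy.optimize import linprog

# Discretized natural extension for the two-component series system:
# P(X1 <= 10) = 0.01 and 50 <= E[X2] <= 60, no independence assumption.
# The unknowns are joint probability masses on a bounded grid.
x1s = np.linspace(0.0, 200.0, 21)
x2s = np.linspace(0.0, 200.0, 21)
g1, g2 = np.meshgrid(x1s, x2s, indexing="ij")
x1, x2 = g1.ravel(), g2.ravel()

h = (np.minimum(x1, x2) >= 100.0).astype(float)  # gamble I_[100,oo)(min(X1,X2))

A_eq = np.vstack([np.ones_like(x1),              # total mass = 1
                  (x1 <= 10.0).astype(float)])   # P(X1 <= 10) = 0.01
b_eq = np.array([1.0, 0.01])
A_ub = np.vstack([x2, -x2])                      # 50 <= E[X2] <= 60
b_ub = np.array([60.0, -50.0])

lower = linprog(h, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
upper = linprog(-h, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(lower.fun, -upper.fun)  # lower about 0, upper about 0.6
```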
Let us introduce the notion of the imprecise reliability model of the ith component as the set of m_{i} available lower and upper previsions and corresponding gambles

M_{i} = ⟨E_{ij}^{L},E_{ij}^{U},f_{ij}(X_{i}), j=1,...,m_{i}⟩ = ⊗_{j=1}^{m_i} M_{ij} = ⊗_{j=1}^{m_i} ⟨E_{ij}^{L},E_{ij}^{U},f_{ij}(X_{i})⟩.

Our aim is to get the imprecise reliability model M = ⟨E^{L},E^{U},h(g(X))⟩ of the system. This can be done by using the natural extension, which will be regarded as a transformation of the component imprecise models to the system model and denoted ⊗_{i=1}^{n}M_{i} → M. The models in the example considered above are M_{1} = ⟨0.01,0.01,I_{[0,10]}(X_{1})⟩, M_{2} = ⟨50,60,X_{2}⟩, M = ⟨E^{L},E^{U},I_{[100,∞)}(min(X_{1},X_{2}))⟩.
Different forms of optimization problems for computing the system reliability measures are studied in [103]. However, if the number of judgements about the component reliability behavior, ∑_{i=1}^{n}m_{i}, and the number of components, n, are rather large, the optimization problems for computing E^{L}h(g) and E^{U}h(g) cannot be solved in practice due to their extremely large dimensionality. This fact essentially restricts the application of imprecise calculations to reliability analysis. Therefore, simplified algorithms for solving the optimization problems, and analytical solutions of the problems for some special types of systems and initial information, have to be developed. Some effective algorithms are proposed in [88,102,107]. The main idea underlying these algorithms is to decompose the difficult optimization problems (nonlinear in the case of independent components) into several simple linear programming problems whose solution presents no difficulty. For example, in terms of the introduced imprecise reliability models, an algorithm given in [88] allows us to replace the complex transformation ⊗_{i=1}^{n}M_{i} → M by a set of n+1 simple transformations

M_{i} → M_{i}^{0} = ⟨E^{L},E^{U},h(X_{i})⟩, i=1,...,n,

⊗_{i=1}^{n}M_{i}^{0} → M.
The judgements considered above can be classified as direct ones, which are a straightforward way to elicit the imprecise reliability characteristics of interest. Moreover, the condition of independence of components can be classified as a structural judgement. However, the variety of evidence is wider, and other types of initial information have to be pointed out (see Fig. 2).
Comparative judgements are based on comparison of reliability measures concerning one or two components. An example of a comparative judgement related to one component is "the probability of the ith component failure before time t is less than the probability of the same component failure in the time interval [t_{1},t_{2}]". This judgement can be formally represented as E^{L}(I_{[t_1,t_2]}(X_{i}) − I_{[0,t]}(X_{i})) ≥ 0. An example of a comparative judgement related to two components is "the MTTF of the ith component is less than the kth component MTTF", which can be rewritten as E^{L}(X_{k} − X_{i}) ≥ 0. By using the property of previsions E^{U}X = −E^{L}(−X), for instance, the last comparative judgement can be rewritten as E^{U}(X_{i} − X_{k}) ≤ 0. A more detailed description of comparative judgements in reliability analysis can be found in [51,71].
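Numerically, a comparative judgement enters the natural extension as just one more linear constraint on the joint distribution. In the sketch below (our own example, not one from the paper), knowing only that E X_{i} lies in [10, 20] hours together with the judgement E(X_{k} − X_{i}) ≥ 0 forces the lower MTTF of component k up to 10 hours:

```python
import numpy as np
from scipy.optimize import linprog

# Grid of joint values (x_i, x_k); unknowns are joint probability masses.
xi = np.linspace(0.0, 100.0, 21)
xk = np.linspace(0.0, 100.0, 21)
gi, gk = np.meshgrid(xi, xk, indexing="ij")
vi, vk = gi.ravel(), gk.ravel()

A_eq = np.ones((1, vi.size))          # total mass = 1
b_eq = np.array([1.0])
A_ub = np.vstack([vi, -vi,            # 10 <= E[X_i] <= 20
                  vi - vk])           # comparative judgement E[X_k - X_i] >= 0
b_ub = np.array([20.0, -10.0, 0.0])

# Minimize E[X_k] over all joint distributions consistent with the judgements.
res = linprog(vk, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun)  # lower MTTF of component k: 10.0
```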
A lot of reliability measures are based on conditional probabilities (previsions), for example, the failure rate, the mean residual TTF, the probability of residual TTF, etc. Moreover, experts are sometimes able to judge probabilities of outcomes conditionally on the occurrence of other events. The lower and upper residual MTTFs can be formally represented as E^{L}(X−t | I_{[t,∞)}(X)) and E^{U}(X−t | I_{[t,∞)}(X)), where X−t is the residual lifetime. The lower and upper probabilities of residual TTF after time z (lower and upper residual survivor functions) are similarly written as E^{L}(I_{[z,∞)}(X−t) | I_{[t,∞)}(X)) and E^{U}(I_{[z,∞)}(X−t) | I_{[t,∞)}(X)). It should be noted that the imprecise conditional reliability measures may be computed from unconditional ones by using the generalized Bayes rule [109,110]. For example, if the lower E^{L}X and upper E^{U}X MTTFs are known, then the lower and upper residual MTTFs produced by the generalized Bayes rule are max{0, E^{L}X − t} and E^{U}X, respectively. A more detailed description of conditional judgements in reliability analysis can be found in [101].
It should be noted that some additional information about unimodality of lifetime probability distributions may be involved in imprecise calculations [76,78]. In the case of continuous TTF, this information is formalized by means of Khintchine's condition [44] (a distribution unimodal about 0 is the distribution of a product of a uniform random variable on [0,1] and an independent random variable), which accordingly transforms the initial gambles f(x), x > 0.
Some qualitative or quantitative judgements about kurtosis, skewness, and variance can also be involved in the imprecise calculations [76,78]. For example, we may know that the component TTF typically has a flat density function, which is rather constant near zero and very small for larger values of the variable (negative kurtosis). This qualitative judgement can be represented as a set of previsions E^{L}X^{2}=E^{U}X^{2}=h and E^{L}(X^{4} − 3h^{2}) ≤ 0, where h ∈ [inf X^{2}, sup X^{2}]. In this case, the natural extension is viewed as a parametric linear optimization problem with the parameter h.
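For a fixed value of the parameter, say h = 1, the problem becomes an ordinary linear program. The sketch below (our own illustration on a truncated, discretized support) imposes E X² = 1 and the flat-density constraint E X⁴ ≤ 3 on every admissible density and bounds P{X ≥ 2} from above; the continuum optimum of this moment problem is 2/11 ≈ 0.18:

```python
import numpy as np
from scipy.optimize import linprog

# Upper probability of {X >= 2} given E[X^2] = 1 and the negative-kurtosis
# style constraint E[X^4] <= 3 * (E[X^2])^2 = 3.
xs = np.linspace(0.0, 3.0, 301)                 # discretized support (assumed)
obj = (xs >= 2.0).astype(float)                 # gamble I_[2,oo)(X)

A_eq = np.vstack([np.ones_like(xs), xs ** 2])   # total mass = 1, E[X^2] = 1
b_eq = np.array([1.0, 1.0])
A_ub = (xs ** 4)[np.newaxis, :]                 # E[X^4] <= 3
b_ub = np.array([3.0])

res = linprog(-obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(-res.fun)  # close to the exact continuum optimum 2/11
```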
Experts are often asked about k%-quantiles of the TTF X, i.e., they supply points x_{i} such that Pr{X ≤ x_{i}} = k/100. As pointed out in [24], experts are better at supplying intervals than point values because their knowledge is not only of limited reliability, but also imprecise. In other words, experts provide some intervals of quantiles in the form [x_{i}^{L},x_{i}^{U}]. This information can be formally written as

E^{U}I_{[0,x_i^{L}]}(X) ≤ k/100 ≤ E^{L}I_{[0,x_i^{U}]}(X).
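For example (a sketch of ours with a hypothetical expert judgement), an interval [30, 50] for the median translates into the two linear constraints Pr{X ≤ 30} ≤ 0.5 and Pr{X ≤ 50} ≥ 0.5, after which the natural extension bounds any other probability of interest:

```python
import numpy as np
from scipy.optimize import linprog

# Expert judgement: the 50%-quantile (median) of the TTF lies in [30, 50] h.
xs = np.linspace(0.0, 200.0, 201)     # discretized, truncated support (assumed)
obj = (xs <= 20.0).astype(float)      # gamble I_[0,20]: P(failure before 20 h)

A_eq = np.ones((1, xs.size))
b_eq = np.array([1.0])
A_ub = np.vstack([(xs <= 30.0).astype(float),     # P(X <= 30) <= 0.5
                  -(xs <= 50.0).astype(float)])   # P(X <= 50) >= 0.5
b_ub = np.array([0.5, -0.5])

lo = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
hi = linprog(-obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(lo.fun, -hi.fun)  # lower 0.0, upper 0.5
```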
Sometimes, in order to restrict the set of possible distribution functions of TTF in the considered optimization problems and to formalize judgements about the ageing aspects of lifetime distributions, various nonparametric or semiparametric classes of probability distributions are used. In particular, the classes of all IFRA (increasing failure rate average) and DFRA (decreasing failure rate average) distributions are studied in [6]. Flexible classes of distributions, the so-called H(r,s) classes, have been investigated in [39,95,97,99].
A system is called monotone if it does not become better after a failure of a component. Various results have been obtained for computing the reliability measures of typical monotone systems under some special types of initial information.
Some results concerning the reliability of typical systems are given in [45,46]. If the initial information about the reliability of components is restricted to lower and upper MTTFs, then the lower and upper system MTTFs have been obtained in the explicit form for series and parallel systems [70,93]. The MTTFs of cold-standby systems have been obtained in [38,98]. Cold-standby systems do not belong to the class of monotone systems; nevertheless, we consider them as typical ones. It is worth noticing that explicit expressions have been proposed both for the case of independent components and for the case of a lack of information about independence, including, for example, the lower and upper MTTFs of a series system consisting of n components.
Suppose that the probability distribution functions of the component TTFs X_{i} are known only at some points, i.e., the available initial information is represented in the form of lower E^{L}I_{[0,t_{ij}]}(X_{i}) and upper E^{U}I_{[0,t_{ij}]}(X_{i}) previsions, i=1,...,n, j=1,...,m_{i}. Here t_{ij} is the jth point of the ith component TTF. Then explicit expressions for the lower and upper probabilities of system failure before some time t have been obtained for series, parallel [86], m-out-of-n [89], and cold-standby [82] systems; in particular, the lower and upper probabilities of failure of an n-component parallel system before time t with independent components have been determined in the explicit form.
General expressions for the reliability of arbitrary monotone systems are given in [80].
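The reasoning behind such pointwise-information expressions can be sketched as follows (our own reconstruction of the idea, not the formulas of [80,86,89]): the known distribution values extend to an arbitrary time t by monotonicity of distribution functions, and for a parallel system with independent components the failure probabilities multiply:

```python
def failure_prob_bounds(points, t):
    """Bounds on P(X <= t) when the distribution function is known only at
    some points; `points` maps t_ij -> (lower, upper) bounds on P(X <= t_ij).
    Monotonicity: any point below t gives a lower bound, any above an upper."""
    lower = max([lo for tij, (lo, up) in points.items() if tij <= t], default=0.0)
    upper = min([up for tij, (lo, up) in points.items() if tij >= t], default=1.0)
    return lower, upper

def parallel_bounds(components, t):
    """A parallel system fails before t iff every component does; with
    independent components the component failure probabilities multiply."""
    lo = up = 1.0
    for points in components:
        l, u = failure_prob_bounds(points, t)
        lo *= l
        up *= u
    return lo, up

# Two components, each with P(X <= 5) in [0.1, 0.2], P(X <= 15) in [0.4, 0.6]:
comp = {5.0: (0.1, 0.2), 15.0: (0.4, 0.6)}
print(parallel_bounds([comp, comp], 10.0))  # lower 0.1*0.1, upper 0.6*0.6
```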
Let L be a set representing levels of component performance, ranging from perfect functioning, sup L, to complete failure, inf L. A general model of the structure function of a system consisting of n multi-state components was considered in [56]. It can be written as S: L^{n} → L. If L={0,1}, we have a classical binary system; if L={0,1,...,m}, we have a multi-state system; if L=[0,T], T ∈ R_{+}, we have a continuum system. The ith component may be in a state x_{i}(t) at arbitrary time t. This implies that the component is described by the random process {x_{i}(t), t ≥ 0}, x_{i}(t) ∈ L. Then the probability distribution function of the ith component states at time t is defined as the mapping F_{i}: L → [0,1] such that F_{i}(r,t) = Pr{x_{i}(t) ≥ r}, for all r ∈ L. The state of the system at time t is determined by the states of its n components, i.e., S(X) = S(x_{1},...,x_{n}) ∈ L.
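For instance (a small sketch of ours), the structure functions of series and parallel systems extend verbatim to the multi-state setting:

```python
# Multi-state structure functions S: L^n -> L for two typical systems.
# In a series system the worst component state decides; in a parallel
# system the best one does. With L = {0, 1} these reduce to the classical
# binary AND / OR structure functions.

def series(states):
    return min(states)

def parallel(states):
    return max(states)

print(series([2, 0, 3]), parallel([2, 0, 3]))  # 0 3
```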
The mean level of component performance is defined as E{x_{i}(t)}; for a system, we write the mean level of system performance E{S(X)}. Suppose that the probability distributions of the component states are unknown and we have only partial information in the form of lower E^{L}{x_{i}(t)} and upper E^{U}{x_{i}(t)} mean levels of component performance. It is proved in [96] that, in this case, the number of states does not influence the mean level of system performance, which is determined only by the boundary states inf L and sup L. This implies that reliability analysis of multi-state and continuum-state systems with such initial data is reduced to the analysis of a binary system. A number of expressions have been obtained in the explicit form in [96].
At the same time, incomplete information about the reliability of multi-state and continuum-state components can be represented as a set of reliability measures (precise or imprecise) defined for different time moments. For example, interval probabilities of some states of a multi-state unit at time t_{1} may be known. How can the probabilities of states at time t_{2} be computed without any information about the probability distributions of the times to transition between states? This problem has been solved in [104].
Fault tree analysis (FTA) is a logical and diagrammatic method to evaluate the probability of an accident resulting from sequences and combinations of faults and failure events. Fault tree analysis can be regarded as a special case of event tree analysis. A comprehensive study of event trees by representing initial information in the framework of convex sets of probabilities has been proposed by Cano and Moral [16]. Therefore, this work may be a basis for investigating fault trees. One of the advantages of imprecise fault tree analysis is the possibility of treating dependent events in a straightforward way.
Another substantial question is the influence of events in a fault tree on the top event, and the influence of uncertainty in the event descriptions on the uncertainty of the top event description. This may be studied by introducing and computing importance measures of events and uncertainty importance measures of their descriptions. However, a comprehensive study of this question is absent.
Reliability analysis of repairable systems is a most difficult computational task even with precise initial information. A simple repairable process with instantaneous repair (the time to repair is equal to 0) under a lack of information about independence of the random TTFs X_{i} has been studied in [98]. According to this work, if we know the lower and upper MTTFs of a system, then the time-dependent lower B^{L}(t) and upper B^{U}(t) mean times between failures (MTBF) before time t can be obtained in the explicit form; in particular, B^{L}(t)=0.
Another simple model of repairable systems, based on interval-valued Markov chains, has been considered in [47,49]. Some special tasks of optimal preventive maintenance under incomplete information can be found in [8]. A rather general approach to reliability analysis of repairable systems, proposed by Gurov and Utkin, is to substitute the optimal density functions of TTF and time to repair, which are weighted sums of Dirac functions [103], into the integral equations mathematically describing arbitrary repairable systems, and to solve the obtained optimization problems. However, this approach leads to extremely complex nonlinear optimization problems. Therefore, an efficient and practical approach to reliability analysis of repairable systems remains an open problem.
A probabilistic model of structural reliability was introduced by Freudenthal [31]. Following his work, a number of studies have been carried out to compute the probability of failure under different assumptions about initial information. Briefly, the problem of structural reliability can be stated as follows. Let Y be a random variable describing the strength (resistance) of a system, and let X be a random variable describing the stress or load placed on the system. System failure occurs when the stress exceeds the strength: F = {(x,y): x ≥ y}. Here F is the region where the combination of system parameters leads to an unacceptable or unsafe system response. Then the reliability of the system is determined as R = Pr{X ≤ Y}.
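Under the purely illustrative assumption that the stress and strength have known normal distributions (our own numbers, chosen only to make the definition concrete), R can be sketched as a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
stress = rng.normal(30.0, 5.0, 100_000)    # X: load placed on the system
strength = rng.normal(50.0, 5.0, 100_000)  # Y: resistance of the system
R = np.mean(stress <= strength)            # reliability R = Pr{X <= Y}
print(R)  # close to Phi(20 / sqrt(50)), about 0.998
```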
Several authors [4,55,116] used the fuzzy set and possibility theories [25] to cope with the lack of complete statistical information about the stress and strength. The main idea of their approaches is to consider the stress and strength as fuzzy variables [57] or fuzzy random variables [53]. The authors argued that the assessment of structural parameters is both objective and subjective in character, and that the best way of describing the subjective component is fuzzy sets. Another approach to structural reliability, based on the random set theory [37], has been proposed in [30,40,64]. More general structural problems solved by means of the random set theory have been considered in [66,67].
A more general approach to structural reliability analysis was proposed in [105,106]. It allows us to utilize and combine a much wider class of partial information about structural parameters, which includes possible data about probabilities of arbitrary events, expectations of the random stress and strength and of their functions, and moments. Comparative judgements and information about independence, or the lack of it, of the random stress and strength can also be incorporated in the framework of this approach. At the same time, this approach allows us to avoid additional assumptions about the probability distributions of the random parameters, because the identification of precise probability distributions requires more information than experts or incomplete statistical data are able to supply. For example, interval-valued probabilities of events concerning the stress and strength can be used as initial data.
However, there are cases when the types of the probability distributions of the stress and strength are known, for example, from their physical nature, but the parameters of the distributions are defined by experts. If experts provide possible intervals of parameters and these experts are absolutely reliable, i.e., they always provide true assessments, then the problem of computing the structural reliability is reduced to well-known interval analysis. In reality, there is some degree of belief in each expert's judgement, whose value is determined by the experience and competence of the expert. Therefore, it is necessary to take into account the available information about the experts in order to obtain more credible assessments of the stress-strength reliability. An approach for computing the stress-strength reliability under these conditions is considered in [79].
Software error occurrence phenomena have been studied extensively in the literature with the objective of improving software performance [13,114]. In the last decades, various software reliability models have been developed based on testing or debugging processes, but no model can be trusted to be accurate at all times. This is due to the unrealistic assumptions in each model. A comprehensive critical review of probabilistic software reliability models (PSRMs) was given by Cai et al. [14]. The authors argued that fuzzy software reliability models (FSRMs) should be developed in place of PSRMs because the software reliability behavior is fuzzy in nature as a result of the uniqueness of software. This point is explained in three ways. First, any two copies of a piece of software exhibit no differences. Second, software never experiences performance deterioration without external intervention. Third, a software debugging process is never replicated. Obviously, the uniqueness of software violates the probabilistic conditions that a large sample is available and that sample data are repetitive in the probability sense. In addition, a large variety of factors contributes to the failures of the existing PSRMs. To predict software reliability from debugging data, it is necessary to simultaneously take account of test cases, characteristics of the software, human intervention, and debugging data. It is impossible to model all four aspects precisely because of the extremely high complexity behind them [14].
To address the problems described above, Cai et al. [15] proposed a simple FSRM (Cai's model) and validated it. The central concept in this FSRM is Nahmias' fuzzy variable [57], i.e., the time intervals between software failures are taken as fuzzy variables governed by a membership function [25]. Another fuzzy model was proposed in [32]. Some extensions of Cai's FSRM taking into account the programmer's behavior (the possibility of error removal and introduction) have been made by Utkin et al. [108,100]. Combined fuzzy-probabilistic models have also been proposed in [108].
It turns out that the available PSRMs and FSRMs can be incorporated into more general software reliability models called imprecise software reliability models (ISRMs) [77,85], based on the theory of imprecise probabilities. Suppose that we have a complex PSRM which takes into account most factors of the software reliability behavior. Obviously, it is difficult to expect that the obtained data are stable from the statistical point of view and that the random variables characterizing the times to software failure are governed by one particular probability distribution, even with different parameters. Moreover, it is difficult to expect that the random TTFs are independent [14]. Therefore, a family of uncountably many probability distributions constrained by some lower and upper distributions must be incorporated into the PSRM. Such a family of probability distributions can be described mathematically by the theory of imprecise probabilities.
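A family of distributions constrained by lower and upper distribution functions can be manipulated directly. The sketch below is a hypothetical illustration (not a model from [77,85]): the unknown time-to-failure CDF is only known to lie between two exponential CDFs, and lower and upper probabilities of failure events follow from the bounding functions; the two bounding rates are assumptions:

```python
from math import exp

def expo_cdf(t, rate):
    """CDF of an exponential distribution with the given failure rate."""
    return 1.0 - exp(-rate * t)

# Hypothetical bounding rates (per hour): the true TTF distribution F satisfies
# expo_cdf(t, RATE_LOW) <= F(t) <= expo_cdf(t, RATE_UP) for all t.
RATE_LOW, RATE_UP = 0.01, 0.05

def failure_prob_bounds(t):
    """Lower/upper probability of a failure occurring by time t."""
    return expo_cdf(t, RATE_LOW), expo_cdf(t, RATE_UP)

def interval_prob_bounds(a, b):
    """Bounds on P(a < T <= b) valid for every distribution in the family:
    the lower bound pairs the smallest mass at b with the largest mass at a."""
    lower = max(0.0, expo_cdf(b, RATE_LOW) - expo_cdf(a, RATE_UP))
    upper = min(1.0, expo_cdf(b, RATE_UP) - expo_cdf(a, RATE_LOW))
    return lower, upper

lo, hi = failure_prob_bounds(10.0)
print(lo, hi)
```

The width of the interval [lo, hi] directly reflects the imprecision of the available information about the debugging process.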
ISRMs can be regarded as a generalization of well-known probabilistic and possibilistic models. Moreover, they allow us to explain some peculiarities of known models, for example, the role of the independence condition for times to software failure, which is often hidden or can be explained only intuitively. For example, the ISRM explains why FSRMs, as stated in [14], allow us to take into account many factors influencing the software reliability. At the same time, PSRMs and FSRMs can be regarded as boundary cases. Indeed, rather rigid and often unrealistic assumptions are introduced in PSRMs, namely, that the times to software failure are independent and governed by one distribution. In FSRMs, the widest class of possible distributions of the times to software failure is considered and no information about independence is assumed. Obviously, the golden mean (ISRMs) should be sought between these bounds.
Human reliability [41,42] is defined as the probability that a human operator performs required tasks correctly in the required conditions and does not undertake tasks which may degrade the controlled system. Human reliability analysis aims at assessing this probability. A number of papers are devoted to fuzzy or possibilistic descriptions of human reliability behavior [61]. Human behavior has also been described by means of evidence theory [63]. Cai [12] noted the following factors of human reliability behavior contributing to its fuzziness:
These factors can also be attributed to imprecision. This implies that the imprecise probability theory might be successfully applied to human reliability analysis. Moreover, the behavioral interpretation of lower and upper previsions is particularly suitable for describing human behavior. However, a systematic study of this problem has not yet been carried out.
The risk of an unwanted event is defined as the probability of the occurrence of this event multiplied by its consequences. The consequences include financial cost, elapsed time, etc. If the number of events is large, then risk is defined as the expectation of the consequences. It very often happens that the probability distributions cannot be determined exactly, either due to measurement imperfections or due to more fundamental reasons, such as insufficient available information. In practice, it is unlikely that enough data about unwanted events can be collected to use precise probabilities correctly in risk analysis. Moreover, the risk assessments may come from various sources and differ fundamentally in kind. In this case it makes sense to speak of a set of possible probability distributions consistent with the available information and of their lower and upper bounds. As a result, we have the minimal and maximal values of risk, which can be regarded as lower and upper previsions of the consequences [73]. Another model of risk under partial information about consequences in the form of interval probabilities has been proposed in [115]. Some methods of handling partial information in risk analysis have been investigated in [28] and in [26].
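When the partial information takes the form of probability intervals for a finite set of unwanted events, the minimal and maximal values of the risk can be computed exactly. The sketch below uses a standard greedy argument (it is an illustration with hypothetical numbers, not the specific method of [73] or [115]): starting from the lower probability bounds, the free probability mass is assigned to the cheapest outcomes for the lower prevision and to the costliest outcomes for the upper prevision:

```python
def expectation_bounds(consequences, p_low, p_high):
    """Lower/upper expected consequence over all probability vectors p with
    p_low[i] <= p[i] <= p_high[i] and sum(p) == 1.  Assumes the credal set
    is non-empty, i.e. sum(p_low) <= 1 <= sum(p_high)."""
    def extreme(maximize):
        p = list(p_low)
        slack = 1.0 - sum(p)
        # Distribute the remaining mass greedily over the outcomes,
        # ordered by consequence (ascending for the infimum, descending
        # for the supremum).
        order = sorted(range(len(consequences)),
                       key=lambda i: consequences[i], reverse=maximize)
        for i in order:
            add = min(slack, p_high[i] - p_low[i])
            p[i] += add
            slack -= add
        return sum(pi * ci for pi, ci in zip(p, consequences))
    return extreme(False), extreme(True)

# Hypothetical unwanted events with interval probabilities
costs = [0.0, 100.0, 1000.0]                 # consequences of each event
pl, ph = [0.5, 0.1, 0.0], [0.9, 0.4, 0.1]    # probability intervals
risk_low, risk_up = expectation_bounds(costs, pl, ph)
print(risk_low, risk_up)
```

The two numbers are exactly the lower and upper previsions of the consequences over the credal set defined by the intervals.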
Security engineering is concerned with whether a system can survive accidental or intentional attacks from outside (e.g., from users or virus intruders). In particular, computer security deals with the social regulations, managerial procedures and technological safeguards applied to computer hardware, software and data to guard against accidental or deliberate unauthorized access to, and dissemination of, computer system resources (hardware, software, data) while they are in storage, processing or communication [43]. One of the most important problems in security engineering is the quantitative evaluation of security efficiency. A very interesting and valuable approach to measuring and predicting the operational security of a system was proposed by Brocklehurst et al. [10]. According to this approach, the behavior of a system should be considered from the owner's and the attacker's points of view. From the attacker's point of view, it is necessary to consider the effort expended by the attacking agent and the reward an attacker would get from breaking into the system. Effort includes financial cost, elapsed time, and the experience and ability of the attacker, and could be expressed in such terms as the mean effort to the next security breach, the probability of successfully resisting an attack, etc. Examples of rewards are personal satisfaction, monetary gain, etc. From the owner's point of view, it is necessary to consider the system owner's loss, which can be interpreted as an infimum selling price for a successful attack, and the owner's expenses on security means, which include, for instance, antivirus programs, new passwords, encoding, etc. The expenses come out in terms of the time used for system verification and for maintenance of antivirus software, as well as in terms of the money spent on protection. The expenses can be interpreted as a supremum buying price for a successful attack. Brocklehurst et al. [10] also proposed to consider the viewpoint of an all-knowing, all-seeing oracle, in addition to those of the owner and the attacker. This viewpoint could be regarded as being, in a sense, the `true' security of the system in the testing environment.
From the above, we can say that four quantities form the basis for obtaining the security measures: effort, rewards, the system owner's loss, and the owner's expenses. Moreover, their interpretation coincides with the behavioral interpretation of lower previsions (the expenses), upper previsions (the system owner's loss), and linear previsions (the all-knowing oracle). A detailed description of an imprecise security model has been proposed in [72,108].
Natural extension is a powerful tool for analyzing system reliability on the basis of available partial information about component reliability. However, it has a disadvantage. Let us imagine that two experts provide the following judgements about the MTTF of a component: (1) the MTTF is not greater than 10 hours; (2) the MTTF is not less than 10 hours. The natural extension produces the resulting MTTF [0,10]∩[10,∞)={10}, i.e., exactly 10 hours. In other words, an absolutely precise MTTF is obtained from extremely imprecise initial data. This is unrealistic in the practice of reliability analysis. The reason for such results is that the probabilities of the judgements are assumed to be 1. If we assign different probabilities to the judgements, then we obtain more realistic assessments. For example, if the belief in each judgement is 0.5, then, according to [48], the resulting MTTF is greater than 5 hours. Therefore, in order to obtain accurate and realistic system reliability assessments, it is necessary to take into account some vagueness of the information about the component reliability measures, i.e., to assume that expert judgements and statistical information about the reliability of a system or its components may be unreliable. This leads to the study of second-order uncertainty models (hierarchical uncertainty models), on which much attention has been focused due to their generality. These models describe the uncertainty of a random quantity by means of two levels. For example, suppose that an expert provides a judgement about the mean level of component performance [96]. If this expert sometimes provides incorrect judgements, we have to take into account some degree of belief in this judgement. In this case, the information about the mean level of component performance is considered on the first level of the hierarchical model (first-order information) and the degree of belief in the expert judgements is considered on the second level (second-order information).
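The MTTF example admits a very simple numeric reading: if each expert interval holds only with its stated belief, a cautious lower bound on the MTTF is the belief-weighted mixture of the interval lower endpoints. The sketch below reproduces the 5-hour bound from the example; this is only an illustrative reading of the result, and the full second-order machinery of [48] is more involved:

```python
def mttf_lower_bound(intervals, beliefs):
    """Cautious lower bound on the MTTF: a belief-weighted mixture of the
    lower endpoints of the expert intervals (beliefs are assumed to sum
    to 1).  Reproduces the example in the text: beliefs 0.5/0.5 applied to
    [0, 10] and [10, inf) give a bound of 5 hours."""
    return sum(b * lo for (lo, _), b in zip(intervals, beliefs))

INF = float("inf")
# Expert 1: MTTF in [0, 10]; expert 2: MTTF in [10, inf); belief 0.5 each
print(mttf_lower_bound([(0.0, 10.0), (10.0, INF)], [0.5, 0.5]))
```

Note that with beliefs equal to 1 the model collapses back to the intersection of the intervals, i.e., to the unrealistically precise value 10.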
Many papers are devoted to the theoretical [22,36,58,111] and practical [27,33,59] aspects of second-order uncertainty models. It should be noted that second-order uncertainty models have been studied in reliability. Lindqvist and Langseth [54] investigated monotone multistate systems under the assumption that the probabilities of the component states (first-order probabilities) can be regarded as random variables governed by the Dirichlet probability distribution (second-order probabilities). A comprehensive review of hierarchical models is given in [23], where it is argued that the most common hierarchical model is the Bayesian one [9,34,35]. At the same time, the Bayesian hierarchical model is unrealistic in problems where only partial information about the system behavior is available.
Most proposed second-order uncertainty models assume that there is a precise second-order probability distribution (or possibility distribution). Moreover, most models use probabilities as the first-level uncertainty description. Unfortunately, such information is often absent in applications, and additional assumptions may lead to inaccurate results. A study of some tasks related to homogeneous second-order models without any assumptions about probability distributions was presented by Kozine and Utkin in [48,50]. However, these models are of limited use due to the homogeneity of the gambles considered on the first-order level. A hierarchical uncertainty model for combining different types of evidence was proposed by Utkin [75,83], where the second-order probabilities can be regarded as confidence weights and the first-order uncertainty is modelled by lower and upper previsions of different gambles. However, the proposed model [75,83] supposes that initial information is given only for one random variable. At the same time, reliability applications suppose that there is a set of random variables (component TTFs) described by a second-order uncertainty model, and it is necessary to find a model for some function of these variables (the system TTF).
Suppose that we have a set of weighted expert judgements related to some measures Ef_{ij}(X_{i}) of the component reliability behavior, i=1,...,n, j=1,...,m_{i}, i.e., there are lower and upper previsions E^{L}f_{ij} and E^{U}f_{ij}. Suppose that each expert is characterized by an interval of probabilities [g_{ij}^{L},g_{ij}^{U}]. Then the judgements can be represented as the second-order statements

g_{ij}^{L} <= Pr{E^{L}f_{ij} <= Ef_{ij}(X_{i}) <= E^{U}f_{ij}} <= g_{ij}^{U}.

An imprecise hierarchical reliability model of systems has been studied by Utkin [90]. This model supposes that there is no information about the independence of components. A model taking the possible independence of components into account leads to hard nonlinear optimization problems. However, this difficulty can be overcome by means of the approaches proposed in [84,91]. Some hierarchical reliability models taking into account the imprecision of the parameters of known lifetime distributions are investigated in [81,87].
Many new results have been obtained in applying the imprecise probability theory to reliability analysis of various systems. The imprecise reliability theory develops step by step with every such result. However, the state of the art is only the visible tip of the iceberg called imprecise reliability theory, and there are many open theoretical and practical problems which should be solved in the future. Let us note some of them.
It is obvious that modern systems and equipment are characterized by complex structures and a variety of initial information. This implies that, on the one hand, it is impossible to fit all features of a real system into the considered framework. On the other hand, introducing additional assumptions in order to construct a reasonable model of a system may cancel all the advantages of imprecise probabilities. What are the limits for introducing additional assumptions (simplifications) in the construction of a model? How do possible changes in the imprecision of the initial data influence the results of the system reliability calculations? Obviously, these questions relate to the informational aspect of imprecise reliability. The same can be said about the need to study the effects of possible estimation errors in the initial data on the resulting reliability measures. This leads to introducing and determining uncertainty importance measures.
Another important point is how to solve the optimization problems if the function h(g(X)) is not expressed analytically in explicit form and can be computed only numerically. For example, this function may be a system of integral equations (repairable systems). One way to solve the corresponding optimization problems is the well-known simulation technique. However, the development of effective simulation procedures for solving the considered optimization problems remains an open problem.
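A crude form of such a simulation procedure is easy to sketch when the family of distributions is parametric: estimate the reliability measure by Monte Carlo at the corners of the parameter intervals and take the envelope of the estimates. The example below assumes, purely for illustration, exponential component TTFs and a series system (for which reliability is monotone in each failure rate, so the corners suffice); the hard case discussed in the text is when h(g(X)) itself must be computed numerically:

```python
import random
from itertools import product

def system_works(ttfs, mission_time):
    """Series system: it survives the mission iff every component does."""
    return all(t > mission_time for t in ttfs)

def mc_reliability(rates, mission_time, n=20000, seed=0):
    """Crude Monte Carlo estimate of the system reliability for
    exponentially distributed component TTFs with the given rates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ttfs = [rng.expovariate(r) for r in rates]
        hits += system_works(ttfs, mission_time)
    return hits / n

def reliability_envelope(rate_intervals, mission_time):
    """Scan the corners of the rate intervals and return the min/max
    simulated reliability estimates."""
    estimates = [mc_reliability(list(corner), mission_time)
                 for corner in product(*rate_intervals)]
    return min(estimates), max(estimates)

# Hypothetical rate intervals for two components, 10-hour mission
lo, hi = reliability_envelope([(0.01, 0.02), (0.005, 0.01)], 10.0)
print(lo, hi)
```

For non-monotone measures or numerically defined h(g(X)), the corner scan is no longer sufficient, which is precisely why effective simulation procedures remain an open problem.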
Most results of imprecise reliability assume either the strong independence of components or the lack of any information about independence. However, the imprecise probability theory allows us to take into account more subtle types of independence [20,29,52] and, thereby, to make reliability analysis more flexible and adequate. Therefore, a clear interpretation of the independence concepts in terms of the reliability theory is also an open problem, which has to be solved in the future.
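The practical effect of the independence assumption is easy to see on a two-component series system with interval component reliabilities: strong independence gives the product of the intervals, while complete ignorance about the interaction gives the wider Fréchet bounds. A minimal sketch with hypothetical numbers:

```python
def series_bounds_independent(p1, p2):
    """Bounds on P(both components work) under strong independence,
    with interval component reliabilities p = (lower, upper)."""
    return p1[0] * p2[0], p1[1] * p2[1]

def series_bounds_unknown(p1, p2):
    """Frechet bounds: no assumption at all about the joint behaviour
    of the two components."""
    return max(0.0, p1[0] + p2[0] - 1.0), min(p1[1], p2[1])

p1, p2 = (0.90, 0.95), (0.80, 0.85)
print(series_bounds_independent(p1, p2))  # narrower interval
print(series_bounds_unknown(p1, p2))      # wider interval
```

The intermediate independence concepts mentioned above would produce intervals lying between these two extremes, which is exactly what makes their reliability-theoretic interpretation worth clarifying.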
In spite of the fact that many algorithms and methods for reliability analysis of various systems have been developed, they are rather theoretical and cover some typical systems, typical initial evidence and typical situations. At the same time, real systems are more complex. This has been shown by the challenge problems posed in [60] and discussed at the Epistemic Uncertainty Workshop organized by Sandia National Laboratories, Albuquerque, New Mexico, 2002 ( http://www.sandia.gov/epistemic ) and at the Workshop on the Application of Fuzzy Sets and Fuzzy Logic to Engineering, Pertisau, Austria, 2002 ( http://techmath.uibk.ac.at/research/fuzzy/workshop ). Therefore, practical approaches to analyzing real systems (perhaps approximately) have to be developed.
In order to achieve a required level of system reliability at minimal cost, the redundancy optimization technique is usually used. The number of redundant components in a system is determined by the required level of reliability and by the component reliability. Various algorithms for determining the optimal number of redundant components are available in the literature. However, most results assume that there exists complete information about reliability. Therefore, the development of efficient optimization algorithms under partial information is also an open problem.
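One guaranteed-side version of the redundancy problem under partial information is straightforward: when only a lower bound on the component reliability is known, the number of parallel components needed to guarantee a required level follows from the worst case. A minimal sketch (parallel redundancy and the numbers are assumptions for illustration; real cost-constrained optimization is harder):

```python
def min_redundancy(p_lower, r_required):
    """Smallest number of parallel components guaranteeing the required
    system reliability when only a lower bound p_lower on the component
    reliability is known.  Since 1 - (1 - p)**n is increasing in p, the
    guaranteed level is attained at p_lower."""
    n, r = 1, p_lower
    while r < r_required:
        n += 1
        r = 1.0 - (1.0 - p_lower) ** n
    return n

# Component reliability at least 0.9; require system reliability 0.999
print(min_redundancy(0.9, 0.999))
```

The open problem is the general case: minimal-cost redundancy allocation across several subsystems when the component information consists of previsions or interval judgements rather than a single lower bound.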
A similar problem is product quality control, which requires a trade-off between better product quality and lower production costs subject to system constraints related to operating feasibility, product specifications, safety and environmental issues. Here the results obtained by Augustin [1,2] concerning decision making under partial information about the probabilities of the states of nature may be a basis for investigating this problem.
It should be noted that this list of open problems could be extended. However, most of the problems can be partially reduced to methods of solving the considered optimization problems (natural extension) under different conditions.