Imprecise reliability: An introductory review

Lev V. Utkin
Institute of Statistics, Munich University, Germany;
and
Department of Computer Science,
St. Petersburg Forest Technical Academy, Russia

e-mail: lvu@utkin.usr.etu.spb.ru, utkin@stat.uni-muenchen.de


Abstract

The main aim of the paper is to define what imprecise reliability is and what problems can be solved within its framework. From this point of view, various branches of reliability analysis are considered, including the analysis of monotone systems, repairable systems, multi-state systems, structural reliability, software reliability, human reliability, and fault tree analysis. The various types of initial information used in imprecise reliability are reviewed. Some open problems are given in the conclusion.


1  Why imprecise probabilities in reliability analysis


A lot of methods and models of the classical reliability theory assume that all probabilities are precise, that is, that every probability involved is perfectly determinable. Moreover, it is usually assumed that there exists some complete probabilistic information about the system and component reliability behavior. The completeness of the probabilistic information means that two conditions must be fulfilled:

  1. all probabilities or probability distributions are known or perfectly determinable;

  2. the system components are independent, i.e., all random variables describing the component reliability behavior are independent.

The precise system reliability measures can always (at least theoretically) be computed if both conditions are satisfied (it is assumed here that the system structure is defined precisely and that there exists a function linking the system time to failure (TTF) and the TTFs of the components). If at least one of the conditions is violated, then only interval reliability measures can be obtained. In reality, it is difficult to expect that the first condition is fulfilled. If the information we have about the functioning of components and systems is based on a statistical analysis, then a probabilistic uncertainty model should be used in order to mathematically represent and manipulate that information. However, the reliability assessments that are combined to describe systems and components may come from various sources. Some of them may be objective measures based on relative frequencies or on well-established statistical models. Another part of the reliability assessments may be supplied by experts. If a system is new or exists only as a project, then in many cases there are not sufficient statistical data. Even if such data exist, we do not always observe their stability from the statistical point of view. Moreover, the failure time may not be accurately observed, or may even be missed. Sometimes failure does not occur, or occurs only partially, and we get a censored observation of the failure time. As a result, only partial information about the reliability of the system components may be available, for example, the mean time to failure (MTTF) or bounds for the probability of failure at a given time. Of course, one can always assume that the TTF has a certain distribution, for example, exponential or normal. However, how can one trust the results of reliability analysis if this assumption is based only on our own or an expert's experience?

It is difficult to expect that the components of many systems are independent. Let us consider two programs functioning in parallel (two-version programming). If these programs were developed by means of the same programming language, then possible errors in the language's library of standard functions produce dependent faults in both programs. Several experimental studies show that the assumption of independence of failures between independently developed programs does not hold. Moreover, the main difficulty here is that the degree of dependency is unknown. Similar examples can be given for various applications. This implies that the second condition for complete information is also violated, and it is impossible to obtain precise reliability measures for a system.

One of the tools for coping with the imprecision of available information in reliability analysis is the fuzzy reliability theory [11,12,21,92,94]. However, the framework of this theory does not cover a large variety of possible judgements in reliability. Moreover, it requires assuming a certain type of possibility distribution of the TTF or the time to repair, which may be unreasonable in a wide range of cases. Another approach to reliability analysis under incomplete information, based on the random set and evidence theories [37,62], has been proposed in [3,40,65]. The random set theory provides us with an appropriate mathematical model of uncertainty when the information is not complete, or when the result of each observation is not point-valued but set-valued, so that it is not possible to assume the existence of a unique probability measure. However, this approach also does not cover all possible judgements in reliability.

To overcome these difficulties, Gert de Cooman proposed to use the theory of imprecise probabilities (also called the theory of lower previsions [109,110], the theory of interval statistical models [52], or the theory of interval probabilities [112,113]), which may be the most powerful and promising tool for reliability analysis and whose general framework is provided by upper and lower previsions.

It is necessary to note that the idea of using some aspects of the imprecise probability theory in reliability analysis has already been considered in the literature. For example, Barlow and Proschan [5,6] considered the case of a lack of information about the independence of components and the non-parametric interval reliability analysis of ageing classes of TTF distributions. Barzilovich and Kashtanov [8] solved some problems of optimal preventive maintenance under incomplete information. Coolen and Newby [17,18,19] have shown how the commonly used concepts of reliability theory can be extended in a sensible way and combined with prior knowledge through the use of imprecise probabilities. However, they mainly study methods for developing parametric models of lifetimes. Some examples of the successful application of imprecise probabilities to reliability analysis can be found in [38,98].

Let us consider the following examples. Suppose that the following information about the components of a two-component series system is available: the MTTF of the first component is 10 hours, and the probability of the second component failing before 2 hours is 0.01. The reliability of the system cannot be determined by means of the conventional reliability theory because the probability distributions of the TTFs are unknown, and any assumption about a certain probability distribution of TTF may lead to incorrect results. However, this problem can be solved by using imprecise probabilities.

Suppose that we analyze a system whose n-1 components are described by precise probability distributions of TTFs with precisely known parameters, but the information about one of the components, say the n-th one, is partial; for example, we know only the probability of its failure before time tn. If the probability of the system failure before time t0 has to be found then, according to [88], the precision of the desired solution is hardly influenced by the information about the n-1 precisely described components and is mainly determined by the information about the n-th "imprecise" component. Hence, the precise distributions are useless in this case, and the imprecision of the information about one of the components may cancel out the complete information about the other components. The imprecise probability theory allows us to explain this example and to avoid possible errors in reliability analysis.

The following virtues of the imprecise probability theory can be pointed out:

  1. It is not necessary to make assumptions about probability distributions of random variables characterizing the component reliability behavior (TTFs, numbers of failures in a unit of time, etc.).

  2. The imprecise probability theory is completely based on the classical probability theory and can be regarded as its generalization. Therefore, imprecise reliability models can be interpreted in terms of the probability theory. Conventional reliability models can be regarded as a special case of imprecise models.

  3. The imprecise probability theory provides a unified tool (natural extension) for computing the system reliability under partial information about the component reliability behavior.

  4. Reliability measures of different kinds can be incorporated into the natural extension in a straightforward way.

  5. The imprecise probability theory allows us to obtain the best possible bounds for the system reliability given the available information about the component reliability.

  6. The possibly large imprecision of the resulting system reliability measures reflects the incompleteness of the initial information and stimulates the search for new information sources.

The structure of the proposed review is shown in Fig. 1. The author does not claim to provide an exhaustive and comprehensive state of the art. The main aim of the review is to briefly show that imprecise reliability already exists and is being successfully developed. I apologize to those authors whose related work is not addressed here or is not comprehended properly.

Figure 1: Structure of the review

2  General approach


Consider a system consisting of n components. Suppose that partial information about the reliability of the components is represented as a set of lower and upper expectations ELfij and EUfij, i=1,...,n, j=1,...,mi, of functions fij. Here mi is the number of judgements related to the i-th component; fij(Xi) is a function of the random TTF Xi of the i-th component, or of some other random variable describing the i-th component reliability, corresponding to the j-th judgement about this component. For example, an interval-valued probability that a failure occurs in the interval [a,b] can be represented by expectations of the indicator function I[a,b](Xi) such that I[a,b](Xi)=1 if Xi ∈ [a,b] and I[a,b](Xi)=0 if Xi ∉ [a,b]. The lower and upper MTTFs are expectations of the function f(Xi)=Xi.
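As an illustration, the previsions of such gambles can be evaluated numerically for any candidate density. The sketch below is a minimal illustration, assuming a hypothetical exponential density with MTTF 10 hours and truncating R+ to [0, 500]; it computes the expectation of an indicator gamble and of the MTTF gamble f(X)=X.

```python
import numpy as np

# Grid over a bounded window of the sample space W = R+ (the truncation
# at 500 hours is an approximation for this sketch).
x = np.linspace(0.0, 500.0, 200001)
dx = x[1] - x[0]

# Hypothetical candidate density: exponential with MTTF = 10 hours.
rho = 0.1 * np.exp(-0.1 * x)

def prevision(f, rho, x, dx):
    """Expectation E f(X) under the density rho, by the rectangle rule."""
    return float(np.sum(f(x) * rho) * dx)

# Gamble for the event "failure in [a, b]": the indicator I_[a,b](X).
indicator = lambda a, b: (lambda t: ((t >= a) & (t <= b)).astype(float))

p_fail = prevision(indicator(0.0, 2.0), rho, x, dx)   # P{X in [0, 2]}
mttf = prevision(lambda t: t, rho, x, dx)             # E X, the MTTF
```

For this density, p_fail approximates 1 - exp(-0.2) and mttf approximates 10; with interval-valued judgements, the same previsions are bounded over a set of such densities rather than evaluated at a single one.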

Denote x=(x1,...,xn) and X=(X1,...,Xn). Here x1,...,xn are values of the random variables X1,...,Xn, respectively. It is assumed that the random variable Xi is defined on a sample space Ω and the random vector X is defined on the sample space Ω^n = Ω × ... × Ω. If Xi is the TTF, then Ω = R+. If Xi is a random state of a multi-state system [7], then Ω = {1,...,L}, where L is the number of states of the multi-state system. In the case of a discrete TTF, Ω = {1,2,...}, i.e., Ω = Z+. According to [6], the system TTF can be uniquely determined by the component TTFs. Then there exists a function g(X) of the component lifetimes characterizing the system reliability behavior.

In terms of the imprecise probability theory, the lower and upper expectations can be regarded as lower and upper previsions. The functions fij and g can be regarded as gambles (the case of unbounded gambles is studied in [68,69]). The lower and upper previsions ELfij and EUfij can also be viewed as bounds for an unknown precise prevision Efij, which will be called a linear prevision. Since g is the system TTF, computing a system reliability measure (probability of failure, MTTF, k-th moment of TTF) amounts to finding the lower and upper previsions of a gamble h(g), where the function h is defined by the reliability measure to be found. For example, if this measure is the probability of failure before time t, then h(g)=I[0,t](g). In this case, the optimization problems (natural extension) for computing the lower ELh(g) and upper EUh(g) previsions of h(g) are [38,98]
ELh(g) = sup { c + Σ_{i=1}^{n} Σ_{j=1}^{mi} ( cij ELfij − dij EUfij ) },

subject to cij, dij ∈ R+, i=1,...,n, j=1,...,mi, c ∈ R, and ∀X ∈ Ω^n,

c + Σ_{i=1}^{n} Σ_{j=1}^{mi} ( cij − dij ) fij(xi) ≤ h(g(X)).

The optimization problem for computing the upper prevision EUh(g) of the system function h(g) is

EUh(g) = inf { c + Σ_{i=1}^{n} Σ_{j=1}^{mi} ( cij EUfij − dij ELfij ) },

subject to cij, dij ∈ R+, i=1,...,n, j=1,...,mi, c ∈ R, and ∀X ∈ Ω^n,

c + Σ_{i=1}^{n} Σ_{j=1}^{mi} ( cij − dij ) fij(xi) ≥ h(g(X)).
If the TTFs are assumed to be governed by some unknown joint density r(X), then ELh(g) and EUh(g) can be computed as

ELh(g) = inf_P ∫_{Ω^n} h(g(X)) r(X) dX,

EUh(g) = sup_P ∫_{Ω^n} h(g(X)) r(X) dX,

subject to

r(X) ≥ 0,  ∫_{Ω^n} r(X) dX = 1,

ELfij ≤ ∫_{Ω^n} fij(xi) r(X) dX ≤ EUfij,  i=1,...,n, j=1,...,mi.
Here the infimum and supremum are taken over the set P of all possible density functions {r(X)} satisfying the above constraints, i.e., solutions to the problems are defined on the set P of densities that are consistent with partial information expressed in the form of the constraints. The optimization problems mean that we can find only the largest and smallest possible values of Eh(g) over all densities from the set P.
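On a finite grid, this density form of the natural extension becomes an ordinary linear program in the grid probabilities. The following sketch, using scipy, is an illustration under assumed data: a truncation of R+ at 200 hours and a single judgement "the MTTF lies in [50, 60]"; it bounds the probability of surviving past 100 hours.

```python
import numpy as np
from scipy.optimize import linprog

# Discretised sample space (a truncation of R+, an approximation for this sketch).
x = np.arange(0.0, 201.0, 1.0)       # grid points 0, 1, ..., 200 hours
h = (x >= 100).astype(float)         # gamble h: indicator of surviving past 100 h

# Judgement: the MTTF lies in [50, 60], i.e. 50 <= x @ p <= 60.
A_ub = np.vstack([x, -x])            # encodes  x @ p <= 60  and  -(x @ p) <= -50
b_ub = np.array([60.0, -50.0])
A_eq = np.ones((1, x.size))          # probabilities sum to one
b_eq = np.array([1.0])

# linprog minimises; default bounds p >= 0 are exactly what we need.
lower = linprog(h, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
upper = -linprog(-h, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
# lower = 0   (all mass at x = 50)
# upper = 0.6 (mass 0.6 at x = 100 and 0.4 at x = 0, a Markov-type bound)
```

The two extreme densities returned by the solver are discrete, which matches the known fact that the optima of such problems are attained at weighted sums of Dirac functions.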

It should be noted that only joint densities are used in the above optimization problems because, in the general case, we may not know whether the variables X1,...,Xn are dependent or not. If it is known that the components are independent, then r(X)=r1(x1)···rn(xn). In this case, the set P is reduced and consists only of the densities that can be represented as a product of marginal densities. This results in more precise reliability assessments. However, it is difficult to forecast how the condition of independence influences the precision of the assessments. In any case, for most types of initial information, imprecision is reduced when independence is available and cannot be increased.

If the set P is empty, this means that the available evidence is conflicting and it is impossible to obtain any solution to the optimization problems. There are two ways to cope with conflicting evidence and still be able to construct a prevision of interest. The first is to localize the conflicting evidence and discard it. The second is to combine the conflicting evidence, making it non-conflicting [74], and then apply the above optimization problems.

Most reliability measures (probabilities of failure, MTTFs, failure rates, moments of TTF, etc.) can be represented in the form of lower and upper previsions or expectations. Each measure is defined by the gamble fij. The precise reliability information is a special case of the imprecise information when the lower and upper previsions of the gamble fij coincide, i.e., ELfij=EUfij. For example, let us consider a series system consisting of two components. Suppose that the following information about reliability of components is available. The probability of the first component failure before 10 hours is 0.01. The MTTF of the second component is between 50 and 60 hours. It can be seen from the example that the available information is heterogeneous and it is impossible to find the system reliability measures on the basis of conventional reliability models without using additional assumptions about probability distributions. At the same time, this information can be formalized as follows:
ELI[0,10](X1)=EUI[0,10](X1)=0.01, ELX2=50, EUX2=60,
or
0.01 ≤ ∫_{R+^2} I[0,10](x1) r(x1,x2) dx1 dx2 ≤ 0.01,

50 ≤ ∫_{R+^2} x2 r(x1,x2) dx1 dx2 ≤ 60.
If it is known that the components are statistically independent, then the constraint r(x1,x2)=r1(x1)r2(x2) is added. The above constraints form a set of possible joint densities r. Suppose that we want to find the probability of the system failure after 100 hours. This measure can be regarded as the prevision of the gamble I[100,∞)(min(X1,X2)), i.e., g(X)=min(X1,X2) and h(g)=I[100,∞)(g). Then the objective functions are of the form:
ELh(g) = inf_P ∫_{R+^2} I[100,∞)(min(x1,x2)) r(x1,x2) dx1 dx2,

EUh(g) = sup_P ∫_{R+^2} I[100,∞)(min(x1,x2)) r(x1,x2) dx1 dx2.
Solutions to the problems under the independence assumption are ELh(g)=0 and EUh(g)=0.59. These bounds for the probability of the system failure after 100 hours are the best possible for the given information. If there is no information about independence, then the optimization problems for computing ELh(g) and EUh(g) can be written as

ELh(g) = sup { c + 0.01 c11 − 0.01 d11 + 50 c21 − 60 d21 },

EUh(g) = −EL(−h(g)),

subject to c11, d11, c21, d21 ∈ R+, c ∈ R, and ∀(x1,x2) ∈ R+^2,

c + (c11 − d11) I[0,10](x1) + (c21 − d21) x2 ≤ I[100,∞)(min(x1,x2)).

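Under the independence assumption, the joint problem separates into two marginal problems whose optima multiply. The sketch below, assuming a grid truncation of R+ at 200 hours, appears to reproduce the quoted solutions: the upper bound comes out as 0.99 × 0.6 = 0.594, consistent with the 0.59 above.

```python
import numpy as np
from scipy.optimize import linprog

x = np.arange(0.0, 201.0, 1.0)       # truncated grid for R+ (hours)
surv = (x >= 100).astype(float)      # gamble I_[100,inf)

# Component 1: P{X1 <= 10} = 0.01 exactly, probabilities sum to one.
A1 = np.vstack([np.ones(x.size), (x <= 10).astype(float)])
b1 = np.array([1.0, 0.01])
p1_lo = linprog(surv, A_eq=A1, b_eq=b1).fun            # inf P{X1 >= 100} = 0
p1_up = -linprog(-surv, A_eq=A1, b_eq=b1).fun          # sup P{X1 >= 100} = 0.99

# Component 2: MTTF in [50, 60].
A2_ub = np.vstack([x, -x]); b2_ub = np.array([60.0, -50.0])
A2_eq = np.ones((1, x.size)); b2_eq = np.array([1.0])
p2_lo = linprog(surv, A_ub=A2_ub, b_ub=b2_ub, A_eq=A2_eq, b_eq=b2_eq).fun
p2_up = -linprog(-surv, A_ub=A2_ub, b_ub=b2_ub, A_eq=A2_eq, b_eq=b2_eq).fun

# Independence factorises the infimum and supremum over the marginal sets:
EL = p1_lo * p2_lo     # 0.0
EU = p1_up * p2_up     # 0.99 * 0.6 = 0.594, i.e. about 0.59
```

The factorisation step relies on both marginal survivor probabilities being non-negative and varying over independent feasible sets, so the extrema of the product are the products of the extrema.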
If the considered random variables are discrete and the sample space Ω^n is finite, then the integrals and densities in the optimization problems are replaced by sums and probability mass functions, respectively.

Let us introduce the notion of the imprecise reliability model of the i-th component as the set of mi available lower and upper previsions and corresponding gambles

Mi = ⟨EijL, EijU, fij(Xi), j=1,...,mi⟩ = ∪_{j=1}^{mi} Mij = ∪_{j=1}^{mi} ⟨EijL, EijU, fij(Xi)⟩.

Our aim is to obtain the imprecise reliability model M = ⟨EL, EU, h(g(X))⟩ of the system. This can be done by using the natural extension, which will be regarded as a transformation of the component imprecise models to the system model and denoted ⊗_{i=1}^{n} Mi → M. The models in the above example are M1 = ⟨0.01, 0.01, I[0,10](X1)⟩, M2 = ⟨50, 60, X2⟩, M = ⟨EL, EU, I[100,∞)(min(X1,X2))⟩.

Different forms of optimization problems for computing the system reliability measures are studied in [103]. However, if the number of judgements about the component reliability behavior, Σ_{i=1}^{n} mi, and the number of components, n, are rather large, the optimization problems for computing ELh(g) and EUh(g) cannot be solved in practice due to their extremely large dimensionality. This fact essentially restricts the application of imprecise calculations to reliability analysis. Therefore, simplified algorithms for solving the optimization problems, and analytical solutions of the problems for some special types of systems and initial information, have to be developed. Some effective algorithms are proposed in [88,102,107]. The main idea underlying these algorithms is to decompose the difficult (non-linear in the case of independent components) optimization problems into several simple linear programming problems whose solution presents no difficulty. For example, in terms of the introduced imprecise reliability models, the algorithm given in [88] allows us to replace the complex transformation ⊗_{i=1}^{n} Mi → M by a set of n+1 simple transformations

Mi → Mi0 = ⟨EL, EU, h(Xi)⟩,  i=1,...,n,

⊗_{i=1}^{n} Mi0 → M.


3  Judgements in imprecise reliability


Figure 2: Types of judgements used in imprecise reliability

The judgements considered above can be regarded as direct ones, since they are a straightforward way to elicit the imprecise reliability characteristics of interest. Moreover, the condition of independence of components can be regarded as a structural judgement. However, the variety of evidence is wider, and other types of initial information have to be pointed out (see Fig. 2).

Comparative judgements are based on the comparison of reliability measures concerning one or two components. An example of a comparative judgement related to one component is "the probability of the i-th component failure before time t is less than the probability of the same component failure in the time interval [t1,t2]". This judgement can be formally represented as EL(I[t1,t2](Xi) − I[0,t](Xi)) ≥ 0. An example of a comparative judgement related to two components is "the MTTF of the i-th component is less than the MTTF of the k-th component", which can be rewritten as EL(Xk − Xi) ≥ 0. By using the property of previsions EUX = −EL(−X), the last comparative judgement can also be rewritten as EU(Xi − Xk) ≤ 0. A more detailed description of comparative judgements in reliability analysis can be found in [51,71].

A lot of reliability measures are based on conditional probabilities (previsions), for example, the failure rate, the mean residual TTF, the probability of residual TTF, etc. Moreover, experts are sometimes able to judge probabilities of outcomes conditionally on the occurrence of other events. The lower and upper residual MTTFs can be formally represented as EL(X−t | I[t,∞)(X)) and EU(X−t | I[t,∞)(X)), where X−t is the residual lifetime. The lower and upper probabilities of residual TTF after time z (lower and upper residual survivor functions) are similarly written as EL(I[z,∞)(X−t) | I[t,∞)(X)) and EU(I[z,∞)(X−t) | I[t,∞)(X)). It should be noted that the imprecise conditional reliability measures may be computed from unconditional ones by using the generalized Bayes rule [109,110]. For example, if the lower ELX and upper EUX MTTFs are known, then the lower and upper residual MTTFs produced by the generalized Bayes rule are max{0, ELX − t} and EUX, respectively. A more detailed description of conditional judgements in reliability analysis can be found in [101].

It should be noted that additional information about the unimodality of lifetime probability distributions may also be involved in the imprecise calculations [76,78]. In the case of a continuous TTF, this information is formalized by means of Khintchine's condition [44]. This condition transforms the initial gambles f(x), x > 0, into

fr(x) = x^{-1} ∫_0^x f(t+r) dt,

where r is a mode, while the values of the previsions remain the same. For example, if the lower ELX and upper EUX MTTFs are known, then unimodality transforms the gamble f(X)=X into X/2 + r, giving the new previsions EL(X/2 + r) = ELX and EU(X/2 + r) = EUX.
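The transform is easy to check numerically. For the MTTF gamble f(x) = x, Khintchine's transform gives fr(x) = x/2 + r in closed form; the sketch below verifies this with a trapezoidal quadrature, taking the mode r = 5 purely as an illustrative assumption.

```python
import numpy as np

def khintchine_transform(f, r, x, n=100001):
    """f_r(x) = (1/x) * integral_0^x f(t + r) dt, by the trapezoidal rule."""
    t = np.linspace(0.0, x, n)
    vals = f(t + r)
    integral = np.sum(vals[:-1] + vals[1:]) * (t[1] - t[0]) / 2.0
    return integral / x

r = 5.0                # assumed mode of the lifetime distribution
f = lambda t: t        # the gamble whose prevision is the MTTF

# For f(x) = x the transform is exactly x/2 + r at every point:
values = [(x, khintchine_transform(f, r, x)) for x in (1.0, 10.0, 250.0)]
```

Since the trapezoidal rule is exact for linear integrands, each computed value agrees with x/2 + r up to rounding, matching the transformed gamble quoted above.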

Some qualitative or quantitative judgements about kurtosis, skewness, and variance can also be involved in the imprecise calculations [76,78]. For example, we may know that the component TTF typically has a flat density function, which is rather constant near zero and very small for larger values of the variable (negative kurtosis). This qualitative judgement can be represented as a set of previsions EL X^2 = EU X^2 = h and EL(X^4 − 3h^2) ≤ 0, where h ∈ [inf X^2, sup X^2]. In this case, the natural extension is viewed as a parametric linear optimization problem with the parameter h.

Experts are often asked about k%-quantiles of the TTF X, i.e., they supply points xi such that Pr{X ≤ xi} = k/100. As pointed out in [24], experts are better at supplying intervals than point values because their knowledge is not only of limited reliability, but also imprecise. In other words, experts provide intervals of quantiles in the form [xiL, xiU]. This information can be formally written as

Pr{X ≤ [xiL, xiU]} = qi.

The interpretation of this interval depends on the experts, i.e., on their understanding of interval quantiles. Two models of this understanding can be distinguished. The first model corresponds to the expert judgement: "I do not know the true value of the quantile exactly, but one of the values in the interval [xiL, xiU] is true". The second model corresponds to the expert judgement: "All points in the interval [xiL, xiU] are true values of the quantile". The first model is more common in the practice of eliciting judgements from experts. It is worth noticing that the considered models of uncertainty differ from the standard uncertainty models used in the imprecise probability theory, where there exists an interval of previsions of a certain gamble. In the models of quantiles, the gamble is viewed as a set of gambles for which the same previsions are defined. The first model can be represented as the union of models

∪_{t ∈ [xiL, xiU]} ⟨qi, qi, I[0,t](X)⟩.
The symbol ∪_{t ∈ [xiL, xiU]} means that at least one of the models ⟨qi, qi, I[0,t](X)⟩ is true. Then arbitrary reliability measures may be computed by using the natural extension. For example, if there are n judgements about imprecise quantiles (q1 ≤ ... ≤ qn) and the sample space of the TTF is bounded by the values x0 and xN, then the lower and upper MTTFs of a component are

ELX = q1 x0 + Σ_{i=1}^{n} (q_{i+1} − q_i) max_{k=1,...,i} xkL,  q_{n+1} = 1,

EUX = (1 − qn) xN + Σ_{i=1}^{n} (q_i − q_{i−1}) min_{k=i,...,n} xkU,  q0 = 0.
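The lower and upper MTTF formulas above can be implemented directly. In the sketch below, the numeric data are illustrative assumptions: a single median judgement "the 50% quantile lies in [20, 30]" on the bounded sample space [0, 100] gives ELX = 0.5·20 = 10 and EUX = 0.5·100 + 0.5·30 = 65.

```python
def mttf_bounds(q, xl, xu, x0, xN):
    """Lower/upper MTTF from interval-valued quantiles (first expert model).

    q      : quantile levels q_1 <= ... <= q_n
    xl, xu : interval endpoints [x_i^L, x_i^U] for each quantile
    x0, xN : bounds of the (bounded) sample space of the TTF
    """
    n = len(q)
    q_ext = list(q) + [1.0]                       # append q_{n+1} = 1
    el = q[0] * x0 + sum((q_ext[i + 1] - q_ext[i]) * max(xl[: i + 1])
                         for i in range(n))
    q0 = [0.0] + list(q)                          # prepend q_0 = 0
    eu = (1.0 - q[-1]) * xN + sum((q[i] - q0[i]) * min(xu[i:])
                                  for i in range(n))
    return el, eu

el, eu = mttf_bounds([0.5], [20.0], [30.0], 0.0, 100.0)
```

The bounds correspond to the extreme distributions: mass 0.5 at 0 and at 20 for the lower MTTF, and mass 0.5 at 30 and at 100 for the upper one.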
The second model can be viewed as the intersection of models

∩_{t ∈ [xiL, xiU]} ⟨qi, qi, I[0,t](X)⟩.

Here the symbol ∩_{t ∈ [xiL, xiU]} means that all models ⟨qi, qi, I[0,t](X)⟩ are used simultaneously. This model may take place if it is supposed that the probability distribution of the random TTF does not change on the intervals [xiL, xiU]. Such a probability distribution is typical in the reliability analysis of systems that have random interruptions of their functioning. For example, when considering the reliability of airplane undercarriages in calendar time, it is necessary to take into account that the undercarriages are working mainly during take-off and landing, and a typical probability distribution of the undercarriage TTF does not change on the intervals between take-off and landing [6].

Sometimes, in order to restrict the set of possible distribution functions of the TTF in the considered optimization problems and to formalize judgements about the ageing aspects of lifetime distributions, various non-parametric or semi-parametric classes of probability distributions are used. In particular, the classes of all IFRA (increasing failure rate average) and DFRA (decreasing failure rate average) distributions are studied in [6]. Flexible classes of distributions, the so-called H(r,s) classes, have been investigated in [39,95,97,99].


4  Reliability of monotone systems


A system is called monotone if it cannot be improved by the failure of a component. Various results have been obtained for computing the reliability measures of typical monotone systems under some special types of initial information.

Some results concerning the reliability of typical systems are given in [45,46]. If the initial information about the reliability of components is restricted to lower and upper MTTFs, then the lower and upper system MTTFs have been obtained in explicit form for series and parallel systems [70,93]. The MTTFs of cold-standby systems have been obtained in [38,98]. Cold-standby systems do not belong to the class of monotone systems; nevertheless, we consider them as typical ones. It is worth noticing that explicit expressions have been proposed both for the case of independent components and for the case of a lack of information about independence. For example, the lower and upper MTTFs of a series system consisting of n components are

EL min_{i=1,...,n} Xi = 0,  EU min_{i=1,...,n} Xi = min_{i=1,...,n} EU Xi.

Note that the condition of independence does not influence the results in this particular example.

Suppose that the probability distribution functions of the component TTFs Xi are known only at some points, i.e., the available initial information is represented in the form of lower ELI[0,tij](Xi) and upper EUI[0,tij](Xi) previsions, i=1,...,n, j=1,...,mi. Here tij is the j-th point of the i-th component TTF. Then explicit expressions for the lower and upper probabilities of system failure before some time t have been obtained for series, parallel [86], m-out-of-n [89], and cold-standby [82] systems. For example, the lower and upper probabilities of the n-component parallel system failure before time t are determined, for independent components, as

ELI[0,t](max_{i=1,...,n} Xi) = ∏_{i=1,...,n} ELI[0,t_{i,wi}](Xi),

EUI[0,t](max_{i=1,...,n} Xi) = ∏_{i=1,...,n} EUI[0,t_{i,vi}](Xi),

and, in the case of a lack of knowledge about independence, as

ELI[0,t](max_{i=1,...,n} Xi) = max_{i=1,...,n} ELI[0,t_{i,wi}](Xi),

EUI[0,t](max_{i=1,...,n} Xi) = min{1, Σ_{i=1,...,n} EUI[0,t_{i,vi}](Xi)},

where vi = min{j: tij ≥ t} and wi = max{j: tij ≤ t}.
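For the independent-components case, the product formulas are straightforward to implement. In the sketch below, the grids of time points and the interval probabilities are illustrative assumptions, and each component's grid is assumed to bracket the time t of interest.

```python
def parallel_failure_bounds(components, t):
    """Bounds on P{parallel system fails before t} for independent components.

    components: list of (times, el, eu) with times sorted ascending, where
    el[j] and eu[j] bound P{X_i <= times[j]}; each grid must bracket t.
    """
    el_prod, eu_prod = 1.0, 1.0
    for times, el, eu in components:
        w = max(j for j, tij in enumerate(times) if tij <= t)  # w_i = max{j: t_ij <= t}
        v = min(j for j, tij in enumerate(times) if tij >= t)  # v_i = min{j: t_ij >= t}
        el_prod *= el[w]
        eu_prod *= eu[v]
    return el_prod, eu_prod

# Two identical components with bounds known at 10, 50 and 100 hours:
comp = ([10.0, 50.0, 100.0], [0.1, 0.3, 0.6], [0.2, 0.5, 0.8])
bounds_60 = parallel_failure_bounds([comp, comp], 60.0)  # (0.3*0.3, 0.8*0.8)
```

Since a parallel system fails before t only if every component does, the lower bound uses the latest known point not exceeding t, and the upper bound the earliest point not below it.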

General expressions for the reliability of arbitrary monotone systems are given in [80].


5  Multi-state and continuum-state systems

PreviousUpNext

Let L be a set representing levels of component performance, ranging from perfect functioning, sup L, to complete failure, inf L. A general model of the structure function of a system consisting of n multi-state components was considered in [56]. It can be written as S: L^n → L. If L = {0,1}, we have a classical binary system; if L = {0,1,...,m}, we have a multi-state system; if L = [0,T], T ∈ R+, we have a continuum-state system. The i-th component may be in a state xi(t) at an arbitrary time t. This implies that the component is described by the random process {xi(t), t ≥ 0}, xi(t) ∈ L. Then the probability distribution function of the i-th component states at time t is defined as the mapping Fi: L → [0,1] such that Fi(r,t) = Pr{xi(t) ≥ r}, ∀r ∈ L. The state of the system at time t is determined by the states of its n components, i.e., S(X) = S(x1,...,xn) ∈ L.

The mean level of component performance is defined as E{xi(t)}; for a system, the mean level of system performance is E{S(X)}. Suppose that the probability distributions of the component states are unknown and we have only partial information in the form of the lower EL{xi(t)} and upper EU{xi(t)} mean levels of component performance. It is proved in [96] that in this case the number of states does not influence the mean level of system performance, which is defined only by the boundary states inf L and sup L. This implies that the reliability analysis of multi-state and continuum-state systems with such initial data reduces to the analysis of a binary system. A number of explicit expressions have been obtained in [96].

At the same time, incomplete information about the reliability of multi-state and continuum-state components can be represented as a set of reliability measures (precise or imprecise) defined for different time moments. For example, interval probabilities of some states of a multi-state unit at time t1 may be known. How can the probabilities of the states at time t2 be computed without any information about the probability distributions of the transition times between states? This problem has been solved in [104].


6  Fault tree analysis


Fault tree analysis (FTA) is a logical and diagrammatic method to evaluate the probability of an accident resulting from sequences and combinations of faults and failure events. Fault tree analysis can be regarded as a special case of event tree analysis. A comprehensive study of event trees, with the initial information represented in the framework of convex sets of probabilities, has been proposed by Cano and Moral [16]. Therefore, this work may serve as a basis for investigating fault trees. One of the advantages of the imprecise fault tree analysis is the possibility of considering dependent events in a straightforward way.

Another substantial question is the influence of the events in a fault tree on the top event, and the influence of the uncertainty of the event descriptions on the uncertainty of the top event description. This may be studied by introducing and computing importance measures of events and uncertainty importance measures of their descriptions. However, a comprehensive study of this question is still lacking.


7  Repairable systems


Reliability analysis of repairable systems is one of the most difficult computational tasks even with precise initial information. A simple repairable process with instantaneous repair (the time to repair is equal to 0) and a lack of information about the independence of the random TTFs Xi has been studied in [98]. According to this work, if we know the lower and upper MTTFs of a system, then the time-dependent lower BL(t) and upper BU(t) mean times between failures (MTBF) before time t are BL(t) = 0 and

BU(t) = min_{1 ≤ k < +∞} ( EUX Σ_{i=1}^{k} (1/i) + min( (t − k·EUX)/(k+1), (t − k·EUX)/k ) ).
These bounds are of limited interest because BL(t) = 0 and BU(t) → ∞ for large values of t due to the lack of information about independence.

Another simple model of repairable systems, based on interval-valued Markov chains, has been considered in [47,49]. Some special problems of optimal preventive maintenance under incomplete information can be found in [8]. A rather general approach to the reliability analysis of repairable systems, proposed by Gurov and Utkin, is to substitute the optimal density functions of the TTF and the time to repair, which are weighted sums of Dirac functions [103], into the integral equations that mathematically describe arbitrary repairable systems, and then to solve the obtained optimization problems. However, this approach leads to extremely complex non-linear optimization problems. Therefore, an efficient and practical approach to the reliability analysis of repairable systems remains an open problem.


8  Structural reliability


A probabilistic model of structural reliability was introduced by Freudenthal [31]. Following his work, a number of studies have been carried out to compute the probability of failure under different assumptions about the initial information. Briefly, the problem of structural reliability can be stated as follows. Let Y be a random variable describing the strength (resistance) of a system and let X be a random variable describing the stress or load placed on the system. System failure occurs when the stress exceeds the strength, i.e., on the region F = {(x, y): y ≤ x}, where the combination of system parameters leads to an unacceptable or unsafe system response. The reliability of the system is then determined as R = Pr{X ≤ Y}.
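In the classical precise-probability setting, R = Pr{X ≤ Y} can be estimated directly once distributions for the stress and strength are assumed. A minimal Monte Carlo sketch of this baseline (the normal distributions and their parameters are illustrative assumptions, not taken from the text):

```python
import random

def stress_strength_mc(n=200_000, seed=1):
    """Estimate R = Pr{X <= Y} for an assumed normal stress X ~ N(50, 5)
    and strength Y ~ N(70, 5); a purely precise-probabilistic baseline."""
    rng = random.Random(seed)
    # count the simulated trials in which the stress stays below the strength
    hits = sum(rng.gauss(50, 5) <= rng.gauss(70, 5) for _ in range(n))
    return hits / n
```

The imprecise approaches discussed in this section replace such assumed distributions by sets of distributions, so that R itself becomes interval-valued.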

Several authors [4,55,116] used the fuzzy set and possibility theories [25] to cope with the lack of complete statistical information about the stress and strength. The main idea of their approaches is to consider the stress and strength as fuzzy variables [57] or fuzzy random variables [53]. The authors argued that the assessment of structural parameters is both objective and subjective in character and that fuzzy sets are the best way of describing the subjective component. Another approach to structural reliability, based on random set theory [37], has been proposed in [30,40,64]. More general structural problems solved by means of random set theory have been considered in [66,67].

A more general approach to structural reliability analysis was proposed in [105,106]. It allows us to utilize and combine a wider class of partial information about structural parameters, including data about probabilities of arbitrary events, expectations of the random stress and strength and of their functions, and moments. Comparative judgements and information about independence, or the lack of it, of the random stress and strength can also be incorporated in the framework of this approach. At the same time, the approach avoids additional assumptions about the probability distributions of the random parameters, because the identification of precise probability distributions requires more information than experts or incomplete statistical data are able to supply. For example, if interval-valued probabilities

p_iL ≤ Pr{X ≤ a_i} ≤ p_iU,   q_jL ≤ Pr{Y ≤ b_j} ≤ q_jU

of the stress X and strength Y are known at points a_i, i=1,...,n, and b_j, j=1,...,m, then the interval-valued stress-strength reliability under the lack of information about independence of X and Y is determined as


R_L = max_{i=1,...,n} max(0, p_iL - q_{j(i)U}),   j(i) = min{j : a_i ≤ b_j},

R_U = 1 - max_{k=1,...,m} max(0, q_kL - p_{l(k)U}),   l(k) = min{l : b_k ≤ a_l}.
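These bounds can be evaluated mechanically: R_L = max_i max(0, p_iL - q_{j(i)U}) with j(i) = min{j : a_i ≤ b_j}, and symmetrically for R_U. A sketch (the function name and the treatment of points a_i with no b_j ≥ a_i are my own choices):

```python
def stress_strength_interval(a, pL, pU, b, qL, qU):
    """Bounds on R = Pr{X <= Y} from interval-valued probabilities
    pL[i] <= Pr{X <= a[i]} <= pU[i] and qL[j] <= Pr{Y <= b[j]} <= qU[j],
    with no assumption of independence between X and Y."""
    RL = 0.0
    for i, ai in enumerate(a):
        js = [j for j, bj in enumerate(b) if ai <= bj]  # j(i) = min{j: a_i <= b_j}
        if js:
            RL = max(RL, max(0.0, pL[i] - qU[js[0]]))
    RU = 1.0
    for k, bk in enumerate(b):
        ls = [l for l, al in enumerate(a) if bk <= al]  # l(k) = min{l: b_k <= a_l}
        if ls:
            RU = min(RU, 1.0 - max(0.0, qL[k] - pU[ls[0]]))
    return RL, RU
```

For instance, with a single stress point a = [10], Pr{X ≤ 10} in [0.9, 0.95], and a single strength point b = [10], Pr{Y ≤ 10} in [0.1, 0.2], this yields R in [0.7, 1.0] (up to floating-point rounding).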

However, there are cases when the types of the probability distributions of the stress and strength are known, for example, from their physical nature, but the parameters of the distributions are supplied by experts. If the experts provide possible intervals of the parameters and are absolutely reliable, i.e., they always provide true assessments, then the problem of computing the structural reliability reduces to well-known interval analysis. In reality, each expert's judgement commands some degree of belief, whose value is determined by the experience and competence of the expert. Therefore, it is necessary to take the available information about the experts into account to obtain more credible assessments of the stress-strength reliability. An approach for computing the stress-strength reliability under these conditions is considered in [79].


9  Software reliability models


Software error occurrence phenomena have been studied extensively in the literature with the objective of improving software performance [13,114]. In recent decades, various software reliability models have been developed based on testing or debugging processes, but no model can be trusted to be accurate at all times. This fact is due to the unrealistic assumptions in each model. A comprehensive critical review of probabilistic software reliability models (PSRMs) was given by Cai et al. [14]. The authors argued that fuzzy software reliability models (FSRMs) should be developed in place of PSRMs because the software reliability behavior is fuzzy in nature as a result of the uniqueness of software. This point is explained in three ways. First, any two copies of a program exhibit no differences. Second, software never experiences performance deterioration without external intervention. Third, a software debugging process is never replicated. Obviously, the uniqueness of software violates the probabilistic requirements that a large sample be available and that the sample data be repetitive in the probabilistic sense. In addition, a large variety of factors contributes to the failures of the existing PSRMs. To predict software reliability from debugging data, it is necessary to simultaneously take account of test cases, characteristics of the software, human intervention, and debugging data. It is impossible to model all four aspects precisely because of the extremely high complexity behind them [14].

To address the problems described above, Cai et al. [15] proposed a simple FSRM (Cai's model) and validated it. The central concept in this FSRM is Nahmias' fuzzy variable [57], i.e., the time intervals between software failures are taken as fuzzy variables governed by a membership function [25]. Another fuzzy model was proposed in [32]. Some extensions of Cai's FSRM taking into account the programmer's behavior (the possibility of error removal and introduction) have been made by Utkin et al. [108,100]. Combined fuzzy-probabilistic models have also been proposed in [108].

It turns out that the available PSRMs and FSRMs can be incorporated into more general software reliability models, called imprecise software reliability models (ISRMs) [77,85], based on applying the theory of imprecise probabilities. Suppose that we have a complex PSRM which takes into account most factors of the software reliability behavior. Obviously, it is difficult to expect that the obtained data are stable from the statistical point of view and that the corresponding random variables characterizing the times to software failure are governed by one certain probability distribution, even with different parameters. Moreover, it is difficult to expect that the random TTFs are independent [14]. Therefore, a family of uncountably many probability distributions, constrained by some lower and upper distributions, must be incorporated in the PSRM. Such a family of probability distributions can be described mathematically by the theory of imprecise probabilities.

ISRMs can be regarded as a generalization of well-known probabilistic and possibilistic models. Moreover, they allow us to explain some peculiarities of known models, for example, the role of the independence condition for the times to software failure, which is often hidden or can be explained only intuitively. For example, the ISRM explains why FSRMs, as stated in [14], allow us to take into account many factors influencing software reliability. At the same time, PSRMs and FSRMs can be regarded as boundary cases. Indeed, too rigid and often unrealistic assumptions are introduced in PSRMs, namely, that the times to software failure are independent and governed by one distribution. In FSRMs, it is assumed that the widest class of possible distributions of the times to software failure is considered and there is no information about independence. Obviously, the golden mean (ISRMs) should be sought between these bounds.


10  Human reliability


Human reliability [41,42] is defined as the probability that a human operator performs required tasks correctly in the required conditions and does not undertake tasks which may degrade the controlled system. Human reliability analysis aims at assessing this probability. A number of papers are devoted to fuzzy or possibilistic descriptions of human reliability behavior [61]. Human behavior has also been described by means of evidence theory [63]. Cai [12] noted the following factors of human reliability behavior contributing to the fuzziness:

  1. inability to acquire and process an adequate amount of information about systems;

  2. vagueness of the relationship between people and working environments;

  3. vagueness of human thought process;

  4. human reliability behavior is unstable and vague in nature because it depends on human competence, activities, and experience.

But these factors can also be attributed to imprecision. This implies that the imprecise probability theory might be successfully applied to human reliability analysis. Moreover, the behavioral interpretation of lower and upper previsions is particularly suitable for describing human behavior. However, systematic research on this problem has not yet been done.


11  Risk analysis


The risk of an unwanted event is defined as the probability of the occurrence of this event multiplied by its consequences. The consequences include financial cost, elapsed time, etc. If the number of events is large, then risk is defined as the expectation of the consequences. It very often happens that the probability distributions cannot be determined exactly, either due to measurement imperfections or due to more fundamental reasons, such as insufficient available information. In practice, it is not likely that enough data about unwanted events can be collected to use precise probabilities correctly in risk analysis. Moreover, the risk assessments may come from various sources and differ fundamentally in kind. In this case it makes sense to speak of a set of possible probability distributions consistent with the available information and of their lower and upper bounds. As a result, we obtain the minimal and maximal values of risk, which can be regarded as lower and upper previsions of the consequences [73]. Another model of risk under partial information about consequences, in the form of interval probabilities, has been proposed in [115]. Some methods of handling partial information in risk analysis have been investigated in [28] and in [26].
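When only interval probabilities of the unwanted events are available, the minimal and maximal risk can be computed by pushing probability mass toward the smallest or largest consequences, subject to the interval constraints. A hedged sketch (my own illustration; it assumes the probability intervals are reachable, i.e., the lower bounds sum to at most 1 and the upper bounds to at least 1):

```python
def risk_bounds(consequences, p_lo, p_hi):
    """Lower and upper expected consequence (risk) over all probability
    vectors p with p_lo[i] <= p[i] <= p_hi[i] and sum(p) = 1."""
    def extreme(reverse):
        p = list(p_lo)                  # start from the lower bounds
        free = 1.0 - sum(p)             # probability mass still to distribute
        order = sorted(range(len(consequences)),
                       key=lambda i: consequences[i], reverse=reverse)
        for i in order:                 # fill cheapest (or dearest) events first
            add = min(p_hi[i] - p[i], free)
            p[i] += add
            free -= add
        return sum(pi * ci for pi, ci in zip(p, consequences))
    return extreme(False), extreme(True)
```

For example, with consequences [0, 100] and probability intervals [0.5, 0.8] and [0.2, 0.5], the resulting risk interval is approximately [20, 50].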


12  Security engineering


Security engineering is concerned with whether a system can survive accidental or intentional attacks from outside (e.g., from users or virus intruders). In particular, computer security deals with the social regulations, managerial procedures, and technological safeguards applied to computer hardware, software, and data to protect against accidental or deliberate unauthorized access to, and dissemination of, computer system resources (hardware, software, data) while they are in storage, processing, or communication [43]. One of the most important problems in security engineering is the quantitative evaluation of security efficiency. A very interesting and valuable approach to measuring and predicting the operational security of a system was proposed by Brocklehurst et al. [10]. According to this approach, the behavior of a system should be considered from the owner's and the attacker's points of view. From the attacker's point of view, it is necessary to consider the effort expended by the attacking agent and the reward an attacker would get from breaking into the system. Effort includes financial cost, elapsed time, experience, and ability of the attacker, and could be expressed in such terms as the mean effort to the next security breach, the probability of successfully resisting an attack, etc. Examples of rewards are personal satisfaction, gain of money, etc. From the owner's point of view, it is necessary to consider the system owner's loss, which can be interpreted as an infimum selling price for a successful attack, and the owner's expenses on security means, which include, for instance, anti-virus programs, new passwords, encoding, etc. The expenses come out in terms of the time used for system verification and for maintenance of anti-virus software, as well as in terms of the money spent on protection. The expenses can be interpreted as a supremum buying price for a successful attack. Brocklehurst et al. [10] proposed to consider also the viewpoint of an all-knowing, all-seeing oracle, in addition to the owner and attacker. This viewpoint could be regarded as being, in a sense, the `true' security of the system in the testing environment.

From the above, we can say that four variables form the basis for obtaining the security measures: effort, rewards, system owner's loss, and owner's expenses. Moreover, their interpretation coincides with the behavioral interpretation of lower previsions (expenses), upper previsions (system owner's loss), and linear previsions (the all-knowing oracle). A detailed description of an imprecise security model has been proposed in [72,108].


13  Second-order reliability models


Natural extension is a powerful tool for analyzing system reliability on the basis of available partial information about component reliability. However, it has a disadvantage. Imagine that two experts provide the following judgements about the MTTF of a component: (1) the MTTF is not greater than 10 hours; (2) the MTTF is not less than 10 hours. The natural extension produces the resulting MTTF [0,10] ∩ [10,∞) = {10}. In other words, an absolutely precise MTTF is obtained from very imprecise initial data. This is unrealistic in the practice of reliability analysis. The reason for such results is that the probabilities of the judgements are assumed to be 1. If we assign different probabilities to the judgements, then we obtain more realistic assessments. For example, if the belief in each judgement is 0.5, then, according to [48], the resulting MTTF is greater than 5 hours. Therefore, in order to obtain accurate and realistic system reliability assessments, it is necessary to take into account some vagueness of the information about the component reliability measures, i.e., to assume that expert judgements and statistical information about the reliability of a system or its components may be unreliable. This leads to the study of second-order uncertainty models (hierarchical uncertainty models), on which much attention has been focused due to their generality. These models describe the uncertainty of a random quantity by means of two levels. For example, suppose that an expert provides a judgement about the mean level of component performance [96]. If this expert sometimes provides incorrect judgements, we have to take into account some degree of belief in this judgement. In this case, the information about the mean level of component performance is considered on the first level of the hierarchical model (first-order information) and the degree of belief in the expert's judgements on the second level (second-order information).
Many papers are devoted to the theoretical [22,36,58,111] and practical [27,33,59] aspects of second-order uncertainty models. It should be noted that second-order uncertainty models have also been studied in reliability. Lindqvist and Langseth [54] investigated monotone multi-state systems under the assumption that the probabilities of the component states (first-order probabilities) can be regarded as random variables governed by the Dirichlet probability distribution (second-order probabilities). A comprehensive review of hierarchical models is given in [23], where it is argued that the most common hierarchical model is the Bayesian one [9,34,35]. At the same time, the Bayesian hierarchical model is unrealistic in problems where only partial information about the system behavior is available.
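The 5-hour figure in the example above has an elementary reading: if a nonnegative TTF X satisfies Pr(X ≥ 10) ≥ 0.5, then E[X] ≥ 10 · 0.5 = 5, since X dominates 10 · 1{X ≥ 10}. A one-line sketch of this Markov-type bound (my own illustration, not the exact construction of [48]):

```python
def lower_mean(threshold, prob_lower):
    """Lower bound on E[X] for a nonnegative random variable X
    satisfying Pr(X >= threshold) >= prob_lower."""
    # X >= threshold * 1{X >= threshold}, so E[X] >= threshold * Pr(X >= threshold)
    return threshold * prob_lower
```

Here lower_mean(10, 0.5) recovers the lower MTTF of 5 hours.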

Most proposed second-order uncertainty models assume that there is a precise second-order probability distribution (or possibility distribution). Moreover, most models use probabilities as the first-level uncertainty description. Unfortunately, such information is often absent in many applications, and additional assumptions may lead to some inaccuracy in the results. Some problems related to homogeneous second-order models without any assumptions about probability distributions have been studied by Kozine and Utkin [48,50]. However, these models are of limited use due to the homogeneity of the gambles considered on the first-order level. A hierarchical uncertainty model for combining different types of evidence was proposed by Utkin [75,83], where the second-order probabilities can be regarded as confidence weights and the first-order uncertainty is modelled by lower and upper previsions of different gambles. However, the proposed model [75,83] supposes that initial information is given for only one random variable. At the same time, reliability applications suppose that there is a set of random variables (component TTFs) described by a second-order uncertainty model, and it is necessary to find a model for some function of these variables (the system TTF). Suppose that we have a set of weighted expert judgements related to some measures Ef_ij(X_i) of the component reliability behavior, i=1,...,n, j=1,...,m_i, i.e., there are lower and upper previsions E_L f_ij and E_U f_ij. Suppose that each expert is characterized by an interval of probabilities [g_ijL, g_ijU]. Then the judgements can be represented as

Pr(E_L f_ij ≤ E f_ij ≤ E_U f_ij) ∈ [g_ijL, g_ijU],   i ≤ n,   j ≤ m_i.
Here the set {E_L f_ij, E_U f_ij} contains the first-order previsions, and the set {g_ijL, g_ijU} contains the second-order probabilities. Our aim is to produce new judgements which can be regarded as combinations of the available ones. In other words, the following tasks can be solved:

  1. Computing the probability bounds [g_L, g_U] for some new interval [E_L g, E_U g] of the system linear prevision Eg.

  2. Computing an average interval [E_L Eg, E_U Eg] for the system linear prevision Eg (reduction of the second-order model to a first-order one).

An imprecise hierarchical reliability model of systems has been studied by Utkin [90]. This model supposes that there is no information about the independence of components. A model taking into account possible independence of components leads to hard non-linear optimization problems. However, this difficulty can be overcome by means of the approaches proposed in [84,91]. Some hierarchical reliability models taking into account the imprecision of the parameters of known lifetime distributions are investigated in [81,87].


14  Concluding remarks and open problems


Many new results have been obtained in applying the imprecise probability theory to reliability analysis of various systems, and the imprecise reliability theory develops step by step with every result. However, the state of the art is only the visible tip of the iceberg called imprecise reliability theory, and there are many open theoretical and practical problems which should be solved in the future. Let us note some of them.

It is obvious that modern systems and equipment are characterized by complex structures and a variety of initial information. This implies that, on the one hand, it is impossible to fit all features of a real system into the considered framework. On the other hand, introducing additional assumptions in order to construct a tractable model of a system may cancel all the advantages of imprecise probabilities. Where are the limits for introducing additional assumptions (simplifications) in the construction of a model? How do possible changes in the imprecision of the initial data influence the results of the system reliability calculations? These questions relate to the informational aspect of imprecise reliability. The same can be said about the necessity of studying the effects of possible estimation errors in the initial data on the resulting reliability measures. This leads to introducing and determining uncertainty importance measures.

Another important point is how to solve the optimization problems if the function h(g(X)) is not expressed analytically in explicit form and can be computed only numerically. For example, this function may be given by a system of integral equations (repairable systems). One way to solve the corresponding optimization problems is the well-known simulation technique. However, the development of efficient simulation procedures for solving the considered optimization problems is an open problem.

Most results in imprecise reliability assume either strong independence of components or a lack of information about independence. However, the imprecise probability theory allows us to take into account more subtle types of independence [20,29,52] and, thereby, to make reliability analysis more flexible and adequate. Therefore, a clear interpretation of independence concepts in terms of the reliability theory is also an open problem which has to be solved in the future.

In spite of the fact that many algorithms and methods for reliability analysis of various systems have been developed, they are rather theoretical and cover typical systems, typical initial evidence, and typical situations. At the same time, real systems are more complex. This has been shown by the challenge problems posed in [60] and discussed at the Epistemic Uncertainty Workshop organized by Sandia National Laboratories, Albuquerque, New Mexico, 2002 ( http://www.sandia.gov/epistemic ) and at the Workshop on Applications of Fuzzy Sets and Fuzzy Logic to Engineering Problems, Pertisau, Austria, 2002 ( http://techmath.uibk.ac.at/research/fuzzy/workshop ). Therefore, practical approaches to analyzing real systems (perhaps approximately) have to be developed.

In order to achieve a required level of system reliability at minimal cost, the redundancy optimization technique is usually used. The number of redundant components in a system is determined by the required level of reliability and by the component reliability. Various algorithms for determining the optimal number of redundant components are available in the literature. However, most results assume that complete information about reliability exists. Therefore, the development of efficient optimization algorithms under partial information is also an open problem.

A similar problem is product quality control, which needs a trade-off between better product quality and lower production costs under system constraints related to operating feasibility, product specifications, and safety and environmental issues. Here the results obtained by Augustin [1,2] concerning decision making under partial information about the probabilities of the states of nature may be a basis for investigating this problem.

It should be noted that the list of open problems could be extended. However, most problems can be partially reduced to methods of solving the considered optimization problems (natural extension) under different conditions.


References

[1]
T. Augustin. On decision making under ambiguous prior and sampling information. In G. de Cooman, T.L. Fine, and T. Seidenfeld, editors, Imprecise Probabilities and Their Applications. Proc. of the 2nd Int. Symposium ISIPTA'01, pages 9-16, Ithaca, USA, June 2001. Shaker Publishing.

[2]
T. Augustin. Expected utility within a generalized concept of probability - a comprehensive framework for decision making under ambiguity. Statistical Papers, 43:5-22, 2002.

[3]
H.-R. Bae, R.V. Grandhi, and R.A. Canfield. Sensitivity analysis of structural response uncertainty propagation using evidence theory. In Proc. of 9-th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, pages 1-11, Atlanta, Georgia, USA, September 2002. AIAA.

[4]
A. Bardossy and I. Bogardi. Fuzzy fatigue life prediction. Structural Safety, 6:25-38, 1989.

[5]
R.E. Barlow and F. Proschan. Mathematical Theory of Reliability. Wiley, New York, 1965.

[6]
R.E. Barlow and F. Proschan. Statistical Theory of Reliability and Life Testing: Probability Models. Holt, Rinehart and Winston, New York, 1975.

[7]
R.E. Barlow and A.S. Wu. Coherent systems with multistate components. Math. Ops. Res., 3:275-281, 1978.

[8]
E.Ju. Barzilovich and V.A. Kashtanov. Some Mathematical Problems of the Complex System Maintenance Theory. Sovetskoe Radio, Moscow, 1971. in Russian.

[9]
J.O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer-Verlag, New York, 1985.

[10]
S. Brocklehurst, B. Littlewood, T. Olovsson, and E. Jonsson. On measurement of operational security. Technical Report PDCS TR 160, City University, London and Chalmers University of Technology, Goteborg, 1994.

[11]
K.Y. Cai. Introduction to Fuzzy Reliability. Kluwer Academic Publishers, Boston, 1996.

[12]
K.Y. Cai. System failure engineering and fuzzy methodology: An introductory overview. Fuzzy Sets and Systems, 83(2):113-133, 1996.

[13]
K.Y. Cai. Software Defect and Operational Profile Modeling. Kluwer Academic Publishers, Dordrecht/Boston, 1998.

[14]
K.Y. Cai, C.Y. Wen, and M.L. Zhang. A critical review on software reliability modeling. Reliability Engineering and System Safety, 32:357-371, 1991.

[15]
K.Y. Cai, C.Y. Wen, and M.L. Zhang. A novel approach to software reliability modeling. Microelectronics and Reliability, 33:2265-2267, 1993.

[16]
A. Cano and S. Moral. Using probability trees to compute marginals with imprecise probability. International Journal of Approximate Reasoning, 29:1-46, 2002.

[17]
F.P.A. Coolen. On Bayesian reliability analysis with informative priors and censoring. Reliability Engineering and System Safety, 53:91-98, 1996.

[18]
F.P.A. Coolen. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations. Reliability Engineering and System Safety, 56:61-68, 1997.

[19]
F.P.A. Coolen and M.J. Newby. Bayesian reliability analysis with imprecise prior probabilities. Reliability Engineering and System Safety, 43:75-85, 1994.

[20]
I. Couso, S. Moral, and P. Walley. Examples of independence for imprecise probabilities. In G. de Cooman, F.G. Cozman, S. Moral, and P. Walley, editors, ISIPTA '99 - Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, pages 121-130, Zwijnaarde, Belgium, 1999.

[21]
G. de Cooman. On modeling possibilistic uncertainty in two-state reliability theory. Fuzzy Sets and Systems, 83(2):215-238, 1996.

[22]
G. de Cooman. Possibilistic previsions. In EDK, editor, Proceedings of IPMU'98, volume 1, pages 2-9, Paris, 1998.

[23]
G. de Cooman. Precision-imprecision equivalence in a broad class of imprecise hierarchical uncertainty models. Journal of Statistical Planning and Inference, 105(1):175-198, June 2002.

[24]
D. Dubois and H. Kalfsbeek. Elicitation, assessment and pooling of expert judgement using possibility theory. In C.N. Manikopoulos, editor, Proc. of the 8th Inter. Congress of Cybernetics and Systems, pages 360-367, Newark, NJ, 1990. New Jersey Institute of Technology Press.

[25]
D. Dubois and H. Prade. Possibility Theory: An Approach to Computerized Processing of Uncertainty. Plenum Press, New York, 1988.

[26]
L. Ekenberg, M. Boman, and J. Linnerooth-Bayer. Catastrophic risk evaluation. Interim report IR-97-045, IIASA, Austria, October 1997.

[27]
L. Ekenberg and J. Thorbiörnson. Second-order decision analysis. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(2):13-38, 2001.

[28]
S. Ferson, L. Ginzburg, V. Kreinovich, H.T. Nguyen, and S.A. Starks. Uncertainty in risk analysis: Towards a general second-order approach combining interval, probabilistic, and fuzzy techniques. In Proceedings of FUZZ-IEEE'2002, volume 2, pages 1342-1347, Honolulu, Hawaii, May 2002.

[29]
Th. Fetz. Sets of joint probability measures generated by random sets. In Presentation at Workshop Applications of Fuzzy Sets and Fuzzy Logic to Engineering Problems, Pertisau, Austria, October 2002. http://techmath.uibk.ac.at/research/fuzzy/workshop.html .

[30]
Th. Fetz, M. Oberguggenberger, and S. Pittschmann. Applications of possibility and evidence theory in civil engineering. In G. de Cooman, F.G. Cozman, S. Moral, and P. Walley, editors, ISIPTA '99 - Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, pages 146-153, Zwijnaarde, Belgium, 1999.

[31]
A.N. Freudenthal. Safety and the probability of structural failure. Transactions ASCE, 121:1337-1397, 1956.

[32]
L. Gemoets, V. Kreinovich, and H. Melendez. When to stop testing software? A fuzzy interval approach. In Proceedings of NAFIPS/IFIS/NASA '94, pages 182-186, 1994.

[33]
L. Gilbert, G. de Cooman, and E.E. Kerre. Practical implementation of possibilistic probability mass functions. In Proceedings of Fifth Workshop on Uncertainty Processing (WUPES 2000), pages 90-101, Jindvrichouv Hradec, Czech Republic, June 2000.

[34]
M. Goldstein. The prevision of a prevision. J. Amer. Statist. Assoc., 78:817-819, 1983.

[35]
I.J. Good. Some history of the hierarchical Bayesian methodology. In J.M. Bernardo, M.H. DeGroot, D.V. Lindley, and A.F.M. Smith, editors, Bayesian Statistics, pages 489-519. Valencia University Press, Valencia, 1980.

[36]
I. R. Goodman and H. T. Nguyen. Probability updating using second order probabilities and conditional event algebra. Information Sciences, 121(3-4):295-347, 1999.

[37]
J. Goutsias, R.P.S. Mahler, and H.T. Nguyen. Random Sets - Theory and Applications. Springer, New York, 1997.

[38]
S.V. Gurov and L.V. Utkin. Reliability of Systems under Incomplete Information. Lubavich Publ., Saint Petersburg, 1999. in Russian.

[39]
S.V. Gurov, L.V. Utkin, and S.P. Habarov. Interval probability assessments for new lifetime distribution classes. In Proceedings of the Second Int. Conf. on Mathematical Methods in Reliability, volume 1, pages 483-486, Bordeaux, France, 2000.

[40]
J. Hall and J. Lawry. Imprecise probabilities of engineering system failure from random and fuzzy set reliability analysis. In G. de Cooman, T.L. Fine, and T. Seidenfeld, editors, Imprecise Probabilities and Their Applications. Proc. of the 1st Int. Symposium ISIPTA'01, pages 195-204, Ithaca, USA, June 2001. Shaker Publishing.

[41]
E. Hollnagel. Human reliability analysis. Context and control. Academic Press, London, 1993.

[42]
J. Holmberg, K. Hukki, L. Norros, U. Pulkkinen, and P. Pyy. An integrated approach to human reliability analysis - decision analytic dynamic reliability model. Reliability Engineering and System Safety, 65:239-250, 1999.

[43]
D.K. Hsiao, S. Kerr, and S.E. Madnick. Computer Security. Academic Press, New York, 1979.

[44]
A.Y. Khintchine. On unimodal distributions. Izv. Nauchno-Isled. Inst. Mat. Mech., 2:1-7, 1938.

[45]
I. Kozine. Imprecise probabilities relating to prior reliability assessments. In G. de Cooman, F.G. Cozman, S. Moral, and P. Walley, editors, ISIPTA '99 - Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, pages 241-248, Zwijnaarde, Belgium, 1999.

[46]
I. Kozine and Y. Filimonov. Imprecise reliabilities: Experiences and advances. Reliability Engineering and System Safety, 67:75-83, 2000.

[47]
I.O. Kozine and L.V. Utkin. Generalizing Markov chains to imprecise previsions. In Proceedings of the 5th International Conference on Probabilistic Safety Assessment and Management, pages 383-388, Osaka, Japan, November-December 2000. Universal Academy Press, Tokyo.

[48]
I.O. Kozine and L.V. Utkin. Constructing coherent interval statistical models from unreliable judgements. In E. Zio, M. Demichela, and N. Piccini, editors, Proceedings of the European Conference on Safety and Reliability ESREL2001, volume 1, pages 173-180, Torino, Italy, September 2001.

[49]
I.O. Kozine and L.V. Utkin. Interval-valued finite Markov chains. Reliable Computing, 8(2):97-113, April 2002.

[50]
I.O. Kozine and L.V. Utkin. Processing unreliable judgements with an imprecise hierarchical model. Risk Decision and Policy, 7(3):325-339, 2002.

[51]
I.O. Kozine and L.V. Utkin. Variety of judgements admitted in imprecise statistical reasoning. In The 3-rd Safety and Reliability International Conference, Gdynia, Poland, May 2003. To appear.

[52]
V.P. Kuznetsov. Interval Statistical Models. Radio and Communication, Moscow, 1991. In Russian.

[53]
H. Kwakernaak. Fuzzy random variables: definitions and theorems. Information Sciences, 15:1-29, 1978.

[54]
B. Lindqvist and H. Langseth. Uncertainty bounds for a monotone multistate system. Probability in the Engineering and Informational Sciences, 12:239-260, 1998.

[55]
B. Möller, M. Beer, W. Graf, and A. Hoffmann. Possibility theory based safety assessment. Comp.-Aided Civil and Infrastruct. Eng., 14:81-91, 1999.

[56]
J. Montero, J. Tejada, and J. Yanez. General structure functions. Kybernetes, 23(3):10-19, 1994.

[57]
S. Nahmias. Fuzzy variable. Fuzzy Sets and Systems, 1:97-110, 1978.

[58]
R. F. Nau. Indeterminate probabilities on finite sets. The Annals of Statistics, 20:1737-1767, 1992.

[59]
H.T. Nguyen, V. Kreinovich, and L. Longpre. Second-order uncertainty as a bridge between probabilistic and fuzzy approaches. In Proceedings of the 2nd Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'01, pages 410-413, England, September 2001.

[60]
W.L. Oberkampf, J.C. Helton, C.A. Joslyn, S.F. Wojtkiewicz, and S. Ferson. Challenge problems: Uncertainty in system response given uncertain parameters. Reliability Engineering and System Safety, 2002. Submitted for publication.

[61]
T. Onisawa. An approach to human reliability in man-machine system using error possibility. Fuzzy Sets and Systems, 27:87-103, 1988.

[62]
G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, 1976.

[63]
K. Tanaka and G.J. Klir. A design condition for incorporating human judgement into monitoring systems. Reliability Engineering and System Safety, 65:251-258, 1999.

[64]
F. Tonon and A. Bernardini. A random set approach to optimisation of uncertain structures. Computers and Structures, 68:583-600, 1998.

[65]
F. Tonon, A. Bernardini, and I. Elishakoff. Concept of random sets as applied to the design of structures and analysis of expert opinions for aircraft crash. Chaos, Solitons and Fractals, 10(11):1855-1868, 1999.

[66]
F. Tonon, A. Bernardini, and A. Mammino. Determination of parameters range in rock engineering by means of random set theory. Reliability Engineering and System Safety, 70(3):241-261, 2000.

[67]
F. Tonon, A. Bernardini, and A. Mammino. Reliability analysis of rock mass response by means of random set theory. Reliability Engineering and System Safety, 70(3):263-282, 2000.

[68]
M.C.M. Troffaes and G. de Cooman. Extension of coherent lower previsions to unbounded random variables. In Proceedings of the Ninth International Conference IPMU 2002 (Information Processing and Management), pages 735-742, Annecy, France, July 2002. ESIA - University of Savoie.

[69]
M.C.M. Troffaes and G. de Cooman. Lower previsions for unbounded random variables. In P. Grzegorzewski, O. Hryniewicz, and M.A. Gil, editors, Soft Methods in Probability, Statistics and Data Analysis, pages 146-155. Physica-Verlag, Heidelberg, New York, 2002.

[70]
L.V. Utkin. General reliability theory on the basis of upper and lower previsions. In D. Ruan, H.A. Abderrahim, P. D'hondt, and E.E. Kerre, editors, Fuzzy Logic and Intelligent Technologies for Nuclear Science and Industry. Proceedings of the 3rd International FLINS Workshop, pages 36-43, Antwerp, Belgium, September 1998.

[71]
L.V. Utkin. Imprecise reliability analysis by comparative judgements. In Proceedings of the Second Int. Conf. on Mathematical Methods in Reliability, volume 2, pages 1005-1008, Bordeaux, France, 2000.

[72]
L.V. Utkin. Security analysis on the basis of the imprecise probability theory. In M.P. Cottam, D.W.Harvey, R.P. Pape, and J. Tait, editors, Foresight and Precaution. Proc. of ESREL 2000, volume 2, pages 1109-1114, Rotterdam, May 2000. Balkema.

[73]
L.V. Utkin. Assessment of risk under incomplete information. In Proc. of International Scientific School "Modelling and Analysis of Safety, Risk and Quality in Complex Systems", pages 319-322, Saint Petersburg, Russia, June 2001.

[74]
L.V. Utkin. Avoiding the conflicting risk assessments. In Proceedings of International Scientific School "Modelling and Analysis of Safety, Risk and Quality in Complex Systems", pages 58-62, Saint Petersburg, Russia, July 2002.

[75]
L.V. Utkin. A hierarchical uncertainty model under essentially incomplete information. In P. Grzegorzewski, O. Hryniewicz, and M.A. Gil, editors, Soft Methods in Probability, Statistics and Data Analysis, pages 156-163. Physica-Verlag, Heidelberg, New York, 2002.

[76]
L.V. Utkin. Imprecise calculation with the qualitative information about probability distributions. In P. Grzegorzewski, O. Hryniewicz, and M.A. Gil, editors, Soft Methods in Probability, Statistics and Data Analysis, pages 164-169. Physica-Verlag, Heidelberg, New York, 2002.

[77]
L.V. Utkin. Interval software reliability models as generalization of probabilistic and fuzzy models. In German Open Conference on Probability and Statistics, pages 55-56, Magdeburg, Germany, March 2002.

[78]
L.V. Utkin. Involving the unimodality condition of discrete probability distributions into imprecise calculations. In Proceedings of the Int. Conf. on Soft Computing and Measurements (SCM'2002), volume 1, pages 53-56, St. Petersburg, Russia, June 2002. Gidrometeoizdat.

[79]
L.V. Utkin. Some structural properties of fuzzy reliability models. In Proceedings of the Int. Conf. on Soft Computing and Measurements (SCM'2002), volume 1, pages 197-200, St. Petersburg, Russia, June 2002. Gidrometeoizdat.

[80]
L.V. Utkin. General expressions for imprecise reliability of monotone systems. Reliability Engineering and System Safety, 2003. Submitted for publication.

[81]
L.V. Utkin. A hierarchical model of reliability by imprecise parameters of lifetime distributions. Reliable Computing, 2003. Submitted for publication.

[82]
L.V. Utkin. Imprecise reliability of cold standby systems. International Journal of Quality and Reliability, 2003. Submitted for publication.

[83]
L.V. Utkin. Imprecise second-order hierarchical uncertainty model. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 11(3), 2003. To appear.

[84]
L.V. Utkin. Imprecise second-order uncertainty model for a system of independent random variables. 2003. In preparation.

[85]
L.V. Utkin. Imprecise software reliability models. Information Sciences, 2003. Submitted for publication.

[86]
L.V. Utkin. Interval reliability of typical systems with partially known probabilities. European Journal of Operational Research, 2003. To Appear.

[87]
L.V. Utkin. A method for combining the heterogeneous expert judgements by the known type of lifetime probability distributions. European Journal of Operational Research, 2003. Submitted for publication.

[88]
L.V. Utkin. A new efficient algorithm for computing the imprecise reliability of monotone systems. Reliability Engineering and System Safety, 2003. Submitted for publication.

[89]
L.V. Utkin. Reliability models of m-out-of-n systems under incomplete information. Computers and Operations Research, 2003. Submitted for publication.

[90]
L.V. Utkin. A second-order uncertainty model for the calculation of the interval system reliability. Reliability Engineering and System Safety, 79(3):341-351, 2003.
 
[91]
L.V. Utkin. A second-order uncertainty model of independent random variables: An example of the stress-strength reliability. 2003. In preparation.

[92]
L.V. Utkin and S.V. Gurov. A general formal approach for fuzzy reliability analysis in the possibility context. Fuzzy Sets and Systems, 83:203-213, 1996.

[93]
L.V. Utkin and S.V. Gurov. New reliability models on the basis of the theory of imprecise probabilities. In IIZUKA'98 - The 5th International Conference on Soft Computing and Information / Intelligent Systems, volume 2, pages 656-659, Iizuka, Japan, October 1998.

[94]
L.V. Utkin and S.V. Gurov. Steady-state reliability of repairable systems by combined probability and possibility assumptions. Fuzzy Sets and Systems, 97(2):193-202, 1998.

[95]
L.V. Utkin and S.V. Gurov. Imprecise reliability models for the general lifetime distribution classes. In G. de Cooman, F.G. Cozman, S. Moral, and P. Walley, editors, ISIPTA '99 - Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, pages 333-342, Zwijnaarde, Belgium, 1999.

[96]
L.V. Utkin and S.V. Gurov. Imprecise reliability of general structures. Knowledge and Information Systems, 1(4):459-480, 1999.

[97]
L.V. Utkin and S.V. Gurov. Generalized ageing lifetime distribution classes. In M.P. Cottam, D.W.Harvey, R.P. Pape, and J. Tait, editors, Foresight and Precaution. Proc. of ESREL 2000, volume 2, pages 1539-1545, Rotterdam, May 2000. Balkema.

[98]
L.V. Utkin and S.V. Gurov. New reliability models based on imprecise probabilities. In C. Hsu, editor, Advanced Signal Processing Technology, chapter 6, pages 110-139. World Scientific, 2001.

[99]
L.V. Utkin and S.V. Gurov. Imprecise reliability for the new lifetime distribution classes. Journal of Statistical Planning and Inference, 105(1):215-232, 2002.

[100]
L.V. Utkin, S.V. Gurov, and M.I. Shubinsky. A fuzzy software reliability model with multiple-error introduction and removal. International Journal of Reliability, Quality and Safety Engineering, 9(3):215-228, 2002.

[101]
L.V. Utkin and I.O. Kozine. Conditional previsions in imprecise reliability. In D. Ruan, H.A. Abderrahim, and P. D'Hondt, editors, Intelligent Techniques and Soft Computing in Nuclear Science and Engineering, pages 72-79, Bruges, Belgium, 2000. World Scientific.

[102]
L.V. Utkin and I.O. Kozine. Computing the reliability of complex systems. In G. de Cooman, T.L. Fine, and T. Seidenfeld, editors, Imprecise Probabilities and Their Applications. Proc. of the 2nd Int. Symposium ISIPTA'01, pages 324-331, Ithaca, USA, June 2001. Shaker Publishing.

[103]
L.V. Utkin and I.O. Kozine. Different faces of the natural extension. In G. de Cooman, T.L. Fine, and T. Seidenfeld, editors, Imprecise Probabilities and Their Applications. Proc. of the 2nd Int. Symposium ISIPTA'01, pages 316-323, Ithaca, USA, June 2001. Shaker Publishing.

[104]
L.V. Utkin and I.O. Kozine. A reliability model of multi-state units under partial information. In H. Langseth and B. Lindqvist, editors, Proceedings of the Third Int. Conf. on Mathematical Methods in Reliability (Methodology and Practice), pages 643-646, Trondheim, Norway, June 2002. NTNU.

[105]
L.V. Utkin and I.O. Kozine. Stress-strength reliability models under incomplete information. International Journal of General Systems, 31(6):549-568, 2002.

[106]
L.V. Utkin and I.O. Kozine. Structural reliability modelling under partial source information. In H. Langseth and B. Lindqvist, editors, Proceedings of the Third Int. Conf. on Mathematical Methods in Reliability (Methodology and Practice), pages 647-650, Trondheim, Norway, June 2002. NTNU.

[107]
L.V. Utkin and I.O. Kozine. Computing system reliability given interval-valued characteristics of the components. Reliability Engineering and System Safety, 2003. Submitted for publication.

[108]
L.V. Utkin and I.B. Shubinsky. Unconventional Methods of the Information System Reliability Assessment. Lubavich Publ., St. Petersburg, 2000. In Russian.

[109]
P. Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London, 1991.

[110]
P. Walley. Measures of uncertainty in expert systems. Artificial Intelligence, 83:1-58, 1996.

[111]
P. Walley. Statistical inferences based on a second-order possibility distribution. International Journal of General Systems, 9:337-383, 1997.

[112]
K. Weichselberger. The theory of interval-probability as a unifying concept for uncertainty. International Journal of Approximate Reasoning, 24:149-170, 2000.

[113]
K. Weichselberger. Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung, volume I Intervallwahrscheinlichkeit als umfassendes Konzept. Physika, Heidelberg, 2001.

[114]
M. Xie. Software Reliability Modeling. World Scientific, 1991.

[115]
R.R. Yager and V. Kreinovich. Decision making under interval probabilities. International Journal of Approximate Reasoning, 22:195-215, 1999.

[116]
L. Yubin, Q. Zhong, and W. Guangyuan. Fuzzy random reliability of structures based on fuzzy random variables. Fuzzy Sets and Systems, 86:345-355, 1997.


