

            •       Decision Theoretic Approaches: These approaches are built on a strong foundation of probability
                    theory, so their trust evaluations are compatible with standard statistical decision theory: an
                    agent can calculate its expected utility directly from the output of the model.
            8.3.2   Develop a decision-making algorithm for policy decision and enforcement [81]

            •       Exploration and Threshold
                    Griffiths et al. [82] employ a simple, threshold-based decision-making model. To this end, they
                    define the concepts of untrust and undistrust (in addition to trust and distrust) to represent the
                    notions that a degree of trust may be insufficient for deciding to delegate (or, in the case of
                    undistrust, for deciding not to delegate). Agents who are 'untrusted' are only considered for
                    interaction if no explicitly trusted alternatives are available. The eventual decision to interact is
                    made if the degree of trust exceeds a pre-defined threshold provided by the system designer.
                    Initially, all trustors are required to participate in a 'bootstrapping' phase of fixed duration, during
                    which agents explore the society before beginning to use their trust models. While the particular
                    exploration strategy is not discussed, Griffiths states that any partner has an equal chance of being
                    selected during the bootstrapping phase; a sketch of this rule is given below.
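                    The following Python sketch illustrates the threshold rule under assumptions that [82] does not fix:
                    trust degrees lie in [0, 1], the two thresholds and the bootstrap length are arbitrary illustrative
                    values, and trust_of is a hypothetical function returning the trustor's current trust in a candidate.

                    import random

                    # Illustrative values; Griffiths et al. leave the thresholds to the system designer.
                    TRUST_THRESHOLD = 0.7     # at or above: explicitly trusted
                    DISTRUST_THRESHOLD = 0.3  # below: distrusted; in between: 'untrusted'
                    BOOTSTRAP_LENGTH = 50     # fixed duration of the bootstrapping phase

                    def select_partner(candidates, trust_of, interactions_so_far):
                        # During bootstrapping, every partner has an equal chance of being selected.
                        if interactions_so_far < BOOTSTRAP_LENGTH:
                            return random.choice(candidates)
                        trusted = [a for a in candidates if trust_of(a) >= TRUST_THRESHOLD]
                        if trusted:
                            # Delegate to the most trusted of the explicitly trusted agents.
                            return max(trusted, key=trust_of)
                        # 'Untrusted' agents (neither trusted nor distrusted) are considered only
                        # when no explicitly trusted alternative is available.
                        untrusted = [a for a in candidates
                                     if DISTRUST_THRESHOLD <= trust_of(a) < TRUST_THRESHOLD]
                        if untrusted:
                            return max(untrusted, key=trust_of)
                        return None  # only distrusted candidates remain; do not delegate
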
                    The SULTAN model was developed primarily with a view to supporting secure interactions in
                    Internet applications, in the domain of trust management. These works can be distinguished from
                    other works by their focus on security and implementability within enterprise systems. Typical
                    decisions necessitating trust, in this context, may be the decision to allow a user access to a
                    sensitive or restricted system resource, or the decision to accept a user's authorisation key. Trust
                    is generally specified as rules (or policies) provided by users, stating the preconditions of trust. By
                    taking a probabilistic view of the possible contingencies, the authors quantify risks in terms of
                    Expected Loss and Maximum Allowable Loss. The decision to trust is made using the policies
                    together with a risk threshold, above which an interaction is considered too risky.
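                    As a rough illustration of how such a risk check can be combined with policy evaluation, the Python
                    sketch below computes an expected loss over the possible contingencies and compares it with the
                    designer-supplied bounds; the function and parameter names are hypothetical and the actual
                    SULTAN risk service is more elaborate.

                    def expected_loss(contingencies):
                        # contingencies: iterable of (probability, loss) pairs for the possible bad outcomes.
                        return sum(p * loss for p, loss in contingencies)

                    def decide_to_trust(policies_satisfied, contingencies, risk_threshold, max_allowable_loss):
                        # Trust requires that every applicable policy precondition holds ...
                        if not policies_satisfied:
                            return False
                        # ... that no single contingency can exceed the Maximum Allowable Loss ...
                        if any(loss > max_allowable_loss for _, loss in contingencies):
                            return False
                        # ... and that the Expected Loss stays within the risk threshold; otherwise
                        # the interaction is considered too risky.
                        return expected_loss(contingencies) <= risk_threshold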

                    In the FIRE model, Huynh et al. employ a more sophisticated variant of the most-trusted strategy
                    for selecting interaction partners, one which includes exploration. The decision mechanism of FIRE
                    consists of two stages and can be summarised as follows. The set of potential partners is initially
                    divided into two subsets, based on the ability of the trustor to produce evaluations for those
                    partners. These sets are termed hasTrustValue and noTrustValue. The most trusted candidate from
                    the hasTrustValue set is advanced to the exploration stage. In this second stage, the Boltzmann
                    exploration strategy is used to choose between selecting that most trusted agent and selecting a
                    random one from the noTrustValue set. In this model, the trustor always chooses to delegate.
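                    A compact Python sketch of this two-stage selection follows, under assumptions the source does
                    not specify: unknown candidates are given a neutral expected utility of 0, the temperature value is
                    arbitrary, and trust_model is a hypothetical callable returning a trust value or None when the
                    trustor cannot produce an evaluation.

                    import math
                    import random

                    def fire_select(candidates, trust_model, temperature=0.5):
                        evaluations = {a: trust_model(a) for a in candidates}
                        has_value = {a: v for a, v in evaluations.items() if v is not None}
                        no_value = [a for a, v in evaluations.items() if v is None]
                        if not has_value:
                            return random.choice(no_value)   # nothing is known yet: explore
                        best = max(has_value, key=has_value.get)
                        if not no_value:
                            return best                      # nothing left to explore: exploit
                        # Stage two: Boltzmann choice between exploiting the most trusted agent
                        # (utility = its trust value) and exploring an unknown one (assumed utility 0).
                        w_exploit = math.exp(has_value[best] / temperature)
                        w_explore = math.exp(0.0 / temperature)
                        p_exploit = w_exploit / (w_exploit + w_explore)
                        return best if random.random() < p_exploit else random.choice(no_value)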

                    The Boltzmann exploration strategy is useful for decision-making when nothing is known about the
                    candidate set. Given that an agent has a choice between a number of actions (i.e. delegation
                    candidates) a_1, a_2, …, a_n with expected utilities u_1, u_2, …, u_n, the Boltzmann strategy
                    assigns a probability to each action a_i according to the distribution:

                            P(a_i) = exp(u_i / T) / Σ_{j=1..n} exp(u_j / T)

                    where T is a temperature parameter: a high temperature makes the selection nearly uniform
                    (favouring exploration), while a low temperature concentrates the probability on the action with
                    the highest expected utility (favouring exploitation).
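                    A minimal Python sketch of this distribution follows; the example utilities and temperatures are
                    illustrative only.

                    import math

                    def boltzmann_probabilities(utilities, temperature):
                        # P(a_i) = exp(u_i / T) / sum_j exp(u_j / T)
                        weights = [math.exp(u / temperature) for u in utilities]
                        total = sum(weights)
                        return [w / total for w in weights]

                    # Three delegation candidates with expected utilities 0.9, 0.5 and 0.1: a low
                    # temperature strongly favours the first, a high one is close to uniform.
                    print(boltzmann_probabilities([0.9, 0.5, 0.1], temperature=0.2))
                    print(boltzmann_probabilities([0.9, 0.5, 0.1], temperature=5.0))
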
            •       Decision Theoretic Approaches
                    Matt et al. present an approach that combines probabilistic measures of trustworthiness with a
                    logical argumentation framework. In this work, the authors assume the existence
                    of contracts which specify certain guarantees about the interaction outcomes that can be expected.
                    The probabilistic representation of trust is based on the model of Yu and Singh. An agent deliberates
                    by advancing arguments regarding service parameters (e.g. reliability, security) that either attack or
                    support a proposition T, representing the assertion that a particular trustee is trustworthy. A second
                    kind of argument (called a mitigation argument) attacks contract arguments that support T. These
                    arguments represent claims that a particular agent usually violates a contract clause which supports
                    T. The decision to trust is eventually made on the basis of whether the proposition T is supported
                    beyond some cautiousness parameter, which is equivalent to the trusting threshold of Yu and Singh.
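                    The toy Python sketch below abstracts this decision into simple counting over arguments for and
                    against the proposition T; the model of Matt et al. derives its degree of support from the
                    probabilistic (Yu and Singh style) trust representation and the argumentation semantics, so the
                    names and arithmetic here are purely illustrative.

                    def decide_to_trust(support_args, attack_args, mitigated, cautiousness):
                        # support_args: contract arguments supporting proposition T ('the trustee is trustworthy')
                        # attack_args:  arguments attacking T
                        # mitigated:    supporting arguments undercut by a mitigation argument, i.e. a claim
                        #               that the trustee usually violates the clause backing that argument
                        # cautiousness: threshold the degree of support for T must exceed before trusting
                        surviving = [a for a in support_args if a not in mitigated]
                        total = len(surviving) + len(attack_args)
                        if total == 0:
                            return False          # no evidence either way: remain cautious
                        degree_of_support = len(surviving) / total
                        return degree_of_support > cautiousness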

