foundations of computational agents
Definite clauses can be used in a proof by contradiction by allowing rules that give contradictions. For example, in the electrical wiring domain, it is useful to be able to specify that some prediction, such as the prediction that a light is on, is not true. This enables diagnostic reasoning to deduce that some switches, lights, or circuit breakers are broken.
The definite-clause language does not allow a contradiction to be stated. However, a simple expansion of the language can allow proof by contradiction.
An integrity constraint is a clause of the form
false ← a₁ ∧ … ∧ aₖ
where the aᵢ are atoms and false is a special atom that is false in all interpretations.
A Horn clause is either a definite clause or an integrity constraint. That is, a Horn clause has either false or a normal atom as its head.
Integrity constraints allow the system to prove that some conjunction of atoms is false in all models of a knowledge base. Recall that ¬p is the negation of p, which is true in an interpretation when p is false in that interpretation, and p ∨ q is the disjunction of p and q, which is true in an interpretation if p is true or q is true or both are true in the interpretation. The integrity constraint false ← a₁ ∧ … ∧ aₖ is logically equivalent to ¬a₁ ∨ … ∨ ¬aₖ.
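The equivalence can be checked mechanically by enumerating interpretations. The following Python sketch (illustrative only; the function names are not part of the book's clausal notation) verifies it for a two-atom body:

```python
from itertools import product

def constraint_holds(a, b):
    # The integrity constraint "false <- a ^ b" holds in an interpretation
    # exactly when its body is not all true.
    return not (a and b)

def negated_disjunction(a, b):
    # "~a v ~b"
    return (not a) or (not b)

# The two formulas agree in every interpretation of {a, b}.
assert all(constraint_holds(a, b) == negated_disjunction(a, b)
           for a, b in product([False, True], repeat=2))
```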
Unlike a definite-clause knowledge base, a Horn clause knowledge base can imply negations of atoms, as shown in Example 5.17.
Consider the knowledge base KB₁:
false ← a ∧ b.
a ← c.
b ← c.
The atom c is false in all models of KB₁. To see this, suppose instead that c is true in model I of KB₁. Then a and b would both be true in I (otherwise I would not be a model of KB₁). Because false is false in I and a and b are true in I, the first clause is false in I, a contradiction to I being a model of KB₁. Thus, ¬c is true in all models of KB₁, which can be written as
KB₁ ⊨ ¬c.
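This kind of argument can also be checked by brute force: enumerate all interpretations, keep the models, and inspect c. A small Python sketch (illustrative, with each clause encoded as an implication):

```python
from itertools import product

def is_model(a, b, c):
    # Clauses of the example: false <- a ^ b,  a <- c,  b <- c.
    # The special atom false is false in every interpretation, so the
    # first clause holds only when its body a ^ b is not all true.
    return (not (a and b)       # false <- a ^ b
            and (a or not c)    # a <- c, i.e., c implies a
            and (b or not c))   # b <- c, i.e., c implies b

models = [(a, b, c) for a, b, c in product([False, True], repeat=3)
          if is_model(a, b, c)]

assert models                                   # the clauses are satisfiable
assert all(c is False for (a, b, c) in models)  # c is false in every model
```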
Although the language of Horn clauses does not allow disjunctions and negations to be input, disjunctions of negations of atoms can be derived, as the following example shows.
Consider the knowledge base KB₂:
false ← a ∧ b.
a ← c.
b ← d.
b ← e.
Either c is false or d is false in every model of KB₂. If they were both true in some model I of KB₂, both a and b would be true in I, so the first clause would be false in I, a contradiction to I being a model of KB₂. Similarly, either c is false or e is false in every model of KB₂. Thus
KB₂ ⊨ ¬c ∨ ¬d.
KB₂ ⊨ ¬c ∨ ¬e.
A set of clauses is unsatisfiable if it has no models. A set of clauses is provably inconsistent with respect to a proof procedure if false can be derived from the clauses using that proof procedure. If a proof procedure is sound and complete, a set of clauses is provably inconsistent if and only if it is unsatisfiable.
It is always possible to find a model for a set of definite clauses. The interpretation with all atoms true is a model of any set of definite clauses. Thus, a definite-clause knowledge base is always satisfiable. However, a set of Horn clauses can be unsatisfiable.
The set of clauses {a, false ← a} is unsatisfiable. There is no interpretation that satisfies both clauses: the atomic clause requires a to be true, and the integrity constraint requires a to be false.
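Under the same brute-force encoding, unsatisfiability shows up as an empty set of models. A minimal check in Python (illustrative):

```python
# Clauses: the atomic clause "a" and the integrity constraint "false <- a".
# The first requires a to be true; the second requires a to be false.
models = [a for a in (False, True)
          if a          # the atomic clause a
          and not a]    # false <- a

assert models == []  # no interpretation satisfies both clauses
```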
Both the top-down and the bottom-up proof procedures can be used to prove inconsistency, by using false as the query.
Reasoning from contradictions is a very useful tool. For many activities it is useful to know that some combination of assumptions is incompatible. For example, it is useful in planning to know that some combination of actions an agent is contemplating is impossible. When designing a new artifact, it is useful to know that some combination of components cannot work together.
In a diagnostic application it is useful to be able to prove that some components working normally is inconsistent with the observations of the system. Consider a system that has a description of how it is supposed to work and some observations. If the system does not work according to its specification, a diagnostic agent should identify which components could be faulty.
To carry out these tasks it is useful to be able to make assumptions that can be proven to be false.
An assumable is an atom that can be assumed in a proof by contradiction. A proof by contradiction derives a disjunction of the negations of assumables.
With a Horn clause knowledge base and explicit assumables, if the system can prove a contradiction from some assumptions, it can extract those combinations of assumptions that cannot all be true. Instead of proving a query, the system tries to prove false, and collects the assumables that are used in a proof.
If KB is a set of Horn clauses, a conflict of KB is a set of assumables that, given KB, implies false. That is, {c₁, …, cᵣ} is a conflict of KB if
KB ∪ {c₁, …, cᵣ} ⊨ false.
In this case, an answer is
KB ⊨ ¬c₁ ∨ … ∨ ¬cᵣ.
A minimal conflict is a conflict such that no strict subset is also a conflict.
In Example 5.18, if {c, d, e} is the set of assumables, then {c, d} and {c, e} are minimal conflicts of the knowledge base; {c, d, e} is also a conflict, but not a minimal conflict.
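The conflicts of that example can be computed directly from the definition: a set of assumables is a conflict if asserting all of its members as facts makes the clauses unsatisfiable. A brute-force Python sketch (illustrative encoding, adequate only at this small scale):

```python
from itertools import combinations, product

ASSUMABLES = ('c', 'd', 'e')

def satisfiable(assumed):
    # True if some interpretation makes all clauses of Example 5.18 true
    # while making every assumed assumable true.
    for a, b, c, d, e in product([False, True], repeat=5):
        value = {'c': c, 'd': d, 'e': e}
        if not all(value[x] for x in assumed):
            continue
        if (not (a and b)           # false <- a ^ b
                and (a or not c)    # a <- c
                and (b or not d)    # b <- d
                and (b or not e)):  # b <- e
            return True
    return False

conflicts = [set(s) for r in range(len(ASSUMABLES) + 1)
             for s in combinations(ASSUMABLES, r) if not satisfiable(s)]
minimal = [s for s in conflicts if not any(t < s for t in conflicts)]

assert sorted(sorted(m) for m in minimal) == [['c', 'd'], ['c', 'e']]
```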
In the examples that follow, assumables are specified using the assumable keyword followed by one or more assumable atoms separated by commas.
Making assumptions about what is working normally, and deriving what components could be abnormal, is the basis of consistency-based diagnosis. A fault is something that is wrong with a system. The aim of consistency-based diagnosis is to determine the possible faults based on a model of the system and observations of the system. By making the absence of faults assumable, conflicts can be used to prove what is wrong with the system.
Consider the house wiring example depicted in Figure 5.2 and represented in Example 5.8. Figure 5.8 gives a background knowledge base suitable for consistency-based diagnosis. Normality assumptions, specifying that switches, circuit breakers, and lights must be ok to work as expected, are added to the clauses. There are no clauses for the ok atoms, but they are made assumable.
The user is able to observe the switch positions and whether a light is lit or dark.
A light cannot be both lit and dark. This knowledge is stated in the following integrity constraints:
false ← dark_l1 ∧ lit_l1.
false ← dark_l2 ∧ lit_l2.
Suppose the user observes that all three switches are up, and that l1 and l2 are both dark. This is represented by the atomic clauses
up_s1.
up_s2.
up_s3.
dark_l1.
dark_l2.
Given the knowledge of Figure 5.8 together with the observations, there are two minimal conflicts:
{ok_cb1, ok_s1, ok_s2, ok_l1}.
{ok_cb1, ok_s3, ok_l2}.
Thus, it follows that
KB ⊨ ¬ok_cb1 ∨ ¬ok_s1 ∨ ¬ok_s2 ∨ ¬ok_l1.
KB ⊨ ¬ok_cb1 ∨ ¬ok_s3 ∨ ¬ok_l2.
which means that at least one of the components cb1, s1, s2, or l1 must not be ok, and at least one of the components cb1, s3, or l2 must not be ok.
Given the set of all conflicts, a user can determine what may be wrong with the system being diagnosed. However, sometimes it is more useful to give a disjunction of conjunctions of faults. This lets the user see whether all of the conflicts can be accounted for by a single fault or a pair of faults, or whether more faults are needed to account for them.
Given a set of conflicts, a consistency-based diagnosis is a set of assumables that has at least one element in each conflict. A minimal diagnosis is a diagnosis such that no strict subset is also a diagnosis. For one of the diagnoses, all of its elements must be false in the world being modeled.
In Example 5.21, the disjunction of the negations of the two conflicts is a logical consequence of the clauses. Thus, the conjunction
(¬ok_cb1 ∨ ¬ok_s1 ∨ ¬ok_s2 ∨ ¬ok_l1) ∧ (¬ok_cb1 ∨ ¬ok_s3 ∨ ¬ok_l2)
follows from the knowledge base. This conjunction of disjunctions in conjunctive normal form (CNF) can be distributed into disjunctive normal form (DNF), a disjunction of conjunctions, here of negated atoms:
¬ok_cb1 ∨ (¬ok_s1 ∧ ¬ok_s3) ∨ (¬ok_s1 ∧ ¬ok_l2) ∨ (¬ok_s2 ∧ ¬ok_s3) ∨ (¬ok_s2 ∧ ¬ok_l2) ∨ (¬ok_l1 ∧ ¬ok_s3) ∨ (¬ok_l1 ∧ ¬ok_l2).
Thus, either cb1 is broken or there is at least one of six double faults.
The propositions that are disjoined together correspond to the seven minimal diagnoses: {ok_cb1}, {ok_s1, ok_s3}, {ok_s1, ok_l2}, {ok_s2, ok_s3}, {ok_s2, ok_l2}, {ok_l1, ok_s3}, {ok_l1, ok_l2}. The system has proved that one of these combinations must be faulty.
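A diagnosis is, in other words, a hitting set of the conflicts, and the minimal diagnoses are the minimal hitting sets. The seven diagnoses can be recomputed from the two conflicts with a small Python sketch (brute force, which is fine at this scale; specialized hitting-set algorithms exist for larger problems):

```python
from itertools import combinations

# The two minimal conflicts from the wiring example.
conflicts = [{'ok_cb1', 'ok_s1', 'ok_s2', 'ok_l1'},
             {'ok_cb1', 'ok_s3', 'ok_l2'}]

atoms = sorted(set().union(*conflicts))

# A diagnosis has at least one element in each conflict.
diagnoses = [set(s) for r in range(1, len(atoms) + 1)
             for s in combinations(atoms, r)
             if all(set(s) & c for c in conflicts)]

# A minimal diagnosis has no strict subset that is also a diagnosis.
minimal = [d for d in diagnoses if not any(e < d for e in diagnoses)]

assert {'ok_cb1'} in minimal  # the single fault
assert len(minimal) == 7      # one single fault and six double faults
```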
This section presents a bottom-up implementation and a top-down implementation for finding conflicts in Horn clause knowledge bases.
The bottom-up proof procedure for assumables and Horn clauses is an augmented version of the bottom-up algorithm for definite clauses presented in Section 5.3.2.
The modification to that algorithm is that the conclusions are pairs ⟨a, A⟩, where a is an atom and A is a set of assumables that imply a in the context of the Horn clause knowledge base KB.
Initially, the conclusion set C is {⟨a, {a}⟩ : a is assumable}. Clauses can be used to derive new conclusions. If there is a clause h ← b₁ ∧ … ∧ bₘ such that for each bᵢ there is some Aᵢ such that ⟨bᵢ, Aᵢ⟩ ∈ C, then ⟨h, A₁ ∪ … ∪ Aₘ⟩ can be added to C. This covers the case of atomic clauses, with m = 0, where ⟨h, {}⟩ is added to C.
Figure 5.9 gives code for the algorithm. This algorithm is an assumption-based truth maintenance system (ATMS), and can be combined with the incremental addition of clauses and assumables.
When the pair ⟨false, A⟩ is generated, the assumptions A form a conflict.
One refinement of this program is to prune supersets of assumptions. If ⟨a, A₁⟩ and ⟨a, A₂⟩ are in C, where A₁ ⊂ A₂, then ⟨a, A₂⟩ can be removed from C or not added to C. There is no reason to use the extra assumptions to imply a. Similarly, if ⟨false, A₁⟩ and ⟨a, A₂⟩ are in C, where A₁ ⊆ A₂, then ⟨a, A₂⟩ can be removed from C because A₁ and any superset – including A₂ – are inconsistent with the clauses given, and so nothing more can be learned from considering such sets of assumables.
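The procedure of Figure 5.9 can be sketched in Python as follows. This is an illustrative translation, not the book's pseudocode: clauses are assumed to be (head, body) pairs, the contradiction atom is the string 'false', and the superset pruning just described is omitted for brevity:

```python
from itertools import product

def bottom_up_conflicts(clauses, assumables):
    """Return the assumption sets A for which <false, A> is derivable.

    C is the set of conclusions: pairs (atom, frozenset of assumables).
    """
    C = {(a, frozenset([a])) for a in assumables}
    while True:
        new = set()
        for head, body in clauses:
            # Collect, for each body atom, its known supporting sets.
            options = [[A for (a, A) in C if a == b] for b in body]
            if any(not opts for opts in options):
                continue  # some body atom has no derivation yet
            for combo in product(*options):
                pair = (head, frozenset().union(*combo))
                if pair not in C:
                    new.add(pair)
        if not new:
            break  # fixed point reached
        C |= new
    return {A for (a, A) in C if a == 'false'}

# Example 5.18: false <- a ^ b,  a <- c,  b <- d,  b <- e.
clauses = [('false', ['a', 'b']), ('a', ['c']), ('b', ['d']), ('b', ['e'])]
assert bottom_up_conflicts(clauses, ['c', 'd', 'e']) == \
    {frozenset({'c', 'd'}), frozenset({'c', 'e'})}
```

Atomic clauses are handled by the empty body: the product over no options yields one empty combination, whose union is the empty assumption set.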
Consider the axiomatization of Figure 5.8, discussed in Example 5.21.
Initially, in the algorithm of Figure 5.9, C has the value
{⟨ok_l1, {ok_l1}⟩, ⟨ok_l2, {ok_l2}⟩, ⟨ok_cb1, {ok_cb1}⟩, ⟨ok_cb2, {ok_cb2}⟩, ⟨ok_s1, {ok_s1}⟩, ⟨ok_s2, {ok_s2}⟩, ⟨ok_s3, {ok_s3}⟩}.
The following shows a sequence of values added to C under one sequence of selections:
⟨live_outside, {}⟩
⟨live_w5, {}⟩
⟨live_w3, {ok_cb1}⟩
⟨up_s3, {}⟩
⟨live_w4, {ok_cb1, ok_s3}⟩
⟨live_l2, {ok_cb1, ok_s3}⟩
⟨lit_l2, {ok_cb1, ok_s3, ok_l2}⟩
⟨dark_l2, {}⟩
⟨false, {ok_cb1, ok_s3, ok_l2}⟩
Thus, the knowledge base entails
¬ok_cb1 ∨ ¬ok_s3 ∨ ¬ok_l2.
The other conflict can be found by continuing the algorithm.
The top-down implementation is similar to the top-down definite-clause interpreter described in Figure 5.4, except the top-level query is to prove false, and the assumables encountered in a proof are not proved but collected.
The algorithm is shown in Figure 5.10. Different choices can lead to different conflicts being found. If no choices are available, the algorithm fails.
Consider the representation of the circuit in Example 5.21. The following is a sequence of the values of G for one sequence of selections and choices that leads to a conflict:
{false}
{dark_l2, lit_l2}
{lit_l2}
{live_l2, ok_l2}
{live_w4, ok_l2}
{live_w3, up_s3, ok_s3, ok_l2}
{live_w5, ok_cb1, up_s3, ok_s3, ok_l2}
{live_outside, ok_cb1, up_s3, ok_s3, ok_l2}
{ok_cb1, up_s3, ok_s3, ok_l2}
{ok_cb1, ok_s3, ok_l2}
The set {ok_cb1, ok_s3, ok_l2} is returned as a conflict. Different choices of the clause to use can lead to another answer.
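The top-down procedure of Figure 5.10 can likewise be sketched in Python. This is an illustrative depth-first version, not the book's pseudocode; it assumes an acyclic knowledge base, with the same representation as the bottom-up sketch: clauses as (head, body) pairs and 'false' as the contradiction atom:

```python
def top_down_conflict(clauses, assumables):
    """Prove 'false' top-down, collecting the assumables used.

    Returns one conflict as a frozenset, or None if no proof exists.
    Depth-first: the first matching clause is tried first, with
    backtracking on failure. Assumes the clauses are acyclic.
    """
    def prove(goals, assumed):
        if not goals:
            return assumed
        g, rest = goals[0], goals[1:]
        if g in assumables:
            # Assumables are not proved; they are collected.
            return prove(rest, assumed | {g})
        for head, body in clauses:
            if head == g:
                result = prove(body + rest, assumed)
                if result is not None:
                    return result
        return None  # no clause for g: this branch fails

    return prove(['false'], frozenset())

# Example 5.18 again: with this clause ordering, the proof uses b <- d
# first, so the conflict found is {c, d}; other choices yield {c, e}.
clauses = [('false', ['a', 'b']), ('a', ['c']), ('b', ['d']), ('b', ['e'])]
assert top_down_conflict(clauses, {'c', 'd', 'e'}) == frozenset({'c', 'd'})
```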