Knowledge Representation & Reasoning
2. Logic-based knowledge representation and reasoning
2.6. Discussing logic-based planning of popcorn making
The logic-based approach to reasoning about the actions of a robot agent has several attractive properties compared to other forms of robot decision making. Given a model formalized as a set of logical axioms that correctly and completely represents the robot, its actions, and the environment, a robot agent can compute action plans for every goal that it is able to accomplish (Kowalski, 1979). Further, every action plan that the robot agent computes this way is guaranteed to be correct with respect to the formalized model. The set of axioms represents the knowledge about actions, their preconditions, and their effects in a modular, transparent, and explicit way. For every deduced action plan the robot agent can use the proof of plan existence as an explanation of how the derived plan achieves the goal. Finally, the competence of the robot agent can be incrementally increased by stating additional axioms that formalize new objects, actions, and environment conditions.
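To make the idea concrete, the following minimal sketch casts action axioms in a STRIPS-like form (preconditions, add effects, delete effects) and derives a plan by breadth-first search over the induced state transition system; the search trace plays the role of the proof of plan existence. The popcorn-making domain, the action names, and the predicates are illustrative assumptions and are not taken from the formalizations cited above.

```python
from collections import deque

# STRIPS-like action axioms: name -> (preconditions, add effects, delete effects).
# The popcorn-making domain below is an illustrative assumption, not a standard benchmark.
ACTIONS = {
    "fetch_pot": ({"hand_free"},              {"has_pot"},       {"hand_free"}),
    "add_corn":  ({"has_pot"},                {"corn_in_pot"},   set()),
    "heat_pot":  ({"has_pot", "corn_in_pot"}, {"popcorn_ready"}, {"corn_in_pot"}),
}

def plan(initial, goal):
    """Breadth-first search for an action sequence after which all goal literals hold."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # goal entailed by the current state
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # preconditions satisfied
                successor = frozenset((state - delete) | add)
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, steps + [name]))
    return None                                # no plan exists in the modeled domain

print(plan({"hand_free"}, {"popcorn_ready"}))  # -> ['fetch_pot', 'add_corn', 'heat_pot']
```

The same plan could equally be derived by resolution over a situation calculus axiomatization; the point of the sketch is only that the plan follows from the action model and the goal, and that the derivation itself explains why the plan works.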
These advantages motivated many researchers to pursue a logic-based approach to reasoning about action. The logical approach to reasoning about action is presented in several textbooks, most notably those by Reiter, 2001; Genesereth, 1987; Davis, 1990; and Mueller, 2006. Seminal research papers promoting the logical approach to reasoning about action include McCarthy, 1969; McCarthy, 1968; Hayes, 1977; Hayes, 1979; and Hayes, 1985. There have also been attempts to standardize knowledge representation and reasoning, as, for example, in the Knowledge Interchange Format (Genesereth, 1998).
One caveat about the logical approach is that its advantages can only be leveraged if the respective robots, environments, and actions can be faithfully formalized as a consistent set of axioms. Coming up with sets of axioms that are consistent, logically entail all intended conclusions, and exclude unintended ones is often a difficult knowledge engineering and research activity.
The research work on knowledge-based reasoning about action is much too extensive to be covered in a book chapter. Therefore, in the remainder of this section we sketch relevant extensions to the logic-based reasoning approach and refer to textbooks and seminal papers for more thorough accounts of the respective research topics.
When trying to come up with appropriate logic-based axiomatizations for reasoning about robot actions, one often experiences the limited expressiveness of the state transition system model with atomic state transitions. In response to these limitations, various extensions have been proposed to deal with parallel actions (Baral, 1997), temporal aspects of action execution (McDermott, 1982; Allen, 1983; McDermott, 1985), resource consumption through actions (Dean, 1991; Dechter, 2003), and other shortcomings of the original approach.
Another limitation that is particularly critical for the control of robot agents is the inability to deal with nondeterministic and probabilistic aspects of actions (Fagin, 1990). In response to these limitations researchers have investigated nonmonotonic (McCarthy, 1980; McCarthy, 1986), probabilistic (Hanks, 1990; Beetz, 2005), and decision-theoretic approaches (Boutilier, 1998; Haddawy, 1990; Haddawy, 1993; Kaelbling, 1998) for deciding on the course of action that has the highest expected utility. To further increase the expressive power of the representation mechanisms, researchers have also proposed approaches that can reason about the information preconditions of actions, such as knowing the combination of a safe before opening it (Moore, 1985; Morgenstern, 1987), and that can represent not only the state of the environment but also an agent's beliefs about that state (Bratman, 1988; Rao, 1991).
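The decision-theoretic idea can be illustrated with a small computation; the grasping actions, the outcome probabilities, and the utilities below are purely illustrative assumptions.

```python
# Decision-theoretic action selection: choose the action with the highest expected utility.
# Actions, outcome probabilities, and utilities are illustrative assumptions, not measured values.
actions = {
    "grasp_cup_from_top":  [(0.7, 10.0), (0.3, -5.0)],   # (probability, utility) per outcome
    "grasp_cup_from_side": [(0.9,  8.0), (0.1, -5.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)  # grasp_cup_from_side: 0.9*8.0 + 0.1*(-5.0) = 6.7 beats 0.7*10.0 + 0.3*(-5.0) = 5.5
```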
Some of the reasoning problems addressed are particularly relevant for autonomous robots, such as the indistinguishability of objects with respect to their perceptual features. Perhaps the best known is the "bomb in the toilet" problem (McDermott, 1987), in which a robot looking at two identical packages is asked to disarm the bomb by putting it into the toilet. In this case, the proper plan is to put both packages into the toilet, because that is the only way to ensure that the bomb is disarmed. This problem is closely related to the so-called symbol grounding problem (Harnad, 1990): the semantics of logic languages assigns objects in the real world as the meanings of symbolic names, which becomes critical when the perceptual apparatus of the robot cannot reliably distinguish between the objects in question. One way of dealing with these problems is to extend the scope of reasoning to the grounding of symbols itself. When setting the table, the identity of a cup typically does not matter as long as the cup is clean and empty. When disarming a bomb, all candidate individuals should be disarmed in order to avoid an explosion. Only reasoning systems that can reason about the consequences of grounding symbols one way or another can deal in informed ways with cases where the robot control system cannot properly ground a symbolic representation.
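Viewed as a conformant planning problem, the example can be sketched as follows: the planner reasons over a set of possible worlds instead of a single grounded state and accepts only plans that achieve the goal in every world. The package names and the action model below are illustrative assumptions.

```python
# Bomb-in-the-toilet as conformant planning: the true world is unknown (the bomb is in
# package A or in package B), so a plan is acceptable only if it disarms the bomb in
# every possible world. Package names and the action model are illustrative assumptions.
possible_worlds = [{"bomb": "A"}, {"bomb": "B"}]

def dunk(package, world):
    """Putting the package that contains the bomb into the toilet disarms it."""
    return dict(world, disarmed=True) if world["bomb"] == package else world

def achieves_goal(plan, worlds):
    for world in worlds:
        for package in plan:
            world = dunk(package, world)
        if not world.get("disarmed"):
            return False                       # the plan fails in at least one possible world
    return True

for candidate in (["A"], ["B"], ["A", "B"]):
    print(candidate, achieves_goal(candidate, possible_worlds))
# ['A'] False, ['B'] False, ['A', 'B'] True -- only dunking both packages is guaranteed to work
```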
A heated debate about whether deduction is sufficient for realizing the common-sense capabilities of humans was triggered by the so-called Yale shooting problem (Hanks, 1987): a gun is loaded, the agent waits, and the gun is then fired at a victim; intuitively the victim should be dead afterwards, yet straightforward nonmonotonic minimization of change also admits an anomalous model in which the gun becomes unloaded during the waiting. The difficulties in solving the Yale shooting problem were taken by a number of researchers as an indication that a single logical reasoning mechanism might not be sufficient for both prospective and diagnostic reasoning about action (McDermott, 1987). One way of dealing with this issue is to employ multiple expert reasoners (Ferrucci, 2010), which, however, raises the problem of how to deal with situations where different reasoners propose conflicting solutions.
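For concreteness, the scenario that triggered the debate can be rendered in a situation-calculus style; the notation and fluent names below are our own illustrative choice rather than the original formulation.
\[
\begin{aligned}
& \mathit{Alive}(S_0), \qquad \neg\mathit{Loaded}(S_0) && \text{(initial situation)}\\
& \mathit{Loaded}(\mathit{do}(\mathit{load}, s)) && \text{(effect of loading)}\\
& \mathit{Loaded}(s) \rightarrow \neg\mathit{Alive}(\mathit{do}(\mathit{shoot}, s)) && \text{(effect of shooting)}
\end{aligned}
\]
Together with a nonmonotonic frame assumption that fluents normally persist, the intended conclusion is $\neg\mathit{Alive}(\mathit{do}(\mathit{shoot}, \mathit{do}(\mathit{wait}, \mathit{do}(\mathit{load}, S_0))))$; the anomalous model discussed above instead sacrifices the persistence of $\mathit{Loaded}$ across the $\mathit{wait}$ action.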
There have also been attempts to formalize more detailed and realistic models of manipulation actions in logic formalisms. This research agenda was originally put forward in the two naive physics manifestos (Hayes, 1985; Hayes, 1985b). Perhaps the most instructive activity was a challenge put forward by Davis and colleagues, in which they asked for an axiomatization of the task of cracking an egg and separating the egg yolk from the egg white (Miller: Common Sense Problem Page). The reasoning problem is that, given a description of an egg-cracking activity, the reasoner has to answer an open-ended collection of questions, including: "What happens if: The cook brings the egg to impact very quickly? Very slowly? The cook lays the egg in the bowl and exerts steady pressure with his hand? The cook, having cracked the egg, attempts to peel it off its contents like a hard-boiled egg? The bowl is made of looseleaf paper? Of soft clay? The bowl is smaller than the egg? The bowl is upside down? The cook tries this procedure with a hard-boiled egg? With a coconut? With an M&M?" This exercise showed the dependence of the axiomatizations on the proper choice of abstractions and how quickly axiomatizations become overly complex as action models become more realistic (Morgenstern, 2001).
If we apply the knowledge representation and reasoning methods proposed in the previous section to the control of robot agents, we see an important weakness of this approach: by choosing the state transition system model with atomic state transitions, we assume that the effects of actions are independent of the movements that the robot executes. This contradicts the view of robotics as the field that studies how to move the body of the robot in order to accomplish tasks.