%!TEX root = root.tex
\chapter{The Rules of the Game}
\label{chap:rules}
This chapter outlines the concepts and techniques that underlie reasoning
in Isabelle. Until now, we have proved everything using only induction and
simplification, but any serious verification project requires more elaborate
forms of inference. The chapter also introduces the fundamentals of
predicate logic. The first examples in this chapter will consist of
detailed, low-level proof steps. Later, we shall see how to automate such
reasoning using the methods
\isa{blast},
\isa{auto} and others. Backward or goal-directed proof is our usual style,
but the chapter also introduces forward reasoning, where one theorem is
transformed to yield another.
\section{Natural Deduction}
\index{natural deduction|(}%
In Isabelle, proofs are constructed using inference rules. The
most familiar inference rule is probably \emph{modus ponens}:%
\index{modus ponens@\emph{modus ponens}}
\[ \infer{Q}{P\imp Q & P} \]
This rule says that from $P\imp Q$ and $P$ we may infer~$Q$.
\textbf{Natural deduction} is an attempt to formalize logic in a way
that mirrors human reasoning patterns.
For each logical symbol (say, $\conj$), there
are two kinds of rules: \textbf{introduction} and \textbf{elimination} rules.
The introduction rules allow us to infer this symbol (say, to
infer conjunctions). The elimination rules allow us to deduce
consequences from this symbol. Ideally each rule should mention
one symbol only. For predicate logic this can be
done, but when users define their own concepts they typically
have to refer to other symbols as well. It is best not to be dogmatic.
Our system is not based on pure natural deduction, but includes elements from the sequent calculus
and free-variable tableaux.
Natural deduction generally deserves its name. It is easy to use. Each
proof step consists of identifying the outermost symbol of a formula and
applying the corresponding rule. It creates new subgoals in
an obvious way from parts of the chosen formula. Expanding the
definitions of constants can blow up the goal enormously. Deriving natural
deduction rules for such constants lets us reason in terms of their key
properties, which might otherwise be obscured by the technicalities of their
definitions. Natural deduction rules also lend themselves to automation.
Isabelle's
\textbf{classical reasoner} accepts any suitable collection of natural deduction
rules and uses them to search for proofs automatically. Isabelle is designed around
natural deduction and many of its tools use the terminology of introduction
and elimination rules.%
\index{natural deduction|)}
\section{Introduction Rules}
\index{introduction rules|(}%
An introduction rule tells us when we can infer a formula
containing a specific logical symbol. For example, the conjunction
introduction rule says that if we have $P$ and if we have $Q$ then
we have $P\conj Q$. In a mathematics text, it is typically shown
like this:
\[ \infer{P\conj Q}{P & Q} \]
The rule introduces the conjunction
symbol~($\conj$) in its conclusion. In Isabelle proofs we
mainly reason backwards. When we apply this rule, the subgoal already has
the form of a conjunction; the proof step makes this conjunction symbol
disappear.
In Isabelle notation, the rule looks like this:
\begin{isabelle}
\isasymlbrakk?P;\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?P\ \isasymand\ ?Q\rulenamedx{conjI}
\end{isabelle}
Carefully examine the syntax. The premises appear to the
left of the arrow and the conclusion to the right. The premises (if
more than one) are grouped using the fat brackets. The question marks
indicate \textbf{schematic variables} (also called
\textbf{unknowns}):\index{unknowns|bold} they may
be replaced by arbitrary formulas. If we use the rule backwards, Isabelle
tries to unify the current subgoal with the conclusion of the rule, which
has the form \isa{?P\ \isasymand\ ?Q}. (Unification is discussed below,
{\S}\ref{sec:unification}.) If successful,
it yields new subgoals given by the formulas assigned to
\isa{?P} and \isa{?Q}.
The following trivial proof illustrates how rules work. It also introduces a
style of indentation. If a command adds a new subgoal, then the next
command's indentation is increased by one space; if it proves a subgoal, then
the indentation is reduced. This provides the reader with hints about the
subgoal structure.
\begin{isabelle}
\isacommand{lemma}\ conj_rule:\ "\isasymlbrakk P;\
Q\isasymrbrakk\ \isasymLongrightarrow\ P\ \isasymand\
(Q\ \isasymand\ P)"\isanewline
\isacommand{apply}\ (rule\ conjI)\isanewline
\ \isacommand{apply}\ assumption\isanewline
\isacommand{apply}\ (rule\ conjI)\isanewline
\ \isacommand{apply}\ assumption\isanewline
\isacommand{apply}\ assumption
\end{isabelle}
At the start, Isabelle presents
us with the assumptions (\isa{P} and~\isa{Q}) and with the goal to be proved,
\isa{P\ \isasymand\
(Q\ \isasymand\ P)}. We are working backwards, so when we
apply conjunction introduction, the rule removes the outermost occurrence
of the \isa{\isasymand} symbol. To apply a rule to a subgoal, we apply
the proof method \isa{rule} --- here with \isa{conjI}, the conjunction
introduction rule.
\begin{isabelle}
%\isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P\ \isasymand\ Q\
%\isasymand\ P\isanewline
\ 1.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P\isanewline
\ 2.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ Q\ \isasymand\ P
\end{isabelle}
Isabelle leaves two new subgoals: the two halves of the original conjunction.
The first is simply \isa{P}, which is trivial, since \isa{P} is among
the assumptions. We can apply the \methdx{assumption}
method, which proves a subgoal by finding a matching assumption.
\begin{isabelle}
\ 1.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\
Q\ \isasymand\ P
\end{isabelle}
We are left with the subgoal of proving
\isa{Q\ \isasymand\ P} from the assumptions \isa{P} and~\isa{Q}. We apply
\isa{rule conjI} again.
\begin{isabelle}
\ 1.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ Q\isanewline
\ 2.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P
\end{isabelle}
We are left with two new subgoals, \isa{Q} and~\isa{P}, each of which can be proved
using the \isa{assumption} method.%
\index{introduction rules|)}
\section{Elimination Rules}
\index{elimination rules|(}%
Elimination rules work in the opposite direction from introduction
rules. In the case of conjunction, there are two such rules.
From $P\conj Q$ we infer $P$. Also, from $P\conj Q$
we infer $Q$:
\[ \infer{P}{P\conj Q} \qquad \infer{Q}{P\conj Q} \]
Now consider disjunction. There are two introduction rules, which resemble inverted forms of the
conjunction elimination rules:
\[ \infer{P\disj Q}{P} \qquad \infer{P\disj Q}{Q} \]
What is the disjunction elimination rule? The situation is rather different from
conjunction. From $P\disj Q$ we cannot conclude that $P$ is true and we
cannot conclude that $Q$ is true; there are no direct
elimination rules of the sort that we have seen for conjunction. Instead,
there is an elimination rule that works indirectly. If we are trying to prove
something else, say $R$, and we know that $P\disj Q$ holds, then we have to consider
two cases. We can assume that $P$ is true and prove $R$ and then assume that $Q$ is
true and prove $R$ a second time. Here we see a fundamental concept used in natural
deduction: that of the \textbf{assumptions}. We have to prove $R$ twice, under
different assumptions. The assumptions are local to these subproofs and are visible
nowhere else.
In a logic text, the disjunction elimination rule might be shown
like this:
\[ \infer{R}{P\disj Q & \infer*{R}{[P]} & \infer*{R}{[Q]}} \]
The assumptions $[P]$ and $[Q]$ are bracketed
to emphasize that they are local to their subproofs. In Isabelle
notation, the already-familiar \isa{\isasymLongrightarrow} syntax serves the
same purpose:
\begin{isabelle}
\isasymlbrakk?P\ \isasymor\ ?Q;\ ?P\ \isasymLongrightarrow\ ?R;\ ?Q\ \isasymLongrightarrow\ ?R\isasymrbrakk\ \isasymLongrightarrow\ ?R\rulenamedx{disjE}
\end{isabelle}
When we use this sort of elimination rule backwards, it produces
a case split. (We have seen this before, in proofs by induction.) The following proof
illustrates the use of disjunction elimination.
\begin{isabelle}
\isacommand{lemma}\ disj_swap:\ "P\ \isasymor\ Q\
\isasymLongrightarrow\ Q\ \isasymor\ P"\isanewline
\isacommand{apply}\ (erule\ disjE)\isanewline
\ \isacommand{apply}\ (rule\ disjI2)\isanewline
\ \isacommand{apply}\ assumption\isanewline
\isacommand{apply}\ (rule\ disjI1)\isanewline
\isacommand{apply}\ assumption
\end{isabelle}
We assume \isa{P\ \isasymor\ Q} and
must prove \isa{Q\ \isasymor\ P}\@. Our first step uses the disjunction
elimination rule, \isa{disjE}\@. We invoke it using \methdx{erule}, a
method designed to work with elimination rules. It looks for an assumption that
matches the rule's first premise. It deletes the matching assumption,
regards the first premise as proved and returns subgoals corresponding to
the remaining premises. When we apply \isa{erule} to \isa{disjE}, only two
subgoals result. This is better than applying it using \isa{rule}
to get three subgoals, then proving the first by assumption: the other
subgoals would have the redundant assumption
\hbox{\isa{P\ \isasymor\ Q}}.
Most of the time, \isa{erule} is the best way to use elimination rules, since it
replaces an assumption by its subformulas; only rarely does the original
assumption remain useful.
\begin{isabelle}
%P\ \isasymor\ Q\ \isasymLongrightarrow\ Q\ \isasymor\ P\isanewline
\ 1.\ P\ \isasymLongrightarrow\ Q\ \isasymor\ P\isanewline
\ 2.\ Q\ \isasymLongrightarrow\ Q\ \isasymor\ P
\end{isabelle}
These are the two subgoals returned by \isa{erule}. The first assumes
\isa{P} and the second assumes \isa{Q}. Tackling the first subgoal, we
need to show \isa{Q\ \isasymor\ P}\@. The second introduction rule
(\isa{disjI2}) can reduce this to \isa{P}, which matches the assumption.
So, we apply the
\isa{rule} method with \isa{disjI2} \ldots
\begin{isabelle}
\ 1.\ P\ \isasymLongrightarrow\ P\isanewline
\ 2.\ Q\ \isasymLongrightarrow\ Q\ \isasymor\ P
\end{isabelle}
\ldots and finish off with the \isa{assumption}
method. We are left with the other subgoal, which
assumes \isa{Q}.
\begin{isabelle}
\ 1.\ Q\ \isasymLongrightarrow\ Q\ \isasymor\ P
\end{isabelle}
Its proof is similar, using the introduction
rule \isa{disjI1}.
The result of this proof is a new inference rule \isa{disj_swap}, which is neither
an introduction nor an elimination rule, but which might
be useful. We can use it to replace any goal of the form $Q\disj P$
by one of the form $P\disj Q$.%
\index{elimination rules|)}
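For instance, a proof using the derived rule might run as follows (a small sketch to illustrate the point): applying \isa{rule disj_swap} to the goal \isa{Q\ \isasymor\ P} replaces it by \isa{P\ \isasymor\ Q}, which \isa{disjI2} then reduces to the assumption \isa{Q}.
\begin{isabelle}
\isacommand{lemma}\ "Q\ \isasymLongrightarrow\ Q\ \isasymor\ P"\isanewline
\isacommand{apply}\ (rule\ disj_swap)\isanewline
\isacommand{apply}\ (rule\ disjI2)\isanewline
\isacommand{by}\ assumption
\end{isabelle}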
\section{Destruction Rules: Some Examples}
\index{destruction rules|(}%
Now let us examine the analogous proof for conjunction.
\begin{isabelle}
\isacommand{lemma}\ conj_swap:\ "P\ \isasymand\ Q\ \isasymLongrightarrow\ Q\ \isasymand\ P"\isanewline
\isacommand{apply}\ (rule\ conjI)\isanewline
\ \isacommand{apply}\ (drule\ conjunct2)\isanewline
\ \isacommand{apply}\ assumption\isanewline
\isacommand{apply}\ (drule\ conjunct1)\isanewline
\isacommand{apply}\ assumption
\end{isabelle}
Recall that the conjunction elimination rules --- whose Isabelle names are
\isa{conjunct1} and \isa{conjunct2} --- simply return the first or second half
of a conjunction. Rules of this sort (where the conclusion is a subformula of a
premise) are called \textbf{destruction} rules because they take apart and destroy
a premise.%
\footnote{This Isabelle terminology is not used in standard logic texts,
although the distinction between the two forms of elimination rule is well known.
Girard \cite[page 74]{girard89},\index{Girard, Jean-Yves|fnote}
for example, writes ``The elimination rules
[for $\disj$ and $\exists$] are very
bad. What is catastrophic about them is the parasitic presence of a formula [$R$]
which has no structural link with the formula which is eliminated.''
These Isabelle rules are inspired by the sequent calculus.}
The first proof step applies conjunction introduction, leaving
two subgoals:
\begin{isabelle}
%P\ \isasymand\ Q\ \isasymLongrightarrow\ Q\ \isasymand\ P\isanewline
\ 1.\ P\ \isasymand\ Q\ \isasymLongrightarrow\ Q\isanewline
\ 2.\ P\ \isasymand\ Q\ \isasymLongrightarrow\ P
\end{isabelle}
To invoke the elimination rule, we apply a new method, \isa{drule}.
Think of the \isa{d} as standing for \textbf{destruction} (or \textbf{direct}, if
you prefer). Applying the
second conjunction rule using \isa{drule} replaces the assumption
\isa{P\ \isasymand\ Q} by \isa{Q}.
\begin{isabelle}
\ 1.\ Q\ \isasymLongrightarrow\ Q\isanewline
\ 2.\ P\ \isasymand\ Q\ \isasymLongrightarrow\ P
\end{isabelle}
The resulting subgoal can be proved by applying \isa{assumption}.
The other subgoal is similarly proved, using the \isa{conjunct1} rule and the
\isa{assumption} method.
Choosing among the methods \isa{rule}, \isa{erule} and \isa{drule} is up to
you. Isabelle does not attempt to work out whether a rule
is an introduction rule or an elimination rule. The
method determines how the rule will be interpreted. Many rules
can be used in more than one way. For example, \isa{disj_swap} can
be applied to assumptions as well as to goals; it replaces any
assumption of the form
$P\disj Q$ by one of the form $Q\disj P$.
Destruction rules are simpler in form than indirect rules such as \isa{disjE},
but they can be inconvenient. Each of the conjunction rules discards half
of the formula, when usually we want to take both parts of the conjunction as new
assumptions. The easiest way to do so is by using an
alternative conjunction elimination rule that resembles \isa{disjE}\@. It is
seldom, if ever, seen in logic books. In Isabelle syntax it looks like this:
\begin{isabelle}
\isasymlbrakk?P\ \isasymand\ ?Q;\ \isasymlbrakk?P;\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?R\isasymrbrakk\ \isasymLongrightarrow\ ?R\rulenamedx{conjE}
\end{isabelle}
\index{destruction rules|)}
\begin{exercise}
Use the rule \isa{conjE} to shorten the proof above.
\end{exercise}
\section{Implication}
\index{implication|(}%
At the start of this chapter, we saw the rule \emph{modus ponens}. It is, in fact,
a destruction rule. The matching introduction rule looks like this
in Isabelle:
\begin{isabelle}
(?P\ \isasymLongrightarrow\ ?Q)\ \isasymLongrightarrow\ ?P\
\isasymlongrightarrow\ ?Q\rulenamedx{impI}
\end{isabelle}
And this is \emph{modus ponens}\index{modus ponens@\emph{modus ponens}}:
\begin{isabelle}
\isasymlbrakk?P\ \isasymlongrightarrow\ ?Q;\ ?P\isasymrbrakk\
\isasymLongrightarrow\ ?Q
\rulenamedx{mp}
\end{isabelle}
Here is a proof using the implication rules. This
lemma performs a sort of uncurrying, replacing the two antecedents
of a nested implication by a conjunction. The proof illustrates
how assumptions work. At each proof step, the subgoals inherit the previous
assumptions, perhaps with additions or deletions. Rules such as
\isa{impI} and \isa{disjE} add assumptions, while applying \isa{erule} or
\isa{drule} deletes the matching assumption.
\begin{isabelle}
\isacommand{lemma}\ imp_uncurry:\
"P\ \isasymlongrightarrow\ (Q\
\isasymlongrightarrow\ R)\ \isasymLongrightarrow\ P\
\isasymand\ Q\ \isasymlongrightarrow\
R"\isanewline
\isacommand{apply}\ (rule\ impI)\isanewline
\isacommand{apply}\ (erule\ conjE)\isanewline
\isacommand{apply}\ (drule\ mp)\isanewline
\ \isacommand{apply}\ assumption\isanewline
\isacommand{apply}\ (drule\ mp)\isanewline
\ \ \isacommand{apply}\ assumption\isanewline
\ \isacommand{apply}\ assumption
\end{isabelle}
First, we state the lemma and apply implication introduction (\isa{rule impI}),
which moves the conjunction to the assumptions.
\begin{isabelle}
%P\ \isasymlongrightarrow\ Q\ \isasymlongrightarrow\ R\ \isasymLongrightarrow\ P\
%\isasymand\ Q\ \isasymlongrightarrow\ R\isanewline
\ 1.\ \isasymlbrakk P\ \isasymlongrightarrow\ Q\ \isasymlongrightarrow\ R;\ P\ \isasymand\ Q\isasymrbrakk\ \isasymLongrightarrow\ R
\end{isabelle}
Next, we apply conjunction elimination (\isa{erule conjE}), which splits this
conjunction into two parts.
\begin{isabelle}
\ 1.\ \isasymlbrakk P\ \isasymlongrightarrow\ Q\ \isasymlongrightarrow\ R;\ P;\
Q\isasymrbrakk\ \isasymLongrightarrow\ R
\end{isabelle}
Now, we work on the assumption \isa{P\ \isasymlongrightarrow\ (Q\
\isasymlongrightarrow\ R)}, where the parentheses have been inserted for
clarity. The nested implication requires two applications of
\textit{modus ponens}: \isa{drule mp}. The first use yields the
implication \isa{Q\
\isasymlongrightarrow\ R}, but first we must prove the extra subgoal
\isa{P}, which we do by assumption.
\begin{isabelle}
\ 1.\ \isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P\isanewline
\ 2.\ \isasymlbrakk P;\ Q;\ Q\ \isasymlongrightarrow\ R\isasymrbrakk\ \isasymLongrightarrow\ R
\end{isabelle}
Repeating these steps for \isa{Q\
\isasymlongrightarrow\ R} yields the conclusion we seek, namely~\isa{R}.
\begin{isabelle}
\ 1.\ \isasymlbrakk P;\ Q;\ Q\ \isasymlongrightarrow\ R\isasymrbrakk\
\isasymLongrightarrow\ R
\end{isabelle}
The symbols \isa{\isasymLongrightarrow} and \isa{\isasymlongrightarrow}
both stand for implication, but they differ in many respects. Isabelle
uses \isa{\isasymLongrightarrow} to express inference rules; the symbol is
built-in and Isabelle's inference mechanisms treat it specially. On the
other hand, \isa{\isasymlongrightarrow} is just one of the many connectives
available in higher-order logic. We reason about it using inference rules
such as \isa{impI} and \isa{mp}, just as we reason about the other
connectives. You will have to use \isa{\isasymlongrightarrow} in any
context that requires a formula of higher-order logic. Use
\isa{\isasymLongrightarrow} to separate a theorem's preconditions from its
conclusion.%
\index{implication|)}
\medskip
\index{by@\isacommand{by} (command)|(}%
The \isacommand{by} command is useful for proofs like these that use
\isa{assumption} heavily. It executes an
\isacommand{apply} command, then tries to prove all remaining subgoals using
\isa{assumption}. Since (if successful) it ends the proof, it also replaces the
\isacommand{done} symbol. For example, the proof above can be shortened:
\begin{isabelle}
\isacommand{lemma}\ imp_uncurry:\
"P\ \isasymlongrightarrow\ (Q\
\isasymlongrightarrow\ R)\ \isasymLongrightarrow\ P\
\isasymand\ Q\ \isasymlongrightarrow\
R"\isanewline
\isacommand{apply}\ (rule\ impI)\isanewline
\isacommand{apply}\ (erule\ conjE)\isanewline
\isacommand{apply}\ (drule\ mp)\isanewline
\ \isacommand{apply}\ assumption\isanewline
\isacommand{by}\ (drule\ mp)
\end{isabelle}
We could use \isacommand{by} to replace the final \isacommand{apply} and
\isacommand{done} in any proof, but typically we use it
to eliminate calls to \isa{assumption}. It is also a nice way of expressing a
one-line proof.%
\index{by@\isacommand{by} (command)|)}
\section{Negation}
\index{negation|(}%
Negation causes surprising complexity in proofs. Its natural
deduction rules are straightforward, but additional rules seem
necessary in order to handle negated assumptions gracefully. This section
also illustrates the \isa{intro} method: a convenient way of
applying introduction rules.
Negation introduction deduces $\lnot P$ if assuming $P$ leads to a
contradiction. Negation elimination deduces any formula in the
presence of $\lnot P$ together with~$P$:
\begin{isabelle}
(?P\ \isasymLongrightarrow\ False)\ \isasymLongrightarrow\ \isasymnot\ ?P%
\rulenamedx{notI}\isanewline
\isasymlbrakk{\isasymnot}\ ?P;\ ?P\isasymrbrakk\ \isasymLongrightarrow\ ?R%
\rulenamedx{notE}
\end{isabelle}
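As a brief sketch of these two rules working together, consider proving \isa{\isasymnot\ \isasymnot\ P} from \isa{P}: the rule \isa{notI} asks us to derive \isa{False} from the extra assumption \isa{\isasymnot\ P}, and \isa{notE} then exploits the contradiction between \isa{\isasymnot\ P} and~\isa{P}.
\begin{isabelle}
\isacommand{lemma}\ "P\ \isasymLongrightarrow\ \isasymnot\ \isasymnot\ P"\isanewline
\isacommand{apply}\ (rule\ notI)\isanewline
\isacommand{by}\ (erule\ notE)
\end{isabelle}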
%
Classical logic allows us to assume $\lnot P$
when attempting to prove~$P$:
\begin{isabelle}
(\isasymnot\ ?P\ \isasymLongrightarrow\ ?P)\ \isasymLongrightarrow\ ?P%
\rulenamedx{classical}
\end{isabelle}
\index{contrapositives|(}%
The implications $P\imp Q$ and $\lnot Q\imp\lnot P$ are logically
equivalent, and each is called the
\textbf{contrapositive} of the other. Four further rules support
reasoning about contrapositives. They differ in the placement of the
negation symbols:
\begin{isabelle}
\isasymlbrakk?Q;\ \isasymnot\ ?P\ \isasymLongrightarrow\ \isasymnot\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?P%
\rulename{contrapos_pp}\isanewline
\isasymlbrakk?Q;\ ?P\ \isasymLongrightarrow\ \isasymnot\ ?Q\isasymrbrakk\ \isasymLongrightarrow\
\isasymnot\ ?P%
\rulename{contrapos_pn}\isanewline
\isasymlbrakk{\isasymnot}\ ?Q;\ \isasymnot\ ?P\ \isasymLongrightarrow\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?P%
\rulename{contrapos_np}\isanewline
\isasymlbrakk{\isasymnot}\ ?Q;\ ?P\ \isasymLongrightarrow\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ \isasymnot\ ?P%
\rulename{contrapos_nn}
\end{isabelle}
%
These rules are typically applied using the \isa{erule} method, where
their effect is to form a contrapositive from an
assumption and the goal's conclusion.%
\index{contrapositives|)}
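For instance, here is a small sketch using \isa{contrapos_nn}. The \isa{erule} step exchanges the assumption \isa{\isasymnot\ Q} and the conclusion \isa{\isasymnot\ P} for the new conclusion \isa{Q} and the new assumption \isa{P}, after which \emph{modus ponens} finishes the proof.
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk{\isasymnot}\ Q;\ P\ \isasymlongrightarrow\ Q\isasymrbrakk\ \isasymLongrightarrow\ \isasymnot\ P"\isanewline
\isacommand{apply}\ (erule\ contrapos_nn)\isanewline
\isacommand{by}\ (erule\ mp)
\end{isabelle}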
The most important of these is \isa{contrapos_np}. It is useful
for applying introduction rules to negated assumptions. For instance,
the assumption $\lnot(P\imp Q)$ is equivalent to the conclusion $P\imp Q$ and we
might want to use conjunction introduction on it.
Before we can do so, we must move that assumption so that it
becomes the conclusion. The following proof demonstrates this
technique:
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk{\isasymnot}(P{\isasymlongrightarrow}Q);\
\isasymnot(R{\isasymlongrightarrow}Q)\isasymrbrakk\ \isasymLongrightarrow\
R"\isanewline
\isacommand{apply}\ (erule_tac\ Q = "R{\isasymlongrightarrow}Q"\ \isakeyword{in}\
contrapos_np)\isanewline
\isacommand{apply}\ (intro\ impI)\isanewline
\isacommand{by}\ (erule\ notE)
\end{isabelle}
%
There are two negated assumptions and we need to exchange the conclusion with the
second one. The method \isa{erule contrapos_np} would select the first assumption,
which we do not want. So we specify the desired assumption explicitly
using a new method, \isa{erule_tac}. This is the resulting subgoal:
\begin{isabelle}
\ 1.\ \isasymlbrakk{\isasymnot}\ (P\ \isasymlongrightarrow\ Q);\ \isasymnot\
R\isasymrbrakk\ \isasymLongrightarrow\ R\ \isasymlongrightarrow\ Q%
\end{isabelle}
The former conclusion, namely \isa{R}, now appears negated among the assumptions,
while the negated formula \isa{R\ \isasymlongrightarrow\ Q} becomes the new
conclusion.
We can now apply introduction rules. We use the \methdx{intro} method, which
repeatedly applies the given introduction rules. Here its effect is equivalent
to \isa{rule impI}.
\begin{isabelle}
\ 1.\ \isasymlbrakk{\isasymnot}\ (P\ \isasymlongrightarrow\ Q);\ \isasymnot\ R;\
R\isasymrbrakk\ \isasymLongrightarrow\ Q%
\end{isabelle}
We can see a contradiction in the form of assumptions \isa{\isasymnot\ R}
and~\isa{R}, which suggests using negation elimination. If applied on its own,
\isa{notE} will select the first negated assumption, which is useless.
Instead, we invoke the rule using the
\isa{by} command.
Now when Isabelle selects the first assumption, it tries to prove \isa{P\
\isasymlongrightarrow\ Q} and fails; it then backtracks, finds the
assumption \isa{\isasymnot~R} and finally proves \isa{R} by assumption. That
concludes the proof.
\medskip
The following example may be skipped on a first reading. It involves a
peculiar but important rule, a form of disjunction introduction:
\begin{isabelle}
(\isasymnot \ ?Q\ \isasymLongrightarrow \ ?P)\ \isasymLongrightarrow \ ?P\ \isasymor \ ?Q%
\rulenamedx{disjCI}
\end{isabelle}
This rule combines the effects of \isa{disjI1} and \isa{disjI2}. Its great
advantage is that we can remove the disjunction symbol without deciding
which disjunction to prove. This treatment of disjunction is standard in sequent
and tableau calculi.
\begin{isabelle}
\isacommand{lemma}\ "(P\ \isasymor\ Q)\ \isasymand\ R\
\isasymLongrightarrow\ P\ \isasymor\ (Q\ \isasymand\ R)"\isanewline
\isacommand{apply}\ (rule\ disjCI)\isanewline
\isacommand{apply}\ (elim\ conjE\ disjE)\isanewline
\ \isacommand{apply}\ assumption
\isanewline
\isacommand{by}\ (erule\ contrapos_np,\ rule\ conjI)
\end{isabelle}
%
The first proof step applies the introduction rule \isa{disjCI}.
The resulting subgoal has the negative assumption
\hbox{\isa{\isasymnot(Q\ \isasymand\ R)}}.
\begin{isabelle}
\ 1.\ \isasymlbrakk(P\ \isasymor\ Q)\ \isasymand\ R;\ \isasymnot\ (Q\ \isasymand\
R)\isasymrbrakk\ \isasymLongrightarrow\ P%
\end{isabelle}
Next we apply the \isa{elim} method, which repeatedly applies
elimination rules; here, the elimination rules given
in the command. One of the subgoals is trivial (\isacommand{apply}\ assumption),
leaving us with one other:
\begin{isabelle}
\ 1.\ \isasymlbrakk{\isasymnot}\ (Q\ \isasymand\ R);\ R;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P%
\end{isabelle}
%
Now we must move the formula \isa{Q\ \isasymand\ R} to be the conclusion. The
combination
\begin{isabelle}
\ \ \ \ \ (erule\ contrapos_np,\ rule\ conjI)
\end{isabelle}
is robust: the \isa{conjI} forces the \isa{erule} to select a
conjunction. The two subgoals are the ones we would expect from applying
conjunction introduction to
\isa{Q~\isasymand~R}:
\begin{isabelle}
\ 1.\ \isasymlbrakk R;\ Q;\ \isasymnot\ P\isasymrbrakk\ \isasymLongrightarrow\
Q\isanewline
\ 2.\ \isasymlbrakk R;\ Q;\ \isasymnot\ P\isasymrbrakk\ \isasymLongrightarrow\ R%
\end{isabelle}
They are proved by assumption, which is implicit in the \isacommand{by}
command.%
\index{negation|)}
\section{Interlude: the Basic Methods for Rules}
We have seen examples of many methods that operate on individual rules. It
may be helpful to review how they work given an arbitrary rule such as this:
\[ \infer{Q}{P@1 & \ldots & P@n} \]
Below, we refer to $P@1$ as the \bfindex{major premise}. This concept
applies only to elimination and destruction rules. These rules act upon an
instance of their major premise, typically to replace it by subformulas of itself.
Suppose that the rule above is called~\isa{R}\@. Here are the basic rule
methods, most of which we have already seen:
\begin{itemize}
\item
Method \isa{rule\ R} unifies~$Q$ with the current subgoal, replacing it
by $n$ new subgoals: instances of $P@1$, \ldots,~$P@n$.
This is backward reasoning and is appropriate for introduction rules.
\item
Method \isa{erule\ R} unifies~$Q$ with the current subgoal and
simultaneously unifies $P@1$ with some assumption. The subgoal is
replaced by the $n-1$ new subgoals of proving
instances of $P@2$,
\ldots,~$P@n$, with the matching assumption deleted. It is appropriate for
elimination rules. The method
\isa{(rule\ R,\ assumption)} is similar, but it does not delete an
assumption.
\item
Method \isa{drule\ R} unifies $P@1$ with some assumption, which it
then deletes. The subgoal is
replaced by the $n-1$ new subgoals of proving $P@2$, \ldots,~$P@n$; an
$n$th subgoal is like the original one but has an additional assumption: an
instance of~$Q$. It is appropriate for destruction rules.
\item
Method \isa{frule\ R} is like \isa{drule\ R} except that the matching
assumption is not deleted. (See {\S}\ref{sec:frule} below.)
\end{itemize}
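The difference between \isa{drule} and \isa{frule} can be seen in a small sketch. Here \isa{frule conjunct1} adds the assumption \isa{P} while keeping \isa{P\ \isasymand\ Q}, so both halves of the goal can then be proved by assumption (which is implicit in the \isacommand{by} command):
\begin{isabelle}
\isacommand{lemma}\ "P\ \isasymand\ Q\ \isasymLongrightarrow\ P\ \isasymand\ (P\ \isasymand\ Q)"\isanewline
\isacommand{apply}\ (frule\ conjunct1)\isanewline
\isacommand{by}\ (rule\ conjI)
\end{isabelle}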
Other methods apply a rule while constraining some of its
variables. The typical form is
\begin{isabelle}
\ \ \ \ \ \methdx{rule_tac}\ $v@1$ = $t@1$ \isakeyword{and} \ldots \isakeyword{and}
$v@k$ =
$t@k$ \isakeyword{in} R
\end{isabelle}
This method behaves like \isa{rule R}, while instantiating the variables
$v@1$, \ldots,
$v@k$ as specified. We similarly have \methdx{erule_tac}, \methdx{drule_tac} and
\methdx{frule_tac}. These methods also let us specify which subgoal to
operate on. By default it is the first subgoal, as with nearly all
methods, but we can specify that rule \isa{R} should be applied to subgoal
number~$i$:
\begin{isabelle}
\ \ \ \ \ rule_tac\ [$i$] R
\end{isabelle}
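As a minimal sketch of this syntax, we can direct \isa{disjI2} at the second of the two subgoals created by \isa{conjI}, leaving the remaining subgoals \isa{P} and \isa{Q} to be proved by assumption:
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk P;\ Q\isasymrbrakk\ \isasymLongrightarrow\ P\ \isasymand\ (R\ \isasymor\ Q)"\isanewline
\isacommand{apply}\ (rule\ conjI)\isanewline
\isacommand{by}\ (rule_tac\ [2]\ disjI2)
\end{isabelle}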
\section{Unification and Substitution}\label{sec:unification}
\index{unification|(}%
As we have seen, Isabelle rules involve schematic variables, which begin with
a question mark and act as
placeholders for terms. \textbf{Unification} --- well known to Prolog programmers --- is the act of
making two terms identical, possibly replacing their schematic variables by
terms. The simplest case is when the two terms are already the same. Next
simplest is \textbf{pattern-matching}, which replaces variables in only one of the
terms. The
\isa{rule} method typically matches the rule's conclusion
against the current subgoal. The
\isa{assumption} method matches the current subgoal's conclusion
against each of its assumptions. Unification can instantiate variables in both terms; the \isa{rule} method can do this if the goal
itself contains schematic variables. Other occurrences of the variables in
the rule or proof state are updated at the same time.
Schematic variables in goals represent unknown terms. Given a goal such
as $\exists x.\,P$, they let us proceed with a proof. They can be
filled in later, sometimes in stages and often automatically.
\begin{pgnote}
If unification fails when you think it should succeed, try setting the Proof General flag \pgmenu{Isabelle} $>$ \pgmenu{Settings} $>$
\pgmenu{Trace Unification},
which makes Isabelle show the cause of unification failures (in Proof
General's \pgmenu{Trace} buffer).
\end{pgnote}
\noindent
For example, suppose we are trying to prove this subgoal by assumption:
\begin{isabelle}
\ 1.\ P\ (a,\ f\ (b,\ g\ (e,\ a),\ b),\ a)\ \isasymLongrightarrow \ P\ (a,\ f\ (b,\ g\ (c,\ a),\ b),\ a)
\end{isabelle}
The \isa{assumption} method having failed, we try again with the flag set:
\begin{isabelle}
\isacommand{apply} assumption
\end{isabelle}
In this trivial case, the output clearly shows that \isa{e} clashes with \isa{c}:
\begin{isabelle}
Clash: e =/= c
\end{isabelle}
Isabelle uses
\textbf{higher-order} unification, which works in the
typed $\lambda$-calculus. The procedure requires search and is potentially
undecidable. For our purposes, however, the differences from ordinary
unification are straightforward. It handles bound variables
correctly, avoiding capture. The two terms
\isa{{\isasymlambda}x.\ f(x,z)} and \isa{{\isasymlambda}y.\ f(y,z)} are
trivially unifiable because they differ only by a bound variable renaming. The two terms \isa{{\isasymlambda}x.\ ?P} and
\isa{{\isasymlambda}x.\ t x} are not unifiable; replacing \isa{?P} by
\isa{t x} is forbidden because the free occurrence of~\isa{x} would become
bound. Unfortunately, even if \isa{trace_unify_fail} is set, Isabelle displays no information about this type of failure.
\begin{warn}
Higher-order unification sometimes must invent
$\lambda$-terms to replace function variables,
which can lead to a combinatorial explosion. However, Isabelle proofs tend
to involve easy cases where there are few possibilities for the
$\lambda$-term being constructed. In the easiest case, the
function variable is applied only to bound variables,
as when we try to unify \isa{{\isasymlambda}x\ y.\ f(?h x y)} and
\isa{{\isasymlambda}x\ y.\ f(x+y+a)}. The only solution is to replace
\isa{?h} by \isa{{\isasymlambda}x\ y.\ x+y+a}. Such cases admit at most
one unifier, like ordinary unification. A harder case is
unifying \isa{?h a} with~\isa{a+b}; it admits two solutions for \isa{?h},
namely \isa{{\isasymlambda}x.~a+b} and \isa{{\isasymlambda}x.~x+b}.
Unifying \isa{?h a} with~\isa{a+a+b} admits four solutions; their number is
exponential in the number of occurrences of~\isa{a} in the second term.
\end{warn}
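The counting argument in the warning can be checked concretely. The sketch below (plain Python with a toy flat-term representation, purely for illustration) enumerates the bodies for \isa{?h}: each occurrence of \isa{a} in the target independently either stays or becomes the bound variable, giving $2^k$ unifiers for $k$ occurrences.

```python
from itertools import product

def unifiers_for_h(target, const="a"):
    """Enumerate bodies for ?h such that (?h a) unifies with `target`:
    each occurrence of `const` in the flat term may independently stay
    or become the bound variable x.  A toy model of the text's example."""
    positions = [i for i, s in enumerate(target) if s == const]
    bodies = set()
    for choice in product([False, True], repeat=len(positions)):
        body = list(target)
        for pos, replace in zip(positions, choice):
            if replace:
                body[pos] = "x"
        bodies.add(tuple(body))
    return sorted(bodies)

# ?h a  vs  a+b : two unifiers, standing for λx. a+b and λx. x+b
print(unifiers_for_h(["a", "+", "b"]))
# ?h a  vs  a+a+b : four unifiers (2 squared)
print(len(unifiers_for_h(["a", "+", "a", "+", "b"])))
```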
\subsection{Substitution and the {\tt\slshape subst} Method}
\label{sec:subst}
\index{substitution|(}%
Isabelle also uses function variables to express \textbf{substitution}.
A typical substitution rule allows us to replace one term by
another if we know that two terms are equal.
\[ \infer{P[t/x]}{s=t & P[s/x]} \]
The rule uses a notation for substitution: $P[t/x]$ is the result of
replacing $x$ by~$t$ in~$P$. The rule only substitutes in the positions
designated by~$x$. For example, it can
derive symmetry of equality from reflexivity. Using $x=s$ for~$P$
replaces just the first $s$ in $s=s$ by~$t$:
\[ \infer{t=s}{s=t & \infer{s=s}{}} \]
The Isabelle version of the substitution rule looks like this:
\begin{isabelle}
\isasymlbrakk?t\ =\ ?s;\ ?P\ ?s\isasymrbrakk\ \isasymLongrightarrow\ ?P\
?t
\rulenamedx{ssubst}
\end{isabelle}
Crucially, \isa{?P} is a function
variable. It can be replaced by a $\lambda$-term
with one bound variable, whose occurrences identify the places
in which $s$ will be replaced by~$t$. The proof above requires \isa{?P}
to be replaced by \isa{{\isasymlambda}x.~x=s}; the second premise will then
be \isa{s=s} and the conclusion will be \isa{t=s}.
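The way a function variable designates occurrences can be mimicked outside Isabelle. In the sketch below (plain Python, illustrative only), the predicate \isa{?P} becomes an ordinary one-argument function; applying it to $s$ and to $t$ yields the rule's premise and conclusion.

```python
# How a function variable selects occurrences for substitution:
# instantiating ?P with a one-argument function marks exactly the
# positions to be replaced.  Terms are plain strings; the setup is
# illustrative, not Isabelle's machinery.

def ssubst(eq, P):
    """Given an equation (s, t) and a predicate-as-function P,
    return the premise P(s) and the conclusion P(t)."""
    s, t = eq
    return P(s), P(t)

# Deriving symmetry from reflexivity: take P to be  λx. x = s.
# The premise becomes  s = s  (reflexivity), the conclusion  t = s.
premise, conclusion = ssubst(("s", "t"), lambda x: f"{x} = s")
print(premise)      # s = s
print(conclusion)   # t = s
```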
The \isa{simp} method also replaces equals by equals, but the substitution
rule gives us more control. Consider this proof:
\begin{isabelle}
\isacommand{lemma}\
"\isasymlbrakk x\ =\ f\ x;\ odd(f\ x)\isasymrbrakk\ \isasymLongrightarrow\
odd\ x"\isanewline
\isacommand{by}\ (erule\ ssubst)
\end{isabelle}
%
The assumption \isa{x\ =\ f\ x}, if used for rewriting, would loop,
replacing \isa{x} by \isa{f x} and then by
\isa{f(f x)} and so forth. (Here \isa{simp}
would see the danger and would re-orient the equality, but in more complicated
cases it can be fooled.) When we apply the substitution rule,
Isabelle replaces every
\isa{x} in the subgoal by \isa{f x} just once. It cannot loop. The
resulting subgoal is trivial by assumption, so the \isacommand{by} command
proves it implicitly.
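The danger of rewriting with \isa{x\ =\ f\ x} can be seen in a toy string rewriter (a hypothetical stand-in, not how Isabelle's simplifier works): exhaustive left-to-right rewriting never reaches a fixed point, while a single pass, as with \isa{ssubst}, always terminates.

```python
# Rewriting with  x = f x  as a left-to-right rule loops:
# x  ->  f x  ->  f (f x)  ->  ...
# A single substitution, as with ssubst, stops after one pass.
# Strings stand in for terms; purely illustrative.

def rewrite_once(term, lhs, rhs):
    """Replace every occurrence of lhs by rhs, one pass only."""
    return term.replace(lhs, rhs)

def rewrite_until_fixpoint(term, lhs, rhs, max_steps=5):
    """Naive exhaustive rewriting; returns the term and steps taken."""
    for step in range(max_steps):
        new = rewrite_once(term, lhs, rhs)
        if new == term:
            return term, step
        term = new
    return term, max_steps           # gave up: the rule loops

# One application: x becomes f(x), then we stop.
print(rewrite_once("odd(x)", "x", "f(x)"))          # odd(f(x))
# Exhaustive rewriting keeps growing the term and hits the step limit.
print(rewrite_until_fixpoint("odd(x)", "x", "f(x)"))
```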
We are using the \isa{erule} method in a novel way. Hitherto,
the conclusion of the rule was just a variable such as~\isa{?R}, but it may
be any term. The conclusion is unified with the subgoal just as
it would be with the \isa{rule} method. At the same time \isa{erule} looks
for an assumption that matches the rule's first premise, as usual. With
\isa{ssubst} the effect is to find, use and delete an equality
assumption.
The \methdx{subst} method performs individual substitutions. In simple cases,
it closely resembles a use of the substitution rule. Suppose a
proof has reached this point:
\begin{isabelle}
\ 1.\ \isasymlbrakk P\ x\ y\ z;\ Suc\ x\ <\ y\isasymrbrakk \ \isasymLongrightarrow \ f\ z\ =\ x\ *\ y%
\end{isabelle}
Now we wish to apply a commutative law:
\begin{isabelle}
?m\ *\ ?n\ =\ ?n\ *\ ?m%
\rulename{mult.commute}
\end{isabelle}
Isabelle rejects our first attempt:
\begin{isabelle}
apply (simp add: mult.commute)
\end{isabelle}
The simplifier notices the danger of looping and refuses to apply the
rule.%
\footnote{More precisely, it only applies such a rule if the new term is
smaller under a specified ordering; here, \isa{x\ *\ y}
is already smaller than
\isa{y\ *\ x}.}
%
The \isa{subst} method applies \isa{mult.commute} exactly once.
\begin{isabelle}
\isacommand{apply}\ (subst\ mult.commute)\isanewline
\ 1.\ \isasymlbrakk P\ x\ y\ z;\ Suc\ x\ <\ y\isasymrbrakk \
\isasymLongrightarrow \ f\ z\ =\ y\ *\ x%
\end{isabelle}
As we wanted, \isa{x\ *\ y} has become \isa{y\ *\ x}.
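The simplifier's treatment of permutative rules, mentioned in the footnote above, can be sketched as follows. A plain lexicographic string comparison here stands in for the simplifier's actual term ordering; the function name is hypothetical.

```python
# A permutative rule such as  m*n = n*m  would loop under unconditional
# rewriting.  The simplifier's remedy, sketched with a lexicographic
# string order standing in for its real term ordering, is to apply the
# rule only when the result is strictly smaller.

def commute_if_smaller(a, b):
    """Rewrite a*b to b*a only if that decreases the ordering."""
    original = f"{a}*{b}"
    swapped = f"{b}*{a}"
    return swapped if swapped < original else original

print(commute_if_smaller("y", "x"))   # y*x -> x*y : smaller, applied
print(commute_if_smaller("x", "y"))   # x*y stays  : already smaller
```

In contrast, the \isa{subst} method applies the equation exactly once, regardless of any ordering.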
\medskip
This use of the \methdx{subst} method has the same effect as the command
\begin{isabelle}
\isacommand{apply}\ (rule\ mult.commute [THEN ssubst])
\end{isabelle}
The attribute \isa{THEN}, which combines two rules, is described in
{\S}\ref{sec:THEN} below. The \methdx{subst} method is more powerful than
applying the substitution rule. It can perform substitutions in a subgoal's
assumptions. Moreover, if the subgoal contains more than one occurrence of
the left-hand side of the equality, the \methdx{subst} method lets us specify which occurrence should be replaced.
\subsection{Unification and Its Pitfalls}
Higher-order unification can be tricky. Here is an example, which you may
want to skip on your first reading:
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk x\ =\
f\ x;\ triple\ (f\ x)\ (f\ x)\ x\isasymrbrakk\
\isasymLongrightarrow\ triple\ x\ x\ x"\isanewline
\isacommand{apply}\ (erule\ ssubst)\isanewline
\isacommand{back}\isanewline
\isacommand{back}\isanewline
\isacommand{back}\isanewline
\isacommand{back}\isanewline
\isacommand{apply}\ assumption\isanewline
\isacommand{done}
\end{isabelle}
%
By default, Isabelle tries to substitute for all the
occurrences. Applying \isa{erule\ ssubst} yields this subgoal:
\begin{isabelle}
\ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ (f\ x)\ (f\ x)\ (f\ x)
\end{isabelle}
The substitution should have been done in the first two occurrences
of~\isa{x} only. Isabelle has gone too far. The \commdx{back}
command allows us to reject this possibility and demand a new one:
\begin{isabelle}
\ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ x\ (f\ x)\ (f\ x)
\end{isabelle}
%
Now Isabelle has left the first occurrence of~\isa{x} alone. That is
promising but it is not the desired combination. So we use \isacommand{back}
again:
\begin{isabelle}
\ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ (f\ x)\ x\ (f\ x)
\end{isabelle}
%
This also is wrong, so we use \isacommand{back} again:
\begin{isabelle}
\ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ x\ x\ (f\ x)
\end{isabelle}
%
And this one is wrong too. Looking carefully at the series
of alternatives, we see a binary countdown with reversed bits: 111,
011, 101, 001. Invoke \isacommand{back} again:
\begin{isabelle}
\ 1.\ triple\ (f\ x)\ (f\ x)\ x\ \isasymLongrightarrow\ triple\ (f\ x)\ (f\ x)\ x%
\end{isabelle}
At last, we have the right combination! This goal follows by assumption.%
\index{unification|)}
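The "binary countdown with reversed bits" can be reproduced concretely. In the sketch below (plain Python, illustrative only), each bit records whether the corresponding occurrence of \isa{x} is replaced by \isa{f\ x} (1) or left alone (0); the fifth alternative, 110, is the desired one.

```python
# The alternatives produced by `back` follow a binary countdown with
# reversed bits: each bit says whether that occurrence of x is
# replaced by f x (1) or left alone (0).  A toy reproduction.

def alternatives(n_occurrences=3):
    """Yield bit-strings in the order the text observes: count down
    from all-ones, reading each counter value's bits reversed."""
    width = n_occurrences
    for counter in range(2**width - 1, -1, -1):
        yield format(counter, f"0{width}b")[::-1]   # reversed bits

seq = list(alternatives())
print(seq[:5])   # ['111', '011', '101', '001', '110']
```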
\medskip
This example shows that unification can do strange things with
function variables. We were forced to select the right unifier using the
\isacommand{back} command. That is all right during exploration, but \isacommand{back}
should never appear in the final version of a proof. You can eliminate the
need for \isacommand{back} by giving Isabelle less freedom when you apply a rule.
One way to constrain the inference is by joining two methods in an
\isacommand{apply} command. Isabelle applies the first method and then the
second. If the second method fails then Isabelle automatically backtracks.
This process continues until the first method produces an output that the
second method can use. We get a one-line proof of our example:
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk x\ =\ f\ x;\ triple\ (f\ x)\ (f\ x)\ x\isasymrbrakk\
\isasymLongrightarrow\ triple\ x\ x\ x"\isanewline
\isacommand{apply}\ (erule\ ssubst,\ assumption)\isanewline
\isacommand{done}
\end{isabelle}
\noindent
The \isacommand{by} command works too, since it backtracks when
proving subgoals by assumption:
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk x\ =\ f\ x;\ triple\ (f\ x)\ (f\ x)\ x\isasymrbrakk\
\isasymLongrightarrow\ triple\ x\ x\ x"\isanewline
\isacommand{by}\ (erule\ ssubst)
\end{isabelle}
The most general way to constrain unification is
by instantiating variables in the rule. The method \isa{rule_tac} is
similar to \isa{rule}, but it
makes some of the rule's variables denote specified terms.
Also available are {\isa{drule_tac}} and \isa{erule_tac}. Here we need
\isa{erule_tac} since above we used \isa{erule}.
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk x\ =\ f\ x;\ triple\ (f\ x)\ (f\ x)\ x\isasymrbrakk\ \isasymLongrightarrow\ triple\ x\ x\ x"\isanewline
\isacommand{by}\ (erule_tac\ P = "\isasymlambda u.\ triple\ u\ u\ x"\
\isakeyword{in}\ ssubst)
\end{isabelle}
%
To specify a desired substitution
requires instantiating the variable \isa{?P} with a $\lambda$-expression.
The bound variable occurrences in \isa{{\isasymlambda}u.\ triple\ u\
u\ x} indicate that the first two arguments have to be substituted, leaving
the third unchanged. With this instantiation, backtracking is neither necessary
nor possible.
An alternative to \isa{rule_tac} is to use \isa{rule} with a theorem
modified using~\isa{of}, described in
{\S}\ref{sec:forward} below. But \isa{rule_tac}, unlike \isa{of}, can
express instantiations that refer to
\isasymAnd-bound variables in the current subgoal.%
\index{substitution|)}
\section{Quantifiers}
\index{quantifiers!universal|(}%
Quantifiers require formalizing syntactic substitution and the notion of
arbitrary value. Consider the universal quantifier. In a logic
book, its introduction rule looks like this:
\[ \infer{\forall x.\,P}{P} \]
Typically, a proviso written in English says that $x$ must not
occur in the assumptions. This proviso guarantees that $x$ can be regarded as
arbitrary, since it has not been assumed to satisfy any special conditions.
Isabelle's underlying formalism, called the
\bfindex{meta-logic}, eliminates the need for English. It provides its own
universal quantifier (\isasymAnd) to express the notion of an arbitrary value.
We have already seen another operator of the meta-logic, namely
\isa\isasymLongrightarrow, which expresses inference rules and the treatment
of assumptions. The only other operator in the meta-logic is \isa\isasymequiv,
which can be used to define constants.
\subsection{The Universal Introduction Rule}
Returning to the universal quantifier, we find that having a similar quantifier
as part of the meta-logic makes the introduction rule trivial to express:
\begin{isabelle}
(\isasymAnd x.\ ?P\ x)\ \isasymLongrightarrow\ {\isasymforall}x.\ ?P\ x\rulenamedx{allI}
\end{isabelle}
The following trivial proof demonstrates how the universal introduction
rule works.
\begin{isabelle}
\isacommand{lemma}\ "{\isasymforall}x.\ P\ x\ \isasymlongrightarrow\ P\ x"\isanewline
\isacommand{apply}\ (rule\ allI)\isanewline
\isacommand{by}\ (rule\ impI)
\end{isabelle}
The first step invokes the rule by applying the method \isa{rule allI}.
\begin{isabelle}
\ 1.\ \isasymAnd x.\ P\ x\ \isasymlongrightarrow\ P\ x
\end{isabelle}
Note that the resulting proof state has a bound variable,
namely~\isa{x}. The rule has replaced the universal quantifier of
higher-order logic by Isabelle's meta-level quantifier. Our goal is to
prove
\isa{P\ x\ \isasymlongrightarrow\ P\ x} for arbitrary~\isa{x}; it is
an implication, so we apply the corresponding introduction rule (\isa{impI}).
\begin{isabelle}
\ 1.\ \isasymAnd x.\ P\ x\ \isasymLongrightarrow\ P\ x
\end{isabelle}
This last subgoal is implicitly proved by assumption.
\subsection{The Universal Elimination Rule}
Now consider universal elimination. In a logic text,
the rule looks like this:
\[ \infer{P[t/x]}{\forall x.\,P} \]
The conclusion is $P$ with $t$ substituted for the variable~$x$.
Isabelle expresses substitution using a function variable:
\begin{isabelle}
{\isasymforall}x.\ ?P\ x\ \isasymLongrightarrow\ ?P\ ?x\rulenamedx{spec}
\end{isabelle}
This destruction rule takes a
universally quantified formula and removes the quantifier, replacing
the bound variable \isa{x} by the schematic variable \isa{?x}. Recall that a
schematic variable starts with a question mark and acts as a
placeholder: it can be replaced by any term.
The universal elimination rule is also
available in the standard elimination format. Like \isa{conjE}, it never
appears in logic books:
\begin{isabelle}
\isasymlbrakk \isasymforall x.\ ?P\ x;\ ?P\ ?x\ \isasymLongrightarrow \ ?R\isasymrbrakk \ \isasymLongrightarrow \ ?R%
\rulenamedx{allE}
\end{isabelle}
The methods \isa{drule~spec} and \isa{erule~allE} do precisely the
same inference.
To see how $\forall$-elimination works, let us derive a rule about reducing
the scope of a universal quantifier. In mathematical notation we write
\[ \infer{P\imp\forall x.\,Q}{\forall x.\,P\imp Q} \]
with the proviso ``$x$ not free in~$P$.'' Isabelle's treatment of
substitution makes the proviso unnecessary. The conclusion is expressed as
\isa{P\
\isasymlongrightarrow\ ({\isasymforall}x.\ Q\ x)}. No substitution for the
variable \isa{P} can introduce a dependence upon~\isa{x}: that would be a
bound variable capture. Let us walk through the proof.
\begin{isabelle}
\isacommand{lemma}\ "(\isasymforall x.\ P\ \isasymlongrightarrow \ Q\ x)\
\isasymLongrightarrow \ P\ \isasymlongrightarrow \ (\isasymforall x.\ Q\
x)"
\end{isabelle}
First we apply implies introduction (\isa{impI}),
which moves the \isa{P} from the conclusion to the assumptions. Then
we apply universal introduction (\isa{allI}).
\begin{isabelle}
\isacommand{apply}\ (rule\ impI,\ rule\ allI)\isanewline
\ 1.\ \isasymAnd x.\ \isasymlbrakk{\isasymforall}x.\ P\ \isasymlongrightarrow\ Q\
x;\ P\isasymrbrakk\ \isasymLongrightarrow\ Q\ x
\end{isabelle}
As before, it replaces the HOL
quantifier by a meta-level quantifier, producing a subgoal that
binds the variable~\isa{x}. The leading bound variables
(here \isa{x}) and the assumptions (here \isa{{\isasymforall}x.\ P\
\isasymlongrightarrow\ Q\ x} and \isa{P}) form the \textbf{context} for the
conclusion, here \isa{Q\ x}. Subgoals inherit the context,
although assumptions can be added or deleted (as we saw
earlier), while rules such as \isa{allI} add bound variables.
Now, to reason from the universally quantified
assumption, we apply the elimination rule using the \isa{drule}
method. This rule is called \isa{spec} because it specializes a universal formula
to a particular term.
\begin{isabelle}
\isacommand{apply}\ (drule\ spec)\isanewline
\ 1.\ \isasymAnd x.\ \isasymlbrakk P;\ P\ \isasymlongrightarrow\ Q\ (?x2\
x)\isasymrbrakk\ \isasymLongrightarrow\ Q\ x
\end{isabelle}
Observe how the context has changed. The quantified formula is gone,
replaced by a new assumption derived from its body. We have
removed the quantifier and replaced the bound variable
by the curious term
\isa{?x2~x}. This term is a placeholder: it may become any term that can be
built from~\isa{x}. (Formally, \isa{?x2} is an unknown of function type, applied
to the argument~\isa{x}.) This new assumption is an implication, so we can use
\emph{modus ponens} on it, which concludes the proof.
\begin{isabelle}
\isacommand{by}\ (drule\ mp)
\end{isabelle}
Let us take a closer look at this last step. \emph{Modus ponens} yields
two subgoals: one where we prove the antecedent (in this case \isa{P}) and
one where we may assume the consequent. Both of these subgoals are proved
by the
\isa{assumption} method, which is implicit in the
\isacommand{by} command. Replacing the \isacommand{by} command by
\isa{\isacommand{apply} (drule\ mp, assumption)} would have left one last
subgoal:
\begin{isabelle}
\ 1.\ \isasymAnd x.\ \isasymlbrakk P;\ Q\ (?x2\ x)\isasymrbrakk\
\isasymLongrightarrow\ Q\ x
\end{isabelle}
The consequent is \isa{Q} applied to that placeholder. It may be replaced by any
term built from~\isa{x}, and here
it should simply be~\isa{x}. The assumption need not
be identical to the conclusion, provided the two formulas are unifiable.%
\index{quantifiers!universal|)}
\subsection{The Existential Quantifier}
\index{quantifiers!existential|(}%
The concepts just presented also apply
to the existential quantifier, whose introduction rule looks like this in
Isabelle:
\begin{isabelle}
?P\ ?x\ \isasymLongrightarrow\ {\isasymexists}x.\ ?P\ x\rulenamedx{exI}
\end{isabelle}
If we can exhibit some $x$ such that $P(x)$ is true, then $\exists x.
P(x)$ is also true. It is a dual of the universal elimination rule, and
logic texts present it using the same notation for substitution.
The existential
elimination rule looks like this
in a logic text:
\[ \infer{Q}{\exists x.\,P & \infer*{Q}{[P]}} \]
%
It looks like this in Isabelle:
\begin{isabelle}
\isasymlbrakk{\isasymexists}x.\ ?P\ x;\ \isasymAnd x.\ ?P\ x\ \isasymLongrightarrow\ ?Q\isasymrbrakk\ \isasymLongrightarrow\ ?Q\rulenamedx{exE}
\end{isabelle}
%
Given an existentially quantified theorem and some
formula $Q$ to prove, it creates a new assumption by removing the quantifier. As with
the universal introduction rule, the textbook version imposes a proviso on the
quantified variable, which Isabelle expresses using its meta-logic. It is
enough to have a universal quantifier in the meta-logic; we do not need an existential
quantifier to be built in as well.
\begin{exercise}
Prove the lemma
\[ \exists x.\, P\conj Q(x)\Imp P\conj(\exists x.\, Q(x)). \]
\emph{Hint}: the proof is similar
to the one just above for the universal quantifier.
\end{exercise}
\index{quantifiers!existential|)}
\subsection{Renaming a Bound Variable: {\tt\slshape rename_tac}}
\index{assumptions!renaming|(}\index{*rename_tac (method)|(}%
When you apply a rule such as \isa{allI}, the quantified variable
becomes a new bound variable of the new subgoal. Isabelle tries to avoid
changing its name, but sometimes it has to choose a new name in order to
avoid a clash. The result may not be ideal:
\begin{isabelle}
\isacommand{lemma}\ "x\ <\ y\ \isasymLongrightarrow \ \isasymforall x\ y.\ P\ x\
(f\ y)"\isanewline
\isacommand{apply}\ (intro allI)\isanewline
\ 1.\ \isasymAnd xa\ ya.\ x\ <\ y\ \isasymLongrightarrow \ P\ xa\ (f\ ya)
\end{isabelle}
%
The names \isa{x} and \isa{y} were already in use, so the new bound variables are
called \isa{xa} and~\isa{ya}. You can rename them by invoking \isa{rename_tac}:
\begin{isabelle}
\isacommand{apply}\ (rename_tac\ v\ w)\isanewline
\ 1.\ \isasymAnd v\ w.\ x\ <\ y\ \isasymLongrightarrow \ P\ v\ (f\ w)
\end{isabelle}
Recall that \isa{rule_tac}\index{*rule_tac (method)!and renaming}
instantiates a
theorem with specified terms. These terms may involve the goal's bound
variables, but beware of referring to variables
like~\isa{xa}. A future change to your theories could change the set of names
produced at top level, so that \isa{xa} changes to~\isa{xb} or reverts to~\isa{x}.
It is safer to rename automatically-generated variables before mentioning them.
If the subgoal has more bound variables than there are names given to
\isa{rename_tac}, the rightmost ones are renamed.%
\index{assumptions!renaming|)}\index{*rename_tac (method)|)}
\subsection{Reusing an Assumption: {\tt\slshape frule}}
\label{sec:frule}
\index{assumptions!reusing|(}\index{*frule (method)|(}%
Note that \isa{drule spec} removes the universal quantifier and --- as
usual with elimination rules --- discards the original formula. Sometimes, a
universal formula has to be kept so that it can be used again. Then we use a new
method: \isa{frule}. It acts like \isa{drule} but copies rather than replaces
the selected assumption. The \isa{f} is for \emph{forward}.
In this example, going from \isa{P\ a} to \isa{P(h(h~a))}
requires two uses of the quantified assumption, one for each~\isa{h}
in~\isa{h(h~a)}.
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk{\isasymforall}x.\ P\ x\ \isasymlongrightarrow\ P\ (h\ x);
\ P\ a\isasymrbrakk\ \isasymLongrightarrow\ P(h\ (h\ a))"
\end{isabelle}
%
Examine the subgoal left by \isa{frule}:
\begin{isabelle}
\isacommand{apply}\ (frule\ spec)\isanewline
\ 1.\ \isasymlbrakk{\isasymforall}x.\ P\ x\ \isasymlongrightarrow\ P\ (h\ x);\ P\ a;\ P\ ?x\ \isasymlongrightarrow\ P\ (h\ ?x)\isasymrbrakk\ \isasymLongrightarrow\ P\ (h\ (h\ a))
\end{isabelle}
It is what \isa{drule} would have left except that the quantified
assumption is still present. Next we apply \isa{mp} to the
implication and the assumption~\isa{P\ a}:
\begin{isabelle}
\isacommand{apply}\ (drule\ mp,\ assumption)\isanewline
\ 1.\ \isasymlbrakk{\isasymforall}x.\ P\ x\ \isasymlongrightarrow\ P\ (h\ x);\ P\ a;\ P\ (h\ a)\isasymrbrakk\ \isasymLongrightarrow\ P\ (h\ (h\ a))
\end{isabelle}
%
We have created the assumption \isa{P(h\ a)}, which is progress. To
continue the proof, we apply \isa{spec} again. We shall not need it
again, so we can use
\isa{drule}.
\begin{isabelle}
\isacommand{apply}\ (drule\ spec)\isanewline
\ 1.\ \isasymlbrakk P\ a;\ P\ (h\ a);\ P\ ?x2\
\isasymlongrightarrow \ P\ (h\ ?x2)\isasymrbrakk \ \isasymLongrightarrow \
P\ (h\ (h\ a))
\end{isabelle}
%
The new assumption bridges the gap between \isa{P(h\ a)} and \isa{P(h(h\ a))}.
\begin{isabelle}
\isacommand{by}\ (drule\ mp)
\end{isabelle}
\medskip
\emph{A final remark}. Replacing this \isacommand{by} command with
\begin{isabelle}
\isacommand{apply}\ (drule\ mp,\ assumption)
\end{isabelle}
would not work: it would add a second copy of \isa{P(h~a)} instead
of the desired assumption, \isa{P(h(h~a))}. The \isacommand{by}
command forces Isabelle to backtrack until it finds the correct one.
Alternatively, we could have used the \isacommand{apply} command and bundled the
\isa{drule mp} with \emph{two} calls of \isa{assumption}. Or, of course,
we could have given the entire proof to \isa{auto}.%
\index{assumptions!reusing|)}\index{*frule (method)|)}
\subsection{Instantiating a Quantifier Explicitly}
\index{quantifiers!instantiating}
We can prove a theorem of the form $\exists x.\,P\, x$ by exhibiting a
suitable term~$t$ such that $P\,t$ is true. Dually, we can use an
assumption of the form $\forall x.\,P\, x$ to generate a new assumption $P\,t$ for
a suitable term~$t$. In many cases,
Isabelle makes the correct choice automatically, constructing the term by
unification. In other cases, the required term is not obvious and we must
specify it ourselves. Suitable methods are \isa{rule_tac}, \isa{drule_tac}
and \isa{erule_tac}.
We have seen (just above, {\S}\ref{sec:frule}) a proof of this lemma:
\begin{isabelle}
\isacommand{lemma}\ "\isasymlbrakk \isasymforall x.\ P\ x\
\isasymlongrightarrow \ P\ (h\ x);\ P\ a\isasymrbrakk \
\isasymLongrightarrow \ P(h\ (h\ a))"
\end{isabelle}
We had reached this subgoal:
\begin{isabelle}
\ 1.\ \isasymlbrakk{\isasymforall}x.\ P\ x\ \isasymlongrightarrow\ P\ (h\
x);\ P\ a;\ P\ (h\ a)\isasymrbrakk\ \isasymLongrightarrow\ P\ (h\ (h\ a))
\end{isabelle}
%
The proof requires instantiating the quantified assumption with the
term~\isa{h~a}.
\begin{isabelle}
\isacommand{apply}\ (drule_tac\ x\ =\ "h\ a"\ \isakeyword{in}\
spec)\isanewline
\ 1.\ \isasymlbrakk P\ a;\ P\ (h\ a);\ P\ (h\ a)\ \isasymlongrightarrow \
P\ (h\ (h\ a))\isasymrbrakk \ \isasymLongrightarrow \ P\ (h\ (h\ a))
\end{isabelle}
We have forced the desired instantiation.
\medskip
Existential formulas can be instantiated too. The next example uses the
\textbf{divides} relation\index{divides relation}
of number theory:
\begin{isabelle}
?m\ dvd\ ?n\ \isasymequiv\ {\isasymexists}k.\ ?n\ =\ ?m\ *\ k
\rulename{dvd_def}
\end{isabelle}
Let us prove that multiplication of natural numbers is monotone with
respect to the divides relation:
\begin{isabelle}
\isacommand{lemma}\ mult_dvd_mono:\ "{\isasymlbrakk}i\ dvd\ m;\ j\ dvd\
n\isasymrbrakk\ \isasymLongrightarrow\ i*j\ dvd\ (m*n\ ::\ nat)"\isanewline
\isacommand{apply}\ (simp\ add:\ dvd_def)
\end{isabelle}
%
Unfolding the definition of divides has left this subgoal:
\begin{isabelle}
\ 1.\ \isasymlbrakk \isasymexists k.\ m\ =\ i\ *\ k;\ \isasymexists k.\ n\
=\ j\ *\ k\isasymrbrakk \ \isasymLongrightarrow \ \isasymexists k.\ m\ *\
n\ =\ i\ *\ j\ *\ k
\end{isabelle}
%
Next, we eliminate the two existential quantifiers in the assumptions:
\begin{isabelle}
\isacommand{apply}\ (erule\ exE)\isanewline
\ 1.\ \isasymAnd k.\ \isasymlbrakk \isasymexists k.\ n\ =\ j\ *\ k;\ m\ =\
i\ *\ k\isasymrbrakk \ \isasymLongrightarrow \ \isasymexists k.\ m\ *\ n\
=\ i\ *\ j\ *\ k%
\isanewline
\isacommand{apply}\ (erule\ exE)
\isanewline
\ 1.\ \isasymAnd k\ ka.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\
ka\isasymrbrakk \ \isasymLongrightarrow \ \isasymexists k.\ m\ *\ n\ =\ i\
*\ j\ *\ k
\end{isabelle}
%
The term needed to instantiate the remaining quantifier is~\isa{k*ka}. But
\isa{ka} is an automatically-generated name. As noted above, references to
such variable names make a proof less resilient to future changes. So,
first we rename the most recent variable to~\isa{l}:
\begin{isabelle}
\isacommand{apply}\ (rename_tac\ l)\isanewline
\ 1.\ \isasymAnd k\ l.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\ l\isasymrbrakk \
\isasymLongrightarrow \ \isasymexists k.\ m\ *\ n\ =\ i\ *\ j\ *\ k%
\end{isabelle}
We instantiate the quantifier with~\isa{k*l}:
\begin{isabelle}
\isacommand{apply}\ (rule_tac\ x="k*l"\ \isakeyword{in}\ exI)\ \isanewline
\ 1.\ \isasymAnd k\ ka.\ \isasymlbrakk m\ =\ i\ *\ k;\ n\ =\ j\ *\
ka\isasymrbrakk \ \isasymLongrightarrow \ m\ *\ n\ =\ i\
*\ j\ *\ (k\ *\ ka)
\end{isabelle}
%
The rest is automatic, by arithmetic.
\begin{isabelle}
\isacommand{apply}\ simp\isanewline
\isacommand{done}\isanewline
\end{isabelle}
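The witness \isa{k*l} can also be checked numerically: if $m = i \cdot k$ and $n = j \cdot l$, then $m \cdot n = i \cdot j \cdot (k \cdot l)$. The helper below is a hypothetical illustration, not part of the proof.

```python
# Checking the witness for the divisibility lemma numerically:
# if m = i*k and n = j*l, then m*n = i*j*(k*l), so k*l witnesses
# the existential goal for  i*j dvd m*n.

def dvd_witness(i, k, j, l):
    """Return (m, n, witness) with m*n == i*j*witness."""
    m, n = i * k, j * l
    witness = k * l
    assert m * n == i * j * witness
    return m, n, witness

print(dvd_witness(2, 3, 5, 7))   # (6, 35, 21): 6*35 == 2*5*21
```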
\section{Description Operators}
\label{sec:SOME}
\index{description operators|(}%
HOL provides two description operators.
A \textbf{definite description} formalizes the word ``the,'' as in
``the greatest divisor of~$n$.''
It returns an arbitrary value unless the formula has a unique solution.
An \textbf{indefinite description} formalizes the word ``some,'' as in
``some member of~$S$.'' It differs from a definite description in not
requiring the solution to be unique: it uses the axiom of choice to pick any
solution.
\begin{warn}
Description operators can be hard to reason about. Novices
should try to avoid them. Fortunately, descriptions are seldom required.
\end{warn}
\subsection{Definite Descriptions}
\index{descriptions!definite}%
A definite description is traditionally written $\iota x. P(x)$. It denotes
the $x$ such that $P(x)$ is true, provided there exists a unique such~$x$;
otherwise, it returns an arbitrary value of the expected type.
Isabelle uses \sdx{THE} for the Greek letter~$\iota$.
%(The traditional notation could be provided, but it is not legible on screen.)
We reason using this rule, where \isa{a} is the unique solution:
\begin{isabelle}
\isasymlbrakk P\ a;\ \isasymAnd x.\ P\ x\ \isasymLongrightarrow \ x\ =\ a\isasymrbrakk \
\isasymLongrightarrow \ (THE\ x.\ P\ x)\ =\ a%
\rulenamedx{the_equality}
\end{isabelle}
For instance, we can define the
cardinality of a finite set~$A$ to be that
$n$ such that $A$ is in one-to-one correspondence with $\{1,\ldots,n\}$. We can then
prove that the cardinality of the empty set is zero (since $n=0$ satisfies the
description) and proceed to prove other facts.
A more challenging example illustrates how Isabelle/HOL defines the least number
operator, which denotes the least \isa{x} satisfying~\isa{P}:%
\index{least number operator|see{\protect\isa{LEAST}}}
\begin{isabelle}
(LEAST\ x.\ P\ x)\ = (THE\ x.\ P\ x\ \isasymand \ (\isasymforall y.\
P\ y\ \isasymlongrightarrow \ x\ \isasymle \ y))
\end{isabelle}
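The definition can be mimicked on a bounded range of natural numbers: the least \isa{x} is \emph{the} \isa{x} satisfying \isa{P} that is below every other solution. The bound and the function names below are artefacts of the sketch; HOL needs no such bound.

```python
# Mimicking  LEAST x. P x  on bounded naturals: it is THE x with
# P x  and  forall y. P y --> x <= y.  Illustrative sketch only.

def the_unique(candidates):
    """Definite description: the unique element, else an error."""
    found = list(candidates)
    if len(found) != 1:
        raise ValueError("description not uniquely satisfied")
    return found[0]

def least(P, bound=1000):
    ys = [y for y in range(bound) if P(y)]
    return the_unique(x for x in ys
                      if all(x <= y for y in ys))

print(least(lambda n: n % 7 == 0 and n > 10))   # 14
```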
%
Let us prove the analogue of \isa{the_equality} for \sdx{LEAST}\@.
\begin{isabelle}
\isacommand{theorem}\ Least_equality:\isanewline
\ \ \ \ \ "\isasymlbrakk P\ (k::nat);\ \ \isasymforall x.\ P\ x\ \isasymlongrightarrow \ k\ \isasymle \ x\isasymrbrakk \ \isasymLongrightarrow \ (LEAST\ x.\ P\ x)\ =\ k"\isanewline
\isacommand{apply}\ (simp\ add:\ Least_def)\isanewline
\isanewline
\ 1.\ \isasymlbrakk P\ k;\ \isasymforall x.\ P\ x\ \isasymlongrightarrow \ k\ \isasymle \ x\isasymrbrakk \isanewline
\isaindent{\ 1.\ }\isasymLongrightarrow \ (THE\ x.\ P\ x\ \isasymand \ (\isasymforall y.\ P\ y\ \isasymlongrightarrow \ x\ \isasymle \ y))\ =\ k%
\end{isabelle}
The first step has merely unfolded the definition.
\begin{isabelle}
\isacommand{apply}\ (rule\ the_equality)\isanewline
\isanewline
\ 1.\ \isasymlbrakk P\ k;\ \isasymforall x.\ P\ x\ \isasymlongrightarrow \ k\
\isasymle \ x\isasymrbrakk \ \isasymLongrightarrow \ P\ k\ \isasymand \
(\isasymforall y.\ P\ y\ \isasymlongrightarrow \ k\ \isasymle \ y)\isanewline
\ 2.\ \isasymAnd x.\ \isasymlbrakk P\ k;\ \isasymforall x.\ P\ x\ \isasymlongrightarrow \ k\ \isasymle \ x;\ P\ x\ \isasymand \ (\isasymforall y.\ P\ y\ \isasymlongrightarrow \ x\ \isasymle \ y)\isasymrbrakk \isanewline
\ \ \ \ \ \ \ \ \isasymLongrightarrow \ x\ =\ k%
\end{isabelle}
As always with \isa{the_equality}, we must show existence and
uniqueness of the claimed solution,~\isa{k}. Existence, the first
subgoal, is trivial. Uniqueness, the second subgoal, follows by antisymmetry:
\begin{isabelle}
\isasymlbrakk x\ \isasymle \ y;\ y\ \isasymle \ x\isasymrbrakk \ \isasymLongrightarrow \ x\ =\ y%
\rulename{order_antisym}
\end{isabelle}
The assumptions imply both \isa{k~\isasymle~x} and \isa{x~\isasymle~k}. One
call to \isa{auto} does it all:
\begin{isabelle}
\isacommand{by}\ (auto\ intro:\ order_antisym)
\end{isabelle}
\subsection{Indefinite Descriptions}
\index{Hilbert's $\varepsilon$-operator}%
\index{descriptions!indefinite}%
An indefinite description is traditionally written $\varepsilon x. P(x)$ and is
known as Hilbert's $\varepsilon$-operator. It denotes
some $x$ such that $P(x)$ is true, provided one exists.
Isabelle uses \sdx{SOME} for the Greek letter~$\varepsilon$.
Here is the definition of~\cdx{inv},\footnote{In fact, \isa{inv} is defined via a second constant \isa{inv_into}, which we ignore here.} which expresses inverses of
functions:
\begin{isabelle}
inv\ f\ \isasymequiv \ \isasymlambda y.\ SOME\ x.\ f\ x\ =\ y%
\rulename{inv_def}
\end{isabelle}
Using \isa{SOME} rather than \isa{THE} makes \isa{inv~f} behave well
even if \isa{f} is not injective. As it happens, most useful theorems about
\isa{inv} do assume the function to be injective.
The inverse of \isa{f}, when applied to \isa{y}, returns some~\isa{x} such that
\isa{f~x~=~y}. For example, we can prove \isa{inv~Suc} really is the inverse
of the \isa{Suc} function:
\begin{isabelle}
\isacommand{lemma}\ "inv\ Suc\ (Suc\ n)\ =\ n"\isanewline
\isacommand{by}\ (simp\ add:\ inv_def)
\end{isabelle}
\noindent
The proof is a one-liner: the subgoal simplifies to a degenerate application of
\isa{SOME}, which is then erased. In detail, the left-hand side simplifies
to \isa{SOME\ x.\ Suc\ x\ =\ Suc\ n}, then to \isa{SOME\ x.\ x\ =\ n} and
finally to~\isa{n}.
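The behaviour of \isa{inv} can be modelled by bounded search over the naturals. Returning \isa{None} when no witness exists is an artefact of this sketch; in HOL the value is simply unspecified, not an error.

```python
# Modelling  inv f = λy. SOME x. f x = y  by bounded search over
# naturals.  When no witness exists (e.g. inv Suc 0), HOL's result
# is some arbitrary value; the sketch surfaces that as None.

def inv(f, bound=1000):
    def inverse(y):
        for x in range(bound):
            if f(x) == y:
                return x          # SOME x. f x = y
        return None               # no witness: value unspecified
    return inverse

Suc = lambda n: n + 1
print(inv(Suc)(Suc(5)))   # 5    : inv Suc (Suc n) = n
print(inv(Suc)(0))        # None : nothing maps to 0 under Suc
```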
We know nothing about what
\isa{inv~Suc} returns when applied to zero. The proof above still treats
\isa{SOME} as a definite description, since it only reasons about
situations in which the value is described uniquely. Indeed, \isa{SOME}
satisfies this rule:
\begin{isabelle}
\isasymlbrakk P\ a;\ \isasymAnd x.\ P\ x\ \isasymLongrightarrow \ x\ =\ a\isasymrbrakk \
\isasymLongrightarrow \ (SOME\ x.\ P\ x)\ =\ a%
\rulenamedx{some_equality}
\end{isabelle}
To go further is
tricky and requires rules such as these:
\begin{isabelle}
P\ x\ \isasymLongrightarrow \ P\ (SOME\ x.\ P\ x)
\rulenamedx{someI}\isanewline
\isasymlbrakk P\ a;\ \isasymAnd x.\ P\ x\ \isasymLongrightarrow \ Q\
x\isasymrbrakk \ \isasymLongrightarrow \ Q\ (SOME\ x.\ P\ x)
\rulenamedx{someI2}
\end{isabelle}
Rule \isa{someI} is basic: if anything satisfies \isa{P} then so does
\hbox{\isa{SOME\ x.\ P\ x}}. The repetition of~\isa{P} in the conclusion makes it
difficult to apply in a backward proof, so the derived rule \isa{someI2} is
also provided.
\medskip
For example, let us prove the \rmindex{axiom of choice}:
\begin{isabelle}
\isacommand{theorem}\ axiom_of_choice:
\ "(\isasymforall x.\ \isasymexists y.\ P\ x\ y)\ \isasymLongrightarrow \
\isasymexists f.\ \isasymforall x.\ P\ x\ (f\ x)"\isanewline
\isacommand{apply}\ (rule\ exI,\ rule\ allI)\isanewline
\ 1.\ \isasymAnd x.\ \isasymforall x.\ \isasymexists y.\ P\ x\ y\
\isasymLongrightarrow \ P\ x\ (?f\ x)
\end{isabelle}
%
We have applied the introduction rules; now it is time to apply the elimination
rules.
\begin{isabelle}
\isacommand{apply}\ (drule\ spec,\ erule\ exE)\isanewline
\ 1.\ \isasymAnd x\ y.\ P\ (?x2\ x)\ y\ \isasymLongrightarrow \ P\ x\ (?f\ x)
\end{isabelle}
\noindent
The rule \isa{someI} automatically instantiates
\isa{f} to \hbox{\isa{\isasymlambda x.\ SOME y.\ P\ x\ y}}, which is the choice
function. It also instantiates \isa{?x2\ x} to \isa{x}.
\begin{isabelle}
\isacommand{by}\ (rule\ someI)
\end{isabelle}
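Since the proof merely chains the steps shown above, it can also be written as
a one-liner (an equivalent reformulation, not in the original text):
\begin{isabelle}
\isacommand{theorem}\ "({\isasymforall}x.\ {\isasymexists}y.\ P\ x\ y)\ \isasymLongrightarrow \
{\isasymexists}f.\ {\isasymforall}x.\ P\ x\ (f\ x)"\isanewline
\isacommand{by}\ (rule\ exI,\ rule\ allI,\ drule\ spec,\ erule\ exE,\ rule\ someI)
\end{isabelle}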
\subsubsection{Historical Note}
The original purpose of Hilbert's $\varepsilon$-operator was to express an
existential destruction rule:
\[ \infer{P[(\varepsilon x. P) / \, x]}{\exists x.\,P} \]
This rule is seldom used for that purpose --- it can cause exponential
blow-up --- but it is occasionally used as an introduction rule
for the~$\varepsilon$-operator. Its name in HOL is \tdxbold{someI_ex}.%%
\index{description operators|)}
\section{Some Proofs That Fail}
\index{proofs!examples of failing|(}%
Most of the examples in this tutorial involve proving theorems. But not every
conjecture is true, and it can be instructive to see how
proofs fail. Here we attempt to prove a distributive law involving
the existential quantifier and conjunction.
\begin{isabelle}
\isacommand{lemma}\ "({\isasymexists}x.\ P\ x)\ \isasymand\
({\isasymexists}x.\ Q\ x)\ \isasymLongrightarrow\ {\isasymexists}x.\ P\ x\
\isasymand\ Q\ x"
\end{isabelle}
The first steps are routine. We apply conjunction elimination to break
the assumption into two existentially quantified assumptions.
Applying existential elimination removes one of the quantifiers.
\begin{isabelle}
\isacommand{apply}\ (erule\ conjE)\isanewline
\isacommand{apply}\ (erule\ exE)\isanewline
\ 1.\ \isasymAnd x.\ \isasymlbrakk{\isasymexists}x.\ Q\ x;\ P\ x\isasymrbrakk\ \isasymLongrightarrow\ {\isasymexists}x.\ P\ x\ \isasymand\ Q\ x
\end{isabelle}
%
When we remove the other quantifier, we get a different bound
variable in the subgoal. (The name \isa{xa} is generated automatically.)
\begin{isabelle}
\isacommand{apply}\ (erule\ exE)\isanewline
\ 1.\ \isasymAnd x\ xa.\ \isasymlbrakk P\ x;\ Q\ xa\isasymrbrakk\ \isasymLongrightarrow\ {\isasymexists}x.\ P\ x\ \isasymand\ Q\ x
\end{isabelle}