[Coq-Club] Goedel's paradox, canonical definitions vs. instances/models (dependent type theory and the language of species), expressiveness vs. automation, re-evaluating natural deduction


  • From: Ken Kubota <mail AT kenkubota.de>
  • To: cl-isabelle-users AT lists.cam.ac.uk, hol-info AT lists.sourceforge.net, coq-club AT inria.fr
  • Cc: "Prof. Peter B. Andrews" <andrews AT cmu.edu>, "Prof. Lawrence C. Paulson" <lp15 AT cam.ac.uk>, "Prof. Shinichi Mochizuki" <motizuki AT kurims.kyoto-u.ac.jp>
  • Subject: [Coq-Club] Goedel's paradox, canonical definitions vs. instances/models (dependent type theory and the language of species), expressiveness vs. automation, re-evaluating natural deduction
  • Date: Tue, 26 Jan 2016 15:27:32 +0100

Dear Members of the Research Community,

Considering the results of Shinichi Mochizuki (see section 2 below) at

http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20IV.pdf

as well as offering a further response to Ramana Kumar's comment (see section 3
below) at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00149.html

and answering Andrei Popescu's comment (see section 4 below) at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00059.html

provides a good opportunity to re-evaluate natural deduction, and to summarize
and conclude the discussion.

Most of the statements about the Isabelle proof assistant should also apply to
the HOL and Coq proof assistants.


1. Goedel's paradox (Goedel's First Incompleteness Theorem)

As shown previously at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-January/msg00029.html

Goedel's First Incompleteness Theorem is not a mathematical theorem (or
metatheorem), because a Goedel numbering (or encoding) function is not
definable within Q0 (or more precisely: not expressible, since it is not a
well-formed formula) due to the violation of type restrictions on lambda
application (case (b) in the definition of wffs (well-formed formulae) [cf.
Andrews, 2002, p. 211]). The same is also true of Goedel's Second
Incompleteness Theorem, since both theorems would require a wff representing
the Goedel numbering (or encoding) function.
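
For reference, the definition of wffs in question can be paraphrased as follows
(my summary of [Andrews, 2002, p. 211], not a quotation): (a) a variable or
constant of type \alpha is a wff of type \alpha; (b) if A_{\alpha\beta} is a
wff of type (\alpha\beta) and B_\beta is a wff of type \beta, then the
application [A_{\alpha\beta} B_\beta] is a wff of type \alpha; (c) if x_\beta
is a variable and A_\alpha is a wff of type \alpha, then the abstraction
[\lambda x_\beta A_\alpha] is a wff of type (\alpha\beta). An application whose
types do not match as required by case (b) is therefore simply not a wff.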

This violation of type restrictions in formal logic or mathematics corresponds
to the philosophical insight that Goedel's construction of a proposition
involving self-referencing negativity ("I am not provable") has to be
considered an antinomy, since both Russell's paradox ("the set of all sets that
are not members of themselves") and Goedel's paradox, i.e. the proposition in
Goedel's First Incompleteness Theorem ("I am not provable", or originally: "We
therefore have before us a proposition that says about itself that it is not
provable" [Gödel, 1931, p. 598]), have the two constitutive properties of
antinomies: self-reference and negativity (negation).

In the presentation of Goedel's First Incompleteness Theorem with the
Isabelle
proof assistant software, the definition of the Goedel encoding function was
obtained by using the metamathematical notion "term" (level 1) as a concrete
mathematical type symbol (level 2) [cf. Andrews, 2002, p. 210], which is a
confusion of two different logical levels (or language levels), and hence a
violation of type restrictions.

Generally speaking, due to certain decisions in the design of Isabelle, the
stratification of the three different logical levels (as, for example, in the
R0 implementation) collapses into a single one, which requires justification
and certain measures to prevent the confusion of logical levels (or language
levels) within the single sphere obtained.


2. Canonical definitions vs. instances/models (dependent type theory and the
language of species)

In his development of a "language of species", in which a "'species' is a
'type
of mathematical object', such as a 'group', a 'ring', a 'scheme', etc."
[Mochizuki, 2015, p. 66], one of Professor Mochizuki's interests is to
distinguish canonical definitions from arbitrary choices: "The fact that the
data involved in a species is given by abstract set-theoretic formulas
imparts
a certain canonicality to the mathematical notion constituted by the species,
a
canonicality that is not shared, for instance, by mathematical objects whose
construction depends on an invocation of the axiom of choice in some
particular
ZFC-model [...]. Moreover, by furnishing a stock of such 'canonical notions',
the theory of species allows one, in effect, to compute the extent of
deviation
of various 'non-canonical objects' [i.e., whose construction depends upon the
invocation of the axiom of choice!] from a sort of 'canonical norm'."
[Mochizuki, 2015, p. 70]

This is also one of the motivations for dependent type theory, in which the
same can be expressed even more elegantly. The creator of R0 initially
intended
to prove that the definition of natural numbers by John von Neumann is a
specific instance (model) of the canonical definition with the Peano axioms.
For practical reasons, a simpler but conceptually equivalent proof was carried
out instead: the group property of the exclusive disjunction (XOR), which in
dependent type theory can be expressed directly by the theorem

Grp o XOR

as proven at

http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf
(p. 416)

using the canonical definitions of groups (cf. p. 359) and of XOR (cf. p. 358).
Furthermore, for the canonical definition of groups the proof is given that
every group has a unique identity element (cf. p. 359 ff.), and this property
is then transferred to the XOR group by performing basically only a few
substitutions (cf. pp. 417 ff.).
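
Not the R0 formulation itself, but a minimal sketch of the same content in
Lean 4 (the names IsGroup, xor_group and left_id_unique are my own and
hypothetical, and the group axioms are stated in a deliberately simple,
two-sided form):

  -- A group structure on a carrier, stated as a proposition about
  -- an operation, an identity element and an inverse function.
  structure IsGroup (α : Type) (op : α → α → α) (e : α) (inv : α → α) : Prop where
    assoc     : ∀ a b c, op (op a b) c = op a (op b c)
    id_left   : ∀ a, op e a = a
    id_right  : ∀ a, op a e = a
    inv_left  : ∀ a, op (inv a) a = e
    inv_right : ∀ a, op a (inv a) = e

  -- XOR on the booleans forms a group: the identity is false and
  -- every element is its own inverse.
  theorem xor_group : IsGroup Bool Bool.xor false id :=
    ⟨by intro a b c; cases a <;> cases b <;> cases c <;> rfl,
     by intro a; cases a <;> rfl,
     by intro a; cases a <;> rfl,
     by intro a; cases a <;> rfl,
     by intro a; cases a <;> rfl⟩

  -- Every group has a unique (left) identity element ...
  theorem left_id_unique {α : Type} {op : α → α → α} {e : α} {inv : α → α}
      (h : IsGroup α op e inv) {e' : α} (he' : ∀ a, op e' a = a) : e' = e :=
    (h.id_right e').symm.trans (he' e)

  -- ... and this property transfers to the XOR group by instantiation.
  example : ∀ e' : Bool, (∀ a, Bool.xor e' a = a) → e' = false :=
    fun _ he' => left_id_unique xor_group he'

Here the relationship between the canonical definition (IsGroup) and the
specific instance (XOR on Bool) is stated and proven within the object language
itself.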

The expression of the relationship between a canonical definition and a
specific instance in dependent type theory has a number of advantages:
1. It does not require a meta-theory (an additional theory layer on top of
the
formal/object language) such as, for example, model theory, since the
relationship can be expressed _within_ the formal language (object language)
itself (in philosophy this would be called "immanent").
2. It does not use (axiomatic) set theory.
3. It does not require the Axiom of Choice (which exists neither in Q0 nor in
R0).
For the critique of the concept of meta-theory and of model theory, cf.
[Kubota, 2013, pp. 13 ff.].
For the critique of the concept of meta-theory and of (axiomatic) set theory,
see also

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00042.html

Please note that seeking an "explicit description of certain tasks typically
executed at an implicit, intuitive level by mathematicians" [Mochizuki, 2015,
p. 66] is the aim of type theory - there are nearly identical formulations in
the work of Andrews - and that "the emphasis in the theory of species lies in
the programs — i.e., 'software' — that yield the desired output data, not on
the output data itself" [Mochizuki, 2015, p. 69] is also one of the key
features of lambda-calculus. For comparison, the corresponding quote in
Andrews' work is: "[M]ost mathematicians do make mental distinctions between
different types of mathematical objects, and use different types of letters
to
denote them, so it might be claimed that type theory provides a more natural
formalization of mathematics as it is actually practiced. In addition,
explicit
syntactic distinctions between expressions denoting intuitively different
types
of mathematical entities are very useful in computerized systems for
exploring
and applying mathematics [...]." [Andrews, 2002, p. 205]


3. Expressiveness vs. automation

The guiding notion of proof verification, as carried out by the R0
implementation, is "expressiveness", focusing on fundamental (philosophical)
questions such as expressibility, definability or provability, without regard
to practical criteria such as efficiency or automation in carrying out proofs.
Hence proof verification is very close to establishing a logistic system like Q0 or R0,
"finding a formulation of type theory which is as expressive as possible,
allowing mathematical ideas to be expressed precisely with a minimum of
circumlocutions, and which is as simple and economical as is possible without
sacrificing expressiveness." [Andrews, 2002, p. 205]

Ramana Kumar's statement that "there are already several variations of
theorem
provers implementing a roughly equivalent logic", e.g., "Isabelle/HOL, HOL4,
HOL Light", is correct if one considers logical equivalence only. But the aim
here is to find the most adequate or natural formulation. In most cases this
coincides with "The Principle of Reductionism" [Kubota, 2015, p. 11]. For
example, "HOL Light [...] has 'ten primitive rules of inference' [Harrison,
2009, p. 61]" [Kubota, 2015, p. 14], but Q0 has only a single one.

For this reason, I emphasized the definability of any well-formed formula
(including all connectives and quantifiers) and derivability of any rule
(including the rule of modus ponens) on the basis of only a very small set of
primitive symbols and rules, namely
- a single rule of inference,
- a single variable binder,
- only two primitive constants, and
- only two primitive types
in the presentation of Q0 at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-July/msg00147.html
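
For the record (my summary of Andrews' presentation, not a quotation): the two
primitive types of Q0 are o (truth values) and \iota (individuals), all other
types being function types (\alpha\beta); the single variable binder is
\lambda; the two primitive constants are Q_{((o\alpha)\alpha)} (equality, for
each type \alpha) and \iota_{(\iota(o\iota))} (the description operator); and
the single rule of inference is Rule R: from C and A_\alpha = B_\alpha, infer
the result of replacing one occurrence of A_\alpha in C by B_\alpha.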

Furthermore, since natural deduction requires more rules of inference than
axiomatic (Hilbert-style) deductive systems, Hilbert-style logistic systems
like Q0 are the preferred choice for finding a formulation of type theory as
discussed above.

Like Church's type theory [cf. Church, 1940], known for its "precise
formulation of the syntax" [Paulson, 1989, p. 5], Andrews' Q0 is known for its
precision and accuracy due to its simple, clean and elegant design. "Church formalizes
syntax, including quantifiers, in the typed [lambda]-calculus. His technique
is
now standard in generic theorem proving. See [...] Andrews [1] for the formal
development." [Paulson, 1989, p. 5]

Andrews' treatment of "the universal quantifier [...] defined in terms of
truth" [Paulson, 1989, p. 14] has attracted Paulson's attention, and in
dependent type theory one can also add type abstraction. For comparison, see:

http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-175.pdf (p. 14)

http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf
(p. 356)
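
The definition referred to can be stated as follows (my paraphrase of Q0, with
T_o denoting truth): \forall x_\alpha A_o abbreviates
\Pi_{o(o\alpha)} [\lambda x_\alpha A_o], where \Pi_{o(o\alpha)} is defined as
Q_{o(o\alpha)(o\alpha)} [\lambda x_\alpha T_o]; that is, \forall x_\alpha A_o
asserts the equation [\lambda x_\alpha T_o] = [\lambda x_\alpha A_o], saying
that the predicate \lambda x_\alpha A_o is the constantly-true predicate.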

Another example of the precision of the formal language is the non-definability
(non-expressibility) of a Goedel numbering (or encoding) function within Q0 or
R0. Isabelle/HOL currently lacks this precision, as shown at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-January/msg00029.html


4. Natural deduction and logical framework (Isabelle et al.)

However, for the practical purpose of automation, natural deduction provides
the huge advantage of making metatheorems symbolically representable instead
of, for example, only applying proof templates via recursive file inclusion
as
in the R0 implementation. So if Paulson writes: "Natural deduction is far
superior for automated proof." [Paulson, 1989, p. 3], this is obviously true,
but only half of the truth; the other half is that the precision of
Hilbert-style systems (i.e., those of Church and Andrews) is lost when opting
for natural deduction. Hence the decision between Hilbert-style systems and
natural deduction is a question of weighing the advantages and disadvantages
depending
on the purpose (proof verification vs. automated deduction), and furthermore,
in the case of natural deduction, additional measures have to be taken in
order
to preserve the strict distinctions between the logical levels. So for proof
verification I would advise axiomatic (Hilbert-style) deductive systems, and
for automated deduction natural deduction, but in the latter case the
additional measures are important.

A logical framework provides the means to implement several object logics,
which is not necessary if one regards R0 as the natural and ideal logic.
Practical reasons, such as re-using existing source code, may be relevant;
in general, however, "I would advise against using some kind of meta-logic or
meta-language, as additional features may weaken the rigor of (or even create
an inconsistency in) the logical kernel", as stated earlier at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-December/msg00018.html

But there is no objection in principle if additional measures are taken in
order to preserve the strict distinctions between the logical levels, as
specified at

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-January/msg00029.html

Additional measures have to be implemented in both a concrete and a general
way.

a) At the concrete level, for the choice of
- natural deduction (with the danger of not distinguishing levels 2 and 3),
such an implementation seems, at first glance, to be provided by schematic
(syntactical) variables and by "Meta-logic" [Nipkow, 2015, p. 11] operators
having a lower precedence than the "Logic" [Nipkow, 2015, p. 11] operators; but
for the choice of
- a logical framework (with the danger of not distinguishing levels 1 and 2),
it is not, since the metamathematical notion "term" (level 1) is treated as a
concrete mathematical type symbol (level 2) in Paulson's presentation of
Goedel's First Incompleteness Theorem.

b) At the general level, the implementation of automated deduction can receive
its philosophical legitimation only through the ideal formulation of type
theory expressed in the passage by Andrews quoted above. This can be achieved
by establishing a one-to-one correspondence between the theorems/metatheorems
of the implementations of both automated deduction and proof verification,
ideally by a metamathematical proof arithmetizing both systems, or at least by
a comparison of canonical definitions and proofs. My preference would be the
generation of R0 code from proofs obtained in Isabelle/HOL, such that the
Isabelle/HOL proofs could be verified by the R0 implementation.
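
To make the intended correspondence a bit more concrete, the following is a
purely hypothetical sketch (in Lean 4; none of the names or the tiny logic
below are taken from R0 or Isabelle): the producing system records each
primitive inference step, and the checking system replays the recorded steps
against its own kernel, accepting the proof only if every step checks.

  -- Formulas of a tiny implicational fragment, standing in for the real language.
  inductive Formula where
    | var : String → Formula
    | imp : Formula → Formula → Formula
    deriving BEq, Repr

  -- Recorded proof steps of a Hilbert-style system: one axiom schema and modus ponens.
  inductive Step where
    | axK : Formula → Formula → Step   -- instance of the schema  A -> (B -> A)
    | mp  : Nat → Nat → Step           -- indices of two earlier steps
    deriving Repr

  open Formula in
  -- The formula established by one step, given the formulas proved so far,
  -- or none if the step is ill-formed.
  def stepFormula (proved : Array Formula) : Step → Option Formula
    | Step.axK a b => some (imp a (imp b a))
    | Step.mp i j => do
        let f ← proved[i]?
        let g ← proved[j]?
        match f with
        | imp p q => if p == g then some q else none
        | _ => none

  -- Replay a recorded proof; succeeds only if every single step checks.
  def replay (steps : List Step) : Option (Array Formula) :=
    steps.foldlM
      (fun proved s => do
        let f ← stepFormula proved s
        pure (proved.push f))
      #[]

A correspondence in the sense described above would then amount to showing that
whatever the producing system accepts is also accepted by such a replay, step
by step.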


If there are any further e-mail comments, please allow me to postpone answers
where appropriate until my forthcoming publication of R0 [cf. Kubota, 2015].


Kind regards,

Ken Kubota


References

Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type
Theory: To Truth Through Proof. Second edition. Dordrecht / Boston / London:
Kluwer Academic Publishers. ISBN 1-4020-0763-9. DOI:
10.1007/978-94-015-9934-4.

Church, Alonzo (1940), "A Formulation of the Simple Theory of Types". In:
Journal of Symbolic Logic 5, pp. 56-68.

Gödel, Kurt (1931), "On formally undecidable propositions of Principia
mathematica and related systems I". In: Heijenoort, Jean van, ed. (1967),
From
Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Cambridge,
Massachusetts: Harvard University Press, pp. 596-616.

Kubota, Ken (2013), On Some Doubts Concerning the Formal Correctness of
Gödel's
Incompleteness Theorem. Berlin: Owl of Minerva Press. ISBN 978-3-943334-04-3.
DOI: 10.4444/100.101. See: http://dx.doi.org/10.4444/100.101

Kubota, Ken (2015), On the Theory of Mathematical Forms (Draft of May 18,
2015). Unpublished manuscript. SHA-512: a0dfe205eb1a2cb29efaa579d68fa2e5
45af74d8cd6c270cf4c95ed1ba6f7944 fdcffaef2e761c8215945a9dcd535a50
011d8303fd59f2c8a4e6f64125867dc4. DOI: 10.4444/100.10. See:
http://dx.doi.org/10.4444/100.10

Kubota, Ken (2015a), Excerpt from [Kubota, 2015] (pp. 356-364, pp. 411-420,
and
pp. 754-755). Available online at
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf

(January 24, 2016). SHA-512: 67a48ee12a61583bd82c286136459c0c
9fd77f5fd7bd820e8b6fe6384d977eda 4d92705dac5b3f97e7981d7b709fc783
3d1047d1831bc357eb57b263b44c885b.

Mochizuki, Shinichi (2015), "Inter-universal Teichmuller Theory IV:
Log-volume
Computations and Set-theoretic Foundations" (December 2015). Available online
at
http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20Teichmuller%20Theory%20IV.pdf

(January 24, 2016). SHA-512: 93372b4acba90195fc6ffc2e16830464
b3bf27c4bdea156a915d3e7e5fdaad2a 4c85d0e1917064a8afd59e1a3af29a70
a438252bf2cfcdb90b12cd20f272c179.

Nipkow, Tobias (2015), "What's in Main" (May 25, 2015). Available online at
https://isabelle.in.tum.de/dist/Isabelle2015/doc/main.pdf (January 24, 2016).

Paulson, Lawrence C. (1989), "A Formulation of the Simple Theory of Types
(for
Isabelle)". Available online at
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-175.pdf (January 24, 2016).

____________________

Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100



