- From: Ken Kubota <mail AT kenkubota.de>
- To: Hugo Herbelin <Hugo.Herbelin AT inria.fr>
- Cc: HOL-info list <hol-info AT lists.sourceforge.net>, coq-club AT inria.fr, "Prof. Peter B. Andrews" <andrews AT cmu.edu>, María Manzano <mara AT usal.es>, Roger Bishop Jones <rbj AT rbjones.com>, Russell O'Connor <roconnor AT theorem.ca>, "Prof. Andrew M. Pitts" <Andrew.Pitts AT cl.cam.ac.uk>, John Harrison <John.Harrison AT cl.cam.ac.uk>
- Subject: [Coq-Club] On the natural language of formal logic and mathematics - Re: Henkin's paper "Completeness in the Theory of Types" (1950); Ramsey (and Chwistek)
- Date: Sun, 5 Feb 2017 20:04:16 +0100
Dear Hugo Herbelin,
I have to apologize for the delayed answer. I wanted to update the diagram and the paper under discussion at
http://doi.org/10.4444/100.111
first, and there were also some other obligations to fulfill before going into detail on your numerous suggestions and your kind effort to start a formalization, for which I would like to thank you.
Andrews mentions _expressiveness_ as the main criterion for the design of Q0:
"Therefore we shall turn our attention to finding a formulation of type theory which is as expressive as possible, allowing mathematical ideas to be expressed precisely with a minimum of circumlocutions, and which is as simple and economical as is possible without sacrificing expressiveness. The reader will observe that the formal language we find arises very naturally from a few fundamental design decisions." (For the references, please see the paper linked above.)
Instead of the term _expressiveness_, I also use the term _reducibility_, since practically all mathematical notions can be reduced to a minimal set of symbols and rules. This method of reduction is not an end in itself: it lays bare the inner laws of logic, e.g., by showing the relations between primitive and derived symbols, and between primitive and derived rules of inference.
In the HOL handbook, this is reflected by Andrew Pitts' consideration regarding Q0, in which all rules of inference are reduced to a single rule of inference (which is possible in a Hilbert-style logic like Q0):
"From a logical point of view, it would be better to have a simpler
substitution primitive, such as 'Rule R' of Andrews' logic Q0, and then to
derive more complex rules from it." [Gordon/Melham, 1993, p. 213]
https://sourceforge.net/p/hol/mailman/message/35287517/
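Quoting Rule R from memory (please check Andrews' book for the exact wording): from C and A = B, one may infer the result of replacing one occurrence of A in C by an occurrence of B, provided that occurrence is not a variable immediately preceded by lambda. A loose Coq analogue of its effect - for Leibniz equality and a one-hole context C, and only as my own illustration - is just rewriting with an equation:

  Lemma rule_R_analogue (T : Type) (C : T -> Prop) (A B : T) :
    C A -> A = B -> C B.
  Proof. intros H E. rewrite <- E. exact H. Qed.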
On the question of expressiveness/reducibility I have written extensively at
https://sourceforge.net/p/hol/mailman/message/35330349/
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-September/msg00014.html
In this respect, Coq seems to follow a different path. For example, in [Coquand and Paulin, 1990], inductively defined types are introduced as an extension of the language (of Church's simple type theory [Church, 1940]), whereas the concept of expressiveness tries to avoid any extension of the language where possible. For this reason, in section "2 Design Decisions" of my paper linked above, I quoted Andrews' introduction of _induction_ and _recursion_, emphasizing that both are implemented "without extending the formal language".
There may be practical reasons (e.g., computability) to extend the language, but from a logical point of view this is not desirable.
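To make the contrast concrete, here is a minimal Coq sketch (my own illustration, not taken from any of the cited papers): the natural numbers as Church numerals, defined purely within the existing language, next to Coq's native nat, which is introduced by extending the language with an inductive declaration.

  Definition cnat : Type := forall A : Type, (A -> A) -> A -> A.

  Definition czero : cnat := fun A f x => x.
  Definition csucc (n : cnat) : cnat := fun A f x => f (n A f x).
  Definition cplus (m n : cnat) : cnat := fun A f x => m A f (n A f x).

  (* By contrast, Coq's own nat is declared as an extension:
     Inductive nat : Set := O : nat | S : nat -> nat. *)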
For the same reason, an extension of the grammar in order to introduce ordered pairs should be avoided. In HOL, this extension consists of the compound types, which can be used to create lists and cartesian products and which do not exist in Q0, see:
http://freefr.dl.sourceforge.net/project/hol/hol/kananaskis-10/kananaskis-10-logic.pdf
(pp. 9, 11)
Peter B. Andrews introduces ordered pairs (without extending the formal language): "The expression [λg.gxy] can be used to represent the ordered pair <x, y>". (For the references, please see: http://doi.org/10.4444/100.111)
In this spirit, in R0, I defined the ordered pair as
## definition of ordered pair (no type variable)
:= ODPR0 [\x.[\y.[\g.(g x y)]]]
[Kubota, 2015, p. 371], which naturally expresses the concept that the first and second elements are supplied first (via lambda application), and the information (the function g) about what to do with the (then created) pair only afterwards.
With type abstraction (the binding of type variables with lambda), the types can also be specified:
## definition of ordered pair (three type variables)
:= ODPR3 [\t.[\x:t.[\u.[\y:u.[\v.[\g:vut.(gxy)]]]]]]
[Kubota, 2015, p. 375], which naturally expresses cartesian products T x U (t x u), without having to use language extensions such as the HOL compound types. (One might consider placing all type variables first: PROD := [\t.[\u.[\v.[\x:t.[\y:u.[\g:vut.(gxy)]]]]]], hence PROD a b would represent the cartesian product a x b.)
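For comparison, here is a rough Coq analogue of this pairing (my own sketch, not the R0 definitions themselves; in Coq the type variables become ordinary arguments of sort Type rather than lambda-bound type variables in the R0 sense):

  Definition pair' (A B : Type) : Type := forall C : Type, (A -> B -> C) -> C.

  Definition mkpair {A B : Type} (x : A) (y : B) : pair' A B :=
    fun C g => g x y.

  Definition fst' {A B : Type} (p : pair' A B) : A := p A (fun x _ => x).
  Definition snd' {A B : Type} (p : pair' A B) : B := p B (fun _ y => y).

  (* Both projections compute by beta reduction alone: *)
  Example ex_fst : fst' (mkpair 1 2) = 1.
  Proof. reflexivity. Qed.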
With ordered pairs, vectors (lists) can be constructed easily, including
dependently typed constructors for vectors (specified _within_ the language,
"immanent", and not as an extension).
Andrews uses the pairing mechanism for the definition of conjunction, see
https://plato.stanford.edu/entries/type-theory-church/#ForBasEqu
http://www.kenkubota.de/files/R0_draft_excerpt_with_definitions_and_groups.pdf
(p. 356)
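In Andrews' notation (quoting from memory; the pages linked above have the exact definition), conjunction is defined roughly as [\x.[\y.([\g.(g T T)] = [\g.(g x y)])]], i.e., x and y holds if and only if the pair <T, T> equals the pair <x, y>. A Coq transcription of the shape of this definition (my own sketch; it needs functional and propositional extensionality to coincide exactly with the usual /\) would be:

  Definition and' (x y : Prop) : Prop :=
    (fun g : Prop -> Prop -> Prop => g True True) =
    (fun g : Prop -> Prop -> Prop => g x y).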
Then your statement that "the logical formalism which is implemented keeps evolving" appears to be a disadvantage rather than an advantage, since the language is freely modified, although the concepts to be formalized could be expressed naturally without extending the formal language. Freek Wiedijk argued similarly: "Also, the foundations of Coq are sufficiently complicated that they are tinkered with, and therefore change between versions of the system."
The strength of the concept of expressiveness is demonstrated by Q0, in which - although still simple type theory (i.e., without type variables) - a large part of mathematics can be expressed very naturally. The only further development in R0 was the introduction of type variables and their binding with the abstraction operator (lambda), which is the type abstraction as suggested by Mike Gordon for HOL. This allows all concepts of mathematics to be expressed very naturally, even structurally dependent types, which is not possible in PVS.
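Coq does not bind type variables with lambda in quite the R0 sense, but the closest analogue - only my own illustration - is a term that abstracts over a type with fun, which is accepted because types are themselves terms of sort Type:

  (* The polymorphic identity, with the type variable t bound by fun: *)
  Definition id' := fun (t : Type) (x : t) => x.

  Check id'.        (* forall t : Type, t -> t *)
  Check id' nat 5.  (* : nat *)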
With Q0, Andrews also avoided a problem of traditional (i.e., Gordon-style) HOL: whereas HOL and all its descendants still use the epsilon operator, which Mike Gordon himself already called "suspicious" (as it results in a rather awkward expression and renders the use of the Axiom of Choice inevitable), Q0 uses the description operator instead. (John Harrison mentioned a similar motivation for HOL Light: "In particular HOL Light's foundations address a couple of the issues [mentioned in] the HOL documentation: the complicated primitive substitution rule and [...] the way the epsilon operator is plumbed deep into the foundations." -
https://sourceforge.net/p/hol/mailman/message/35444612/)
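The difference is perhaps easiest to see in the signatures of the two operators. The following Coq axioms are only my own hypothetical illustration (they are not the primitives of HOL, of Q0, or of Coq's standard library): epsilon picks some witness from mere existence, whereas the description operator requires a unique witness.

  (* Hypothetical signatures, for illustration only: *)
  Axiom eps  : forall (A : Type) (P : A -> Prop), (exists x, P x) -> A.
  Axiom iota : forall (A : Type) (P : A -> Prop), (exists! x, P x) -> A.

  (* eps amounts to a form of choice; iota (definite description)
     only ever names an already uniquely determined object. *)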
A second aspect is the question of classical vs. constructive foundations. All constructive foundations, including Coq, use an encoding of proofs based on the Curry-Howard isomorphism. But the concept of expressiveness aims at the _natural_ expression of mathematics, hence what is desired is not an _encoded_, but an _unencoded_ (direct) form of expression. With Q0/R0, one can express mathematical concepts very naturally.
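To make the "encoding" concrete with a standard Coq example (my own, purely for illustration): under Curry-Howard a proof is a lambda term inhabiting the proposition read as a type, so modus ponens is literally function application.

  (* Modus ponens, encoded as function application: *)
  Definition modus_ponens (A B : Prop) (f : A -> B) (a : A) : B := f a.

  (* A proof of A -> B -> A is the K combinator: *)
  Definition k_comb (A B : Prop) : A -> B -> A := fun a _ => a.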
A third point is the formulation of the language. A typical example is the
formulation of typing rules as in Coq, which appear similar to the rules of
inference in natural deduction:
https://coq.inria.fr/distrib/current/refman/Reference-Manual006.html#Typed-terms
Following the tradition of Russell and Church, only the means of the language itself should be used to infer the type of a term. Therefore types should be inferred syntactically, and not by (external) rules for type inference that have become independent of the language (in German: "Verselbständigung"). You will not find rules expressed in that manner in the work of Andrews. He intuitively felt that typing rules arise from lambda conversion and have to be expressed _within_ the formal language, i.e., by the means of the language, provided by its syntactic features.
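For illustration, here is the type of the pairing term from above as Coq's type checker infers it from the syntax of the term (again just my own example):

  Check (fun (x y : nat) (g : nat -> nat -> nat) => g x y).
  (* : nat -> nat -> (nat -> nat -> nat) -> nat *)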
Of course, Coq's way of expressing mathematics is perfectly legitimate and has produced impressive results. In particular, I like Russell O'Connor's proof of what is called Goedel's First Incompleteness Theorem, as it quite elegantly uses three different types for the three different language levels, taking advantage of "type safety by coding using an inductive type" [Russell O'Connor, email to Ken Kubota, February 13, 2016] (unlike, for example, Paulson's proof; I cannot judge John Harrison's proof since I am not really fluent in ML).
(As I do not share the semantic approach (model theory), which is the philosophical prerequisite for using the notion of "completeness" in a meaningful way, I avoid the term "incompleteness". Russell O'Connor uses the more careful expression "arithmetic incompleteness". From a purely syntactic perspective, Goedel's proof only states that there is an arithmetic well-formed formula which is neither provable nor refutable. But there is no expectation that every (arithmetic) wff is meaningful just because it is well-formed. Without such an expectation, evoked by meta-theoretic approaches such as the semantic approach, there is by definition no gap between syntax and semantics of the kind commonly called incompleteness.)
But if one consistently follows the concept of expressiveness (reducibility), a formal language is obtained which can be considered the ideal and natural language of formal logic and mathematics.
There is a chance that I will be in Paris in the course of this year, so we could continue the discussion at a meeting, if you like.
Kind regards,
Ken Kubota
____________________
Ken Kubota
http://doi.org/10.4444/100
> Am 08.12.2016 um 21:15 schrieb Hugo Herbelin
> <Hugo.Herbelin AT inria.fr>:
>
> Dear Ken Kubota,
>
> On Sun, Dec 04, 2016 at 02:31:16PM +0100, Ken Kubota wrote:
>> Dear Members of the Research Community,
>>
>> Concerning Henkin's paper "Completeness in the Theory of Types" (1950), I
>> have
>> a question.
>> Theorem 2 is claimed to be false in Andrews' newest publication that forms
>> part
>> of the volume on Leon Henkin edited by Manzano et al.: "Thus, Theorem 2 of
>> [Henkin, 1950] (which asserts the completeness and soundness of C) is
>> technically incorrect. The apparently trivial soundness assertion is
>> false."
>> [Andrews, 2014b, p. 70]
>> For an extended quote, see the very end of the section "Criticism" of
>> http://doi.org/10.4444/100.111
>>
>> Have there been any formalization efforts for Theorem 2 of [Henkin, 1950],
>> such
>> that the above claim can be verified (or refuted) by mechanized reasoning
>> (formalization in a computer)?
>>
>> Following a purely syntactic approach, I usually skip semantic issues, but
>> this nevertheless seems an interesting question to me.
>
> As far as I can judge, [Andrews, 1972] is more than a claim. It is a
> theorem showing that the statement of Theorem 2 of [Henkin, 1950] is
> incorrect as stated, since the given definition of general models
> fails to ensure that functional extensionality necessarily holds. So,
> there is little hope of Theorem 2 of [Henkin, 1950] being
> formally provable as it is, unless one changes the definition of models
> as suggested, e.g., by [Andrews, 1972].
>
> On the other hand, I see hope to formally verify the theorem proved in
> [Andrews, 1972], which I started to do for the fun of it in the
> attached file, in case you are interested in seeing what such a
> formalization could look like.
>
> Incidentally, since you are interested in Henkin's completeness
> proofs, you may be interested in a draft paper I wrote with Danko Ilik
> on the computational content of Henkin's proof in the first-order case
> with recursively enumerable theories [1], trying to answer the
> following question: assume you have a proof of validity of a formula;
> by Henkin's proof of completeness you know that there is a proof of
> this formula; which one is it, effectively?
>
> [1]
> http://pauillac.inria.fr/~herbelin/articles/godel-completeness-draft16.pdf
>
>> According to Roger Bishop Jones's advice at
>> https://sourceforge.net/p/hol/mailman/message/35435344/
>> Ramsey (and Chwistek), who first suggested the simple theory of types
>> according to footnote 1 of [Church, 1940], has now been added to the
>> diagram at
>> http://doi.org/10.4444/100.111
>>
>> Both Chwistek and Ramsey are mentioned in the "Introduction to the Second
>> Edition" of "Principia Mathematica" in some way, but I am not familiar
>> with the
>> details of how strong the impact on Russell's own theory was.
>> In paragraph 9 of Carnap's 1929 German-language "Abriss der Logistik"
>> (referenced in Church's footnote 1), Ramsey is mentioned, but not Chwistek
>> (who
>> is referenced indirectly through p. xiv of the Introduction to the 2nd ed.
>> of
>> PM - see Carnap, pp. 21 f.).
>> In Quine's introduction to Russell's 1908 essay "Mathematical Logic as
>> Based on
>> the Theory of Types" only Ramsey is mentioned (van Heijenoort, 1967, p.
>> 152).
>>
>>
>> It is interesting to see that both Ramsey and Henkin ("Identity as a
>> logical
>> primitive", 1975) prefer the term 'identity' rather than 'equality', which
>> is
>> also my preference, as mentioned at
>> https://sourceforge.net/p/hol/mailman/message/35446249/
>>
>> https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-October/msg00070.html
>>
>> Ramsey: "[M]athematics does not consist of tautologies, but of [...]
>> 'equations', for which I should prefer to substitute 'identities'. [...]
>> [I]t
>> is interesting to see whether a theory of mathematics could not be
>> constructed
>> with identities for its foundation. I have spent a lot of time developing
>> such
>> a theory, and found that it was faced with what seemed to me insuperable
>> difficulties."
>>
>> Quoted in: [Andrews, 2014b, p. 67]
>> Available online at: http://www.hist-analytic.com/Ramsey.htm
>>
>>
>> For the references, please see:
>> http://doi.org/10.4444/100.111
>
> I read your document and I would like to make 2 remarks.
>
> About Coq, I would like to clarify for the record that the logical
> formalism which is implemented keeps evolving. In the very first
> versions, it was just Coquand and Huet's Calculus of Constructions,
> i.e. the functional pure type system with all dependent products over
> the two sorts Prop:Type. In version 2.9, a cumulative hierarchy of
> universes was added giving the equivalent of Luo's Extended Calculus
> of Constructions without Σ-types (or alternatively Miquel's CCω, i.e.,
> if my memories are correct, Coquand's Generalized Calculus of
> Constructions with an additional subtyping Prop ⊂ Type(1) which it did
> not have). In version 4.3, a universe of impredicative types (intended
> to be inhabited by relevant "programs") was explicitly separated from
> the isomorphic universe of impredicative propositions (intended to be
> proof-irrelevant). Then, the next radical change was in version 5.0
> which introduced native inductive types. This made the new system,
> which Coquand and Paulin designed, a variant of Martin-Löf's
> intensional type theory with an impredicative universe of propositions
> and an impredicative universe of types. It is at this time that the
> logic underlying Coq started to be called the Calculus of Inductive
> Constructions. Another significant change arrived in version 8.0 with
> the renouncement to the impredicative universe of types. With this
> impredicative universe, Coq was strongly committed to a setting
> inconsistent with the combination of classical logic and of the
> intensional axiom of choice stated in the impredicative universe of
> propositions. By dropping it, Coq started to be usable as a framework
> for "classical" mathematics, with a range of axioms available in the
> standard library as extensions of the core logic. Later, in version
> 8.5, a subtle feature of the criterion for ensuring well-foundedness
> of functions was dropped so as to make the system compatible with
> propositional extensionality. And this is certainly not the end of the
> story since other evolutions are in the implementation pipeline:
> inductive-inductive types, inductive-recursive types and higher
> inductive types, thanks to Matthieu Sozeau and Cyprien Mangin.
>
> My second remark is about the classical vs constructive foundations,
> which makes sense historically but which I believe tends to be
> replaced by approaches where logics are assembled from basic
> components which can be consistently combined or not together, and for
> which even the components traditionally considered as "classical" have
> "constructive" contents.
>
> For instance, on a domain I'm familiar with, classical reasoning is
> "constructive" and the technical apparatus beyond it is actually known
> for several decades. It is Kolmogorov's double negation translation
> for reduction of classical reasoning to intuitionistic reasoning and
> Gentzen-Prawitz's cut-elimination procedure for computing with
> classical proofs. That classical logic computes became clear
> retrospectively thanks to Griffin, Parigot, Krivine, Murthy, and
> others, who made an explicit connection with programming
> languages. Even if classical reasoning introduces ideal objects
> (e.g. the "platonistic" witness of ∃b(b=0 ↔ A)), any proof using
> classical reasoning eventually computes effective witnesses for simple
> enough formulas such as Σ⁰₁-formulas, getting rid in the process of
> the ideal objects involved in the proof.
>
> As another example, the intensional axiom of choice has an immediate
> constructive interpretation given by Σ-types in Martin-Löf's type
> theory, but restrictions of it can still be combined in a constructive
> way with classical logic using Spector's bar recursion and variants of
> it.
>
> In any case, thanks for your interest in logical foundations.
>
> Hugo Herbelin
> <andrews72.v>