coq-club AT inria.fr
Subject: The Coq mailing list
List archive
- From: Andreas Abel <andreas.abel AT ifi.lmu.de>
- To: Matthieu Sozeau <matthieu.sozeau AT gmail.com>, Jason Gross <jasongross9 AT gmail.com>
- Cc: Erik Palmgren <palmgren AT math.su.se>, Coq Club <coq-club AT inria.fr>, Thierry Coquand <Thierry.Coquand AT cse.gu.se>
- Subject: Re: [Coq-Club] Slow checking of destructs
- Date: Tue, 12 Nov 2013 18:02:58 +0100
In Agda, we have polymorphic projections, for the same reasons. Also, as of recently, projections are eagerly reduced, leading to much smaller terms...
Btw., w.r.t. reducing term size, the following paper by Jason Reed is very interesting:
Jason Reed: Redundancy Elimination for LF. Electr. Notes Theor. Comput. Sci. 199: 89-106 (2008)
It seems you can get term size down from O(n^2) to O(n). I am starting to look into this.
Cheers,
Andreas
On 09.11.2013 12:50, Matthieu Sozeau wrote:
Using the fast projections made the compilation time of a 5kloc development
drop from 20 minutes to 15 seconds.
It was full of nested sigmas and parameterized records.
-- Matthieu
On 9 nov. 2013, at 12:40, Jason Gross <jasongross9 AT gmail.com> wrote:
I agree with Matthieu. I've experienced the same problem, both with the size
of the term in general, and with destructing nested sigma types in
particular. My category theory library
(https://bitbucket.org/JasonGross/catdb, but in the process of being merged
into https://github.com/HoTT/HoTT) sped up by a factor of two when I decided
to parameterize categories only on the type of objects, rather than on both
the type of objects and on the type of morphisms. In my experience, Coq
seems to start getting painfully slow when the goal reaches around half a
million words. Additionally, I've run into cases where fully
destructing a nested sigma type takes several seconds, while destructing the
analogous record takes much less than a second.
Matthieu, do you happen to have any benchmarks or examples on how much your
changes actually speed up nested sigmas? (If you're looking for test cases,
I think the old version of the issig tactics in types/Record.v in HoTT, and
the examples they get applied to there and in my category theory library,
would be good test cases. The proof consists of essentially nothing but
destructing a nested sigma and a record, and applying reflexivity, and still
manages to take several seconds (or more---I think I might be able to dig out
a case where the old version of the tactics, which didn't use
isequiv_adjointify, took over a minute).)
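A minimal sketch (mine, not from the thread) of the sigma-versus-record contrast Jason describes. The example is tiny, but the per-layer matches on the nested sigma are what blow up at scale, since each one carries full type annotations:

```coq
(* Hedged illustration: the same data as a nested sigma type and as a
   flat record.  [pt] and its field names are hypothetical. *)
Record pt := mkpt { px : nat; py : nat; pz : nat }.

Goal forall (p : {x : nat & {y : nat & nat}}) (r : pt), True.
Proof.
  intros p r.
  destruct p as [x [y z]].   (* one annotated match per sigma layer *)
  destruct r as [a b c].     (* a single match on the record *)
  exact I.
Qed.
```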
On Sat, Nov 9, 2013 at 6:26 AM, Matthieu Sozeau <matthieu.sozeau AT gmail.com> wrote:
Dear Erik,
I would suspect that the representation of record projections is the culprit
here. It's easy to check: Set Printing All in the goals where you're doing the
destruct and see whether they're unreasonably big. The type annotations of
sigma projections (and of records with parameters) can result in exponentially
larger fully explicit terms, hence longer abstractions/substitutions/re-typechecking
during destruct. Qed/Defined re-typecheck these huge terms.
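A hedged sketch of this check: with all implicits printed, each projection out of a nested sigma repeats the sigma's full type, which is where the blowup comes from.

```coq
(* Hedged illustration of the diagnostic: print the fully explicit term
   for a projection out of a nested sigma. *)
Set Printing All.
Check (fun p : {x : nat & {y : nat & nat}} => projT1 (projT2 p)).
(* Each @projT1/@projT2 carries the sigma's type arguments again. *)
```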
One solution is to replace sigmas with specific records (with no parameters)
when possible. Another is to use a more efficient representation of record
projections, without parameters (part of my polyproj branch at
github.com/mattam82/coq.git, soon to be merged into the trunk). If you want
to try it out I can provide guidance.
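A hedged sketch of the two workarounds (the record name and fields below are hypothetical; the polyproj representation later shipped in released Coq as primitive projections):

```coq
(* First workaround: a bespoke record with no parameters instead of a
   nested sigma, so projection types stay small when fully explicit. *)
Record triple := mkTriple { t1 : nat; t2 : nat; t3 : nat }.

(* Second workaround, in later Coq: primitive projections give even
   parameterized records compact, parameter-free projections. *)
Set Primitive Projections.
Record sigma (A : Type) (B : A -> Type) := dpair { pr1 : A; pr2 : B pr1 }.
```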
Cheers,
-- Matthieu
On 8 nov. 2013, at 21:14, Erik Palmgren <palmgren AT math.su.se> wrote:
Dear Coq users,
I have been working (together with Olov Wilander) on an implementation of
categories of setoids with equality on objects. These are thus not the usual
E-categories of setoids. It relies heavily on proof irrelevant families of
setoids. Some of the constructions have turned out remarkably slow to check in
Coq. For instance, a 3000+ line development (see below) took almost two
hours to check on a modern laptop. The most time-consuming steps are various
destructs with patterns and the command Defined. Previously on this list, it
has been pointed out that complicated destruct patterns can generate many
cases, which may produce huge terms that are difficult to check. I cannot
really see that so many cases are generated in my example. Perhaps it has instead
something to do with the many type dependencies that arise. If someone has a
hint as to what makes this example so slow, I would be very grateful.
The example is
CategoricalUniverse.v
in
http://people.su.se/~epalm/coq/czf_and_setoids/
(Accompanying paper: czf&setoids_rvsd.pdf in the same directory).
Erik Palmgren
--
Andreas Abel <>< Du bist der geliebte Mensch. ("You are the beloved person.")
Theoretical Computer Science, University of Munich
Oettingenstr. 67, D-80538 Munich, GERMANY
andreas.abel AT ifi.lmu.de
http://www2.tcs.ifi.lmu.de/~abel/