Re: [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously
- From: Gregory Malecha <gmalecha AT cs.harvard.edu>
- To: Coq Club <coq-club AT inria.fr>
- Subject: Re: [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously
- Date: Sat, 26 Mar 2016 23:53:35 -0700
Hello, Hani --
What format are you looking for the information to be in? Without doing anything on the Coq implementation side, you can run coqtop with the -ideslave flag and parse the XML protocol it speaks (you might want to issue 'Set Printing All' first so you don't need to worry about notations); see the sketch below. I know of several projects that have done something similar to this.
I did a very hacky implementation a while ago:
Valentin Robert at UCSD has done something nicer for PeaCoq:
http://goto.ucsd.edu/peacoq/ (probably check out the source code)
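To make the -ideslave route concrete, here is a rough, untested Python sketch of driving coqtop and asking for the goals. The flags and XML payloads follow the Coq 8.5 protocol as CoqIDE uses it, as far as I recall it; element names have shifted between releases, so treat every message below as an assumption to check against your version.

import os
import subprocess

coq = subprocess.Popen(
    ["coqtop", "-ideslave", "-main-channel", "stdfds"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def call(xml):
    # Send one <call>, then naively read until a closing </value>.
    coq.stdin.write(xml.encode("utf-8"))
    coq.stdin.flush()
    reply = b""
    while b"</value>" not in reply:
        reply += os.read(coq.stdout.fileno(), 4096)
    return reply.decode("utf-8")

def add(sentence, state_id):
    # Add(((sentence, edit_id), (state_id, verbose))), encoded as nested pairs.
    return call('<call val="Add"><pair><pair><string>%s</string><int>1</int>'
                '</pair><pair><state_id val="%d"/><bool val="false"/></pair>'
                '</pair></call>' % (sentence, state_id))

call('<call val="Init"><option val="none"/></call>')  # answers with state_id 1
add("Set Printing All.", 1)                    # avoid notations, as suggested
add("Lemma demo : forall n : nat, n = n.", 2)  # assumes states number 1, 2, 3, ...
print(call('<call val="Goal"><unit/></call>'))  # current goals, as XML

A real client would parse the stream incrementally with an XML parser and handle the <feedback> messages coqtop interleaves with its answers; the read-until-</value> loop above is only for illustration.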
On Sat, Mar 26, 2016 at 5:40 PM, Hani Awni <hani.awni AT gmail.com> wrote:
Hello,

What would be the best way to get the current proof state at the point in interactive proving where Coq is expecting the human to suggest the next tactic? For example, I need the information behind Proof General's display of the list of goals while it waits for the user to input "intros", "trivial", etc.

For background, I am a cognitive scientist and budding computer scientist working in Dr. Hummel's Abstract Relational Reasoning cognitive science lab at UIUC, where we have a Python model (called LISA) of how humans solve relational analogies. The idea is that, during an attempt at using Coq to prove a human-specified theorem, our model should be able to recognize the abstract, structural similarity of the current goals and proof state to previous examples it has seen, and consequently suggest the appropriate tactic string, including specific mappings such as which variable to apply over.

For this to happen, I must automatically translate the proof state given by Coq into a series of propositions for LISA to understand. I can design this translation (see the sketch after this message), but the first step is getting my hands on Coq's current goals list, whether in OCaml, Lisp, or Python.

On the #coq IRC channel, it was suggested that I implement a new tactic via a Coq plugin in OCaml, because there I could acquire the proof state; however, a tactic specifies the new proof state, whereas our model can only provide the tactic to use. Can I recursively attempt a new tactic in place of my current one?

This seems like a basic need, so I feel I have overlooked something very basic in my searching; feel free to point out the obvious approaches first.

Thank you,
Hani Awni
gregory malecha
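As a purely illustrative sketch of the translation step Hani describes: assuming the hypotheses of a goal arrive as "name : type" strings and the conclusion as a plain string (roughly what the XML protocol's <goal> elements carry), one could flatten a goal into simple triples. The (role, name, content) format here is invented for illustration; LISA's real input format will differ.

def goal_to_propositions(hypotheses, conclusion):
    # Flatten one Coq goal into (role, name, content) triples.
    # Both the input shape and the triple format are assumptions.
    props = []
    for hyp in hypotheses:
        name, _, ty = hyp.partition(" : ")
        props.append(("hypothesis", name.strip(), ty.strip()))
    props.append(("conclusion", "goal", conclusion.strip()))
    return props

# Example: the goal left after `intros n` in `Lemma demo : forall n : nat, n = n.`
print(goal_to_propositions(["n : nat"], "n = n"))
# [('hypothesis', 'n', 'nat'), ('conclusion', 'goal', 'n = n')]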
Chronological Thread
- [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously, Hani Awni, 03/27/2016
- Re: [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously, Gregory Malecha, 03/27/2016
- Re: [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously, Clément Pit--Claudel, 03/28/2016
- Re: [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously, Emilio Jesús Gallego Arias, 03/28/2016
- Re: [Coq-Club] Using a Model of Human Analogical Reasoning to drive Coq fully autonomously, Gregory Malecha, 03/27/2016