Learning2Reason
What?
This is an NWO-funded project, started in 2011, to develop novel machine learning algorithms suitable for formal, computer-assisted, and automated reasoning in the presence of large amounts of previous knowledge.
Who?
The project is carried out at the Intelligent Systems section at RU Nijmegen. Both the Machine Learning group and the Foundations group participate in the project, each bringing its own expertise. The current contributors are:
Background to the Research
In recent years, large corpora of formally expressed knowledge have become available in the fields of formal mathematics, software verification, and real-world ontologies. Examples of such formal mathematical libraries are the Mizar Mathematical Library, the Constructive Coq Repository at Nijmegen, the Flyspeck project, the Archive of Formal Proofs, and the Mathematical Components project. Real-world ontologies include, for example, the Cyc Knowledge Base, the Suggested Upper Merged Ontology (SUMO), and the YAGO semantic knowledge base. To derive new knowledge from such large corpora, one uses computer-assisted and automated reasoning methods, which typically use one-problem-at-a-time approaches (e.g., resolution and tableaux proving). This symbolic (or deductive) approach typically does not consider the knowledge contained in previous proofs, and it suffers from a fast-growing search space.
This makes computer-assisted reasoning a suitable domain for complementary heuristic (or inductive) AI approaches, like machine learning, which rely on learning from the large amount of previous knowledge to heuristically control the search space, e.g., by estimating the usefulness of existing lemmas for proving a new result.
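As a concrete illustration of this lemma-relevance idea, here is a minimal sketch of premise selection as a learning problem. It is not the project's implementation: the feature extraction (treating a formula as a bag of symbols), the naive Bayes scoring, and all names in the code are simplifying assumptions made for illustration.

    from collections import defaultdict
    from math import log

    def symbols(formula):
        # Toy feature extraction: whitespace-separated tokens stand in for the
        # symbols of a formula; a real system would parse terms properly.
        return set(formula.split())

    class NaiveBayesPremiseSelector:
        """Scores known lemmas by their estimated usefulness for a new conjecture,
        learned from (theorem statement, lemmas used in its proof) pairs."""

        def __init__(self):
            self.lemma_uses = defaultdict(int)                    # lemma -> number of proofs using it
            self.cooccur = defaultdict(lambda: defaultdict(int))  # lemma -> symbol -> co-occurrence count
            self.n_proofs = 0

        def train(self, proofs):
            for theorem, premises in proofs:
                self.n_proofs += 1
                feats = symbols(theorem)
                for lemma in premises:
                    self.lemma_uses[lemma] += 1
                    for s in feats:
                        self.cooccur[lemma][s] += 1

        def rank(self, conjecture):
            feats = symbols(conjecture)
            scores = {}
            for lemma, uses in self.lemma_uses.items():
                score = log(uses / self.n_proofs)                 # prior: overall usefulness of the lemma
                for s in feats:                                   # likelihood of each conjecture symbol
                    score += log((self.cooccur[lemma][s] + 1) / (uses + 2))
                scores[lemma] = score
            return sorted(scores, key=scores.get, reverse=True)

    # Usage: train on previously proved theorems, then rank lemmas for a new conjecture.
    selector = NaiveBayesPremiseSelector()
    selector.train([("group inverse unique", ["GROUP_ASSOC", "GROUP_ID"]),
                    ("group identity unique", ["GROUP_ID"])])
    print(selector.rank("inverse of identity")[:2])

The ranked list can then be cut off at a fixed size and handed to an ATP as the set of axioms to try.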
The Basis
The starting points for the project are several datasets extracted from the Mizar Mathematical Library and existing state-of-the-art machine learning and automated reasoning methods. We evaluate these methods on the datasets, adapt them to the specifics of the mathematical domains, and combine them in novel metasystems like the Machine Learner for Automated Reasoning (MaLARea). We are also interested in obtaining mathematical datasets from other systems and in experimenting with different learning and reasoning tools.
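The basic feedback loop of such a metasystem can be sketched as follows. This is a deliberate simplification, not MaLARea's actual code: the make_selector factory, the run_atp function, and the limits are assumptions for illustration. In each round the premise selector is retrained on all proofs found so far, the highest-ranked premises are handed to an ATP for each still-unsolved problem, and any newly found proofs are added to the training data.

    def learning_reasoning_loop(problems, make_selector, run_atp, rounds=5, n_premises=40):
        # problems:      dict name -> (conjecture, list of available premise names)
        # make_selector: factory returning a fresh premise selector with train() and rank()
        # run_atp:       function (conjecture, premises) -> list of premises used in the proof, or None
        solved = {}                                   # name -> premises used in the found proof
        for _ in range(rounds):
            # Retrain the selector from scratch on everything proved so far.
            selector = make_selector()
            selector.train((problems[name][0], used) for name, used in solved.items())
            progress = False
            for name, (conjecture, available) in problems.items():
                if name in solved:
                    continue
                # Give the prover only the highest-ranked premises to keep its search space small.
                ranked = [p for p in selector.rank(conjecture) if p in available]
                proof = run_atp(conjecture, ranked[:n_premises] or available[:n_premises])
                if proof is not None:
                    solved[name] = proof
                    progress = True
            if not progress:                          # stop once a round adds no new proofs
                break
        return solved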
The Tasks
Our focus so far has been on the following tasks:
Developing suitable (structural, semantic) kernels for classification in the formal mathematical domain (a toy kernel sketch follows this list)
Formulating the learning task as an instance of different paradigms (binary or multi-class classification, (multi-output) ranking)
Developing multi-output rankers for the domain
Defining and creating suitable representations of the mathematical data, and obtaining precise datasets and benchmarks for learning from mathematical libraries
Defining proper machine learning evaluation metrics
Plugging the newly developed machine learning tools into reasoning metasystems like the Machine Learner for Automated Reasoning
Evaluating the combined learning/deductive systems on standard large-theory benchmarks like the MPTP Challenge
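As a toy illustration of the first task, the kernel below measures the similarity of two formulas as the normalized overlap of their symbol multisets. The project's actual kernels are more refined (see the SDM 2011 paper below); the tokenization and the example formulas here are assumptions made only to show the shape of such a kernel.

    from collections import Counter
    from math import sqrt

    def symbol_counts(formula):
        # Toy structural features: counts of whitespace-separated tokens; a real
        # kernel would walk the term tree or use semantic features.
        return Counter(formula.split())

    def formula_kernel(f, g):
        # Normalized histogram-intersection kernel on symbol multisets:
        # 1.0 for identical feature vectors, 0.0 for formulas sharing no symbols.
        cf, cg = symbol_counts(f), symbol_counts(g)
        overlap = sum(min(cf[s], cg[s]) for s in cf.keys() & cg.keys())
        norm = sqrt(sum(cf.values()) * sum(cg.values()))
        return overlap / norm if norm else 0.0

    # The resulting Gram matrix can be fed to any kernel method (SVM, kernel ridge, ...).
    formulas = ["! x : x * inv x = e", "! x y : x * y = y * x"]
    gram = [[formula_kernel(a, b) for b in formulas] for a in formulas]
    print(gram)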
The Results
The prototypes, systems, evaluations, and software developed so far include:
A kernel-based multi-output ranking algorithm that outperforms the SNoW system (in naive Bayes mode) and the APRILS system on our test datasets; a paper describing the experiments has been submitted. Graphs showing an AUC comparison between our algorithms, APRILS, and SNoW can be found here and here. A further graph comparing the performance of MaLARea with SNoW against the performance of MaLARea with our algorithm can be found here. A simplified sketch of the multi-output ranking approach appears after this list.
E-MaLeS: Machine Learning of Strategies for the E prover. We use machine learning methods to find well-performing search strategies for the E ATP system.
The MPTP2078 benchmark of 2078 related large-theory problems extending the earlier MPTP Challenge benchmark.
A detailed analysis of MML proof dependencies (with Jesse Alama), their use for training SNoW on all MML proofs, and a comparison of human MML proofs with proofs found by ATPs using the trained advice.
Machine Learning Connection Prover (MaLeCoP, with Jiri Vyskocil): a connection tableaux prover that uses machine learning from a large body of solved problems and successful proof decisions to guide the internal ATP proof search process.
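The multi-output ranking idea behind the first result above can be sketched as follows: each training theorem is a feature vector, the target is a 0/1 vector over all known premises marking which of them its proof used, and kernel ridge regression maps the features of a new conjecture to one relevance score per premise, which is then sorted into a ranking. This is an illustrative sketch rather than the submitted system; the RBF kernel, the regularization constant, and the toy data are assumptions.

    import numpy as np

    def rbf_kernel(A, B, gamma=0.5):
        # Gaussian kernel between the rows of A and the rows of B.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    class KernelMultiOutputRanker:
        """Kernel ridge regression with one output per premise; ranking a conjecture
        means sorting the premises by their predicted relevance scores."""

        def __init__(self, lam=1e-2, gamma=0.5):
            self.lam, self.gamma = lam, gamma

        def fit(self, X, Y):
            # X: (n_theorems, n_features) features, Y: (n_theorems, n_premises) 0/1 usage matrix.
            self.X = np.asarray(X, float)
            K = rbf_kernel(self.X, self.X, self.gamma)
            self.alpha = np.linalg.solve(K + self.lam * np.eye(len(K)), np.asarray(Y, float))
            return self

        def rank(self, x):
            # Premise indices sorted from most to least relevant for conjecture features x.
            k = rbf_kernel(np.asarray([x], float), self.X, self.gamma)
            scores = (k @ self.alpha)[0]
            return np.argsort(-scores)

    # Tiny usage example with made-up symbol-count features and premise-usage labels.
    X = [[2, 0, 1], [0, 3, 1], [1, 1, 0]]
    Y = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
    print(KernelMultiOutputRanker().fit(X, Y).rank([2, 1, 1]))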
Publications
E. Tsivtsivadze, J. Urban, H. Geuvers, T. Heskes: Semantic Graph Kernels for Automated Reasoning. SDM 2011: 795–803, SIAM / Omnipress, 2011.
D. Kühlwein, J. Urban, E. Tsivtsivadze, H. Geuvers, T. Heskes: Learning2Reason. Calculemus/MKM 2011: 298–300, 2011.
J. Alama, D. Kühlwein, J. Urban: Automated and Human Proofs in General Mathematics: An Initial Comparison. LPAR 2012, LNCS 7180: 37–45, 2012.
J. Urban, J. Vyskocil, P. Stepanek: MaLeCoP: Machine Learning Connection Prover. TABLEAUX 2011, LNCS 6793: 263–277, 2011.
J. Urban: An Overview of Methods for Large-Theory Automated Theorem Proving (Invited Paper). ATE 2011, CEUR 760: 3–8, 2011.
D. Kühlwein, J. Urban, E. Tsivtsivadze, H. Geuvers, T. Heskes: Multi-Output Ranking for Automated Reasoning. KDIR 2011.