
We present a modular proof of strong normalization for the Calculus of Constructions of Coquand and Huet (1985, 1988). This result was first proved by Coquand (1986), but our proof is more perspicuous. The method consists of a little juggling with some systems in the cube of Barendregt (1989), which provides a fine structure of the Calculus of Constructions. It is proved that the strong normalization of the Calculus of Constructions is equivalent to the strong normalization of Fω. In order to give the proof, we first establish some properties of various type systems. To this end, we present a general framework of typed lambda calculi, including many well-known ones.
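To make the shape of the result concrete (notation ours, not the paper's): writing $\lambda C$ for the Calculus of Constructions and $F\omega$ for the higher-order polymorphic lambda calculus in the cube, the theorem proved is

    \[ \mathrm{SN}(\lambda C) \iff \mathrm{SN}(F\omega) \]

where, plausibly, the nontrivial direction rests on a reduction-preserving translation of $\lambda C$ into $F\omega$, so that an infinite reduction sequence in $\lambda C$ would induce one in $F\omega$.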
In this paper we present the algebraic-λ-cube, an extension of Barendregt's λ-cube with first- and higher-order algebraic rewriting. We show that strong normalization is a modular property of all the systems in the algebraic-λ-cube, provided that the first-order rewrite rules are non-duplicating and the higher-order rules satisfy the general schema of Jouannaud and Okada. We also prove that local confluence is a modular property of all the systems in the algebraic-λ-cube, provided that the higher-order rules do not introduce critical pairs. This property and the strong normalization result imply the modularity of confluence.
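A brief illustration of the terminology (example ours, not from the paper): a first-order rewrite rule is non-duplicating when no variable occurs more often on the right-hand side than on the left. Thus

    \[ x + 0 \to x \quad \text{(non-duplicating)} \qquad d(x) \to x + x \quad \text{(duplicating)} \]

Duplicating rules such as $d(x) \to x + x$ are the classical obstruction to modular termination already in plain first-order rewriting, which is why the strong normalization result above excludes them.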
We study different formalizations of finite sets in homotopy type theory to obtain a general definition that exhibits both the computational facilities and the proof principles expected from finite sets. We use higher inductive types to define the type K(A) of "finite sets over type A" à la Kuratowski without assuming that K(A) has decidable equality. We show how to define basic functions and prove basic properties, after which we give two applications of our definition.
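As a hedged sketch (constructor names ours), the higher inductive type K(A) can be presented with point constructors for the empty set, singletons, and union, together with path constructors imposing the expected set laws:

    \[ \emptyset : K(A), \qquad \{\cdot\} : A \to K(A), \qquad \cup : K(A) \to K(A) \to K(A) \]
    \[ x \cup (y \cup z) = (x \cup y) \cup z, \quad x \cup y = y \cup x, \quad \emptyset \cup x = x, \quad \{a\} \cup \{a\} = \{a\} \]

plus a truncation constructor making K(A) a set. Crucially, no decidable equality on A is needed to form K(A).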
We present an approach to type theory in which the typing judgments do not have explicit contexts. Instead of judgments of shape "Gamma |- A : B", our systems just have judgments of shape "A : B". A key feature is that we distinguish free and bound variables even in pseudo-terms. Specifically, we give the rules of the "Pure Type System" class of type theories in this style. We prove that the typing judgments of these systems correspond in a natural way with those of Pure Type Systems as traditionally formulated; that is, our systems have exactly the same well-typed terms as traditional presentations of type theory. Our system can be seen as a type theory in which all type judgments share an identical, infinite typing context that has infinitely many variables for each possible type. For this reason we call our system "Gamma_infinity". This name is meant to suggest that our type judgment "A : B" should be read as "Gamma_infinity |- A : B", with a fixed infinite type context called "Gamma_infinity".
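A minimal illustration of the judgment style (rule format assumed by us): the usual application rule simply drops its context, with free variables in pseudo-terms carrying enough information to recover their types:

    \[
    \frac{\Gamma \vdash F : \Pi x{:}A.\,B \qquad \Gamma \vdash a : A}{\Gamma \vdash F\,a : B[x := a]}
    \qquad\rightsquigarrow\qquad
    \frac{F : \Pi x{:}A.\,B \qquad a : A}{F\,a : B[x := a]}
    \]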
A bisimulation for a coalgebra of a functor on the category of sets can be described via a coalgebra, in the category of relations, of a lifted functor. A final coalgebra then gives rise to the coinduction principle, which states that two bisimilar elements are equal. For polynomial functors, this leads to well-known descriptions. In the present paper we look at the dual notion of "apartness". Intuitively, two elements are apart if there is a positive way to distinguish them. Phrased differently: two elements are apart if and only if they are not bisimilar. Since apartness is an inductive notion, described by a least fixed point, we can give a proof system to derive that two elements are apart: the proof system has derivation rules, and two elements are apart if and only if there is a finite derivation (using the rules) of this fact. We study apartness versus bisimulation in two separate ways. First, for weak forms of bisimulation on labelled transition systems, where silent (tau) steps are included, we define an apartness notion that corresponds to weak bisimulation and another apartness that corresponds to branching bisimulation. The rules for apartness can be used to show that two states of a labelled transition system are not branching bisimilar. To support the apartness view on labelled transition systems, we cast a number of well-known properties of branching bisimulation in terms of branching apartness and prove them. Next, we also study the more general categorical situation and show that apartness is indeed the dual of bisimilarity in a precise categorical sense: apartness is an initial algebra and gives rise to an induction principle. In this analogy, we include the powerset functor, which gives a semantics to non-deterministic choice in process theory.
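As a concrete instance (standard stream notation, not taken verbatim from the paper): for streams with head hd and tail tl, apartness # is the least relation closed under the two rules

    \[
    \frac{\mathsf{hd}(s) \neq \mathsf{hd}(t)}{s \mathrel{\#} t}
    \qquad
    \frac{\mathsf{tl}(s) \mathrel{\#} \mathsf{tl}(t)}{s \mathrel{\#} t}
    \]

so a finite derivation of $s \mathrel{\#} t$ pinpoints a position at which the two streams actually differ, whereas bisimilarity is the greatest relation refuting all such derivations.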
We present a system that utilizes machine learning for tactic proof search in the Coq Proof Assistant. In a similar vein to the TacticToe project for HOL4, our system predicts appropriate tactics and finds proofs in the form of tactic scripts. To do this, it learns from previous tactic scripts and how they are applied to proof states. The performance of the system is evaluated on the Coq Standard Library. Currently, our predictor can identify the correct tactic to be applied to a proof state 23.4% of the time. Our proof searcher can fully automatically prove 39.3% of the lemmas. When combined with the CoqHammer system, the two systems together prove 56.7% of the library's lemmas.
The Agora system is a prototype "Wiki for Formal Mathematics", with an aim to support developing and documenting large formalizations of mathematics in a proof assistant. The functions implemented in Agora include in-browser editing, strong AI/ATP proof advice, verification, and HTML rendering. The HTML rendering contains hyperlinks and provides on-demand explanation of the proof state for each proof step. In the present paper we show the prototype Flyspeck Wiki as an instance of Agora for HOL Light formalizations. The wiki can be used for formalizations of mathematics and for writing informal wiki pages about mathematics. Such informal pages may contain islands of formal text, which is used here to provide an initial cross-linking between Hales's informal Flyspeck book and the formal Flyspeck development. The Agora platform intends to address distributed wiki-style collaboration on large formalization projects, in particular both the aspect of immediate editing, verification and rendering of formal code, and the aspect of gradual and mutual refactoring and correspondence of the initial informal text and its formalization. Here, we highlight these features within the Flyspeck Wiki.
We develop a dependent type theory that is based purely on inductive and coinductive types, and the corresponding recursion and corecursion principles. This results in a type theory with a small set of rules, while still being fairly expressive. For example, all well-known basic types and type formers that are needed for using this type theory as a logic are definable: propositional connectives like falsity, conjunction and disjunction; function space, dependent function space, existential quantification, equality, natural numbers, vectors, etc. The reduction relation on terms consists solely of a rule for recursion and a rule for corecursion; the reduction relations for well-known types arise from these. To further support the introduction of this new type theory, we also prove fundamental properties of its term calculus. Most importantly, we prove subject reduction and strong normalisation of the reduction relation, which gives computational meaning to the terms.
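A small sketch of the definability claim, transcribed into Haskell (our rendering; the paper's own calculus and syntax differ): each connective is just an inductive type with the appropriate constructors, and its behaviour falls out of the recursion principle.

    {-# LANGUAGE EmptyDataDecls #-}

    data False'                         -- empty inductive type: falsity has no proof
    data And a b  = Pair a b            -- one constructor, two premises: conjunction
    data Or a b   = InL a | InR b       -- two constructors: disjunction
    data Nat      = Zero | Succ Nat     -- an ordinary inductive data type

    -- Recursion principle for Nat, the only computation rule needed:
    recNat :: r -> (r -> r) -> Nat -> r
    recNat z _ Zero     = z
    recNat z s (Succ n) = s (recNat z s n)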
We present a proof of strong normalization of proof-reduction in a general system of natural deduction called truth table natural deduction. In previous work, we have defined truth table natural deduction, a method for deriving intuitionistic derivation rules for a connective from its truth table. This yields natural deduction rules for each connective separately. Moreover, these rules adhere to a standard format which gives rise to general notions of detour conversion and permutation conversion for natural deductions. The aim is to remove all convertibilities and obtain a deduction in normal form. In general, conversion of truth table natural deductions is non-deterministic, which makes it more challenging to study. It has already been shown that this conversion is weakly normalizing. To prove strong normalization, we construct a conversion-preserving translation from deductions to terms in an extension of simply typed lambda calculus.
Programs with control are usually modeled using lambda calculus extended with control operators. Instead of modifying lambda calculus, we consider a different model of computation. We introduce continuation calculus, or CC, a deterministic model of computation that is evaluated using only head reduction, and argue that it is suitable for modeling programs with control. It is demonstrated how to define programs, specify them, and prove them correct. This is shown in detail by presenting in CC a list multiplication program that prematurely returns when it encounters a zero. The correctness proof includes termination of the program. In continuation calculus we can model both call-by-name and call-by-value. In addition, call-by-name functions can be applied to call-by-value results, and conversely.
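The following Haskell miniature (our simplification, not the paper's definition) conveys the flavour of continuation calculus: terms are names applied to arguments, there is no binding inside terms, and evaluation is deterministic head reduction driven by named rules of fixed arity.

    import qualified Data.Map as M

    data Term = Name String | App Term Term deriving (Eq, Show)

    -- A rule  n.x1...xk -> body  is stored as its arity k and a body builder.
    type Rules = M.Map String (Int, [Term] -> Term)

    -- Decompose a term into its head name and argument list.
    spine :: Term -> (String, [Term])
    spine (Name n)  = (n, [])
    spine (App f a) = let (n, as) = spine f in (n, as ++ [a])

    -- One head-reduction step: a rule fires only at exact arity.
    step :: Rules -> Term -> Maybe Term
    step rules t =
      let (n, as) = spine t
      in case M.lookup n rules of
           Just (k, body) | length as == k -> Just (body as)
           _                               -> Nothing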
The `mathematical language' Automath, conceived by N.G. de Bruijn in 1968, was the first actually working theorem prover and was used for checking many specimina of mathematical content. Its goals and syntactic ideas inspired Th. Coquand and G. Huet to develop the Calculus of Constructions, CC, which was one of the first widely used interactive theorem provers and forms the basis for the widely used Coq system. The original syntax of Automath is not easy to grasp. Yet, it is essentially based on a derivation system that is similar to the Calculus of Constructions. The relation between the Automath syntax and CC has not yet been sufficiently described, although there are many references in the type theory community to Automath. In this paper we focus on the backgrounds and on some uncommon aspects of the syntax of Automath. We expose the fundamental aspects of a `generic' Automath system, encapsulating the most common versions of Automath, and present this generic Automath system in a modern syntactic frame. The obtained system makes use of λD, a direct extension of CC with definitions.
We present twenty-five C programs as a benchmark for C program verification using formal methods. This benchmark can be used for system demonstration, for comparison of verification effort between systems, and as a friendly competition. For this last purpose, we give a scoring formula that allows a verification system to score up to a hundred points.
The goal of this project is to (i) accumulate annotated informal/formal mathematical corpora suitable for training semi-automated translation between informal and formal mathematics by statistical machine-translation methods, (ii) develop such methods oriented at the formalization task, and in particular (iii) combine such methods with learning-assisted automated reasoning that will serve as a strong semantic component. We describe these ideas, the initial set of corpora, and some initial experiments done over them.
Labelled transition systems can be studied in terms of modal logic and in terms of bisimulation. These two notions are connected by Hennessy-Milner theorems, which show that two states are bisimilar precisely when they satisfy the same modal logic formulas. Recently, apartness has been studied as a dual to bisimulation, which also gives rise to a dual version of the Hennessy-Milner theorem: two states are apart precisely when there is a modal formula that distinguishes them. In this paper, we introduce "directed" versions of Hennessy-Milner theorems that characterize when the theory of one state is included in that of another. For this we introduce "positive modal logics" that only allow a limited use of negation. Furthermore, we introduce directed notions of bisimulation and apartness, and then show that, for this positive modal logic, the theory of $s$ is included in the theory of $t$ precisely when $s$ is directed bisimilar to $t$. Or, in terms of apartness: $s$ is directed apart from $t$ precisely when the theory of $s$ is not included in the theory of $t$. From the directed version of the Hennessy-Milner theorem, the original result follows. In particular, we study the case of branching bisimulation with Hennessy-Milner Logic with Until (HMLU) as the modal logic. We introduce "directed branching bisimulation" (and directed branching apartness) and "Positive Hennessy-Milner Logic with Until" (PHMLU), and we show the directed version of the Hennessy-Milner theorems. In the process, we show that every HMLU formula is equivalent to a Boolean combination of Positive HMLU formulas, which is a highly non-trivial result. This gives rise to a sublogic of HMLU that is equally expressive but easier to reason about.
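One plausible shape of such a positive fragment (grammar ours, for illustration only): negation is dropped from the syntax, so formulas can only be preserved, never flipped, along the directed relation:

    \[ \varphi ::= \top \mid \bot \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \langle a \rangle\varphi \mid [a]\varphi \]

Writing $\mathrm{Th}(s)$ for the set of positive formulas satisfied by $s$, the directed Hennessy-Milner theorem then takes the form: $\mathrm{Th}(s) \subseteq \mathrm{Th}(t)$ iff $s$ is directed bisimilar to $t$.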
The notion of $\alpha$-equivalence between $\lambda$-terms is commonly used to identify terms that are considered equal. However, due to the primitive treatment of free variables, this notion falls short when comparing subterms occurring within a larger context. Depending on the usage of the Barendregt convention (choosing different variable names for all involved binders), it will equate either too few or too many subterms. We introduce a formal notion of context-sensitive $\alpha$-equivalence, where two open terms can be compared within a context that resolves their free variables. We show that this equivalence coincides exactly with the notion of bisimulation equivalence. Furthermore, we present an efficient $O(n\log n)$ runtime hashing scheme that identifies $\lambda$-terms modulo context-sensitive $\alpha$-equivalence, generalizing over traditional bisimulation partitioning algorithms and improving upon a previously established $O(n\log^2 n)$ bound for a hashing modulo ordinary $\alpha$-equivalence by Maziarz et al. Hashing $\lambda$-terms is useful in many applications that require common subterm elimination and structure sharing. We have employed the algorithm to obtain a large-scale, densely packed, interconnected graph of mathematical knowledge from the Coq proof assistant for machine learning purposes.
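As a baseline illustrating what "hashing modulo $\alpha$-equivalence" means (this is not the paper's $O(n\log n)$ algorithm, and it only handles ordinary $\alpha$-equivalence of closed terms): convert to de Bruijn indices, so $\alpha$-equivalent terms become syntactically identical, then hash the result.

    import qualified Data.Map as M
    import Data.Hashable (hash)   -- assumes the 'hashable' package

    data Lam = Var String | App Lam Lam | Abs String Lam

    data DB = V Int | A DB DB | L DB deriving (Eq, Show)

    -- env maps a bound name to the depth of its binder; d is the current depth.
    toDB :: M.Map String Int -> Int -> Lam -> DB
    toDB env d (Var x)   = case M.lookup x env of
                             Just lvl -> V (d - lvl - 1)
                             Nothing  -> error "free variable: baseline is for closed terms"
    toDB env d (App f a) = A (toDB env d f) (toDB env d a)
    toDB env d (Abs x b) = L (toDB (M.insert x d env) (d + 1) b)

    -- Alpha-respecting hash: alpha-equivalent closed terms get equal hashes.
    hashAlpha :: Lam -> Int
    hashAlpha = hash . show . toDB M.empty 0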
In Section 8.7 we considered a well-known theorem from number theory, and in Section 8.8 we gave a mathematical proof of it. We now revisit this theorem and its proof, which are reproduced below, and translate them into the formal λD-format.
In the previous chapters we have become acquainted with the use of λD for doing mathematics, by selecting a few examples and investigating the issues that we came across.
Continuation Calculus (CC), introduced by Geron and Geuvers, is a simple foundational model for functional computation. It is closely related to lambda calculus and term rewriting, but it has no variable binding and no pattern matching. It is Turing complete, and evaluation is deterministic. Notions like "call-by-value" and "call-by-name" computation are available by choosing appropriate function definitions: e.g. there is a call-by-value and a call-by-name addition function. In the present paper we extend CC with types, to be able to define data types in a canonical way, and functions over these data types, defined by iteration. Data type definitions follow the so-called "Scott encoding" of data, as opposed to the more familiar "Church encoding". The iteration scheme comes in two flavors: a call-by-value and a call-by-name iteration scheme. The call-by-value variant is a double negation variant of call-by-name iteration, and the double negation translation makes it possible to move between call-by-name and call-by-value.
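To contrast the two encodings mentioned above, here is a Haskell rendering of Scott-encoded naturals (ours; CC itself has no binders, so this is only an analogy): a Scott numeral is its own case distinction, returning either the zero branch or its predecessor, and iteration is a separately defined function rather than being built into the numeral, as it is in the Church encoding.

    {-# LANGUAGE RankNTypes #-}

    newtype Scott = Scott (forall r. r -> (Scott -> r) -> r)

    zeroS :: Scott
    zeroS = Scott (\z _ -> z)

    succS :: Scott -> Scott
    succS n = Scott (\_ s -> s n)

    -- Iteration, added on top of the encoding (call-by-name flavour):
    iterS :: r -> (r -> r) -> Scott -> r
    iterS z s (Scott cases) = cases z (\p -> s (iterS z s p))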
In the ‘real world’ of logical and mathematical texts, definitions are indispensable. This is particularly the case when the amount of ‘knowledge’ begins to grow.
To improve on existing models of interaction with a proof assistant (PA), in particular for storage and replay of proofs, we introduce three related concepts: a proof movie, consisting of frames which record both user input and the corresponding PA response; a camera, which films a user's interactive session with a PA as a movie; and a proviola, which replays a movie frame-by-frame to a third party. In this paper we describe the movie data structure and we discuss a prototype implementation of the camera and proviola based on the ProofWeb system. ProofWeb uncouples the interaction with a PA via a web-interface (the client) from the actual PA that resides on the server. Our camera films a movie by "listening" to the ProofWeb communication. The first reason for developing movies is to uncouple the reviewing of a formal proof from the PA used to develop it: the movie concept enables users to discuss small code fragments without the need to install the PA or to load a whole library into it. Other advantages include the possibility to develop a separate commentary track to discuss or explain the PA interaction. We assert that a combined camera+proviola provides a generic layer between a client (user) and a server (PA). Finally, we claim that movies are the right type of data to be stored in an encyclopedia of formalized mathematics, based on our experience in filming the Coq standard library.
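A minimal sketch of the movie data structure in Haskell (field names ours, not from the paper): a movie is a list of frames, and replaying requires no proof assistant at all.

    -- A frame pairs what the user sent with what the PA answered.
    data Frame = Frame
      { userInput  :: String   -- tactic or command sent to the proof assistant
      , paResponse :: String   -- resulting proof state or error message
      }

    type Movie = [Frame]

    -- The proviola: replay a movie frame by frame, PA not needed.
    replay :: Movie -> IO ()
    replay = mapM_ (\f -> putStrLn (userInput f) >> putStrLn (paResponse f))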
As a present to Mizar on its 40th anniversary, we develop an AI/ATP system that, in 30 seconds of real time on a 14-CPU machine, automatically proves 40% of the theorems in the latest official version of the Mizar Mathematical Library (MML). This is a considerable improvement over the previous performance of large-theory AI/ATP methods measured on the whole MML. To achieve this, a large suite of AI/ATP methods is employed and further developed. We implement the most useful methods efficiently, to scale them to the 150,000 formulas in the MML. This reduces the training times over the corpus to 1-3 seconds, allowing a simple practical deployment of the methods in the online automated reasoning service for Mizar users (MizAR).
The λ-calculus is a handy formalism to specify the evaluation of higher-order programs. It is not very handy, however, when one interprets the specification as an execution mechanism, because terms can grow exponentially with the number of β-steps. This is why implementations of functional languages and proof assistants always rely on some form of sharing of subterms.
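To see the size problem concretely, here is a small Haskell illustration (ours, not from the paper): a term built by n duplication steps has an unfolded form of size exponential in n, while a shared representation stays linear.

```haskell
-- Plain de Bruijn lambda terms, with a size function.
data Term = V Int | L Term | A Term Term deriving Show

size :: Term -> Int
size (V _)   = 1
size (L t)   = 1 + size t
size (A s t) = 1 + size s + size t

-- The unfolded result of n duplication steps (think of beta-reducing
-- (\x. pair x x) applied n times, with V 1 standing in for "pair"):
-- each step copies the whole term, so sizes grow like 2^n.
unfolded :: Int -> Term
unfolded 0 = V 0
unfolded n = let t = unfolded (n - 1) in A (A (V 1) t) t

main :: IO ()
main = mapM_ (print . size . unfolded) [0, 4 .. 20]
-- Note that Haskell's own 'let' shares t, so the in-memory value above is
-- really a DAG of linear size -- exactly the kind of subterm sharing that
-- implementations rely on.
```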
We present ML4PG, a machine learning extension for Proof General. It allows users to gather proof statistics related to shapes of goals, sequences of applied tactics, and proof tree structures from the libraries of interactive higher-order proofs written in Coq and SSReflect. The gathered data is clustered using the state-of-the-art machine learning algorithms available in MATLAB and Weka. ML4PG provides automated interfacing between Proof General and MATLAB/Weka. The results of clustering are used by ML4PG to provide proof hints in the process of interactive proof development.
Homotopy type theory is an interpretation of Martin-Löf's constructive type theory into abstract homotopy theory. The result is a link between constructive mathematics and algebraic topology, providing topological semantics for intensional systems of type theory as well as a computational approach to algebraic topology via type theory-based proof assistants such as Coq. The present work investigates inductive types in this setting. Modified rules for inductive types, including types of well-founded trees, or W-types, are presented, and the basic homotopical semantics of such types are determined. Proofs of all results have been formally verified by the Coq proof assistant, and the proof scripts for this verification form an essential component of this research.
The Agora system is a prototypical Wiki for formal mathematics: a web-based system for collaborating on formal mathematics, intended to support informal documentation of formal developments. This system requires a reusable proof editor component, both for collaborative editing of documents and for embedding in the resulting documents. This paper describes the design of Agora's asynchronous editor, which is generic enough to support different tools working on editor content and providing contextual information, with interactive theorem provers being a special, but important, case, described in detail for the Coq theorem prover.
Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of 'maximal compactness' for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics. Lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form; and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term $L$ into maximally compact form $L_0$ proceeds in three steps: (i) translate $L$ into its term graph $G = [[ L ]]$; (ii) compute the maximally shared form of $G$ as its bisimulation collapse $G_0$; (iii) read back a lambda-letrec-term $L_0$ from the term graph $G_0$ with the property $[[ L_0 ]] = G_0$. This guarantees that $L_0$ and $L$ have the same unfolding, and that $L_0$ exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms $L_1$ and $L_2$ are unfolding-equivalent computes their term graph interpretations $[[ L_1 ]]$ and $[[ L_2 ]]$, and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation.
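Step (ii), the bisimulation collapse, can be previewed on plain first-order term graphs. Here is a hedged toy in Haskell (ours; it ignores the binding and scope issues that the paper's translation $[[\cdot]]$ takes care of): compute the coarsest partition of nodes by naive refinement, then merge each class into one node.

```haskell
import qualified Data.Map as Map
import Data.List (nub)

type Node  = Int
type Graph = Map.Map Node (String, [Node])  -- node label, ordered successors

-- Assign each node a class number; two nodes are bisimilar iff they end up
-- in the same class.  Start from "same label", refine by the classes of the
-- successors until stable (naive refinement, fine for a demo).
classesOf :: Graph -> Map.Map Node Int
classesOf g = go (rank (\n -> [fst (g Map.! n)]))
  where
    nodes = Map.keys g
    rank key = Map.fromList [ (n, idx (key n)) | n <- nodes ]
      where keys  = nub (map key nodes)
            idx k = length (takeWhile (/= k) keys)
    go cls | cls' == cls = cls
           | otherwise   = go cls'
      where key n = show (cls Map.! n)
                  : map (show . (cls Map.!)) (snd (g Map.! n))
            cls'  = rank key

-- Merge every class into a single node: the bisimulation collapse.
collapse :: Graph -> Graph
collapse g = Map.fromList
  [ (cls Map.! n, (lbl, map (cls Map.!) sucs))
  | (n, (lbl, sucs)) <- Map.toList g ]
  where cls = classesOf g

-- Two renderings of "letrec x = cons 1 x": nodes 0 and 1 are bisimilar.
demo :: Graph
demo = Map.fromList [(0, ("cons1", [1])), (1, ("cons1", [0]))]
-- collapse demo == Map.fromList [(0, ("cons1", [0]))]
```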
Homotopy Type Theory is a new field of mathematics based on the recently-discovered correspondence between Martin-Löf's constructive type theory and abstract homotopy theory. We have a powerful interplay between these disciplines: we can use geometric intuition to formulate new concepts in type theory and, conversely, use type-theoretic machinery to verify and often simplify existing mathematical proofs.
Abstract We establish a course-of-values induction principle for K-finite sets in intuitionistic type theory. Using this principle, we prove a pigeonhole principle conjectured by Bénabou and Loiseau. We also comment on some variants of this pigeonhole principle.
Recent work on homotopy type theory exploits an exciting new correspondence between Martin-Löf's dependent type theory and the mathematical disciplines of category theory and homotopy theory. The mathematics suggests new principles to add to type theory, while the type theory can be used in novel ways to do computer-checked proofs in a proof assistant. In this paper, we formalize a basic result in algebraic topology, that the fundamental group of the circle is the integers. Our proof illustrates the new features of homotopy type theory, such as higher inductive types and Voevodsky's univalence axiom. It also introduces a new method for calculating the path space of a type, which has proved useful in many other examples.
We present improved partition refinement algorithms for three problems: lexicographic sorting, relational coarsest partition, and double lexical ordering. Our double lexical ordering algorithm uses a new, efficient method for unmerging two sorted sets.
The considerable mathematical knowledge encoded by the Flyspeck project is combined with external automated theorem provers (ATPs) and machine-learning premise selection methods trained on the Flyspeck proofs, producing an AI system capable of proving a wide range of mathematical conjectures automatically. The performance of this architecture is evaluated in a bootstrapping scenario emulating the development of Flyspeck from axioms to the last theorem, each time using only the previous theorems and proofs. It is shown that 39% of the 14,185 theorems could be proved in a push-button mode (without any high-level advice and user interaction) in 30 seconds of real time on a fourteen-CPU workstation. The necessary work involves: (i) an implementation of sound translations of the HOL Light logic to ATP formalisms: untyped first-order, polymorphic typed first-order, and typed higher-order, (ii) export of the dependency information from HOL Light and ATP proofs for the machine learners, and (iii) choice of suitable representations and methods for learning from previous proofs, and their integration as advisors with HOL Light. This work is described and discussed here, and an initial analysis of the body of proofs that were found fully automatically is provided.
At first sight, the argument that F. P. Ramsey gave for (the infinite case of) his famous theorem from 1927 is hopelessly unconstructive. If suitably reformulated, the theorem is true intuitionistically as well as classically: we offer a proof which should convince both the classical and the intuitionistic reader.
The paper introduces the notion of a weak bisimulation for coalgebras whose type is a monad satisfying some extra properties. In the first part of the paper we argue that systems with silent moves should be modelled coalgebraically as coalgebras whose type is a monad. We show that the visible and invisible part of the functor can be handled internally inside a monadic structure. In the second part we introduce the notion of an ordered saturation monad, study its properties, and show that it allows us to present two approaches towards defining weak bisimulation for coalgebras and compare them. We support the framework presented in this paper by two main examples of models: labelled transition systems and simple Segala systems.
In homotopy type theory, we construct the propositional truncation as a colimit, using only non-recursive higher inductive types (HITs). This is a first step towards reducing recursive HITs to non-recursive HITs. This construction gives a characterization of functions from the propositional truncation to an arbitrary type, extending the universal property of the propositional truncation. We have fully formalized all the results in a new proof assistant, Lean.
In constructive mathematics, several nonequivalent notions of finiteness exist. In this paper, we continue the study of Noetherian sets in the dependently typed setting of the Agda programming language. We want to say that a set is Noetherian if, when we are shown elements from it one after another, we will sooner or later have seen some element twice. This idea can be made precise in a number of ways. We explore the properties and connections of some of the possible encodings. In particular, we show that certain implementations imply decidable equality while others do not, and we construct counterexamples in the latter case. Additionally, we explore the relation between Noetherianness and other notions of finiteness.
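One way to make the idea precise, rendered here in Haskell rather than the paper's Agda (our own simplified encoding), is a tree of questions over the trace of elements shown so far:

```haskell
-- "Sooner or later we will have seen some element twice", as a tree of
-- questions over the trace of elements shown so far.
data Noeth a
  = Dup                   -- the trace already contains a repeated element
  | Ask (a -> Noeth a)    -- show one more element, then decide again

-- With decidable equality we can always build this tree by recording the
-- trace and checking each new element against it.  For a finite type every
-- path reaches Dup by pigeonhole; the point of the paper's comparisons is
-- which encodings of "every path reaches Dup" imply decidable equality.
noethTree :: Eq a => Noeth a
noethTree = go []
  where go seen = Ask (\x -> if x `elem` seen then Dup else go (x : seen))
```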
Higher inductive types (HITs) in homotopy type theory are a powerful generalization of inductive types. Not only can they have ordinary constructors to define elements, but also higher constructors to define equalities (paths). We say that a HIT H is non-recursive if its constructors do not quantify over elements or paths in H. The advantage of non-recursive HITs is that their elimination principles are easier to apply than those of general HITs.
We report on the development of the HoTT library, a formalization of homotopy type theory in the Coq proof assistant. It formalizes most of basic homotopy type theory, including univalence, higher inductive types, and significant amounts of synthetic homotopy theory, as well as category theory and modalities. The library has been used as a basis for several independent developments. We discuss the decisions that led to the design of the library, and we comment on the interaction of homotopy type theory with recently introduced features of Coq, such as universe polymorphism and private inductive types.
Abstract Rational sequences are possibly infinite sequences with a finite number of distinct suffixes. In this paper, we present different implementations of rational sequences in Martin–Löf type theory. First, we literally translate the above definition of rational sequence into the language of type theory, i.e., we construct predicates on possibly infinite sequences expressing the finiteness of the set of suffixes. In type theory, there exist several inequivalent notions of finiteness. We consider two of them, listability and Noetherianness, and show that in the implementation of rational sequences the two notions are interchangeable. Then we introduce the type of lists with backpointers, which is an inductive implementation of rational sequences. Lists with backpointers can be unwound into rational sequences, and rational sequences can be truncated into lists with backpointers. As an example, we see how to convert the fractional representation of a rational number into its decimal representation and vice versa.
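The inductive implementation can be sketched in Haskell (our rendering, covering only properly infinite sequences; in the paper's Martin–Löf type theory the invariant below is enforced by the typing):

```haskell
-- A list with a backpointer: a finite prefix whose tail loops back into it.
-- Assumed invariant: 0 <= back < length prefix, so the loop is nonempty.
data RatSeq a = RatSeq { prefix :: [a], back :: Int }

-- Unwind a list with a backpointer into the rational sequence it denotes.
unwind :: RatSeq a -> [a]
unwind (RatSeq pre k) = pre ++ cycle (drop k pre)

-- Example: the decimal expansion 1/6 = 0.1666... is "prefix [1,6], loop at 6".
sixth :: RatSeq Int
sixth = RatSeq [1, 6] 1
-- take 6 (unwind sixth) == [1,6,6,6,6,6]
```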
A higher inductive type of level 1 (a 1-hit) has constructors for points and paths only, whereas a higher inductive type of level 2 (a 2-hit) has constructors for surfaces too. We restrict attention to finitary higher inductive types and present general schemata for the types of their point, path, and surface constructors. We also derive the elimination and equality rules from the types of constructors for 1-hits and 2-hits. Moreover, we construct a groupoid model for dependent type theory with 2-hits and point out that we obtain a setoid model for dependent type theory with 1-hits by truncating the groupoid model.
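Two standard examples (not specific to this paper) locate the levels: the circle $S^1$ is a 1-hit, with constructors $\mathsf{base} : S^1$ and $\mathsf{loop} : \mathsf{base} =_{S^1} \mathsf{base}$; the torus $T^2$ is a 2-hit, with a point $b : T^2$, two paths $p, q : b = b$, and a surface constructor $s : p \cdot q = q \cdot p$ filling the square.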
HOL(y)Hammer is an online AI/ATP service for formal (computer-understandable) mathematics encoded in the HOL Light system. The service allows its users to upload and automatically process an arbitrary formal development (project) based on HOL Light, and to attack arbitrary conjectures that use the concepts defined in some of the uploaded projects. For that, the service uses several automated reasoning systems combined with several premise selection methods trained on all the project proofs. The projects that are readily available on the server for such query answering include the recent versions of the Flyspeck, Multivariate Analysis and Complex Analysis libraries. The service runs on a 48-CPU server, currently employing in parallel for each task 7 AI/ATP combinations and 4 decision procedures that contribute to its overall performance. The system is also available for local installation by interested users, who can customize it for their own proof development. An Emacs interface allowing parallel asynchronous queries to the service is also provided. The overall structure of the service is outlined, problems that arise and their solutions are discussed, and an initial account of using the system is given.
The considerable mathematical knowledge encoded by the Flyspeck project is combined with external automated theorem provers (ATPs) and machine-learning premise selection methods trained on the proofs, producing an AI system capable of answering a wide range of mathematical queries automatically. The performance of this architecture is evaluated in a bootstrapping scenario emulating the development of Flyspeck from axioms to the last theorem, each time using only the previous theorems and proofs. It is shown that 39% of the 14,185 theorems could be proved in a push-button mode (without any high-level advice and user interaction) in 30 seconds of real time on a fourteen-CPU workstation. The necessary work involves: (i) an implementation of sound translations of the HOL Light logic to ATP formalisms: untyped first-order, polymorphic typed first-order, and typed higher-order, (ii) export of the dependency information from HOL Light and ATP proofs for the machine learners, and (iii) choice of suitable representations and methods for learning from previous proofs, and their integration as advisors with HOL Light. This work is described and discussed here, and an initial analysis of the body of proofs that were found fully automatically is provided.
Abstract Lambda calculus is the basis of functional programming and higher order proof assistants. However, little is known about combinatorial properties of lambda terms, in particular, about their asymptotic distribution and random generation. This paper tries to answer questions like: How many terms of a given size are there? What is a 'typical' structure of a simply typable term? Despite their ostensible simplicity, these questions still remain unanswered, whereas solutions to such problems are essential for testing compilers and optimizing programs whose expected efficiency depends on the size of terms. Our approach toward the aforementioned problems may be later extended to any language with bound variables, i.e., with scopes and declarations. This paper presents two complementary approaches: one, theoretical, uses complex analysis and generating functions, the other, experimental, is based on a generator of lambda terms. Thanks to de Bruijn indices (de Bruijn, N. (1972) Lambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the Church-Rosser theorem. Indagat. Math. 34 (5), 381–392), we provide three families of formulas for the number of closed lambda terms of a given size and we give four relations between these numbers which have interesting combinatorial interpretations. As a by-product of the counting formulas, we design an algorithm for generating λ-terms. Performed tests provide us with experimental data, like the average depth of bound variables and the average number of head lambdas. We also create random generators for various sorts of terms. Thereafter, we conduct experiments that answer questions like: What is the ratio of simply typable terms among all terms? (Very small!) How are simply typable lambda terms distributed among all lambda terms? (A typable term almost always starts with an abstraction.) In this paper, abstractions and applications have size 1 and variables have size 0.
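Under the stated size convention, the basic counting recurrence is short enough to run. A hedged Haskell sketch (ours; the paper derives and analyses such formulas via generating functions):

```haskell
-- Number of lambda terms in de Bruijn notation of size n with at most m
-- distinct free indices, where abstractions and applications have size 1
-- and variables size 0.  Exponential without memoization; fine for small n.
count :: Int -> Int -> Integer
count 0 m = fromIntegral m        -- size 0: a variable, one of m indices
count n m = count (n - 1) (m + 1) -- an abstraction binds one more index
          + sum [ count k m * count (n - 1 - k) m | k <- [0 .. n - 1] ]

closed :: Int -> Integer          -- closed terms of size n
closed n = count n 0
-- map closed [1..4] == [1,3,14,82]
```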
Techniques combining machine learning with translation to automated reasoning have recently become an important component of formal proof assistants. Such "hammer" techniques complement traditional proof assistant automation as implemented by tactics and decision procedures. In this paper we present a unified proof assistant automation approach which attempts to automate the selection of appropriate tactics and tactic-sequences combined with an optimized small-scale hammering approach. We implement the technique as a tactic-level automation for HOL4: TacticToe. It implements a modified A*-algorithm directly in HOL4 that explores different tactic-level proof paths, guiding their selection by learning from a large number of previous tactic-level proofs. Unlike the existing hammer methods, TacticToe avoids translation to FOL, working directly on the HOL level. By combining tactic prediction and premise selection, TacticToe is able to re-prove 39 percent of 7902 HOL4 theorems in 5 seconds, whereas the best single HOL(y)Hammer strategy solves 32 percent in the same amount of time.
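At its core this is best-first search over proof states, with learned scores supplying the ordering. A toy Haskell rendering (ours; the real TacticToe uses a modified A* guided by predictions learnt from recorded HOL4 proofs):

```haskell
import Data.List (insertBy)
import Data.Ord  (comparing)

type Goal   = String
type Tactic = Goal -> Maybe [Goal]  -- a tactic turns one goal into subgoals

-- Best-first search over proof states (lists of open goals), ordered by a
-- heuristic cost h; 'fuel' bounds the number of expansions.
bestFirst :: ([Goal] -> Double) -> [Tactic] -> [Goal] -> Int -> Bool
bestFirst h tacs start = go [(h start, start)]
  where
    go ((_, []) : _) _ = True      -- a state with no open goals: proved
    go _ 0             = False     -- expansion budget exhausted
    go [] _            = False     -- frontier empty: search failed
    go ((_, g : gs) : rest) fuel =
        go (foldr (insertBy (comparing fst)) rest next) (fuel - 1)
      where
        next = [ (h st, st) | t <- tacs, Just sub <- [t g], let st = sub ++ gs ]
```

With h = fromIntegral . length, for instance, states with fewer open goals are expanded first.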
In many applications one wants to identify identical subtrees of a program syntax tree. This identification should ideally be robust to alpha-renaming of the program, but no existing technique has been shown to achieve this with good efficiency (better than $O(n^2)$ in expression size). We present a new, asymptotically efficient way to hash modulo alpha-equivalence. A key insight of our method is to use a weak (commutative) hash combiner at exactly one point in the construction, which admits an algorithm with $O(n (\log n)^2)$ time complexity. We prove that the use of the commutative combiner nevertheless yields a strong hash with low collision probability. Numerical benchmarks attest to the asymptotic behaviour of the method.
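For contrast, the textbook baseline (not the paper's algorithm) hashes a de Bruijn rendering of the term. It is robust to alpha-renaming, but identifying all identical subtrees by rerunning it on each subtree costs $O(n^2)$, which is the bound the paper beats:

```haskell
import Data.List (elemIndex)
import Data.Hashable (hash)  -- from the third-party 'hashable' package

data Expr = Var String | Lam String Expr | App Expr Expr

-- Hash a term up to alpha-equivalence: bound variables contribute their
-- de Bruijn index, free variables their name.  Call with env = [].
hashAlpha :: [String] -> Expr -> Int
hashAlpha env (Var x)   = case elemIndex x env of
  Just i  -> hash ("bound", i)
  Nothing -> hash ("free", x)
hashAlpha env (Lam x b) = hash ("lam", hashAlpha (x : env) b)
hashAlpha env (App f a) = hash ("app", hashAlpha env f, hashAlpha env a)
-- hashAlpha [] (Lam "x" (Var "x")) == hashAlpha [] (Lam "y" (Var "y"))
```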
Concepts abound in mathematics and its applications. They vary greatly between subject areas, and new ones are introduced in each mathematical paper or application. A formal theory builds a hierarchy of definitions, theorems and proofs that reference each other. When an AI agent is proving a new theorem, most of the mathematical concepts and lemmas relevant to that theorem may have never been seen during training. This is especially true in the Coq proof assistant, which has a diverse library of Coq projects, each with its own definitions, lemmas, and even custom tactic procedures used to prove those lemmas. It is essential for agents to incorporate such new information into their knowledge base on the fly. We work towards this goal by utilizing a new, large-scale, graph-based dataset for machine learning in Coq. We leverage a faithful graph-representation of Coq terms that induces a directed graph of dependencies between definitions to create a novel graph neural network, Graph2Tac (G2T), that takes into account not only the current goal, but also the entire hierarchy of definitions that led to the current goal. G2T is an online model that is deeply integrated into the users' workflow and can adapt in real time to new Coq projects and their definitions. It complements well with other online models that learn in real time from new proof scripts. Our novel definition embedding task, which is trained to compute representations of mathematical concepts not seen during training, boosts the performance of the neural network to rival state-of-the-art k-nearest neighbor predictors.
The Tactician's Web is a platform offering a large web of strongly interconnected, machine-checked, formal mathematical knowledge conveniently packaged for machine learning, analytics, and proof engineering. Built on top of the Coq proof assistant, the platform exports a dataset containing a wide variety of formal theories, presented as a web of definitions, theorems, proof terms, tactics, and proof states. Theories are encoded both as a semantic graph and as human-readable text, each with a unique set of advantages and disadvantages. Proving agents may interact with Coq through the same rich data representation and can be automatically benchmarked on a set of theorems. Tight integration with Coq provides the unique possibility to make agents available to proof engineers as practical tools.
The aim of this note is to introduce a definition of a finite set and to prove that it is equivalent to the definition given by Wacław Sierpiński.
Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of 'maximal compactness' for λ-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics. λ-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a λ-letrec-term into a maximally compact form; and deciding whether two λ-letrec-terms are unfolding-equivalent. The transformation of a λ-letrec-term $L$ into maximally compact form $L_0$ proceeds in three steps: (i) translate $L$ into its term graph $G = [[L]]$; (ii) compute the maximally shared form of $G$ as its bisimulation collapse $G_0$; (iii) read back a λ-letrec-term $L_0$ from the term graph $G_0$ with the property $[[L_0]] = G_0$. Then $L_0$ represents a maximally shared term graph, and it has the same unfolding as $L$. The procedure for deciding whether two given λ-letrec-terms $L_1$ and $L_2$ are unfolding-equivalent computes their term graph interpretations $[[L_1]]$ and $[[L_2]]$, and checks whether these are bisimilar. For illustration, we also provide a readily usable implementation.
We present guarded dependent type theory, gDTT, an extensional dependent type theory with a 'later' modality and clock quantifiers for programming and proving with guarded recursive and coinductive types. The later modality is used to ensure the productivity of recursive definitions in a modular, type based, way. Clock quantifiers are used for controlled elimination of the later modality and for encoding coinductive types using guarded recursive types. Key to the development of gDTT are novel type and term formers involving what we call 'delayed substitutions'. These generalise the applicative functor rules for the later modality considered in earlier work, and are crucial for programming and proving with dependent types. We show soundness of the type theory with respect to a denotational model.
Automated reasoning and theorem proving have recently become major challenges for machine learning. In other domains, representations that are able to abstract over unimportant transformations, such as abstraction over translations and rotations in vision, are becoming more common. Standard methods of embedding mathematical formulas for learning theorem proving are, however, not yet able to handle many important transformations. In particular, embedding previously unseen labels, which often arise in definitional encodings and in Skolemization, has been very weak so far. Similar problems appear when transferring knowledge between known symbols. We propose a novel encoding of formulas that extends existing graph neural network models. This encoding represents symbols only by nodes in the graph, without giving the network any knowledge of the original labels. We provide additional links between such nodes that allow the network to recover the meaning and therefore correctly embed such nodes irrespective of the given labels. We test the proposed encoding in an automated theorem prover based on the tableaux connection calculus, and show that it improves on the best characterizations used so far. The encoding is further evaluated on the premise selection task and a newly introduced symbol guessing task, and shown to correctly predict 65% of the symbol names.
Cauchy reals can be defined as a quotient of Cauchy sequences of rationals. The limit of a Cauchy sequence of Cauchy reals is defined by lifting it to a sequence of Cauchy sequences of rationals. This lifting requires the axiom of countable choice or excluded middle, neither of which is available in homotopy type theory. To address this, the Univalent Foundations Program uses a higher inductive-inductive type to define the Cauchy reals as the free Cauchy complete metric space generated by the rationals. We generalize this construction to define the free Cauchy complete metric space generated by an arbitrary metric space. This forms a monad in the category of metric spaces with Lipschitz functions. When applied to the rationals it defines the Cauchy reals. Finally, we can use the partiality monad of Altenkirch and Danielsson (2016) to define a semi-decision procedure comparing a real number and a rational number. The entire construction has been formalized in the Coq proof assistant. It is available at https://github.com/SkySkimmer/HoTTClasses/tree/CPP2017.
The univalence axiom expresses the principle of extensionality for dependent type theory. However, if we simply add the univalence axiom to type theory, then we lose the property of canonicity: that every closed term computes to a canonical form. A computation becomes 'stuck' when it reaches the point that it needs to evaluate a proof term that is an application of the univalence axiom. So we wish to find a way to compute with the univalence axiom. While this problem has been solved with the formulation of cubical type theory, where the computations are expressed using a nominal extension of lambda-calculus, it may be interesting to explore alternative solutions, which do not require such an extension. As a first step, we present here a system of propositional higher-order minimal logic (PHOML). There are three kinds of typing judgement in PHOML. There are terms which inhabit types, which are the simple types over $\Omega$. There are proofs which inhabit propositions, which are the terms of type $\Omega$. The canonical propositions are those constructed from $\bot$ by implication $\supset$. Thirdly, there are paths which inhabit equations $M =_A N$, where $M$ and $N$ are terms of type $A$. There are two ways to prove an equality: reflexivity, and propositional extensionality - logically equivalent propositions are equal. This system allows for some definitional equalities that are not present in cubical type theory, namely that transport along the trivial path is identity. We present a call-by-name reduction relation for this system, and prove that the system satisfies canonicity: every closed typable term head-reduces to a canonical form. This work has been formalised in Agda.
Capretta's delay monad can be used to model partial computations, but it has the wrong notion of built-in equality, strong bisimilarity. An alternative is to quotient the delay monad by the right notion of equality, weak bisimilarity. However, recent work by Chapman et al. suggests that it is impossible to define a monad structure on the resulting construction in common forms of type theory without assuming (instances of) the axiom of countable choice. Using an idea from homotopy type theory, a higher inductive-inductive type, we construct a partiality monad without relying on countable choice. We prove that, in the presence of countable choice, our partiality monad is equivalent to the delay monad quotiented by weak bisimilarity. Furthermore, we outline several applications.
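For reference, Capretta's delay monad itself is a two-constructor coinductive type; here is a standard Haskell rendering (Haskell data is coinductive by default, so this only fixes ideas and is not the paper's type theory):

```haskell
-- Capretta's delay monad: a value now, or one observable step later.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= k = k a
  Later d >>= k = Later (d >>= k)

never :: Delay a         -- the nonterminating computation
never = Later never

-- Built-in (strong) bisimilarity distinguishes 'Now 0' from 'Later (Now 0)';
-- weak bisimilarity, which ignores finitely many 'Later's, identifies them.
-- That is the equality the quotient/HIIT construction builds in.
```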
This paper presents a type theory in which it is possible to directly manipulate n-dimensional cubes (points, lines, squares, cubes, etc.) based on an interpretation of dependent type theory in a cubical set model. This enables new ways to reason about identity types, for instance, function extensionality is directly provable in the system. Further, Voevodsky's univalence axiom is provable in this system. We also explain an extension with some higher inductive types like the circle and propositional truncation. Finally we provide semantics for this cubical type theory in a constructive meta-theory.
Homotopy type theory is a recent research area connecting type theory with homotopy theory by interpreting types as spaces. In particular, one can prove and mechanize type-theoretic analogues of homotopy-theoretic theorems, yielding synthetic homotopy theory. Here we consider the Seifert-van Kampen theorem, which characterizes the loop structure of spaces obtained by gluing. This is useful in homotopy theory because many spaces are constructed by gluing, and the loop structure helps distinguish distinct spaces. The synthetic proof showcases many new characteristics of synthetic homotopy theory, such as the encode-decode method, enforced homotopy-invariance, and lack of underlying sets.