All published works (43)

Dependency pairs constitute a series of very effective techniques for the termination analysis of term rewriting systems. In this paper, we adapt the static dependency pair framework to logically constrained simply-typed term rewriting systems (LCSTRSs), a higher-order formalism with logical constraints built in. We also propose the concept of universal computability, which enables a form of open-world termination analysis through the use of static dependency pairs.
In this short paper, we present a simple variant of the recursive path ordering, specified for Logically Constrained Simply Typed Rewriting Systems (LCSTRSs). This is a method for curried systems, without lambda but with partially applied function symbols, which can deal with logical constraints. As it is designed for use in the dependency pair framework, it is defined as a reduction pair, allowing weak monotonicity.
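As a rough illustration of how a recursive path ordering orients rules (a standard textbook example, not taken from the paper above): with a precedence in which $\mathsf{add}$ is greater than $\mathsf{s}$, the rule $\mathsf{add}(\mathsf{s}(x), y) \to \mathsf{s}(\mathsf{add}(x, y))$ is oriented, since the root of the left-hand side is greater than $\mathsf{s}$ in the precedence and the left-hand side is also greater than the recursive call $\mathsf{add}(x, y)$, whose arguments arise from $(\mathsf{s}(x), y)$ by replacing $\mathsf{s}(x)$ with its subterm $x$. In the constrained variant, theory arguments such as $x - 1$ are, roughly, compared using the logical constraint of the rule rather than purely syntactically.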
The class of type-two basic feasible functionals ($\mathtt{BFF}_2$) is the analogue of $\mathtt{FP}$ (polynomial time functions) for type-2 functionals, that is, functionals that can take (first-order) functions as arguments. $\mathtt{BFF}_2$ can be defined through Oracle Turing machines with running time bounded by second-order polynomials. On the other hand, higher-order term rewriting provides an elegant formalism for expressing higher-order computation. We address the problem of characterizing $\mathtt{BFF}_2$ by higher-order term rewriting. Various kinds of interpretations for first-order term rewriting have been introduced in the literature for proving termination and characterizing first-order complexity classes. In this paper, we consider a recently introduced notion of cost-size interpretations for higher-order term rewriting and view second-order rewriting as a way of computing type-2 functionals. We then prove that the class of functionals represented by higher-order terms admitting polynomially bounded cost-size interpretations exactly corresponds to $\mathtt{BFF}_2$.
The class of basic feasible functionals ($\texttt{BFF}$) is the analog of $\texttt{FP}$ (polynomial time functions) for type-2 functionals, that is, functionals that can take (first-order) functions as arguments. $\texttt{BFF}$ can be defined through Oracle Turing machines with running time bounded by second-order polynomials. On the other hand, higher-order term rewriting provides an elegant formalism for expressing higher-order computation. We address the problem of characterizing $\texttt{BFF}$ by higher-order term rewriting. Various kinds of interpretations for first-order term rewriting have been introduced in the literature for proving termination and characterizing (first-order) complexity classes. In this paper, we consider a recently introduced notion of cost–size interpretations for higher-order term rewriting and see definitions as ways of computing functionals. We then prove that the class of functionals represented by higher-order terms admitting a certain kind of cost–size interpretation is exactly $\texttt{BFF}$.
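For readers unfamiliar with the notion (standard background, not specific to the two papers above): a second-order polynomial has both a number variable and a function variable. For example,

$P(F, n) = F(F(n) + n) + F(n)^2 + 3$

is a second-order polynomial, and an oracle Turing machine computes a type-2 functional in this class if, for some such $P$, its running time on an input of size $n$ with oracle $f$ is bounded by $P(|f|, n)$, where $|f|(m)$ bounds the size of $f$'s output on arguments of size at most $m$.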
Higher-order rewriting is a framework in which one can write higher-order programs and study their properties. One such property is termination: the situation that for all inputs, the program eventually halts its execution and produces an output. Several tools have been developed to check whether higher-order rewriting systems are terminating. However, developing such tools is difficult and can be error-prone. In this paper, we present a way of certifying termination proofs of higher-order term rewriting systems. We formalize a specific method, namely the polynomial interpretation method, that is used to prove termination. In addition, we give a program that turns the output of Wanda, a termination analysis tool for higher-order rewriting systems, into a Coq script, so that we can check whether the output is a valid proof of termination.
Higher-order rewriting is a framework in which one can write higher-order programs and study their properties. One such property is termination: the situation that for all inputs, the program eventually halts its execution and produces an output. Several tools have been developed to check whether higher-order rewriting systems are terminating. However, developing such tools is difficult and can be error-prone. In this paper, we present a way of certifying termination proofs of higher-order term rewriting systems. We formalize a specific method, namely the polynomial interpretation method, that is used to prove termination. In addition, we give a program that turns the output of Wanda, a termination analysis tool for higher-order rewriting systems, into a Coq script, so that we can check whether the output is a valid proof of termination.
In this short paper, we consider a form of higher-order rewriting with a call-by-value evaluation strategy so as to model call-by-value programs. We briefly present a cost-size semantics for call-by-value rewriting: a class of algebraic interpretations that map terms to tuples that bound both the reductions' cost and the size of normal forms.
Logically constrained term rewriting systems (LCTRSs) are a program analysis formalism with native support for data types which are not (co)inductively defined. As a first-order formalism, LCTRSs have accommodated only analysis of imperative programs so far. In this paper, we present a higher-order variant of the LCTRS formalism, which can be used to analyze functional programs. Then we study the termination problem and define a higher-order recursive path ordering (HORPO) for this new formalism.
Tuple interpretations are a class of algebraic interpretations that subsumes both polynomial and matrix interpretations, as it does not impose simple termination and allows non-linear interpretations. It was developed in the context of higher-order rewriting to study derivational complexity of algebraic functional systems. In this short paper, we continue our journey to study the complexity of higher-order TRSs by tailoring tuple interpretations to deal with innermost runtime complexity.
We develop a class of algebraic interpretations for many-sorted and higher-order term rewriting systems that takes type information into account. Specifically, base-type terms are mapped to \emph{tuples} of natural numbers and higher-order terms to functions between those tuples. Tuples may carry information relevant to the type; for instance, a term of type $\mathsf{nat}$ may be associated to a pair $(\mathsf{cost}, \mathsf{size})$ representing its evaluation cost and size. This class of interpretations results in a more fine-grained notion of complexity than runtime or derivational complexity, which makes it particularly useful to obtain complexity bounds for higher-order rewriting systems. We show that rewriting systems compatible with tuple interpretations admit finite bounds on derivation height. Furthermore, we demonstrate how to mechanically construct tuple interpretations and how to orient $\beta$ and $\eta$ reductions within our technique. Finally, we relate our method to runtime complexity and prove that specific interpretation shapes imply certain runtime complexity bounds.
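As a small illustration of the cost-size idea (a sketch in the spirit of the paper, not an example taken from it): interpret terms of type $\mathsf{nat}$ as pairs $(\mathsf{cost}, \mathsf{size}) \in \mathbb{N}^2$ and take

$[\![\mathsf{0}]\!] = (0, 1)$, $\quad [\![\mathsf{s}]\!](c, s) = (c, s + 1)$, $\quad [\![\mathsf{add}]\!]((c_x, s_x), (c_y, s_y)) = (c_x + c_y + s_x + 1,\ s_x + s_y)$.

For the rules $\mathsf{add}(\mathsf{0}, y) \to y$ and $\mathsf{add}(\mathsf{s}(x), y) \to \mathsf{s}(\mathsf{add}(x, y))$, the cost component then strictly decreases and the size component weakly decreases with every step, which gives a bound on the number of $\mathsf{add}$-steps that is linear in the size of the first argument.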
Programming language concepts are used to give some new perspectives on a long-standing open problem: is logspace = ptime?
We revisit the static dependency pair method for proving termination of higher-order term rewriting and extend it in a number of ways: (1) We introduce a new rewrite formalism designed for general applicability in termination proving of higher-order rewriting, Algebraic Functional Systems with Meta-variables. (2) We provide a syntactically checkable soundness criterion to make the method applicable to a large class of rewrite systems. (3) We propose a modular dependency pair framework for this higher-order setting. (4) We introduce a fine-grained notion of formative and computable chains to render the framework more powerful. (5) We formulate several existing and new termination proving techniques in the form of processors within our framework. The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.
We generalise the termination method of higher-order polynomial interpretations to a setting with impredicative polymorphism. Instead of using weakly monotonic functionals, we interpret terms in a suitable extension of System F-omega. This enables a direct interpretation of rewrite rules which make essential use of impredicative polymorphism. In addition, our generalisation eases the applicability of the method in the non-polymorphic setting by allowing for the encoding of inductive data types. As an illustration of the potential of our method, we prove termination of a substantial fragment of full intuitionistic second-order propositional logic with permutative conversions.
We revisit the static dependency pair method for proving termination of higher-order term rewriting and extend it in a number of ways: (1) We introduce a new rewrite formalism designed for general applicability in termination proving of higher-order rewriting, Algebraic Functional Systems with Meta-variables. (2) We provide a syntactically checkable soundness criterion to make the method applicable to a large class of rewrite systems. (3) We propose a modular dependency pair framework for this higher-order setting. (4) We introduce a fine-grained notion of formative and computable chains to render the framework more powerful. (5) We formulate several existing and new termination proving techniques in the form of processors within our framework. The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.
In recent years, two higher-order extensions of the powerful dependency pair approach for termination analysis of first-order term rewriting have been defined: the static and the dynamic approach. Both approaches offer distinct advantages and disadvantages. However, a grand unifying theory is thus far missing, and both approaches lack the modularity present in the dependency pair framework commonly used in first-order rewriting. Moreover, neither approach can be used to prove non-termination. In this paper, we aim to address this gap by defining a higher-order dependency pair framework, integrating both approaches into a shared formal setup. The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.
We present and formalize h, a core (or plank) calculus that can serve as the foundation for several compiler specification languages, notably CRSX (Combinatory Reductions Systems with eXtensions), HACS (Higher-order Attribute Contraction Schemes), and TransScript. We discuss how the h typing and formation rules introduce the necessary restrictions to ensure that rewriting is well-defined, even in the presence of h's powerful extensions for manipulating free variables and environments as first class elements (including in pattern matching).
In this paper, we prove that (first-order) cons-free term rewriting with a call-by-value reduction strategy exactly characterises the class of PTIME-computable functions. We use this to give an alternative proof of the result by Carvalho and Simonsen which states that cons-free term rewriting with linearity constraints characterises this class.
Constructor rewriting systems are said to be cons-free if any constructor term occurring in the rhs of a rule must be a subterm of the lhs of the rule. Roughly, such systems cannot build new data structures during their evaluation. In earlier work by several authors, (typed) cons-free systems have been used to characterise complexity classes such as polynomial or exponential time or space by varying the type orders and the recursion forms allowed. This paper concerns the construction of interpreters for cons-free term rewriting. Due to their connection with proofs by diagonalisation, interpreters may be of use when studying separation results between complexity classes in implicit computational complexity theory. We are interested in interpreters of type order $k > 1$ that can interpret any term of strictly lower type order; while this gives us a well-known separation result E$^k$TIME $\subsetneq$ E$^{k+1}$TIME, the hope is that more refined interpreters with syntactically limited constraints can be used to obtain a notion of faux diagonalisation and be used to attack open problems in complexity theory.
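To make the restriction concrete (a small illustrative example, not taken from the paper above): the system

$\mathsf{last}(\mathsf{cons}(x, \mathsf{nil})) \to x$
$\mathsf{last}(\mathsf{cons}(x, \mathsf{cons}(y, ys))) \to \mathsf{last}(\mathsf{cons}(y, ys))$

is cons-free, since the only constructor term on a right-hand side, $\mathsf{cons}(y, ys)$, already occurs as a subterm of the corresponding left-hand side. By contrast, $\mathsf{append}(\mathsf{cons}(x, xs), ys) \to \mathsf{cons}(x, \mathsf{append}(xs, ys))$ is not cons-free, because its right-hand side builds a constructor term that is not a subterm of the left-hand side.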
We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we set out to characterise the hierarchy NP $\subseteq$ NEXP $\subseteq$ NEXP$^{(2)}$ $\subseteq \cdots \subseteq$ NEXP$^{(k)}$ $\subseteq \cdots$ solely in terms of higher-typed, purely functional programs. Although the work is incomplete, we present an initial approach using cons-free programs with immutable functions.
This article aims to develop a verification method for procedural programs via a transformation into logically constrained term rewriting systems (LCTRSs). To this end, we extend transformation methods based on integer term rewriting systems to handle arbitrary data types, global variables, function calls, and arrays, and to encode safety checks. Then we adapt existing rewriting induction methods to LCTRSs and propose a simple yet effective method to generalize equations. We show that we can automatically verify memory safety and prove correctness of realistic functions. Our approach proves equivalence between two implementations; thus, in contrast to other works, we do not require an explicit specification in a separate specification language.
We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we consider cons-free programs of varying data orders, equipped with explicit non-deterministic choice. Cons-freeness roughly means that data constructors cannot occur in function bodies and all manipulation of storage space thus has to happen indirectly using the call stack. While cons-free programs have previously been used by several authors to characterise complexity classes, the work on non-deterministic programs has almost exclusively considered programs of data order 0. Previous work has shown that adding explicit non-determinism to cons-free programs taking data of order 0 does not increase expressivity; we prove that this - dramatically - is not the case for higher data orders: adding non-determinism to programs with data order at least 1 allows for a characterisation of the entire class of elementary-time decidable sets. Finally we show how, even with non-deterministic choice, the original hierarchy of characterisations is restored by imposing different restrictions.
This paper considers quasi-reductivity - essentially, the property that an evaluation cannot get "stuck" due to a missing case in pattern matching - in the context of term rewriting with logical constraints.
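A minimal unconstrained illustration of the property (not taken from the paper above): with constructors $\mathsf{0}$ and $\mathsf{s}$, a system whose only rule is $\mathsf{pred}(\mathsf{s}(x)) \to x$ is not quasi-reductive, since the ground term $\mathsf{pred}(\mathsf{0})$ matches no rule and evaluation gets stuck; adding a rule $\mathsf{pred}(\mathsf{0}) \to \mathsf{0}$ repairs this. In the constrained setting the logical constraints must also be taken into account: a rule such as $\mathsf{safediv}(x, y) \to x \,\mathsf{div}\, y\ [y \neq 0]$ leaves terms of the form $\mathsf{safediv}(x, 0)$ without an applicable rule.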
In this paper, we extend Jones' result -- that cons-free programming with $k^{th}$-order data and a call-by-value strategy characterises EXP$^k$TIME -- to a more general setting, including pattern-matching and non-deterministic choice. We show that the addition of non-determinism is unexpectedly powerful in the higher-order setting. Nevertheless, we can obtain a non-deterministic parallel to Jones' hierarchy result by appropriately restricting rule formation.
We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we consider cons-free programs of varying data orders, equipped with explicit non-deterministic choice. Cons-freeness roughly means that data constructors cannot occur in function bodies and all manipulation of storage space thus has to happen indirectly using the call stack. While cons-free programs have previously been used by several authors to characterise complexity classes, the work on non-deterministic programs has almost exclusively considered programs of data order 0. Previous work has shown that adding explicit non-determinism to cons-free programs taking data of order 0 does not increase expressivity; we prove that this - dramatically - is not the case for higher data orders: adding non-determinism to programs with data order at least 1 allows for a characterisation of the entire class of elementary-time decidable sets. Finally we show how, even with non-deterministic choice, the original hierarchy of characterisations is restored by imposing different restrictions.
We present and formalize h, a core (or "plank") calculus that can serve as the foundation for several compiler specification languages, notably CRSX (Combinatory Reductions Systems with eXtensions), HACS (Higher-order … We present and formalize h, a core (or "plank") calculus that can serve as the foundation for several compiler specification languages, notably CRSX (Combinatory Reductions Systems with eXtensions), HACS (Higher-order Attribute Contraction Schemes), and TransScript. We discuss how the h typing and formation rules introduce the necessary restrictions to ensure that rewriting is well-defined, even in the presence of h's powerful extensions for manipulating free variables and environments as first class elements (including in pattern matching).
We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we set out to characterise the hierarchy NP $\subseteq$ NEXP $\subseteq$ NEXP$^{(2)}$ $\subseteq \cdots \subseteq$ NEXP$^{(k)}$ $\subseteq \cdots$ solely in terms of higher-typed, purely functional programs. Although the work is incomplete, we present an initial approach using cons-free programs with immutable functions.
In this paper, we prove that (first-order) cons-free term rewriting with a call-by-value reduction strategy exactly characterises the class of PTIME-computable functions. We use this to give an alternative proof of the result by Carvalho and Simonsen which states that cons-free term rewriting with linearity constraints characterises this class.
Constructor rewriting systems are said to be cons-free if any constructor term occurring in the rhs of a rule must be a subterm of the lhs of the rule. Roughly, such systems cannot build new data structures during their evaluation. In earlier work by several authors, (typed) cons-free systems have been used to characterise complexity classes such as polynomial or exponential time or space by varying the type orders and the recursion forms allowed. This paper concerns the construction of interpreters for cons-free term rewriting. Due to their connection with proofs by diagonalisation, interpreters may be of use when studying separation results between complexity classes in implicit computational complexity theory. We are interested in interpreters of type order $k > 1$ that can interpret any term of strictly lower type order; while this gives us a well-known separation result E$^k$TIME $\subsetneq$ E$^{k+1}$TIME, the hope is that more refined interpreters with syntactically limited constraints can be used to obtain a notion of faux diagonalisation and be used to attack open problems in complexity theory.
Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of constructor terms in the left-hand side; the computational intuition is that rules cannot build new data structures. It is well-known that cons-free programming languages can be used to characterize computational complexity classes, and that cons-free first-order term rewriting can be used to characterize the set of polynomial-time decidable sets. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the order of the types used in the systems. We prove that, for every k $\geq$ 1, left-linear cons-free systems with type order k characterize E$^k$TIME if arbitrary evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with possible rule overlaps and no assumptions about reduction strategy, (ii) results for such term rewriting systems have previously only been obtained for k = 1, and with additional syntactic restrictions on top of cons-freeness and left-linearity. Our results are apparently among the first implicit characterizations of the hierarchy E = E$^1$TIME $\subseteq$ E$^2$TIME $\subseteq$ .... Our work confirms prior results that having full non-determinism (via overlaps of rules) does not directly allow characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes such as admitting product types or non-left-linear rules.
Logically Constrained Term Rewriting Systems (LCTRSs) provide a general framework for term rewriting with constraints. We discuss a simple dependency pair approach to prove termination of LCTRSs. We see that existing techniques transfer to the constrained setting in a natural way.
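To indicate what such dependency pairs look like (an illustrative sketch in the style of constrained rewriting, not an excerpt from the paper above): from the constrained rule

$\mathsf{fact}(x) \to x * \mathsf{fact}(x - 1)\ [x > 0]$

one obtains the dependency pair $\mathsf{fact}^{\sharp}(x) \to \mathsf{fact}^{\sharp}(x - 1)\ [x > 0]$, and termination follows from the absence of infinite chains of such steps; here this holds because the constraint keeps $x$ positive while the argument strictly decreases.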
Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K $\geq$ 1, left-linear cons-free systems with type order K characterize E$^K$TIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E$^K$TIME versus EXP$^{K-1}$TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E$^1$TIME $\subsetneq$ E$^2$TIME $\subsetneq$ ... Our work confirms prior results that having full non-determinism (via overlapping rules) does not directly allow for characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes like admitting product types or non-left-linear rules.
Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of constructor terms in the left-hand side; the computational intuition is that rules cannot build new data structures. It is well-known that cons-free programming languages can be used to characterize computational complexity classes, and that cons-free first-order term rewriting can be used to characterize the set of polynomial-time decidable sets. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the order of the types used in the systems. We prove that, for every k $\geq$ 1, left-linear cons-free systems with type order k characterize E$^k$TIME if arbitrary evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with possible rule overlaps and no assumptions about reduction strategy, (ii) results for such term rewriting systems have previously only been obtained for k = 1, and with additional syntactic restrictions on top of cons-freeness and left-linearity. Our results are apparently among the first implicit characterizations of the hierarchy E = E$^1$TIME $\subseteq$ E$^2$TIME $\subseteq$ .... Our work confirms prior results that having full non-determinism (via overlaps of rules) does not directly allow characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes such as admitting product types or non-left-linear rules.
We propose a notion of complexity for oriented conditional term rewrite systems satisfying certain restrictions. This notion is realistic in the sense that it measures not only successful computations, but also partial computations that result in a failed rule application. A transformation to unconditional context-sensitive rewrite systems is presented which reflects this complexity notion, as well as a technique to derive runtime and derivational complexity bounds for the result of this transformation.
This paper aims at developing a verification method for procedural programs via a transformation into the recently introduced Logically Constrained Term Rewriting Systems (LCTRSs). To this end, we introduce an extension of transformation methods based on integer TRSs, which can also handle global variables and arrays, and encode safety checks. Then we adapt existing rewriting induction methods to LCTRSs and propose a simple yet effective method to generalize equations. We show that we can automatically verify memory safety and prove correctness of realistic functions, involving for instance integers and arrays.
This paper discusses the method of formative rules for first-order term rewriting, which was previously defined for a higher-order setting. Dual to the well-known usable rules, formative rules allow dropping some of the term constraints that need to be solved during a termination proof. Compared to the higher-order definition, the first-order setting allows for significant improvements of the technique.
This paper aims to develop a verification method for procedural programs via a transformation into Logically Constrained Term Rewriting Systems (LCTRSs). To this end, we extend transformation methods based on integer TRSs to handle arbitrary data types, global variables, function calls and arrays, as well as encode safety checks. Then we adapt existing rewriting induction methods to LCTRSs and propose a simple yet effective method to generalize equations. We show that we can automatically verify memory safety and prove correctness of realistic functions. Our approach proves equivalence between two implementations, so in contrast to other works, we do not require an explicit specification in a separate specification language.
This paper discusses the method of formative rules for first-order term rewriting, which was previously defined for a higher-order setting. Dual to the well-known usable rules, formative rules allow dropping some of the term constraints that need to be solved during a termination proof. Compared to the higher-order definition, the first-order setting allows for significant improvements of the technique.
We extend the higher-order termination method of dynamic dependency pairs to Algebraic Functional Systems (AFSs). In this setting, simply typed lambda-terms with algebraic reduction and separate $\beta$-steps are considered. For left-linear AFSs, the method is shown to be complete. For so-called local AFSs we define a variation of usable rules and an extension of argument filterings. All these techniques have been implemented in the higher-order termination tool WANDA.
The termination method of weakly monotonic algebras, which has been defined for higher-order rewriting in the HRS formalism, offers a lot of power, but has seen little use in recent years. We adapt and extend this method to the alternative formalism of algebraic functional systems, where the simply-typed lambda-calculus is combined with algebraic reduction. Using this theory, we define higher-order polynomial interpretations, and show how the implementation challenges of this technique can be tackled. A full implementation is provided in the termination tool WANDA.
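As a reminder of how polynomial interpretations prove termination in the first-order case (standard background; the paper lifts this to the higher-order setting via weakly monotonic functionals): interpreting $[\![\mathsf{0}]\!] = 1$, $[\![\mathsf{s}]\!](x) = x + 1$ and $[\![\mathsf{double}]\!](x) = 3x$ over the natural numbers orients the rules $\mathsf{double}(\mathsf{0}) \to \mathsf{0}$ and $\mathsf{double}(\mathsf{s}(x)) \to \mathsf{s}(\mathsf{s}(\mathsf{double}(x)))$, since $3 \cdot 1 > 1$ and $3(x + 1) > 3x + 2$ for all $x$; every rewrite step then strictly decreases the interpreted value, so no infinite reduction exists. In the higher-order setting, a symbol such as $\mathsf{map}$ is instead interpreted by a (weakly monotonic) functional that takes the interpretation of its function argument as input.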
The termination method of weakly monotonic algebras, which has been defined for higher-order rewriting in the HRS formalism, offers a lot of power, but has seen little use in recent years. We adapt and extend this method to the alternative formalism of algebraic functional systems, where the simply-typed lambda-calculus is combined with algebraic reduction. Using this theory, we define higher-order polynomial interpretations, and show how the implementation challenges of this technique can be tackled. A full implementation is provided in the termination tool WANDA.

Commonly Cited References

We extend the higher-order termination method of dynamic dependency pairs to Algebraic Functional Systems (AFSs). In this setting, simply typed lambda-terms with algebraic reduction and separate $\beta$-steps are considered. For left-linear AFSs, the method is shown to be complete. For so-called local AFSs we define a variation of usable rules and an extension of argument filterings. All these techniques have been implemented in the higher-order termination tool WANDA.
The static dependency pair method is a method for proving the termination of higher-order rewrite systems à la Nipkow. It combines the dependency pair method introduced for first-order rewrite systems with the notion of strong computability introduced for typed λ-calculi. Argument filterings and usable rules are two important methods of the dependency pair framework used by current state-of-the-art first-order automated termination provers. In this paper, we extend the class of higher-order systems on which the static dependency pair method can be applied. Then, we extend argument filterings and usable rules to higher-order rewriting, hence providing the basis for a powerful automated termination prover for higher-order rewrite systems.
Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K $\geq$ 1, left-linear cons-free systems with type order K characterize E$^K$TIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E$^K$TIME versus EXP$^{K-1}$TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E$^1$TIME $\subsetneq$ E$^2$TIME $\subsetneq$ ... Our work confirms prior results that having full non-determinism (via overlapping rules) does not directly allow for characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes like admitting product types or non-left-linear rules.
Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.
We propose a procedure for automated implicit inductive theorem proving for equational specifications made of rewrite rules with conditions and constraints. The constraints are interpreted over constructor terms (representing data values), and may express syntactic equality, disequality, ordering and also membership in a fixed tree language. Constrained equational axioms between constructor terms are supported and can be used in order to specify complex data structures like sets, sorted lists, trees, powerlists... Our procedure is based on tree grammars with constraints, a formalism which can describe exactly the initial model of the given specification (when it is sufficiently complete and terminating). They are used in the inductive proofs first as an induction scheme for the generation of subgoals at induction steps, second for checking validity and redundancy criteria by reduction to an emptiness problem, and third for defining and solving membership constraints. We show that the procedure is sound and refutationally complete. It generalizes former test set induction techniques and yields natural proofs for several non-trivial examples presented in the paper; these examples are difficult to specify and carry on automatically with related induction procedures.
Inductive theorem provers often diverge. This paper describes a simple critic, a computer program which monitors the construction of inductive proofs attempting to identify diverging proof attempts. Divergence is recognized by means of a "difference matching" procedure. The critic then proposes lemmas and generalizations which "ripple" these differences away so that the proof can go through without divergence. The critic enables the theorem prover Spike to prove many theorems completely automatically from the definitions alone.
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP*, that is applicable in two related, but distinct contexts. On the one hand POP* induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP* provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*.
This paper investigates what is essentially a call-by-value version of PCF under a complexity-theoretically motivated type system. The programming formalism, ATR1, has its first-order programs characterize the poly-time computable functions, and its second-order programs characterize the type-2 basic feasible functionals of Mehlhorn and of Cook and Urquhart. (The ATR1-types are confined to levels 0, 1, and 2.) The type system comes in two parts, one that primarily restricts the sizes of values of expressions and a second that primarily restricts the time required to evaluate expressions. The size-restricted part is motivated by Bellantoni and Cook's and Leivant's implicit characterizations of poly-time. The time-restricting part is an affine version of Barber and Plotkin's DILL. Two semantics are constructed for ATR1. The first is a pruning of the naive denotational semantics for ATR1. This pruning removes certain functions that cause otherwise feasible forms of recursion to go wrong. The second semantics is a model for ATR1's time complexity relative to a certain abstract machine. This model provides a setting for complexity recurrences arising from ATR1 recursions, the solutions of which yield second-order polynomial time bounds. The time-complexity semantics is also shown to be sound relative to the costs of interpretation on the abstract machine.
The application of automatic transformation processes during the formal development and optimization of programs can introduce encumbrances in the generated code that programmers usually (or presumably) do not write. An example is the introduction of redundant arguments in the functions defined in the program. Redundancy of a parameter means that replacing it by any expression does not change the result. In this work, we provide methods for the analysis and elimination of redundant arguments in term rewriting systems as a model for the programs that can be written in more sophisticated languages. On the basis of the uselessness of redundant arguments, we also propose an erasure procedure which may avoid wasteful computations while still preserving the semantics (under ascertained conditions). A prototype implementation of these methods has been undertaken, which demonstrates the practicality of our approach.
We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we consider cons-free programs of varying data orders, equipped with explicit non-deterministic choice. Cons-freeness roughly means that data constructors cannot occur in function bodies and all manipulation of storage space thus has to happen indirectly using the call stack. While cons-free programs have previously been used by several authors to characterise complexity classes, the work on non-deterministic programs has almost exclusively considered programs of data order 0. Previous work has shown that adding explicit non-determinism to cons-free programs taking data of order 0 does not increase expressivity; we prove that this - dramatically - is not the case for higher data orders: adding non-determinism to programs with data order at least 1 allows for a characterisation of the entire class of elementary-time decidable sets. Finally we show how, even with non-deterministic choice, the original hierarchy of characterisations is restored by imposing different restrictions.
The termination method of weakly monotonic algebras, which has been defined for higher-order rewriting in the HRS formalism, offers a lot of power, but has seen little use in recent years. We adapt and extend this method to the alternative formalism of algebraic functional systems, where the simply-typed lambda-calculus is combined with algebraic reduction. Using this theory, we define higher-order polynomial interpretations, and show how the implementation challenges of this technique can be tackled. A full implementation is provided in the termination tool WANDA.
The construction and analysis of an abstract reachability tree (ART) are the basis for a successful method for software verification. The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.
This paper provides an alternate characterization of second-order polynomial-time computability, with the goal of making second-order complexity theory more approachable. We rely on the usual oracle machines to model programs with subroutine calls. In contrast to previous results, the use of higher-order objects as running times is avoided, either explicitly or implicitly. Instead, regular polynomials are used. This is achieved by refining the notion of oracle-poly-time computability introduced by Cook. We impose a further restriction on oracle interactions to force feasibility. Both the restriction and its purpose are very simple: it is well-known that Cook's model allows polynomial depth iteration of functional inputs with no restrictions on size, and thus does not preserve poly-time computability. To mend this we restrict the number of lookahead revisions, that is the number of times a query whose size exceeds that of any previous query may be asked. We prove that this leads to a class of feasible functionals and that all feasible problems can be solved within this class if one is allowed to separate a task into efficiently solvable subtasks. Formally, the closure of our class under lambda-abstraction and application are the basic feasible functionals. We also revisit the very similar class of strongly poly-time computable operators previously introduced by Kawamura and Steinberg. We prove it to be strictly included in our class and, somewhat surprisingly, to have the same closure property. This is due to the nature of the limited recursion operator: it is not strongly poly-time but decomposes into two such operations and lies in our class.
This article aims to develop a verification method for procedural programs via a transformation into logically constrained term rewriting systems (LCTRSs). To this end, we extend transformation methods based on integer term rewriting systems to handle arbitrary data types, global variables, function calls, and arrays, and to encode safety checks. Then we adapt existing rewriting induction methods to LCTRSs and propose a simple yet effective method to generalize equations. We show that we can automatically verify memory safety and prove correctness of realistic functions. Our approach proves equivalence between two implementations; thus, in contrast to other works, we do not require an explicit specification in a separate specification language.
The class of Basic Feasible Functionals BFF2 is the type-2 counterpart of the class FP of type-1 functions computable in polynomial time. Several characterizations have been suggested in the literature, but none of these present a programming language with a type system guaranteeing this complexity bound. We give a characterization of BFF2 based on an imperative language with oracle calls using a tier-based type system whose inference is decidable. Such a characterization should make it possible to link higher-order complexity with programming theory. The low complexity (cubic in the size of the program) of the type inference algorithm contrasts with the intractability of the aforementioned methods and does not restrain strongly the expressive power of the language.
This paper aims at carrying out termination proofs for simply typed higher-order calculi automatically by using ordering comparisons. To this end, we introduce the computability path ordering (CPO), a recursive relation on terms obtained by lifting a precedence on function symbols. A first version, core CPO, is essentially obtained from the higher-order recursive path ordering (HORPO) by eliminating type checks from some recursive calls and by incorporating the treatment of bound variables as in the computability closure. The well-foundedness proof shows that core CPO captures the essence of computability arguments à la Tait and Girard, therefore explaining its name. We further show that no further type check can be eliminated from its recursive calls without losing well-foundedness, but for one for which we found no counterexample yet. Two extensions of core CPO are then introduced which allow one to consider: the first, higher-order inductive types; the second, a precedence in which some function symbols are smaller than application and abstraction.
Unravelings are transformations from a conditional term rewriting system (CTRS, for short) over an original signature into an unconditional term rewriting system (TRS, for short) over an extended signature. They are not sound w.r.t. reduction for every CTRS, while they are complete w.r.t. reduction. Here, soundness w.r.t. reduction means that every reduction sequence of the corresponding unraveled TRS, of which the initial and end terms are over the original signature, can be simulated by the reduction of the original CTRS. In this paper, we show that an optimized variant of Ohlebusch's unraveling for a deterministic CTRS is sound w.r.t. reduction if the corresponding unraveled TRS is left-linear or both right-linear and non-erasing. We also show that soundness of the variant implies that of Ohlebusch's unraveling. Finally, we show that soundness of Ohlebusch's unraveling is the weakest among the soundness properties of the other unravelings and of a transformation proposed by Serbanuta and Rosu for (normal) deterministic CTRSs, i.e., soundness of each of these implies that of Ohlebusch's unraveling.
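For a textbook-style illustration of the construction (not specific to this paper's results), a conditional rule f(x) → x ⇐ g(x) →* true is unraveled by introducing a fresh symbol, say U, that stores the evaluation of the condition together with the variables still needed:

    f(x)       → U(g(x), x)
    U(true, x) → x

Soundness can fail when terms containing such auxiliary U symbols are duplicated or erased in ways the original CTRS cannot mimic, which is why linearity and non-erasingness conditions like those above play a role.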
Logically Constrained Term Rewriting Systems (LCTRSs) provide a general framework for term rewriting with constraints. We discuss a simple dependency pair approach to prove termination of LCTRSs. We see that existing techniques transfer to the constrained setting in a natural way.
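As a small illustration (the rule and the marked symbol fact♯ are ours, not the paper's), the constrained rule

    fact(x) → x * fact(x - 1)   [x > 0]

gives rise to the dependency pair

    fact♯(x) → fact♯(x - 1)   [x > 0]

and termination follows because, under the constraint, the argument strictly decreases while staying bounded from below, so no infinite chain of dependency pair steps exists.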
The class of Basic Feasible Functionals $$\mathtt{BFF}$$ is the second-order counterpart of the class of first-order functions computable in polynomial time. We present several implicit characterizations of $$\mathtt{BFF}$$ based on a typed programming language of terms. These terms may perform calls to imperative procedures, which are not recursive. The type discipline has two layers: the terms follow a standard simply-typed discipline and the procedures follow a standard tier-based type discipline. $$\mathtt{BFF}$$ consists exactly of the second-order functionals that are computed by typable and terminating programs. The completeness of this characterization surprisingly still holds in the absence of lambda-abstraction. Moreover, the termination requirement can be specified as a completeness-preserving instance, which can be decided in time quadratic in the size of the program. As typing is decidable in polynomial time, we obtain the first tractable (i.e., decidable in polynomial time), sound, complete, and implicit characterization of $$\mathtt{BFF}$$, thus solving a problem opened for more than 20 years.
This paper considers quasi-reductivity - essentially, the property that an evaluation cannot get "stuck" due to a missing case in pattern matching - in the context of term rewriting with logical constraints.
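As a small example of the property (ours, for illustration): with the rules abs(x) → x [x ≥ 0] and abs(x) → -x [x < 0], the constraints cover all integers, so abs applied to any integer value can always be reduced; omitting the second rule would break quasi-reductivity, since abs(-1) would then be stuck without being a value.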
In this paper, we extend Jones' result -- that cons-free programming with $k^{th}$-order data and a call-by-value strategy characterises EXP$^k$TIME -- to a more general setting, including pattern-matching and non-deterministic choice. We show that the addition of non-determinism is unexpectedly powerful in the higher-order setting. Nevertheless, we can obtain a non-deterministic parallel to Jones' hierarchy result by appropriately restricting rule formation.
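For a flavour of cons-freeness (our example, with data of order 0, i.e., the k = 0 instance of the hierarchy above), the rules

    parity(nil)         → true
    parity(cons(0, xs)) → parity(xs)
    parity(cons(1, xs)) → flip(parity(xs))
    flip(true)          → false
    flip(false)         → true

never build new list structure on any right-hand side: input data is only taken apart. Programs of this kind decide exactly the problems in the corresponding level of the hierarchy, which for order-0 data is polynomial time.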
Interpretation methods and their restrictions to polynomials have been widely used to control the termination and complexity of first-order term rewrite systems. This paper extends interpretation methods to a pure higher-order functional language. We develop a theory of higher-order functions that is well-suited for the complexity analysis of this programming language. The interpretation domain is a complete lattice and, consequently, we express program interpretation in terms of a least fixpoint. As an application, by bounding interpretations by higher order polynomials, we characterize Basic Feasible Functions at any order.
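To recall the first-order flavour of such interpretation-based analyses (a standard example, not the paper's higher-order setting), interpret 0 as 1, s(x) as x + 1 and add(x, y) as 2x + y over the natural numbers. The rules add(0, y) → y and add(s(x), y) → s(add(x, y)) then satisfy

    [add(0, y)]    = y + 2       > y            = [y]
    [add(s(x), y)] = 2x + y + 2  > 2x + y + 1   = [s(add(x, y))]

so every rewrite step strictly decreases the interpreted value, and the number of steps from add(m, n) is bounded by a polynomial in the sizes of m and n. The paper lifts this idea to higher-order programs, where interpretations live in a complete lattice and are themselves higher-order functions.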
The termination method of weakly monotonic algebras, which has been defined for higher-order rewriting in the HRS formalism, offers a lot of power, but has seen little use in recent years. We adapt and extend this method to the alternative formalism of algebraic functional systems, where the simply-typed lambda-calculus is combined with algebraic reduction. Using this theory, we define higher-order polynomial interpretations, and show how the implementation challenges of this technique can be tackled. A full implementation is provided in the termination tool WANDA.
In this paper, we first briefly survey automated termination proof methods for higher-order calculi. We then concentrate on the higher-order recursive path ordering, for which we provide an improved definition, the Computability Path Ordering. This new definition appears indeed to capture the essence of computability arguments à la Tait and Girard, therefore explaining the name of the improved ordering.
The class of basic feasible functionals ($$\texttt{BFF}$$) is the analog of $$\texttt{FP}$$ (polynomial time functions) for type-2 functionals, that is, functionals that can take (first-order) functions as arguments. $$\texttt{BFF}$$ can be defined through Oracle Turing machines with running time bounded by second-order polynomials. On the other hand, higher-order term rewriting provides an elegant formalism for expressing higher-order computation. We address the problem of characterizing $$\texttt{BFF}$$ by higher-order term rewriting. Various kinds of interpretations for first-order term rewriting have been introduced in the literature for proving termination and characterizing (first-order) complexity classes. In this paper, we consider a recently introduced notion of cost–size interpretations for higher-order term rewriting and see definitions as ways of computing functionals. We then prove that the class of functionals represented by higher-order terms admitting a certain kind of cost–size interpretation is exactly $$\texttt{BFF}$$.
All current investigations to analyze the derivational complexity of term rewrite systems are based on a single termination method, possibly preceded by transformations. However, the exclusive use of direct criteria is problematic due to their restricted power. To overcome this limitation, the article introduces a modular framework which allows one to infer (polynomial) upper bounds on the complexity of term rewrite systems by combining different criteria. Since the fundamental idea is based on relative rewriting, we study how matrix interpretations and match-bounds can each be used and extended to measure complexity for relative rewriting. The modular framework is proved strictly more powerful than the conventional setting. Furthermore, the results have been implemented and experiments show significant gains in power.
A central method for analyzing the asymptotic complexity of a functional program is to extract and then solve a recurrence that expresses evaluation cost in terms of input size. The relevant notion of input size is often specific to a datatype, with measures including the length of a list, the maximum element in a list, and the height of a tree. In this work, we give a formal account of the extraction of cost and size recurrences from higher-order functional programs over inductive datatypes. Our approach allows a wide range of programmer-specified notions of size, and ensures that the extracted recurrences correctly predict evaluation cost. To extract a recurrence from a program, we first make costs explicit by applying a monadic translation from the source language to a complexity language, and then abstract datatype values as sizes. Size abstraction can be done semantically, working in models of the complexity language, or syntactically, by adding rules to a preorder judgement. We give several different models of the complexity language, which support different notions of size. Additionally, we prove by a logical relations argument that recurrences extracted by this process are upper bounds for evaluation cost; the proof is entirely syntactic and therefore applies to all of the models we consider.
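A tiny illustration of the intended output (ours, with sizes measured as list lengths and constants elided): for naive list reversal, defined by rev(nil) = nil and rev(cons(x, xs)) = append(rev(xs), cons(x, nil)), the extracted size recurrence says that rev preserves length, and the cost recurrence is roughly

    T_rev(0)     = 1
    T_rev(n + 1) = T_rev(n) + T_append(n) + 1,   with T_append(n) = n + 1,

whose solution is quadratic in the input length. Note that the size recurrence is needed to solve the cost recurrence: the cost of the append call depends on the length of rev(xs).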
A system of linear dependent types for the lambda calculus with full higher-order recursion, called dlPCF, is introduced and proved sound and relatively complete. Completeness holds in a strong sense: dlPCF is not only able to precisely capture the functional behaviour of PCF programs (i.e. how the output relates to the input) but also some of their intensional properties, namely the complexity of evaluating them with Krivine's Machine. dlPCF is designed around dependent types and linear logic and is parametrized on the underlying language of index terms, which can be tuned so as to sacrifice completeness for tractability.
This paper presents general syntactic conditions ensuring the strong normalisation and the logical consistency of the Calculus of Algebraic Constructions, an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. On the one hand, the Calculus of Constructions is a powerful type system in which one can formalise the propositions and natural deduction proofs of higher-order logic. On the other hand, rewriting is a simple and powerful computation paradigm. The combination of the two allows, among other things, the development of formal proofs with a reduced size and more automation compared with more traditional proof assistants. The main novelty is to consider a general form of rewriting at the predicate-level that generalises the strong elimination of the Calculus of Inductive Constructions.
We study the derivational complexity induced by the dependency pair method, enhanced with standard refinements. We obtain upper bounds on the derivational complexity induced by the dependency pair method in terms of the derivational complexity of the base techniques employed. In particular we show that the derivational complexity induced by the dependency pair method based on some direct technique, possibly refined by argument filtering, the usable rules criterion, or dependency graphs, is primitive recursive in the derivational complexity induced by the direct method. This implies that the derivational complexity induced by a standard application of the dependency pair method based on traditional termination orders like KBO, LPO, and MPO is exactly the same as if those orders were applied as the only termination technique.
Several successful approaches to software verification are based on the construction and analysis of an abstract reachability tree (ART). The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.
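As a schematic illustration of the difference (our example; the tool details differ), consider a branch that sets y to x if x > 0 and to -x otherwise. With single-block encoding, the two branches become two separate ART edges, so a sequence of n such branches can produce on the order of 2^n abstract paths. With large-block encoding, the whole branch is one transition labelled with the formula

    (x > 0 ∧ y' = x) ∨ (x ≤ 0 ∧ y' = -x)

and n consecutive branches yield a single transition sequence whose formula grows only linearly, leaving the case analysis to the SMT solver.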
We propose a new order, the small polynomial path order (sPOP* for short). The order sPOP* provides a characterisation of the class of polynomial time computable functions via term rewrite systems. Any polynomial time computable function gives rise to a rewrite system that is compatible with sPOP*. On the other hand, any function defined by a rewrite system compatible with sPOP* is polynomial time computable. Technically, sPOP* is a tamed recursive path order with product status. Its distinctive feature is the precise control provided. For any rewrite system compatible with sPOP* that makes use of recursion up to depth d, the (innermost) runtime complexity is bounded from above by a polynomial of degree d.
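For intuition about the depth/degree correspondence (an illustration on our side; whether a concrete system is oriented by sPOP* depends on the chosen precedence and status), the usual rules

    add(0, y)     → y
    add(s(x), y)  → s(add(x, y))
    mult(0, y)    → 0
    mult(s(x), y) → add(y, mult(x, y))

use recursion up to depth 1 for add and depth 2 for mult (mult recurses and calls the recursive add), matching the linear runtime of addition and the quadratic runtime of multiplication.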
Algebraic specification has a long tradition in bridging the gap between specification and programming by making specifications executable. Building on extensive experience in designing, implementing and using specification formalisms that are based on algebraic specification and term rewriting (namely Asf and Asf+Sdf), we are now focusing on using the best concepts from algebraic specification and integrating these into a new programming language: Rascal. This language is easy to learn by non-experts but is also scalable to very large meta-programming applications. We explain the algebraic roots of Rascal and its main application areas: software analysis, software transformation, and design and implementation of domain-specific languages. Some example applications in the domain of Model-Driven Engineering (MDE) are described to illustrate this.