All published works (54)

We study a cost-aware programming language for higher-order recursion dubbed $\textbf{PCF}_\mathsf{cost}$ in the setting of synthetic domain theory (SDT). Our main contribution relates the denotational cost semantics of $\textbf{PCF}_\mathsf{cost}$ to its computational cost semantics, a new kind of dynamic semantics for program execution that serves as a mathematically natural alternative to operational semantics in SDT. In particular we prove an internal, cost-sensitive version of Plotkin's computational adequacy theorem, giving a precise correspondence between the denotational and computational semantics for complete programs at base type. The constructions and proofs of this paper take place in the internal dependent type theory of an SDT topos extended by a phase distinction in the sense of Sterling and Harper. By controlling the interpretation of cost structure via the phase distinction in the denotational semantics, we show that $\textbf{PCF}_\mathsf{cost}$ programs also evince a noninterference property of cost and behavior. We verify the axioms of the type theory by means of a model construction based on relative sheaf models of SDT. Comment: Final version for MFPS '24
Amortized analysis is a cost analysis technique for data structures in which cost is studied in aggregate: rather than considering the maximum cost of a single operation, one bounds the total cost encountered throughout a session. Traditionally, amortized analysis has been phrased inductively, quantifying over finite sequences of operations. Connecting to prior work on coalgebraic semantics for data structures, we develop the alternative perspective that amortized analysis is naturally viewed coalgebraically in a category of cost algebras, where a morphism of coalgebras serves as a first-class generalization of potential function suitable for integrating cost and behavior. Using this simple definition, we consider amortization of other sample effects: non-commutative printing and randomization. To support imprecise amortized upper bounds, we adapt our discussion to the bicategorical setting, where a potential function is a colax morphism of coalgebras. We support algebraic and coalgebraic operations simultaneously by using coalgebras for an endoprofunctor instead of an endofunctor, combining potential using a monoidal structure on the underlying category. Finally, we compose amortization arguments in the indexed category of coalgebras to implement one amortized data structure in terms of others.
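For reference, these coalgebra morphisms generalize the classical potential-function condition of the physicist's method, which (in standard notation, not the paper's) requires for each operation $o$ taking state $s$ to state $s'$ that
$$\Phi(s) + c_{\mathrm{amortized}}(o) \;\ge\; c_{\mathrm{actual}}(s, o) + \Phi(s').$$
Summing this inequality over a session telescopes the potential away, so the total actual cost is bounded by the total amortized cost plus $\Phi(s_0) - \Phi(s_n)$.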
Amortized analysis is a cost analysis technique for data structures in which cost is studied in aggregate, rather than considering the maximum cost of a single operation. Traditionally, amortized analysis has been phrased inductively, in terms of finite sequences of operations. Connecting to prior work on coalgebraic semantics for data structures, we develop the perspective that amortized analysis is naturally viewed coalgebraically in the category of algebras for a cost monad, where a morphism of coalgebras serves as a first-class generalization of potential function suitable for integrating cost and behavior. Using this simple definition, we consider amortization of other effects, such as randomization, and we compose amortization arguments in the indexed category of coalgebras. We generalize this to parallel data structure usage patterns by using coalgebras for an endoprofunctor instead of an endofunctor, combining potential using a monoidal structure on the underlying category. Finally, we adapt our discussion to the bicategorical setting, supporting imprecise amortized upper bounds.
We study a cost-aware programming language for higher-order recursion dubbed $\textbf{PCF}_\mathsf{cost}$ in the setting of synthetic domain theory (SDT). Our main contribution relates the denotational cost semantics of $\textbf{PCF}_\mathsf{cost}$ to its computational cost semantics, a new kind of dynamic semantics for program execution that serves as a mathematically natural alternative to operational semantics in SDT. In particular we prove an internal, cost-sensitive version of Plotkin's computational adequacy theorem, giving a precise correspondence between the denotational and computational semantics for complete programs at base type. The constructions and proofs of this paper take place in the internal dependent type theory of an SDT topos extended by a phase distinction in the sense of Sterling and Harper. By controlling the interpretation of cost structure via the phase distinction in the denotational semantics, we show that $\textbf{PCF}_\mathsf{cost}$ programs also evince a noninterference property of cost and behavior. We verify the axioms of the type theory by means of a model construction based on relative sheaf models of SDT.
We present metalanguages for developing synthetic cost-aware denotational semantics of programming languages. Extending recent advances by Niu et al. in cost and behavioral verification in dependent type theory, we define two successively more expressive metalanguages for studying cost-aware metatheory. We construct synthetic denotational models of the simply-typed lambda calculus and Modernized Algol, a language with first-order store and while loops, and show that they satisfy a cost-aware generalization of the classic Plotkin-type computational adequacy theorem. Moreover, by developing our proofs in a synthetic language of phase-separated constructions of intension and extension, our results easily restrict to the corresponding extensional theorems. Consequently, our work provides a positive answer to the conjecture raised in op. cit. and contributes a framework for cost-aware programming, verification, and metatheory.
Amortized analysis is a program cost analysis technique for data structures in which the cost of operations is specified in aggregate, under the assumption of continued sequential use. Typically, amortized analyses are presented inductively, in terms of finite sequences of operations. We give an alternative coinductive formulation and prove that it is equivalent to the standard inductive definition. We describe a classic amortized data structure, the batched queue, and outline a coinductive proof of its amortized efficiency in $\textbf{calf}$, a dependent type theory for cost analysis.
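As a concrete companion to the abstract above, here is a minimal Haskell sketch of the batched queue and its physicist's-method potential (an illustration only, not the $\textbf{calf}$ development; the names Cost, charge, phi, and withinAmortized are ours), using a Writer monad to count abstract steps:

import Control.Monad.Writer (Writer, runWriter, tell)
import Data.Monoid (Sum (..))

-- Cost-counting monad: Writer over (Int, +) tallies abstract steps.
type Cost = Writer (Sum Int)

charge :: Int -> Cost ()
charge n = tell (Sum n)

-- Batched queue: dequeue from the front list; enqueue onto the back list.
data Queue a = Queue [a] [a]

enq :: a -> Queue a -> Cost (Queue a)
enq x (Queue f b) = charge 1 >> pure (Queue f (x : b))

deq :: Queue a -> Cost (Maybe (a, Queue a))
deq (Queue [] [])     = pure Nothing
deq (Queue [] b)      = do
  charge (length b)                 -- batched reversal: |b| steps
  let x : f = reverse b
  pure (Just (x, Queue f []))
deq (Queue (x : f) b) = charge 1 >> pure (Just (x, Queue f b))

-- Physicist's method: potential = length of the back list.
phi :: Queue a -> Int
phi (Queue _ b) = length b

-- One-step check of the potential inequality: phi q + bound >= actual + phi q'.
withinAmortized :: Int -> Queue a -> (Queue a -> Cost (Queue a)) -> Bool
withinAmortized bound q op =
  let (q', Sum actual) = runWriter (op q)
  in  phi q + bound >= actual + phi q'

For instance, withinAmortized 2 q (enq 5) holds for every queue q: an enqueue costs one actual step and deposits one unit of potential that later pays for its share of the batched reversal.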
We present ${\bf decalf}$, a ${\bf d}$irected, ${\bf e}$ffectful ${\bf c}$ost-${\bf a}$ware ${\bf l}$ogical ${\bf f}$ramework for studying quantitative aspects of functional programs with effects. Like ${\bf calf}$, the language is based on a formal phase distinction between the extension and the intension of a program, its pure behavior as distinct from its cost measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike ${\bf calf}$, the present language takes account of effects, such as probabilistic choice and mutable state; this extension requires a reformulation of ${\bf calf}$'s approach to cost accounting: rather than rely on a "separable" notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a recurrence that bounds the cost in a manner that readily extends to higher-order, effectful programs. The development proceeds by first introducing the ${\bf decalf}$ type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higher-order functions. Finally, we justify ${\bf decalf}$ via a model in the topos of augmented simplicial sets.
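The slogan that a cost bound is simply another program admits a small extensional caricature in Haskell (an assumption-laden stand-in for ${\bf decalf}$'s intrinsic preorder, which also covers higher-order and effectful programs):

import Control.Monad.Writer (Writer, runWriter)
import Data.Monoid (Sum (..))

-- "e is bounded by b": same value, dominated step count.  decalf's
-- preorder is intrinsic to the type theory; this Boolean check is only
-- an extensional reading for closed, first-order programs.
boundedBy :: Eq a => Writer (Sum Int) a -> Writer (Sum Int) a -> Bool
boundedBy e b =
  let (v , Sum c ) = runWriter e
      (v', Sum c') = runWriter b
  in  v == v' && c <= c'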
Program equivalence is the fulcrum for reasoning about and proving properties of programs. For noninterference, for example, program equivalence up to the secrecy level of an observer is shown. A powerful enabler for such proofs is logical relations. Logical relations were only recently adopted for session types -- but exclusively for terminating languages. This paper scales logical relations to general recursive session types. It develops a logical relation for progress-sensitive noninterference (PSNI) for intuitionistic linear logic session types (ILLST), tackling the challenges that non-termination and concurrency pose, and shows that logical equivalence is sound and complete with regard to closure of weak bisimilarity under parallel composition, using a biorthogonality argument. A distinguishing feature of the logical relation is its stratification with an observation index (as opposed to a step or unfolding index), a crucial shift to make the logical relation closed under parallel composition in a concurrent setting. To demonstrate the practicality of the logical relation, the paper develops an information flow control (IFC) refinement type system for ILLST, with support for secrecy-polymorphic processes, and shows that well-typed programs are self-related by the logical relation and thus enjoy PSNI. The refinement type system has been implemented in a type checker, featuring local security theories to support secrecy-polymorphic processes.
Ordered sequences of data, specified with a join operation to combine sequences, serve as a foundation for the implementation of parallel functional algorithms. This abstract data type can be elegantly and efficiently implemented using balanced binary trees, where a join operation is provided to combine two trees and rebalance as necessary. In this work, we present a verified implementation and cost analysis of joinable red-black trees in $\textbf{calf}$, a dependent type theory for cost analysis. We implement red-black trees and auxiliary intermediate data structures in such a way that all correctness invariants are intrinsically maintained. Then, we describe and verify precise cost bounds on the operations, making use of the red-black tree invariants. Finally, we implement standard algorithms on sequences using the simple join-based signature and bound their cost in the case that red-black trees are used as the underlying implementation. All proofs are formally mechanized using the embedding of $\textbf{calf}$ in the Agda theorem prover.
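To give a flavor of the join-based signature, here is a naive Haskell sketch (illustrative only: it elides the balancing, the red-black invariants, and the rank-difference cost bound of join, which are precisely what the paper verifies):

-- Joinable trees: 'join' is the sole combining primitive.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- The paper's 'join l x r' rebalances a red-black tree in time
-- proportional to the rank difference; balancing is elided here.
join :: Tree a -> a -> Tree a -> Tree a
join l x r = Node l x r

-- 'join2' merges two trees with no middle element.
join2 :: Tree a -> Tree a -> Tree a
join2 Leaf r           = r
join2 (Node ll x lr) r = join ll x (join2 lr r)

-- Sequence algorithms are written against 'join' alone, e.g. filter:
filterT :: (a -> Bool) -> Tree a -> Tree a
filterT _ Leaf = Leaf
filterT p (Node l x r)
  | p x       = join  (filterT p l) x (filterT p r)
  | otherwise = join2 (filterT p l) (filterT p r)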
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type theoretic proof assistants. We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist's method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid's algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
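The Euclid case study can be miniaturized outside the framework: a Haskell sketch (our names; calf's actual development is mechanized in Agda with an effectful step-counting primitive) that charges one unit of cost per remainder operation:

import Control.Monad.Writer (Writer, runWriter, tell)
import Data.Monoid (Sum (..))

-- Instrumented gcd: one step per 'mod', in the spirit of calf's
-- step-counting effect (a Haskell stand-in, not calf itself).
gcdCost :: Int -> Int -> Writer (Sum Int) Int
gcdCost x 0 = pure x
gcdCost x y = tell (Sum 1) >> gcdCost y (x `mod` y)

-- runWriter (gcdCost 21 13) == (1, Sum 6): consecutive Fibonacci
-- numbers realize the worst case that the tight closed bound captures.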
We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state of the art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper's recent re-interpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between "higher" and "lower" security, and second to verify the computational adequacy of our sheaf semantics vis-à-vis an extension of Abadi et al.'s dependency core calculus with a construct for declassifying termination channels.
We present two metalanguages for developing $\textit{synthetic cost-aware denotational semantics}$ of programming languages. Extending the recent work of Niu et al. [2022] on $\textbf{calf}$, a dependent type theory for both cost and behavioral verification, we define two metalanguages, $\textbf{calf}^\star$ and $\textbf{calf}^\omega$, for studying cost-aware metatheory. $\textbf{calf}^\star$ is an extension of $\textbf{calf}$ with universes and inductive types, and $\textbf{calf}^\omega$ is an extension of $\textbf{calf}^\star$ with unbounded iteration. We construct denotational models of the simply-typed lambda calculus and Modernized Algol, a language with first-order store and while loops, and show that they satisfy a $\textit{cost-aware}$ generalization of the classic Plotkin-type computational adequacy theorem. Moreover, by developing our proofs in a synthetic language of $\textit{phase-separated}$ constructions of intension and extension, our results easily $\textit{restrict}$ to the corresponding extensional theorems. Our work provides a positive answer to the conjecture raised in Niu et al. [2022] and, in light of $\textit{op. cit.}$'s work on algorithm analysis, contributes a metalanguage for doing both cost-aware programming and verification and cost-aware metatheory of programming languages.
We define a computational type theory combining the contentful equality structure of cartesian cubical type theory with internal parametricity primitives. The combined theory supports both univalence and its relational equivalent, which we call relativity. We demonstrate the use of the theory by analyzing polymorphic functions between higher inductive types, observe how cubical equality regularizes parametric type theory, and examine the similarities and discrepancies between cubical and parametric type theory, which are closely related. We also abstract a formal interface to the computational interpretation and show that this also has a presheaf model.
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh "synthetic" take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a type-theoretic basis. Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be non-trivial evidence witnessing the relatedness of two programs -- simplifying the metatheory of strong sums over the collection of types, for although there can be no "relation classifying relations," one easily accommodates a "family classifying small families." Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
A wide range of intuitionistic type theories may be presented as equational theories within a logical framework. This method was formulated by Per Martin-Löf in the mid-1980's and further developed by Uemura, who used it to prove an initiality result for a class of models. Herein is presented a logical framework for type theories that includes an extensional equality type so that a type theory may be given by a signature of constants. The framework is illustrated by a number of examples of type-theoretic concepts, including identity and equality types, and a hierarchy of universes.
We present a cubical type theory based on the Cartesian cube category (faces, degeneracies, symmetries, diagonals, but no connections or reversal) with univalent universes, each containing Π, Σ, path, identity, natural number, boolean, suspension, and glue (equivalence extension) types. The type theory includes a syntactic description of a uniform Kan operation, along with judgmental equality rules defining the Kan operation on each type. The Kan operation uses both a different set of generating trivial cofibrations and a different set of generating cofibrations than the Cohen, Coquand, Huber, and Mörtberg (CCHM) model. Next, we describe a constructive model of this type theory in Cartesian cubical sets. We give a mechanized proof, using Agda as the internal language of cubical sets in the style introduced by Orton and Pitts, that glue, Π, Σ, path, identity, boolean, natural number, suspension types, and the universe itself are Kan in this model, and that the universe is univalent. An advantage of this formal approach is that our construction can also be interpreted in a range of other models, including cubical sets on the connections cube category and the De Morgan cube category, as used in the CCHM model, and bicubical sets, as used in directed type theory.
We present $\textbf{calf}$, a $\textbf{c}$ost-$\textbf{a}$ware $\textbf{l}$ogical $\textbf{f}$ramework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of \emph{phase distinctions}, we argue that the cost structure of programs motivates a phase distinction between $\textit{intension}$ and $\textit{extension}$. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, $\textbf{calf}$ presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type theoretic proof assistants. We evaluate $\textbf{calf}$ as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the $\textit{method of recurrence relations}$ and $\textit{physicist's method for amortized analysis}$. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid's algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and $\textit{parallel}$ complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in $\textbf{calf}$ by means of a model construction.
A wide range of intuitionistic type theories may be presented as equational theories within a logical framework. This method was formulated by Per Martin-Löf in the mid-1980's and further developed by Uemura, who used it to prove an initiality result for a class of models. Herein is presented a logical framework for type theories that includes an extensional equality type so that a type theory may be given by a signature of constants. The framework is illustrated by a number of examples of type-theoretic concepts, including identity and equality types, and a hierarchy of universes.
Although computational complexity is a fundamental aspect of program behavior, it is often at odds with common type theoretic principles such as function extensionality, which identifies all functions with the same $\textit{input-output}$ behavior. We present a computational type theory called $\mathbf{CATT}$ that has a primitive notion of cost (the number of evaluation steps). We introduce a new dependent function type, "funtime", whose semantics can be viewed as a cost-aware version of function extensionality. We prove a collection of lemmas for $\mathbf{CATT}$, including a novel introduction rule for the new funtime type. $\mathbf{CATT}$ can be simultaneously viewed as a framework for analyzing computational complexity of programs and as the beginnings of a semantic foundation for characterizing feasible mathematical proofs.
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects. Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be non-trivial evidence witnessing the relatedness of two programs -- simplifying the metatheory of strong sums over the collection of types, for although there can be no relation classifying relations, one easily accommodates a family classifying small families. Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase separated parametricity based on the slogan logical relations as types, iterating our modal account of the phase distinction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
We define a computational type theory combining the contentful equality structure of cartesian cubical type theory with internal parametricity primitives. The combined theory supports both univalence and its relational equivalent, which we call relativity. We demonstrate the use of the theory by analyzing polymorphic functions between higher inductive types, and we give an account of the identity extension lemma for internal parametricity.
The structure of complex networks has long been understood to play a role in transmission and spreading phenomena on a graph. This behavior is difficult to model analytically and is most often modeled numerically. Such networks form an important part of the structure of society, including transportation networks. As society fights to control the COVID-19 pandemic, an important question is to choose the optimum balance between the full opening of transport networks and the control of epidemic spread. In this paper we investigate how recent advances in analyzing network structure using information theory could inform decisions regarding the opening of such networks. By virtue of the richness of data available we focus upon the worldwide airline network, but these methods are in principle applicable to any transport network. We are able to demonstrate that it is possible to substantially open the airline network and have some degree of control on the spread of the virus.
Although computational complexity is a fundamental aspect of program behavior, it is often at odds with common type theoretic principles such as function extensionality, which identifies all functions with the same $\textit{input-output}$ behavior. We present a computational type theory called $\mathbf{CATT}$ that has a primitive notion of cost (the number of evaluation steps). We introduce a new dependent function type "funtime" whose semantics can be viewed as a cost-aware version of function extensionality. We prove a collection of lemmas for $\mathbf{CATT}$, including a novel introduction rule for the new funtime type. $\mathbf{CATT}$ can be simultaneously viewed as a framework for analyzing computational complexity of programs and as the beginnings of a semantic foundation for characterizing feasible mathematical proofs.
Homotopy type theory proposes higher inductive types (HITs) as a means of defining and reasoning about inductively-generated objects with higher-dimensional structure. As with the univalence axiom, however, homotopy type theory does not specify the computational behavior of HITs. Computational interpretations have now been provided for univalence and specific HITs by way of cubical type theories, which use a judgmental infrastructure of dimension variables. We extend the cartesian cubical computational type theory introduced by Angiuli et al. with a schema for indexed cubical inductive types (CITs), an adaptation of higher inductive types to the cubical setting. In doing so, we isolate the canonical values of a cubical inductive type and prove a canonicity theorem with respect to these values.
We present Polaris, a concurrent separation logic with support for probabilistic reasoning. As part of our logic, we extend the idea of coupling, which underlies recent work on probabilistic relational logics, to the setting of programs with both probabilistic and non-deterministic choice. To demonstrate Polaris, we verify a variant of a randomized concurrent counter algorithm and a two-level concurrent skip list. All of our results have been mechanized in Coq.
We exhibit a computational type theory which combines the higher-dimensional structure of cartesian cubical type theory with the internal parametricity primitives of parametric type theory, drawing out the similarities and distinctions between the two along the way. The combined theory supports both univalence and its relational equivalent, which we call relativity. We demonstrate the use of the theory by analyzing polymorphic types, including functions between higher inductive types, and we show by example how relativity can be used to characterize the relational interpretation of inductive types.
Multi-threaded programs have traditionally fallen into one of two domains: cooperative and competitive. These two domains have traditionally remained mostly disjoint, with cooperative threading used for increasing throughput in compute-intensive applications such as scientific workloads and competitive threading used for increasing responsiveness in interactive applications such as GUIs and games. As multicore hardware becomes increasingly mainstream, there is a need for bridging these two disjoint worlds, because many applications mix interaction and computation and would benefit from both cooperative and competitive threading. In this paper, we present techniques for programming and reasoning about parallel interactive applications that can use both cooperative and competitive threading. Our techniques enable the programmer to write rich parallel interactive programs by creating and synchronizing with threads as needed, and by assigning threads user-defined and partially ordered priorities. To ensure important responsiveness properties, we present a modal type system analogous to S4 modal logic that precludes low-priority threads from delaying high-priority threads, thereby statically preventing a crucial set of priority-inversion bugs. We then present a cost model that allows reasoning about responsiveness and completion time of well-typed programs. The cost model extends the traditional work-span model for cooperative threading to account for competitive scheduling decisions needed to ensure responsiveness. Finally, we show that our proposed techniques are realistic by implementing them as an extension to the Standard ML language.
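For context, the traditional work-span model mentioned above bounds the completion time of a greedy scheduler on $P$ processors by the classic inequality
$$T_P \;\le\; \frac{W}{P} + S,$$
where $W$ is the total work and $S$ is the span (critical-path length). This is the standard background fact, not the paper's refined bound; the paper's contribution is to extend such a model with priorities so that the response time of high-priority threads can be bounded independently of lower-priority work.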
RedPRL is an experimental proof assistant based on Cartesian cubical computational type theory, a new type theory for higher-dimensional constructions inspired by homotopy type theory. In the style of Nuprl, RedPRL users employ tactics to establish behavioral properties of cubical functional programs embodying the constructive content of proofs. Notably, RedPRL implements a two-level type theory, allowing an extensional, proof-irrelevant notion of exact equality to coexist with a higher-dimensional proof-relevant notion of paths.
Nakano's later modality can be used to specify and define recursive functions which are causal or synchronous; in concert with a notion of clock variable, it is possible to also capture the broader class of productive (co)programs. Until now, it has been difficult to combine these constructs with dependent types in a way that preserves the operational meaning of type theory and admits a hierarchy of universes $U_i$.
This is the fourth in a series of papers extending Martin-Löf's meaning explanation of dependent type theory to higher-dimensional types. In this installment, we show how to define cubical type systems supporting a general schema of indexed cubical inductive types whose constructors may take dimension parameters and have a specified boundary. Using this schema, we are able to specify and implement many of the higher inductive types which have been postulated in homotopy type theory, including homotopy pushouts, the torus, $W$-quotients, truncations, and arbitrary localizations. By including indexed inductive types, we enable the definition of identity types. The addition of higher inductive types makes computational higher type theory a model of homotopy type theory, capable of interpreting almost all of the constructions in the HoTT Book (with the exception of inductive-inductive types). This is the first such model with an explicit canonicity theorem, which specifies the canonical values of higher inductive types and confirms that every term in an inductive type evaluates to such a value.
We consider the problem of inferring the functional connectivity of a large-scale computer network from sparse time series of events emitted by its nodes. We do so under the following three domain-specific constraints: (a) non-stationarity of the functional connectivity due to unknown temporal changes in the network, (b) sparsity of the time-series of events that limits the effectiveness of classical correlation-based analysis, and (c) lack of an explicit model describing how events propagate through the network. Under the assumption that the probability of two nodes being functionally connected correlates with the mean delay between their respective events, we develop an inference method whose output is an undirected weighted network where the weight of an edge between two nodes denotes the probability of these nodes being functionally connected. Using a combination of windowing and convolution to calculate at each time window a score quantifying the likelihood of a pair of nodes emitting events in quick succession, we develop a model of time-varying connectivity whose parameters are determined by maximising the model's predictive power from one time window to the next. To assess the effectiveness of our inference method, we construct synthetic data for which ground truth is available and use these data to benchmark our approach against three state-of-the-art inference methods. We conclude by discussing its application to data from a real-world large-scale computer network.
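As a rough sketch of the kind of windowed score described above (the exponential kernel and cutoff below are our assumptions for illustration, not the paper's definitions), one might score a directed pair of nodes by summing a decaying kernel over event pairs occurring in quick succession:

-- Score events of node a followed shortly by events of node b within a
-- window; 'tau' sets the timescale of "quick succession".  Illustrative
-- kernel and 3*tau cutoff; the paper's construction is not reproduced.
score :: Double -> [Double] -> [Double] -> Double
score tau as bs =
  sum [ exp (negate ((tb - ta) / tau))
      | ta <- as, tb <- bs, tb >= ta, tb - ta <= 3 * tau ]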
We contribute a general apparatus for dependent tactic-based proof refinement in the LCF tradition, in which the statements of subgoals may express a dependency on the proofs of other subgoals; this form of dependency is extremely useful and can serve as an algorithmic alternative to extensions of LCF based on non-local instantiation of schematic variables. Additionally, we introduce a novel behavioral distinction between refinement rules and tactics based on naturality. Our framework, called Dependent LCF, is already deployed in the nascent RedPRL proof assistant for computational cubical type theory.
This is the third in a series of papers extending Martin-Löf's meaning explanations of dependent type theory to a Cartesian cubical realizability framework that accounts for higher-dimensional types. We extend this framework to include a cumulative hierarchy of univalent Kan universes of Kan types, exact equality and other pretypes lacking Kan structure, and a cumulative hierarchy of pretype universes. As in Parts I and II, the main result is a canonicity theorem stating that closed terms of boolean type evaluate to either true or false. This establishes the computational interpretation of Cartesian cubical higher type theory based on cubical programs equipped with a deterministic operational semantics.
Compiler correctness proofs for higher-order concurrent languages are difficult: they involve establishing a termination-preserving refinement between a concurrent high-level source language and an implementation that uses low-level shared memory primitives. However, existing logics for proving concurrent refinement either neglect properties such as termination, or only handle first-order state. In this paper, we address these limitations by extending Iris, a recent higher-order concurrent separation logic, with support for reasoning about termination-preserving refinements. To demonstrate the power of these extensions, we prove the correctness of an efficient implementation of a higher-order, session-typed language. To our knowledge, this is the first program logic capable of giving a compiler correctness proof for such a language. The soundness of our extensions and our compiler correctness proof have been mechanized in Coq.
An interface is a contract that specifies the rights of a client and the responsibilities of an implementor. Being a specification of behavior, an interface is a type. In principle, any type may serve as an interface, but in practice it is usual to structure code into modules consisting of separable and reusable components. An interface specifies the behavior of a module expected by a client and imposed on the implementor. It is the fulcrum balancing the tension between separation and integration. As a rule, a module ought to have a well-defined behavior that can be understood separately, but it is equally important that it be easy to combine modules to form an integrated whole.
Modernized Algol, or MA, is an imperative, block-structured programming language based on the classic language Algol. MA extends PCF with a new syntactic sort of commands that act on assignables by retrieving and altering their contents. Assignables are introduced by declaring them for use within a specified scope; this is the essence of block structure. Commands are combined by sequencing and are iterated using recursion.
Brouwer's constructivist foundations of mathematics is based on an intuitively meaningful notion of computation shared by all mathematicians. Martin-Löf's meaning explanations for constructive type theory define the concept of a type in terms of computation. Briefly, a type is a complete (closed) program that evaluates to a canonical type whose members are complete programs that evaluate to canonical elements of that type. The explanation is extended to incomplete (open) programs by functionality: types and elements must respect equality in their free variables. Equality is evidence-free---two types or elements are at most equal---and equal things are implicitly interchangeable in all contexts. Higher-dimensional type theory extends type theory to account for identifications of types and elements. An identification witnesses that two types or elements are explicitly interchangeable in all contexts by an explicit transport, or coercion, operation. There must be sufficiently many identifications, which is ensured by imposing a generalized form of the Kan condition from homotopy theory. Here we provide a Martin-Löf-style meaning explanation of simple higher-dimensional type theory based on a programming language that includes Kan-like constructs witnessing the computational meaning of the higher structure of types. The treatment includes an example of a higher inductive type (namely, the 1-dimensional sphere) and an example of Voevodsky's univalence principle, which identifies equivalent types. The main result is a computational canonicity theorem that validates the computational interpretation: a closed boolean expression must always evaluate to a boolean value, even in the presence of higher-dimensional structure. This provides the first fully computational formulation of higher-dimensional type theory.
This is the second in a series of papers extending Martin-Löf's meaning explanation of dependent type theory to account for higher-dimensional types. We build on the cubical realizability framework for simple types developed in Part I, and extend it to a meaning explanation of dependent higher-dimensional type theory. This extension requires generalizing the computational Kan condition given in Part I, and considering the action of type families on paths. We define identification types, which classify identifications (paths) in a type, and dependent function and pair types. The main result is a canonicity theorem, which states that a closed term of boolean type evaluates to either true or false. This result establishes the first computational interpretation of higher dependent type theory by giving a deterministic operational semantics for its programs, including operations that realize the Kan condition.
Homotopy type theory is a recently-developed unification of previously disparate frameworks, which can serve to advance the project of formalizing and mechanizing mathematics. One framework is based on a computational conception of the type of a construction, the other is based on a homotopical conception of the homotopy type of a space. The computational notion of type has its origins in Brouwer's program of intuitionism, and Church's λ-calculus, both of which sought to ground mathematics in computation (one would say "algorithm" these days). The homotopical notion comes from Grothendieck's late conception of homotopy types of spaces as represented by ∞-groupoids [Grothendieck 1983]. The computational perspective was developed most fully by Per Martin-Löf, leading in particular to his Intuitionistic Theory of Types [Martin-Löf and Sambin 1984], on which the formal system of homotopy type theory is based. The connection to homotopy theory was first hinted at in the groupoid interpretation of Hofmann and Streicher [Hofmann and Streicher 1994; 1995]. It was then made explicit by several researchers, roughly simultaneously. The connection was clinched by Voevodsky's introduction of the univalence axiom, which is motivated by the homotopical interpretation, and which relates type equality to homotopy equivalence [Kapulkin et al. 2012; Awodey et al. 2013].
Bezem, Coquand, and Huber have recently given a constructively valid model of higher type theory in a category of nominal cubical sets satisfying a novel condition, called the uniform Kan condition (UKC), which generalizes the standard cubical Kan condition (as considered by, for example, Williamson in his survey of combinatorial homotopy theory) to admit phantom "additional" dimensions in open boxes. This note, which represents the authors' attempts to fill in the details of the UKC, is intended for newcomers to the field who may appreciate a more explicit formulation and development of the main ideas. The crux of the exposition is an analogue of the Yoneda Lemma for co-sieves that relates geometric open boxes bijectively to their algebraic counterparts, much as its progenitor for representables relates geometric cubes to their algebraic counterparts in a cubical set. This characterization is used to give a formulation of uniform Kan fibrations in which uniformity emerges as naturality in the additional dimensions.
Homotopy type theory is a new branch of mathematics, based on a recently discovered connection between homotopy theory and type theory, which brings new ideas into the very foundation of mathematics. On the one hand, Voevodsky's subtle and beautiful "univalence axiom" implies that isomorphic structures can be identified. On the other hand, "higher inductive types" provide direct, logical descriptions of some of the basic spaces and constructions of homotopy theory. Both are impossible to capture directly in classical set-theoretic foundations, but when combined in homotopy type theory, they permit an entirely new kind of "logic of homotopy types". This suggests a new conception of foundations of mathematics, with intrinsic homotopical content, an "invariant" conception of the objects of mathematics -- and convenient machine implementations, which can serve as a practical aid to the working mathematician. This book is intended as a first systematic exposition of the basics of the resulting "Univalent Foundations" program, and a collection of examples of this new style of reasoning -- but without requiring the reader to know or learn any formal logic, or to use any computer proof assistant.
An interface is a contract that specifies the rights of a client and the responsibilities of an implementor. Being a specification of behavior, an interface is a type. In principle any type may serve as an interface, but in practice it is usual to structure code into modules consisting of separable and reusable components. An interface specifies the behavior of a module expected by a client and imposed on the implementor. It is the fulcrum on which is balanced the tension between separability and integration. As a rule, a module should have a well-defined behavior that can be understood separately, but it is equally important that it be easy to combine modules to form an integrated whole.
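To fix ideas, here is a minimal sketch of the interface-as-type view in OCaml (our illustration; the names QUEUE and ListQueue are hypothetical, not drawn from the text): the signature is the contract, and any structure matching it may be substituted for any other.

    (* The interface: a type for modules. Clients may rely only on this. *)
    module type QUEUE = sig
      type 'a t
      val empty : 'a t
      val enqueue : 'a -> 'a t -> 'a t
      val dequeue : 'a t -> ('a * 'a t) option
    end

    (* One implementor discharging the contract; the representation is hidden. *)
    module ListQueue : QUEUE = struct
      type 'a t = 'a list
      let empty = []
      let enqueue x q = q @ [x]
      let dequeue = function [] -> None | x :: q -> Some (x, q)
    end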
Modernized Algol, or ℒ{nat cmd ⇀}, is an imperative, block-structured programming language based on the classic language Algol. ℒ{nat cmd ⇀} may be seen as an extension to ℒ{nat ⇀} … Modernized Algol, or ℒ{nat cmd ⇀}, is an imperative, block-structured programming language based on the classic language Algol. ℒ{nat cmd ⇀} may be seen as an extension to ℒ{nat ⇀} with a new syntactic sort of commands that act on assignables by retrieving and altering their contents. Assignables are introduced by declaring them for use within a specified scope; this is the essence of block structure. Commands may be combined by sequencing and may be iterated by recursion.
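A loose OCaml rendering of block-structured assignables (ours, with a hypothetical helper dcl standing in for Modernized Algol's declaration form): the assignable exists only for the extent of the command in which it is declared.

    (* dcl allocates a fresh assignable, runs the body, and lets the
       assignable go out of scope afterwards; this is the essence of
       block structure. *)
    let dcl (init : 'a) (body : 'a ref -> 'b) : 'b =
      let a = ref init in
      body a

    (* A command built by sequencing and iteration, acting on the
       assignable acc by retrieving (!) and altering (:=) its contents. *)
    let sum_to n =
      dcl 0 (fun acc ->
        for i = 1 to n do
          acc := !acc + i
        done;
        !acc)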
This paper presents language techniques for applying memoization selectively. The techniques provide programmer control over equality, space usage, and identification of precise dependences so that memoization can be applied according to the needs of an application. Two key properties of the approach are that it admits an efficient implementation and yields programs whose performance can be analyzed using standard analysis techniques. We describe our approach in the context of a functional language called MFL and an implementation as a Standard ML library. The MFL language employs a modal type system to enable the programmer to express programs that reveal their true data dependences when executed. We prove that the MFL language is sound by showing that MFL programs yield the same result as they would with respect to a standard, non-memoizing semantics. The SML implementation cannot support the modal type system of MFL statically but instead employs run-time checks to ensure correct usage of primitives.
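The flavor of selective memoization can be suggested by a greatly simplified OCaml sketch (ours, not the paper's MFL or its SML library): the programmer-supplied key function plays the role of declaring which parts of the input the result truly depends on.

    (* memo caches results under a programmer-chosen key, so equality,
       space usage, and the recorded dependences are under the caller's
       control. *)
    let memo (key : 'a -> 'k) (f : 'a -> 'b) : 'a -> 'b =
      let table = Hashtbl.create 16 in
      fun x ->
        let k = key x in
        match Hashtbl.find_opt table k with
        | Some y -> y
        | None ->
            let y = f x in
            Hashtbl.add table k y;
            y

    (* Deliberately coarse key: results are reused across all lists of
       the same length, a dependence that MFL's type system would make
       explicit. *)
    let size_class = memo List.length (fun xs -> 2 * List.length xs)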

Commonly Cited References

This paper presents a type theory in which it is possible to directly manipulate n-dimensional cubes (points, lines, squares, cubes, etc.) based on an interpretation of dependent type theory in a cubical set model. This enables new ways to reason about identity types, for instance, function extensionality is directly provable in the system. Further, Voevodsky's univalence axiom is provable in this system. We also explain an extension with some higher inductive types like the circle and propositional truncation. Finally we provide semantics for this cubical type theory in a constructive meta-theory.
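For instance, writing $\mathsf{Path}$ for the cubical identity type and $i$ for a dimension variable, function extensionality has a one-line proof term (standard in this style of system, reproduced from the general literature rather than verbatim from the paper):

$$\mathsf{funExt} : \Big(\textstyle\prod_{x:A} \mathsf{Path}\;B\;(f\,x)\;(g\,x)\Big) \to \mathsf{Path}\;(A \to B)\;f\;g, \qquad \mathsf{funExt}\;p = \lambda i.\,\lambda x.\,p\,x\,i.$$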
The main way of analysing the complexity of a program is that of extracting and solving a recurrence that expresses its running time in terms of the size of its input. We develop a method that automatically extracts such recurrences from the syntax of higher-order recursive functional programs. The resulting recurrences, which are programs in a call-by-name language with recursion, explicitly compute the running time in terms of the size of the input. In order to achieve this in a uniform way that covers both call-by-name and call-by-value evaluation strategies, we use Call-by-Push-Value (CBPV) as an intermediate language. Finally, we use domain theory to develop a denotational cost semantics for the resulting recurrences.
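The first step of such an extraction, making the running time explicit, can be pictured with a small writer-monad instrumentation in OCaml (our illustration; the paper itself works with CBPV and a call-by-name recurrence language):

    (* A program denotes a value paired with a cost; bind adds costs,
       and step charges one unit of time. *)
    type 'a cost = { value : 'a; cost : int }
    let ret v = { value = v; cost = 0 }
    let bind m f =
      let r = f m.value in
      { value = r.value; cost = m.cost + r.cost }
    let step m = { m with cost = m.cost + 1 }

    (* Instrumented length: its cost component computes the recurrence
       T(0) = 0, T(n+1) = 1 + T(n). *)
    let rec length = function
      | [] -> ret 0
      | _ :: xs -> step (bind (length xs) (fun n -> ret (n + 1)))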
We give a denotational semantics for a region-based effect system that supports type abstraction in the sense that only externally visible effects need to be tracked: non-observable internal modifications, such as the reorganisation of a search tree or lazy initialisation, can count as 'pure' or 'read only'. This 'fictional purity' allows clients of a module to validate soundly more effect-based program equivalences than would be possible with previous semantics. Our semantics uses a novel variant of logical relations that maps types not merely to partial equivalence relations on values, as is commonly done, but rather to a proof-relevant generalisation thereof, namely setoids. The objects of a setoid establish that values inhabit semantic types, whilst its morphisms are understood as proofs of semantic equivalence. The transition to proof-relevance solves two awkward problems caused by naïve use of existential quantification in Kripke logical relations, namely failure of admissibility and spurious functional dependencies.
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh "synthetic" take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a type-theoretic basis. Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be non-trivial evidence witnessing the relatedness of two programs—simplifying the metatheory of strong sums over the collection of types, for although there can be no "relation classifying relations," one easily accommodates a "family classifying small families." Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
Cubical type theory provides a constructive justification to certain aspects of homotopy type theory such as Voevodsky's univalence axiom. This makes many extensionality principles, like function and propositional extensionality, directly provable in the theory. This paper describes a constructive semantics, expressed in a presheaf topos with suitable structure inspired by cubical sets, of some higher inductive types. It also extends cubical type theory by a syntax for the higher inductive types of spheres, torus, suspensions, truncations, and pushouts. All of these types are justified by the semantics and have judgmental computation rules for all constructors, including the higher dimensional ones, and the universes are closed under these type formers.
Homotopy type theory proposes higher inductive types (HITs) as a means of defining and reasoning about inductively-generated objects with higher-dimensional structure. As with the univalence axiom, however, homotopy type theory does not specify the computational behavior of HITs. Computational interpretations have now been provided for univalence and specific HITs by way of cubical type theories, which use a judgmental infrastructure of dimension variables. We extend the cartesian cubical computational type theory introduced by Angiuli et al. with a schema for indexed cubical inductive types (CITs), an adaptation of higher inductive types to the cubical setting. In doing so, we isolate the canonical values of a cubical inductive type and prove a canonicity theorem with respect to these values.
Reynolds' original theory of relational parametricity was intended to capture the observation that polymorphically typed System F programs preserve all relations between inputs. But as Reynolds himself later showed, his theory can only be formulated in a metatheory with an impredicative universe, such as the Calculus of Inductive Constructions. A number of more abstract treatments of relational parametricity have since appeared; however, as we show, none of these seem to express Reynolds' original theory in a satisfactory way.
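Reynolds' observation, schematically: for a closed System F term $f : \forall \alpha.\, \alpha \to \alpha$ and any relation $R \subseteq A \times B$,

$$(a, b) \in R \;\Longrightarrow\; (f_A\,a,\ f_B\,b) \in R,$$

and instantiating $R$ as a singleton $\{(a_0, \star)\}$ already forces $f_A\,a_0 = a_0$, so the only such $f$ is the polymorphic identity.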
This is the third in a series of papers extending Martin-Löf's meaning explanations of dependent type theory to a Cartesian cubical realizability framework that accounts for higher-dimensional types. We extend this framework to include a cumulative hierarchy of univalent Kan universes of Kan types, exact equality and other pretypes lacking Kan structure, and a cumulative hierarchy of pretype universes. As in Parts I and II, the main result is a canonicity theorem stating that closed terms of boolean type evaluate to either true or false. This establishes the computational interpretation of Cartesian cubical higher type theory based on cubical programs equipped with a deterministic operational semantics.
This paper presents a novel connection between homotopical algebra and mathematical logic. It is shown that a form of intensional type theory is valid in any Quillen model category, generalizing the Hofmann-Streicher groupoid model of Martin-Löf type theory.
This is the second in a series of papers extending Martin-Löf's meaning explanation of dependent type theory to account for higher-dimensional types. We build on the cubical realizability framework for simple types developed in Part I, and extend it to a meaning explanation of dependent higher-dimensional type theory. This extension requires generalizing the computational Kan condition given in Part I, and considering the action of type families on paths. We define identification types, which classify identifications (paths) in a type, and dependent function and pair types. The main result is a canonicity theorem, which states that a closed term of boolean type evaluates to either true or false. This result establishes the first computational interpretation of higher dependent type theory by giving a deterministic operational semantics for its programs, including operations that realize the Kan condition.
A central method for analyzing the asymptotic complexity of a functional program is to extract and then solve a recurrence that expresses evaluation cost in terms of input size. The relevant notion of input size is often specific to a datatype, with measures including the length of a list, the maximum element in a list, and the height of a tree. In this work, we give a formal account of the extraction of cost and size recurrences from higher-order functional programs over inductive datatypes. Our approach allows a wide range of programmer-specified notions of size, and ensures that the extracted recurrences correctly predict evaluation cost. To extract a recurrence from a program, we first make costs explicit by applying a monadic translation from the source language to a complexity language, and then abstract datatype values as sizes. Size abstraction can be done semantically, working in models of the complexity language, or syntactically, by adding rules to a preorder judgement. We give several different models of the complexity language, which support different notions of size. Additionally, we prove by a logical relations argument that recurrences extracted by this process are upper bounds for evaluation cost; the proof is entirely syntactic and therefore applies to all of the models we consider.
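To see the shape of the output (our toy instance, not one of the paper's examples): abstracting a list to its length, the quadratic reverse $\mathit{rev}\,(x{::}xs) = \mathit{rev}\,xs \mathbin{@} [x]$ extracts to the cost recurrence

$$T(0) = c_0, \qquad T(n+1) = T(n) + c_1 (n+1), \qquad\text{whence}\qquad T(n) \in \Theta(n^2),$$

where the size measure "length" is only one of the many notions of size the framework permits.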
We present the topos S of trees as a model of guarded recursion. We study the internal dependently-typed higher-order logic of S and show that S models two modal operators, on predicates and types, which serve as guards in recursive definitions of terms, predicates, and types. In particular, we show how to solve recursive type equations involving dependent types. We propose that the internal logic of S provides the right setting for the synthetic construction of abstract versions of step-indexed models of programming languages and program logics. As an example, we show how to construct a model of a programming language with higher-order store and recursive types entirely inside the internal logic of S.
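In particular, writing ${\blacktriangleright}$ for the "later" modality on types, the topos validates the guarded fixed-point principle (our summary of the standard formulation)

$$\mathsf{fix} : ({\blacktriangleright} A \to A) \to A, \qquad \mathsf{fix}\,f = f\,(\mathsf{next}\,(\mathsf{fix}\,f)),$$

which is what makes recursive definitions of terms, predicates, and types well-defined.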
This is the fourth in a series of papers extending Martin-Löf's meaning explanation of dependent type theory to higher-dimensional types. In this installment, we show how to define cubical type systems supporting a general schema of indexed cubical inductive types whose constructors may take dimension parameters and have a specified boundary. Using this schema, we are able to specify and implement many of the higher inductive types which have been postulated in homotopy type theory, including homotopy pushouts, the torus, $W$-quotients, truncations, and arbitrary localizations. By including indexed inductive types, we enable the definition of identity types. The addition of higher inductive types makes computational higher type theory a model of homotopy type theory, capable of interpreting almost all of the constructions in the HoTT Book (with the exception of inductive-inductive types). This is the first such model with an explicit canonicity theorem, which specifies the canonical values of higher inductive types and confirms that every term in an inductive type evaluates to such a value.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. Our normalization result is reduction-free, in the sense of yielding a bijection between equivalence classes of terms in context and a tractable language of $\beta/\eta$-normal forms. As corollaries we obtain both decidability of judgmental equality and the injectivity of type constructors.
This paper introduces an expressive class of indexed quotient-inductive types, called QWI types, within the framework of constructive type theory. They are initial algebras for indexed families of equational theories with possibly infinitary operators and equations. We prove that QWI types can be derived from quotient types and inductive types in the type theory of toposes with natural number object and universes, provided those universes satisfy the Weakly Initial Set of Covers (WISC) axiom. We do so by constructing QWI types as colimits of a family of approximations to them defined by well-founded recursion over a suitable notion of size, whose definition involves the WISC axiom. We developed the proof and checked it using the Agda theorem prover.
Brouwer's constructivist foundations of mathematics is based on an intuitively meaningful notion of computation shared by all mathematicians. Martin-Löf's meaning explanations for constructive type theory define the concept of a type in terms of computation. Briefly, a type is a complete (closed) program that evaluates to a canonical type whose members are complete programs that evaluate to canonical elements of that type. The explanation is extended to incomplete (open) programs by functionality: types and elements must respect equality in their free variables. Equality is evidence-free---two types or elements are at most equal---and equal things are implicitly interchangeable in all contexts. Higher-dimensional type theory extends type theory to account for identifications of types and elements. An identification witnesses that two types or elements are explicitly interchangeable in all contexts by an explicit transport, or coercion, operation. There must be sufficiently many identifications, which is ensured by imposing a generalized form of the Kan condition from homotopy theory. Here we provide a Martin-Löf-style meaning explanation of simple higher-dimensional type theory based on a programming language that includes Kan-like constructs witnessing the computational meaning of the higher structure of types. The treatment includes an example of a higher inductive type (namely, the 1-dimensional sphere) and an example of Voevodsky's univalence principle, which identifies equivalent types. The main result is a computational canonicity theorem that validates the computational interpretation: a closed boolean expression must always evaluate to a boolean value, even in the presence of higher-dimensional structure. This provides the first fully computational formulation of higher-dimensional type theory.
We present a model of type theory with dependent product, sum, and identity, in cubical sets. We describe a universe and explain how to transform an equivalence between two types into an equality. We also explain how to model propositional truncation and the circle. While not expressed internally in type theory, the model is expressed in a constructive metalogic. Thus it is a step towards a computational interpretation of Voevodsky's Univalence Axiom.
We describe a homotopical version of the relational and gluing models of type theory, and generalize it to inverse diagrams and oplax limits. Our method uses the Reedy homotopy theory on inverse diagrams, and relies on the fact that Reedy fibrant diagrams correspond to contexts of a certain shape in type theory. This has two main applications. First, by considering inverse diagrams in Voevodsky's univalent model in simplicial sets, we obtain new models of univalence in a number of (∞, 1)-toposes; this answers a question raised at the Oberwolfach workshop on homotopical type theory. Second, by gluing the syntactic category of univalent type theory along its global sections functor to groupoids, we obtain a partial answer to Voevodsky's homotopy-canonicity conjecture: in 1-truncated type theory with one univalent universe of sets, any closed term of natural number type is homotopic to a numeral.
The homotopical approach to intensional type theory views proofs of equality as paths. We explore what is required of an interval-like object I in a topos to give a model of type theory in which elements of identity types are functions with domain I. Cohen, Coquand, Huber and Mortberg give such a model using a particular category of presheaves. We investigate the extent to which their model construction can be expressed in the internal type theory of any topos and identify a collection of quite weak axioms for this purpose. This clarifies the definition and properties of the notion of uniform Kan filling that lies at the heart of their constructive interpretation of Voevodsky's univalence axiom. Furthermore, since our axioms can be satisfied in a number of different ways, we show that there is a range of topos-theoretic models of homotopy type theory in this style.
The goal of this thesis is to prove that π4(S3) ≃ Z/2Z in homotopy type theory. In particular it is a constructive and purely homotopy-theoretic proof. We first recall the basic concepts of homotopy type theory, and we prove some well-known results about the homotopy groups of spheres: the computation of the homotopy groups of the circle, the triviality of those of the form πk(Sn) with k < n, and the construction of the Hopf fibration. We then move to more advanced tools. In particular, we define the James construction which allows us to prove the Freudenthal suspension theorem and the fact that there exists a natural number n such that π4(S3) ≃ Z/nZ. Then we study the smash product of spheres, we construct the cohomology ring of a space, and we introduce the Hopf invariant, allowing us to narrow down the n to either 1 or 2. The Hopf invariant also allows us to prove that all the groups of the form π4n−1(S2n) are infinite. Finally we construct the Gysin exact sequence, allowing us to compute the cohomology of CP2 and to prove that π4(S3) ≃ Z/2Z and that more generally πn+1(Sn) ≃ Z/2Z for every n ≥ 3.
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type theoretic proof assistants. We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist's method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid's algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
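For a sense of the first case study: the mechanized bound for Euclid's algorithm refines the textbook recurrence (our restatement in ordinary notation, not calf syntax)

$$T(m, 0) = 0, \qquad T(m, n) = 1 + T(n,\ m \bmod n) \quad (n > 0),$$

whose solution is $O(\log \min(m, n))$ division steps.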
We implement in the formal language of homotopy type theory a new set of axioms called cohesion. Then we indicate how the resulting cohesive homotopy type theory naturally serves as a formal foundation for central concepts in quantum gauge field theory. This is a brief survey of work by the authors developed in detail elsewhere.
Nominal abstract syntax is an approach to representing names and binding pioneered by Gabbay and Pitts. So far nominal techniques have mostly been studied using classical logic or model theory, not type theory. Nominal extensions to simple, dependent and ML-like polymorphic languages have been studied, but decidability and normalization results have only been established for simple nominal type theories. We present a LF-style dependent type theory extended with name-abstraction types, prove soundness and decidability of beta-eta-equivalence checking, discuss adequacy and canonical forms via an example, and discuss extensions such as dependently-typed recursion and induction principles.
Homotopy type theory is a new branch of mathematics, based on a recently discovered connection between homotopy theory and type theory, which brings new ideas into the very foundation of mathematics. On the one hand, Voevodsky's subtle and beautiful "univalence axiom" implies that isomorphic structures can be identified. On the other hand, "higher inductive types" provide direct, logical descriptions of some of the basic spaces and constructions of homotopy theory. Both are impossible to capture directly in classical set-theoretic foundations, but when combined in homotopy type theory, they permit an entirely new kind of "logic of homotopy types". This suggests a new conception of foundations of mathematics, with intrinsic homotopical content, an "invariant" conception of the objects of mathematics -- and convenient machine implementations, which can serve as a practical aid to the working mathematician. This book is intended as a first systematic exposition of the basics of the resulting "Univalent Foundations" program, and a collection of examples of this new style of reasoning -- but without requiring the reader to know or learn any formal logic, or to use any computer proof assistant.
Higher inductive types are a class of type-forming rules, introduced to provide basic (and not-so-basic) homotopy-theoretic constructions in a type-theoretic style. They have proven very fruitful for the "synthetic" development of homotopy theory within type theory, as well as in formalizing ordinary set-level mathematics in type theory. In this article, we construct models of a wide range of higher inductive types in a fairly wide range of settings. We introduce the notion of cell monad with parameters: a semantically-defined scheme for specifying homotopically well-behaved notions of structure. We then show that any suitable model category has *weakly stable typal initial algebras* for any cell monad with parameters. When combined with the local universes construction to obtain strict stability, this specializes to give models of specific higher inductive types, including spheres, the torus, pushout types, truncations, the James construction, and general localisations. Our results apply in any sufficiently nice Quillen model category, including any right proper, simplicially locally cartesian closed, simplicial Cisinski model category (such as simplicial sets) and any locally presentable locally cartesian closed category (such as sets) with its trivial model structure. In particular, any locally presentable locally cartesian closed $(\infty,1)$-category is presented by some model category to which our results apply.
We define a computational type theory combining the contentful equality structure of cartesian cubical type theory with internal parametricity primitives. The combined theory supports both univalence and its relational equivalent, which we call relativity. We demonstrate the use of the theory by analyzing polymorphic functions between higher inductive types, and we give an account of the identity extension lemma for internal parametricity.
In recent years, we have seen several new models of dependent type theory extended with some form of modal necessity operator, including nominal type theory, guarded and clocked type theory, and spatial and cohesive type theory. In this paper, we study modal dependent type theory: dependent type theory with an operator satisfying (a dependent version of) the K axiom of modal logic. We investigate both semantics and syntax. For the semantics, we introduce categories with families with a dependent right adjoint (CwDRA) and show that the examples above can be presented as such. Indeed, we show that any category with finite limits and an adjunction of endofunctors gives rise to a CwDRA via the local universe construction. For the syntax, we introduce a dependently typed extension of Fitch-style modal λ-calculus, show that it can be interpreted in any CwDRA, and build a term model. We extend the syntax and semantics with universes.
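For orientation, the simply typed shape of the K axiom in question is

$$\Box (A \to B) \to \Box A \to \Box B,$$

whose dependent generalization (where $B$ may depend on the boxed variable) is what the paper's CwDRA semantics interprets.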
Homotopy theory can be developed synthetically in homotopy type theory, using types to describe spaces, the identity type to describe paths in a space, and iterated identity types to describe higher-dimensional paths. While some aspects of homotopy theory have been developed synthetically and formalized in proof assistants, some seemingly easy examples have proved difficult because the required manipulations of paths become complicated. In this paper, we describe a cubical approach to developing homotopy theory within type theory. The identity type is complemented with higher-dimensional cube types, such as a type of squares, dependent on four points and four lines, and a type of three-dimensional cubes, dependent on the boundary of a cube. Path-over-a-path types and higher generalizations are used to describe cubes in a fibration over a cube in the base. These higher-dimensional cube and path-over types can be defined from the usual identity type, but isolating them as independent conceptual abstractions has allowed for the formalization of some previously difficult examples.
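Schematically (our notation), the square type mentioned here depends on four points and four lines: given $a_{00}, a_{01}, a_{10}, a_{11} : A$ with paths $p : a_{00} = a_{01}$, $q : a_{10} = a_{11}$, $r : a_{00} = a_{10}$, and $s : a_{01} = a_{11}$, there is a type $\mathsf{Square}\;p\;q\;r\;s$ whose elements are fillers of the square with that boundary.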
In this note we show that Voevodsky's univalence axiom holds in the model of type theory based on cubical sets as described in Bezem et al. (in: Matthes and Schubert (eds.) 19th international conference on types for proofs and programs (TYPES 2013), Leibniz international proceedings in informatics (LIPIcs), Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, Germany, vol 26, pp 107–128, 2014. https://doi.org/10.4230/LIPIcs.TYPES.2013.107. http://drops.dagstuhl.de/opus/volltexte/2014/4628) and Huber (A model of type theory in cubical sets. Licentiate thesis, University of Gothenburg, 2015). We will also discuss Swan's construction of the identity type in this variation of cubical sets. This proves that we have a model of type theory supporting dependent products, dependent sums, univalent universes, and identity types with the usual judgmental equality, and this model is formulated in a constructive metatheory.
Logical relations and their generalisations are a fundamental tool in proving properties of lambda calculi, for example, for yielding sound principles for observational equivalence. We propose a natural notion of logical relations that is able to deal with the monadic types of Moggi's computational lambda calculus. The treatment is categorical, and is based on notions of subsconing, mono factorisation systems and monad morphisms. Our approach has a number of interesting applications, including cases for lambda calculi with non-determinism (where being in a logical relation means being bisimilar), dynamic name creation and probabilistic systems.
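At function types the notion specializes to the familiar clause (standard, not specific to this paper)

$$R_{A \to B}(f, f') \iff \forall a, a'.\; R_A(a, a') \Rightarrow R_B(f\,a,\ f'\,a'),$$

and the point of the subsconing apparatus is to supply the analogous clause at monadic types $T A$ in a uniform, categorical way.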
We contribute a general apparatus for dependent tactic-based proof refinement in the LCF tradition, in which the statements of subgoals may express a dependency on the proofs of other subgoals; this form of dependency is extremely useful and can serve as an algorithmic alternative to extensions of LCF based on non-local instantiation of schematic variables. Additionally, we introduce a novel behavioral distinction between refinement rules and tactics based on naturality. Our framework, called Dependent LCF, is already deployed in the nascent RedPRL proof assistant for computational cubical type theory.
In homotopy type theory we can define the join of maps as a binary operation on maps with a common co-domain. This operation is commutative, associative, and the unique map from the empty type into the common codomain is a neutral element. Moreover, we show that the idempotents of the join of maps are precisely the embeddings, and we prove the `join connectivity theorem', which states that the connectivity of the join of maps equals the join of the connectivities of the individual maps. We define the image of a map $f:A\to X$ in $U$ via the join construction, as the colimit of the finite join powers of $f$. The join powers therefore provide approximations of the image inclusion, and the join connectivity theorem implies that the approximating maps into the image increase in connectivity. A modified version of the join construction can be used to show that for any map $f:A\to X$ in which $X$ is only assumed to be locally small, the image is a small type. We use the modified join construction to give an alternative construction of set-quotients, the Rezk completion of a precategory, and we define the $n$-truncation for any $n:\mathbb{N}$. Thus we see that each of these are definable operations on a univalent universe for Martin-Löf type theory with a natural numbers object, that is moreover closed under homotopy coequalizers.
It is known that one can construct non-parametric functions by assuming classical axioms. Our work is a converse to that: we prove classical axioms in dependent type theory assuming specific instan ...
Haskell, as implemented in the Glasgow Haskell Compiler (GHC), has been adding new type-level programming features for some time. Many of these features---chiefly: generalized algebraic datatypes (GADTs), type families, kind polymorphism, and promoted datatypes---have brought Haskell to the doorstep of dependent types. Many dependently typed programs can even currently be encoded, but often the constructions are painful. In this dissertation, I describe Dependent Haskell, which supports full dependent types via a backward-compatible extension to today's Haskell. An important contribution of this work is an implementation, in GHC, of a portion of Dependent Haskell, with the rest to follow. The features I have implemented are already released, in GHC 8.0. This dissertation contains several practical examples of Dependent Haskell code, a full description of the differences between Dependent Haskell and today's Haskell, a novel type-safe dependently typed lambda-calculus (called Pico) suitable for use as an intermediate language for compiling Dependent Haskell, and a type inference and elaboration algorithm, Bake, that translates Dependent Haskell to type-correct Pico.
A higher inductive type of level 1 (a 1-hit) has constructors for points and paths only, whereas a higher inductive type of level 2 (a 2-hit) has constructors for surfaces too. We restrict attention to finitary higher inductive types and present general schemata for the types of their point, path, and surface constructors. We also derive the elimination and equality rules from the types of constructors for 1-hits and 2-hits. Moreover, we construct a groupoid model for dependent type theory with 2-hits and point out that we obtain a setoid model for dependent type theory with 1-hits by truncating the groupoid model.
In this paper we construct new categorical models for the identity types of Martin-Löf type theory, in the categories Top of topological spaces and SSet of simplicial sets. We do so building on earlier work of Awodey and Warren [2009], which has suggested that a suitable environment for the interpretation of identity types should be a category equipped with a weak factorization system in the sense of Bousfield--Quillen. It turns out that this is not quite enough for a sound model, due to some subtle coherence issues concerned with stability under substitution; and so our first task is to introduce a slightly richer structure, which we call a homotopy-theoretic model of identity types, and to prove that this is sufficient for a sound interpretation. Now, although both Top and SSet are categories endowed with a weak factorization system---and indeed, an entire Quillen model structure---exhibiting the additional structure required for a homotopy-theoretic model is quite hard to do. However, the categories we are interested in share a number of common features, and abstracting these leads us to introduce the notion of a path object category. This is a relatively simple axiomatic framework, which is nonetheless sufficiently strong to allow the construction of homotopy-theoretic models. Now by exhibiting suitable path object structures on Top and SSet, we endow those categories with the structure of a homotopy-theoretic model and, in this way, obtain the desired topological and simplicial models of identity types.
Higher inductive types (HITs) in homotopy type theory are a powerful generalization of inductive types. Not only can they have ordinary constructors to define elements, but also higher constructors to define equalities (paths). We say that a HIT H is non-recursive if its constructors do not quantify over elements or paths in H. The advantage of non-recursive HITs is that their elimination principles are easier to apply than those of general HITs.
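The standard example (ours, for contrast): the circle is non-recursive, with constructors

$$\mathsf{base} : S^1, \qquad \mathsf{loop} : \mathsf{base} =_{S^1} \mathsf{base},$$

since $\mathsf{loop}$ mentions $\mathsf{base}$ but quantifies over nothing in $S^1$, whereas the propositional truncation $\lVert A\rVert$ is recursive, its path constructor $\prod_{x,y:\lVert A\rVert} x = y$ quantifying over elements of $\lVert A\rVert$.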
In homotopy type theory, we construct the propositional truncation as a colimit, using only non-recursive higher inductive types (HITs). This is a first step towards reducing recursive HITs to non-recursive HITs. This construction gives a characterization of functions from the propositional truncation to an arbitrary type, extending the universal property of the propositional truncation. We have fully formalized all the results in a new proof assistant, Lean.