catlog/tt/mod.rs
//! Type theory for models of a double category
//!
//! This is developer documentation for those who wish to learn more about how
//! doublett works "under the hood" -- it expects that you already know the basics
//! of how to write double models and morphisms between them using doublett.
//!
//! To a first approximation, the implementation of doublett is a standard dependent
//! type theory implementation following Coquand's algorithm, and so we start by
//! explaining what that looks like.
//!
//! # Basics of Normalization by Evaluation
//!
//! There are four data structures that form the core of this implementation, which
//! may be arranged in a 2x2 grid:
//!
//! |      | Syntax | Value |
//! |------|--------|-------|
//! | Term | [TmS]  | [TmV] |
//! | Type | [TyS]  | [TyV] |
//!
//! Evaluation is the process of going from syntax to values. Evaluation is used to
//! normalize *types*. We need to normalize types because there are many different
//! syntaxes which may produce the same type. For instance, `b` and `x.a`, where
//! `x : [ a : @sing b ]`, are both the same type. Because types may depend on terms,
//! we must also normalize terms. (Later we will see that we don't have to normalize
//! *all* terms, but in a vanilla dependent type theory implementation one does
//! have to normalize all terms, and as we are reviewing the basics here, we will
//! stick to that assumption.)
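//!
//! Revisiting the `x.a` example just given, here is how that normalization would
//! look at the doublett toplevel, using the `norm` command that appears later in
//! this documentation (an illustrative transcript with an assumed context, not
//! verbatim tool output):
//!
//! ```text
//! norm [b: Entity, x: [ a : @sing b ]] x.a
//! /# result: b
//! ```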
//!
//! An evaluator for the untyped lambda calculus would look like the following code,
//! written in a "Rust with a GC" pseudo-syntax.
//!
//! ```ignore
//! type BwdIdx = usize;
//!
//! enum TmS {
//!     // a de Bruijn index into the environment
//!     Var(BwdIdx),
//!     App(TmS, TmS),
//!     Lam(TmS),
//! }
//!
//! type Env = Bwd<Closure>;
//!
//! struct Closure {
//!     env: Env,
//!     body: TmS,
//! }
//!
//! fn eval(env: Env, tm_s: TmS) -> Closure {
//!     match tm_s {
//!         TmS::Var(i) => env.lookup(i),
//!         TmS::App(f, x) => {
//!             let fv = eval(env, f);
//!             let xv = eval(env, x);
//!             // enter the closure with its captured environment,
//!             // extended by the argument
//!             eval(fv.env.snoc(xv), fv.body)
//!         }
//!         // a lambda simply captures the current environment
//!         TmS::Lam(body) => Closure { env, body },
//!     }
//! }
//! ```
//!
//! This is a "closed" evaluator for the lambda calculus, which means that a value is
//! *always* a closure. What we need for type theory is an "open" evaluator, in which
//! a value is either a closure or a *neutral term*: a variable applied to a spine of
//! arguments. This permits us to normalize *past a binding*. For instance, we can
//! normalize `λ x. (λ x. x) x` to `λ x. x`. This looks like the following:
//!
//! ```ignore
//! type FwdIdx = usize;
//!
//! enum TmV {
//!     // a neutral term: a variable f applied to arguments a₁ ... aₙ
//!     Neu(FwdIdx, Bwd<TmV>),
//!     Clo(Closure),
//! }
//!
//! impl TmV {
//!     fn app(self, arg: TmV) -> TmV {
//!         match self {
//!             // applying a neutral term just extends its spine
//!             TmV::Neu(head, args) => TmV::Neu(head, args.snoc(arg)),
//!             TmV::Clo(clo) => eval(clo.env.snoc(arg), clo.body),
//!         }
//!     }
//! }
//!
//! type Env = Bwd<TmV>;
//!
//! fn eval(env: Env, tm_s: TmS) -> TmV {
//!     match tm_s {
//!         TmS::Var(i) => env.lookup(i),
//!         TmS::App(f, x) => {
//!             let fv = eval(env, f);
//!             let xv = eval(env, x);
//!             fv.app(xv)
//!         }
//!         TmS::Lam(body) => TmV::Clo(Closure { env, body }),
//!     }
//! }
//!
//! fn quote(scope_len: usize, tm_v: TmV) -> TmS {
//!     match tm_v {
//!         // convert the forward index (level) back to a backward index,
//!         // then rebuild the applications, quoting each argument
//!         TmV::Neu(f, xs) => xs
//!             .iter()
//!             .fold(TmS::Var(scope_len - f - 1), |acc, x| {
//!                 TmS::App(acc, quote(scope_len, x))
//!             }),
//!         TmV::Clo(clo) => {
//!             // apply the closure to a fresh variable, then quote the result
//!             let x_v = TmV::Neu(scope_len, Bwd::Nil);
//!             let body_v = eval(clo.env.snoc(x_v), clo.body);
//!             TmS::Lam(quote(scope_len + 1, body_v))
//!         }
//!     }
//! }
//! ```
//!
//! The normalization procedure is achieved by evaluating then quoting.
//!
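//! As a minimal sketch in the same assumed pseudo-syntax, with a worked run of
//! the `λ x. (λ x. x) x` example from above:
//!
//! ```ignore
//! /// Normalize a term: evaluate it to a value, then read the value back
//! /// into syntax.
//! fn normalize(scope_len: usize, env: Env, tm_s: TmS) -> TmS {
//!     quote(scope_len, eval(env, tm_s))
//! }
//!
//! // With de Bruijn indices, λ x. (λ x. x) x is Lam(App(Lam(Var(0)), Var(0))),
//! // and its normal form is Lam(Var(0)), i.e. λ x. x.
//! let tm = TmS::Lam(TmS::App(TmS::Lam(TmS::Var(0)), TmS::Var(0)));
//! assert_eq!(normalize(0, Bwd::Nil, tm), TmS::Lam(TmS::Var(0)));
//! ```
//!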
//! In a way, evaluation is kind of like substitution, in that it replaces variables
//! with values. However, the key difference is that evaluation creates closures
//! that capture their environment, while substitution does not.
//!
//! Anyway, that's the basics of normalization by evaluation. We now discuss how
//! the theory implemented in this module differs from a typical dependent type theory.
//!
//! # Categories with families, and indexed simple categories with families
//!
//! DoubleTT is the internal language of a category formed by a Grothendieck construction
//! of an indexed category. To understand what this means, we will quickly review
//! the categories-with-families approach to type theory.
//!
//! A category with families can be described as a big tuple with the following fields.
//! We write this in a quasi-Agda syntax, with implicit variables.
//!
//! ```text
//! -- Contexts
//! Con : Set
//! -- Substitutions
//! _⇒_ : Con → Con → Set
//! -- Types
//! Ty : Con → Set
//! -- Terms
//! Tm : (Γ : Con) → Ty Γ → Set
//!
//! variable Γ Δ : Con
//! variable A : Ty _
//!
//! -- Action of substitutions on types
//! _[_]ty : Ty Δ → Γ ⇒ Δ → Ty Γ
//! -- Action of substitutions on terms
//! _[_]tm : Tm Δ A → (γ : Γ ⇒ Δ) → Tm Γ (A [ γ ]ty)
//!
//! -- Empty context
//! · : Con
//! -- Context with another variable added
//! _▷_ : (Γ : Con) → Ty Γ → Con
//!
//! -- Producing a substitution
//! _,_ : (γ : Γ ⇒ Δ) → Tm Γ (A [ γ ]ty) → Γ ⇒ (Δ ▷ A)
//! -- The weakening substitution
//! p : (Γ ▷ A) ⇒ Γ
//! -- Accessing the variable at the end of the context
//! q : Tm (Γ ▷ A) (A [ p ]ty)
//!
//! -- a bunch of laws
//! ...
//! ```
//!
//! A category with families models the basic judgment structure of dependent type theory,
//! that is, the dependencies between contexts, substitutions, types, and terms.
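//!
//! To give a flavor of the omitted laws: two standard computation rules for
//! context extension (valid in any category with families, and using the
//! composition of substitutions, which is among the omitted fields) say that
//! extending a substitution and then projecting recovers the pieces:
//!
//! ```text
//! -- weakening after extension discards the added term
//! p ∘ (γ , a) = γ
//! -- the last variable after extension looks up the added term
//! q [ (γ , a) ]tm = a
//! ```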
//!
//! The type theory behind doublett is given by a category with families that arises
//! as the Grothendieck construction of an indexed simple category with families. This can
//! be presented in the following way.
//!
//! ```text
//! Con₀ : Set
//! _⇒₀_ : Con₀ → Con₀ → Set
//! -- Note that types don't depend on contexts anymore
//! Ty₀ : Set
//! Tm₀ : Con₀ → Ty₀ → Set
//!
//! _[_]tm₀ : Tm₀ Δ A → Γ ⇒₀ Δ → Tm₀ Γ A
//!
//! ·₀ : Con₀
//! _▷₀_ : Con₀ → Ty₀ → Con₀
//!
//! _,_ : (γ : Γ ⇒₀ Δ) → Tm₀ Γ A → Γ ⇒₀ (Δ ▷₀ A)
//! p : (Γ ▷₀ A) ⇒₀ Γ
//! q : Tm₀ (Γ ▷₀ A) A
//!
//! -- a bunch of laws
//! ...
//!
//! Con₁ : Con₀ → Set
//! _⇒₁_ : Con₁ Γ → Con₁ Γ → Set
//! Ty₁ : Con₀ → Set
//! Tm₁ : Con₁ Γ → Ty₁ Γ → Set
//!
//! variable Γ' : Con₁ Γ
//! variable Δ' Δ'' : Con₁ Δ
//! variable A' : Ty₁ _
//!
//! _[_]con₁ : Con₁ Δ → (Γ ⇒₀ Δ) → Con₁ Γ
//! _[_]⇒₁ : (Δ' ⇒₁ Δ'') → (γ : Γ ⇒₀ Δ) → (Δ' [ γ ]con₁) ⇒₁ (Δ'' [ γ ]con₁)
//! _[_]ty₁ : Ty₁ Δ → (Γ ⇒₀ Δ) → Ty₁ Γ
//! _[_]tm₁ : Tm₁ Δ' A' → (γ : Γ ⇒₀ Δ) → Tm₁ (Δ' [ γ ]con₁) (A' [ γ ]ty₁)
//!
//! -- context and substitution formation, etc.
//! ...
//! ```
//!
//! The idea for doublett is that the object types are elements of `Ty₀`, and the
//! morphism types (which depend on objects of certain object types) are elements of
//! `Ty₁ Γ` (where `Γ` is a `Con₀` which holds the relevant objects). This
//! separation nicely captures that we only care about a fairly limited form of
//! dependency; we don't need a general dependent type theory.
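//!
//! The `Graph` type defined in the Specializations section below illustrates this
//! split (reusing two of its field declarations here for the sake of example):
//!
//! ```text
//! E : Entity               -- the object type Entity is an element of Ty₀
//! src : (Id Entity)[E, V]  -- this morphism type is an element of Ty₁ Γ,
//!                          -- where Γ holds the objects E and V
//! ```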
//!
//! However, we want to freely intermix object generators and morphism generators in
//! a notebook. We produce a type theory which permits this via a Grothendieck construction.
//!
//! ```text
//! Con := (Γ₀ : Con₀) × Con₁ Γ₀
//! Γ ⇒ Δ := (γ₀ : Γ₀ ⇒₀ Δ₀) × (Γ₁ ⇒₁ (Δ₁ [ γ₀ ]con₁))
//! Ty Γ := (A₀ : Ty₀) × Ty₁ (Γ₀ ▷₀ A₀)
//! Tm Γ A := (a₀ : Tm₀ Γ₀ A₀) × Tm₁ Γ₁ (A₁ [ (id Γ₀ , a₀) ]ty₁)
//! ```
//!
//! We call this the "total" type theory, the name deriving from the "total space"
//! of a fibration.
//!
//! The internal language of this total type theory is dependent type theory, but
//! with a twist: no type may ever depend on the second component of a term.
//!
//! That is, if `F : A → Type` is a type family, and `a` and `a'` are two terms
//! of type `A` which differ only in their second component, then `F a` and `F a'`
//! must be the same type.
//!
//! This means that in the conversion checking apparatus, we don't need to keep
//! track of the values of morphisms; we represent any morphism as [TmV::Opaque].
//! Therefore, we don't need to worry about equality checking with respect to
//! any morphism equalities which we might want to impose.
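//!
//! As a simplified sketch (the real [TmV] has more variants and structure than
//! shown here), the value representation can afford a single contentless case for
//! all morphism-level terms:
//!
//! ```ignore
//! // Simplified sketch, not the actual definition of TmV.
//! enum TmV {
//!     // ... object-level values, which carry real computational
//!     // structure and participate in conversion checking ...
//!
//!     // A morphism-level term. Since no type can depend on it,
//!     // conversion checking never needs to inspect it.
//!     Opaque,
//! }
//! ```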
//!
//! So to recap, the implementation of doublett is similar to a normal
//! implementation of dependent type theory, except we take advantage of the way
//! doublett is built (as a Grothendieck construction) to avoid writing a normalizer
//! for morphism terms.
//!
//! # Specializations
//!
//! There's a third component to doublett, which we call "specialization." We can see
//! specialization at play in the following example. Let `Graph` and `Graph2` be
//! the following double model types.
//!
//! ```text
//! type Graph := [
//!     E : Entity,
//!     V : Entity,
//!     src : (Id Entity)[E, V],
//!     tgt : (Id Entity)[E, V],
//! ]
//! /# declared: Graph
//!
//! type Graph2 := [
//!     V : Entity,
//!     g1 : Graph & [ .V := V ],
//!     g2 : Graph & [ .V := V ]
//! ]
//! /# declared: Graph2
//! ```
//!
//! Then we can synthesize the type of `g.g1.V` in the context of a variable `g: Graph2`
//! via:
//!
//! ```text
//! syn [g: Graph2] g.g1.V
//! /# result: g.g1.V : @sing g.V
//! ```
//!
//! We can also normalize `g.g1.V` in the context of a variable `g: Graph2`:
//!
//! ```text
//! norm [g: Graph2] g.g1.V
//! /# result: g.V
//! ```
//!
//! This shows how the specialization `g1 : Graph & [ .V := V ]` influences the type
//! and normalization of the `.g1.V` field of a model of `Graph2`.
//!
//! Specialization is a way of creating *subtypes* of a record type by setting the
//! type of a field to be a subtype of its original type. So in order to understand
//! specialization, one must first understand subtyping. In doublett, there are
//! two ways to form subtypes. We write `A <: B` for "`A` is a subtype of `B`".
//!
//! 1. If `a : A`, then `@sing a <: A`.
//! 2. If `A` is a record type with a field `.x`, and `B` is a subtype of the type
//!    of `a.x` for a generic element `a : A`, then `A & [ .x : B ] <: A`. The
//!    notation `A & [ .x := y ]` is just syntactic sugar for `A & [ .x : @sing y ]`.
//!
//! Crucially, the type of `a.x` may depend on the values of other fields of `a`, so
//! it is important that the subtyping check is performed in the context of a *generic*
//! element of `A`. In the above case, the type of `g1.V` is `Entity` for a generic
//! `g1 : Graph`, and as `V : Entity`, we have `@sing V <: Entity` as required for
//! `Graph & [ .V := V ]` to be a well-formed type.
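//!
//! Spelling out the chain for the `Graph2` example:
//!
//! ```text
//! Graph & [ .V := V ]  desugars to  Graph & [ .V : @sing V ]
//! @sing V <: Entity                    -- by rule 1, since V : Entity
//! Graph & [ .V : @sing V ] <: Graph    -- by rule 2
//! ```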
//!
//! Mathematically speaking, one could think of specialization as adding a third
//! component to each of the definitions for `Con`, `⇒`, `Ty`, and `Tm` from before.
//! That is, we have a base component, an indexed component, and a specialization
//! component. In the style of presentation that we were using before, this looks
//! like the following:
//!
//! ```text
//! -- a specialization of a type in a context
//! Spcl : Con₀ → Ty₀ → Set
//! -- whether a term satisfies a specialization
//! Sat : Tm₀ Γ A → Spcl Γ A → Prop
//!
//! -- s is a subspecialization of s' if all terms satisfying s also satisfy s'
//! Sub : Spcl Γ A → Spcl Γ A → Prop
//! Sub s s' := ∀ (a : Tm₀ Γ A) . Sat a s → Sat a s'
//!
//! Sing : Tm₀ Γ A → Spcl Γ A
//! Elt : (a : Tm₀ Γ A) → Sat a (Sing a)
//!
//! SCon : Con₀ → Set
//! · : SCon ·₀
//! _▷s_ : SCon Γ → {A : Ty₀} → Spcl Γ A → SCon (Γ ▷₀ A)
//!
//! -- A context consists of a base context, an indexed context, and a
//! -- specialization of the base context
//! Con := (Γ₀ : Con₀) × Con₁ Γ₀ × SCon Γ₀
//! -- A type consists of a base type, an indexed type, and a specialization
//! -- of the base type
//! Ty Γ := (A₀ : Ty₀) × Ty₁ (Γ₀ ▷₀ A₀) × Spcl Γ₀ A₀
//! -- A term consists of a base term, an indexed term, and a proof
//! -- that the base term satisfies the specialization (A₂) of the base type.
//! Tm Γ A := (a₀ : Tm₀ Γ₀ A₀) × Tm₁ Γ₁ (A₁ [ (id , a₀) ]ty₁) × Sat a₀ A₂
//!
//! ...
//! ```
//!
//! However, just as before, where we did not literally represent the base and
//! indexed parts separately (instead implementing the projection from the total
//! category to the base category via `Opaque` values), we do not literally
//! represent types as base part + indexed part + specialization part. Instead, we
//! have some algorithms which simply ignore specializations, and this is
//! equivalent to a projection from (base + indexed + specialized) to (base +
//! indexed). This is the case in, for instance, [eval::Evaluator::convertable_ty].
//!
//! This is convenient, because it means that checking whether one type is a subtype
//! of another can be reduced to checking whether they are convertible, and then
//! checking whether a generic element of the first type is an element of the second
//! type. This neatly resolves the difference between `[ x : @sing a ]` and
//! `[ x : Entity ] & [ .x := a ]`, which are represented differently but should
//! be semantically the same type.
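//!
//! As a minimal sketch of that procedure (hypothetical helper names; in the actual
//! code, conversion checking lives in [eval::Evaluator]):
//!
//! ```ignore
//! // Hypothetical sketch of the subtyping strategy described above.
//! // `generic_element` and `satisfies` stand in for whatever the real
//! // implementation provides.
//! fn is_subtype(ev: &Evaluator, a: &TyV, b: &TyV) -> bool {
//!     // Step 1: the underlying types must be convertible, with
//!     // specializations ignored.
//!     ev.convertable_ty(a, b)
//!         // Step 2: a generic element of `a` must be an element of `b`,
//!         // i.e. it must satisfy `b`'s specialization.
//!         && ev.satisfies(&ev.generic_element(a), b)
//! }
//! ```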

pub mod batch;
pub mod context;
pub mod eval;
pub mod modelgen;
pub mod notebook_elab;
pub mod prelude;
pub mod stx;
pub mod text_elab;
pub mod toplevel;
pub mod util;
pub mod val;

#[cfg(doc)]
use stx::*;
#[cfg(doc)]
use val::*;