
Tuesday, November 23, 2021

The Next Generation of Theorem Provers

1. A parable. Back in the 1980s and '90s, there was a Cambrian explosion of CPU architectures: SPARC, POWER, PA-RISC, MIPS, and presumably many others. Each processor architecture had its own customized UNIX-like operating system (SPARC had Solaris née SunOS, POWER had AIX, PA-RISC had HP-UX, MIPS had IRIX, etc.).

If you wrote a program and compiled it for one architecture, but I worked on a computer with a different architecture, then you couldn't share your compiled program with me: you had to send me your source code and, provided I had a compiler for the language you used, I could compile your program for my computer.

Of course, this was a nuisance. Some compilers had special features which further hindered porting programs. This situation motivated the Java programming language, with its motto "Write once, run anywhere".

2. Theorem Provers. The situation with theorem provers reminds me of the situation with competing CPU architectures in the '80s and '90s. If you write a proof in Lean (or Coq or HOL or...) and I work with Coq (or HOL or anything else), then I can't verify your proof in my system. Worse: I can't even "compile" it to my theorem prover. I would have to rewrite it from scratch.
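To make the incompatibility concrete, here is a toy illustration of my own (not drawn from any actual interoperability effort): even a one-line lemma must be spelled differently in each system, and neither system can parse, let alone check, the other's file.

    -- Lean 4: commutativity of addition on the natural numbers.
    theorem add_comm' (m n : Nat) : m + n = n + m := Nat.add_comm m n

    -- The "same" lemma in Coq would be stated as:
    --   Lemma add_comm' : forall m n : nat, m + n = n + m.
    -- The two files share no syntax, no library names, and no underlying
    -- proof format.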

2.1. Terminology. I'm going to generically refer to proof checkers, interactive theorem provers, and automatic theorem provers under the umbrella term Prover. This is non-standard, but the field is so small that any choice would be non-standard.

3. "Generations" of Provers. For the sake of clarity, I think it's useful to introduce a notion of "Generations of Provers", analogous to generations of programming languages.

"First generation provers" include: Automath, Metamath, possibly Twelf, and others lacking automation (or auto-generation of infrastructure, like inductive types which [when defined] introduce an induction principle). First generation provers require you do this manually.

"Second generation provers" range from LCF to HOL, Coq, and Lean (and, I would argue, Mizar). There is some degree of automation, the proof assistant does some of the heavy lifting for you, but you may be required to explicitly state some proof steps.

And that's where we are today: after about 70 years, we've got two "generations" of provers. (This isn't too bad, considering how few people have worked on theorem provers and proof assistants.) We emphasize that "generation" is used metaphorically for the "level of abstraction" afforded by the prover.

4. Hitting a moving target. Unlike the situation facing CPU architects or programming language designers, we who think about theorem provers are trying to formalize an evolving language (mathematics).

We could berate mathematicians for not using theorem provers. But then we have our "tower of Babel" problem, where your prover can't talk to mine. (Fanatics, er, evangelists resolve this problem by insisting on using their "one true Prover", but that's unrealistic and unmerited.)

A better alternative would be to try to come up with the "next generation" of provers. This is easy to say, hard to accomplish. And mathematicians would just say, "Well, can't we make it like LaTeX with custom macros for proving results?" Again: easier said than done.

Designing a language is very hard, even with simple requirements. But a step towards the next generation would be (I believe Freek Wiedijk pointed this out in some article) analogous to how compilers "eat" a high-level language, process it, and then "spit out" assembly. What's the "assembly language" of mathematics? Some first generation prover, like Metamath or Automath.
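Lean already gestures at this compiler analogy: a high-level tactic proof elaborates into a low-level proof term, which plays the role of "assembly" for the chosen foundation. A small demonstration of my own:

    -- High-level "source code": a one-line tactic proof.
    theorem demo (m n : Nat) : m + n = n + m := by simp [Nat.add_comm]

    -- Low-level "assembly": the elaborated proof term in the underlying
    -- dependent type theory, inspectable with:
    #print demo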

5. Designing a mathematical language. The way we have learned to design a programming language (after many years of trial and error) is to write many small programs in the proposed language, specifying in comments the intended outcome. This strategy applies to provers as well: write down many examples of mathematics [definitions, theorems, proofs or proof sketches] in the hypothetical prover's vernacular, and comment on what it should be doing; one such example follows below.
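Here is what one such "small program with intended outcome" might look like, with a hypothetical vernacular in the comments and a Lean 4 rendering of what the prover ought to elaborate it into (both are sketches of my own):

    -- Hypothetical vernacular (the "source program"):
    --
    --   Definition. A natural number n is even iff n = 2*k for some k.
    --   Theorem. 4 is even.
    --   Proof. Take k := 2. Obvious.
    --
    -- Intended outcome, i.e., what the prover should produce:
    def Even' (n : Nat) : Prop := ∃ k, n = 2 * k

    theorem four_even : Even' 4 := ⟨2, rfl⟩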

There is much freedom when designing a new prover's vernacular, and it's easy to "cheat". Simply say, "...and somehow the prover will magically do the following: ...".

Perhaps we can turn this on its head, and begin thinking about how to do mathematics in a controlled natural language? This is how I stumbled into theorem proving: by running across Freek Wiedijk's note "The Mathematical Vernacular".

6. Semiformal mathematics: controlling the vernacular. I had just started jotting these ideas down when I came across Jeremy Avigad's interesting recent paper, "The design of mathematical language". Although I agree with much of the paper, and I especially praise the use of "many small examples from mathematical literature" to guide the discussion, I thought it was limited in exploring the "design space" and too eager to accept Lean's design decisions.

For example, consider the definition of a "normal subgroup". Are we defining an adjective ("normal"), a type modifier (since "subgroup" is a type, a "normal subgroup" would be a subtype of "subgroup"), or a new type with implicit coercions (any "normal subgroup" is automatically a "subgroup", and satisfies all the properties a subgroup would have)?
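Each reading leads to a genuinely different formal artifact. Here is a self-contained Lean 4 sketch of the three options (simplified stand-ins of my own devising; Mathlib's actual definitions differ):

    -- Simplified group and subgroup, just enough to pose the question.
    structure Group' where
      carrier : Type
      mul : carrier → carrier → carrier
      inv : carrier → carrier
      one : carrier

    structure Subgroup' (G : Group') where
      mem : G.carrier → Prop
      one_mem : mem G.one
      -- (closure axioms omitted for brevity)

    -- Option 1: "normal" as an adjective, i.e., a predicate on subgroups.
    def IsNormal {G : Group'} (H : Subgroup' G) : Prop :=
      ∀ g h, H.mem h → H.mem (G.mul (G.mul g h) (G.inv g))

    -- Option 2: "normal subgroup" as a subtype of "subgroup".
    def NormalSubgroup₁ (G : Group') := { H : Subgroup' G // IsNormal H }

    -- Option 3: a new structure, whose `extends` clause provides the
    -- projection back to Subgroup'.
    structure NormalSubgroup₂ (G : Group') extends Subgroup' G where
      normal : ∀ g h, mem h → mem (G.mul (G.mul g h) (G.inv g))

Lean's `extends` makes option 3 nearly free, which may explain why it is the path of least resistance; the point is that this is a design decision, not a forced move.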

Another undiscussed dimension of mathematical definitions (as a domain-specific language) is the "stuff, structure, and properties" idea of Baez and Dolan. This template gives us a lot "for free", and can be made somewhat "independent of foundations" by using objects in toposes. It's a small thing, but seldom discussed among prover designers.
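As a quick illustration (my own rendering, not anything from Baez and Dolan), the template reads off directly as a structure declaration:

    -- "Stuff, structure, properties" as a Lean 4 structure:
    structure Monoid' where
      carrier : Type                                    -- stuff: an underlying type
      op : carrier → carrier → carrier                  -- structure: operations on the stuff
      unit : carrier
      assoc : ∀ a b c, op (op a b) c = op a (op b c)    -- properties: laws the
      unit_left : ∀ a, op unit a = a                    -- structure must satisfy
      unit_right : ∀ a, op a unit = a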

The discussion also lacked adequate examples from applied mathematics (which, for me, would be things like weak solutions to PDEs, or theorems about regularity of PDEs, or something like that...my mind is focused on the Navier-Stokes equations). Granted, this is a domain where many mathematicians flinch and say, "Ouch, this is an unpleasant field." I do not intend this as a criticism, but as an observation.

6.1. Analysis, Synthesis. The natural next step beyond Avigad's paper would be to move from analyzing examples to synthesizing some semi-formal "vernacular".

I honestly don't have a "semi-formal vernacular" at hand to present, but I have been using something similar to Wiedijk's vernacular for the past year. Since April or so of 2021, I have been transcribing my notes into a Zettelkasten (i.e., onto A6-sized slips of paper), writing proofs and proof sketches in a slight variant of Wiedijk's vernacular.

It works fine for proofs, and forces me to fill in missing steps (or at least, make note of them). But I think proofs as a domain-specific language don't require much thought: very smart people have thought about this for a century, and they came up with pretty good ideas (natural deduction, judgements, etc.). Now my task boils down to coming up with the right "reserved keywords" corresponding to the proof steps found in mathematical logic.
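Lean's structured proofs already show what such keywords look like; each one shadows a rule of natural deduction (the pairing below is my own gloss):

    -- Each keyword mirrors a natural-deduction step:
    example (p q : Prop) (hpq : p → q) (hp : p) : q ∧ p := by
      have hq : q := hpq hp    -- "hence ..." : →-elimination (modus ponens)
      exact ⟨hq, hp⟩           -- "thus ..."  : ∧-introduction

    example (p q : Prop) : p ∧ q → q ∧ p := by
      intro h                  -- "assume ..." : →-introduction
      exact ⟨h.2, h.1⟩         -- "thus ..."   : ∧-elimination, then ∧-introduction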

As for definitions: in field theory and Galois theory, most definitions don't neatly fit the "stuff, structure, properties" mould. This is great! Now I have some ready-to-present examples where the template fails, namely whenever a definition looks like: "We call a gadget $G$ Hairy if [hairy property] holds." This pattern occurs a lot in field theory and Galois theory. It also forces me to think deeply about what "species" of definition this is, and what's "really being defined" here.
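Schematically, this species adds no new stuff and no new structure: it is a bare, Prop-valued predicate on an existing gadget. A minimal Lean 4 instance of the same shape (a toy example of my own, not one of the field-theoretic definitions from my notes):

    -- "We call an element z of a magma (M, ∗) central if z ∗ m = m ∗ z
    -- for every m."  Same species: a property of an existing gadget.
    def Central {M : Type} (mul : M → M → M) (z : M) : Prop :=
      ∀ m, mul z m = mul m z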

6.2. Future of Provers? The short answer is, I don't know. The next steps may be to think about the "next generation" of provers, and try to move towards a "foundation independent framework" which resembles how working mathematicians actually operate.

But I think it's wrong-headed to say, as Leslie Lamport put it, that mathematicians are "doing proofs wrong" because their proofs resemble Euclid's (and Euclid is old, thus bad). It is equally misguided to suggest that the design of the input language for provers is irrelevant: as any compiler designer will tell you, a poorly designed input language alienates the user base and hinders adoption.

Taken together, this suggests we need to make provers resemble "ordinary mathematics" as published and practiced by working mathematicians. Thus investigations along the lines of Avigad's article are well worth pursuing.

My intuition tells me that the third or fourth generation of Provers will resemble compilers, accepting more human-intelligible input and producing "assembly code"-like output for a particular foundation of mathematics. Changing foundations would be a "one liner", if that. But what does the input language look like? That's hard to say.
