CSS

Monday, November 21, 2022

Emacs Macro: PDF Creation Date for LaTeX

I have been using Emacs...for a while now. I've found it useful to automate away common tasks with small Lisp functions. One task that keeps recurring: writing down the creation date. If you use something like the datetime2 package in LaTeX (and assuming you're compiling with pdflatex), then the following command can automatically record the date and time you started writing:

(defun latex-created ()
  "Insert datetime2 markup recording the current time as the creation date."
  (interactive)
  (insert (format-time-string "\\DTMsavetimestamp{creation}{%FT%T%z}\n"))
  (insert "{\\DTMsetstyle{pdf}\\pdfinfo{/CreationDate (\\DTMuse{creation})}}\n")
  (insert "\\date{\\DTMusedate{creation}}\n"))

The other bit of advice I have is to use YASnippet. It's so much easier to start a new line, type "d", hit tab, and have it expand to a definition environment (or have "thm" expand to a theorem environment). I have written a few snippets and found them wonderful.
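For instance, a snippet file for the "d" expansion might look like the following (a minimal sketch; the path ~/.emacs.d/snippets/latex-mode/definition and the environment name are assumptions about your setup, not the snippets I actually use):

```text
# -*- mode: snippet -*-
# name: definition environment
# key: d
# --
\begin{definition}
$0
\end{definition}
```

The `$0` marks where the cursor lands after expansion.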

Tuesday, April 26, 2022

Zettelkasten workflow

I was asked about the details concerning my Zettelkasten (last time I checked, I had an estimated 7920 slips in it, which motivated the question). I thought I would write them down here with some examples.

The basic workflow boils down to:

  1. Pick some reading material
  2. Read it
  3. Summarize the reading material in "literature notes"
  4. Write some "permanent notes"
  5. Integrate the "permanent notes" in my "zoo" [Zettelkasten]
  6. Think about stuff
  7. Write a manuscript using my permanent notes
  8. Questions tend to crop up while writing a manuscript, which guides my choice of next reading material

Literature Notes

Definition. A Literature Note is writing produced while reading an article or book. Its contents vary considerably, but it is used to create permanent notes. The literature notes are stored in a separate "shoe box".

The content of the literature notes varies depending on how familiar I am with the subject, or whether I can "see where it's going". These are written on quarter-slips of printer paper, 4.25 inches wide and 5.5 inches tall. I also draw a stripe down the "top" using a blue highlighter, so I can see if I accidentally placed one in my Zettelkasten. [I store literature notes in a "bibliography shoe box".]

Figure 1. A [clipped] literature note from Mac Lane's Categories for the Working Mathematician.

If I'm having difficulty with some portion of an article or book, I sometimes look for other explanations, and insert "citations" to them. (Basically, a sentence like, "I honestly cannot follow [author] here, but I found [other reference] useful, specifically [section].")

Note: I usually write small, and this is kinda large for my writing: about 3/16 inch (or 0.5 centimeters) for a line's height. I do not place lined paper behind the slip as I write, so the writing can get pretty curvy by accident.

The back of each literature note includes bibliographic information, and when I read the source. This may include a "counter" for long chapters, which is just something like "(part 3 of N)". For technical articles, I also include the journal information. The idea is that if I completely lose access to my computer and papers, I can look up the source material again. Consequently, I give as much bibliographic information as possible.

Figure 2. The back of the literature note from figure 1.

I can refer to my literature notes when writing a permanent note, but I do not permit myself to refer to the original source when writing permanent notes. If I find I need to refer to the original source, I take that to mean my literature note-taking process was lacking somehow.

There are times when, for example, I read a source from one particular perspective. (E.g., "How can this article about quantum hydrodynamics be used in this particular calculation I'm trying to do?") Later on, I realize there was some important aspect I neglected to take notes on (e.g., "Oh, that article about quantum hydrodynamics revived the 'particle model tradition' of Stokes, I totally missed that."). In that case, I re-read the article or book, and because I note when I read it on the back...it acts like a note to "future me" that there was some material I missed in the reading.

(And, for the record, I organize the literature notes by author-year-title.)

Permanent Notes

Definition. A Permanent Note is some writing which is given an ID number, a "title", and some content; it is stored in the Zettelkasten.

Most of my notes are mathematical in nature, or written in a "mathematical style". I mean to say, I write them using "mathematical registers" to classify the sort of statement (theorem, example, definition, remark, etc.) and a summary of the result. Then the body of the slip of paper contains more precise details.

Figure 3. An example of a permanent note [front side].

I write on printer paper cut into quarters, so this is 5.5-inches wide and 4.25 inches tall. When writing on the paper, I have a college-ruled lined paper behind the quarter slip (which is why it appears so straight and clean).

ID Numbering. I should mention the ID number (the "6.1.2/3b" in the upper left-hand corner) has the following BNF grammar:

⟨id⟩ ::= ⟨folder-part⟩ "/" ⟨file-part⟩
  ⟨folder-part⟩ ::= ⟨positive integer⟩
                 |  ⟨positive integer⟩ "." ⟨folder-part⟩
  ⟨file-part⟩ ::= ⟨positive integer⟩
               |  ⟨positive integer⟩ ⟨lowercase letter⟩
               |  ⟨positive integer⟩ ⟨lowercase letter⟩ ⟨file-part⟩
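Incidentally, the grammar can be transcribed into a short regular expression. Here is a quick sketch in Python (my own transcription for illustration, not something I use in practice):

```python
import re

# Regex transcription of the ID grammar:
#   <id>          ::= <folder-part> "/" <file-part>
#   <folder-part> ::= dot-separated positive integers
#   <file-part>   ::= alternating positive integers and lowercase
#                     letters, starting with an integer
ID_PATTERN = re.compile(r"""
    ^
    [1-9][0-9]*(?:\.[1-9][0-9]*)*   # folder part: "6" or "6.1.2"
    /
    [1-9][0-9]*                     # file part starts with an integer
    (?:[a-z][1-9][0-9]*)*           # ...then letter/integer pairs
    [a-z]?                          # ...optionally ending on a letter
    $
""", re.VERBOSE)

def is_valid_id(s: str) -> bool:
    return ID_PATTERN.match(s) is not None
```

So "6.1.2/3b" and "6.1.2/3b1" are valid, while a bare folder part like "6.1.2" is not.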

The "folder part" encodes the generic subject of the slip. For example, "6.1.2" is the folder part for the slip given. Here "6" refers to "Mathematical analysis", "6.1" refers to "Real analysis", and "6.1.2" refers to sequences of real numbers. Observe, the folder part moves from more general ("Real analysis") to the more specific; this is characteristic of all "folders" in my Zettelkasten.

The "file part" encodes the placement of a slip "inside" a "folder". The idea is that I'm sending "telegrams" to an intelligent (but completely ignorant) colleague. My colleague has never heard of real analysis (or anything else besides arithmetic and spelling). I need to order my telegrams somehow. The first attempt is to use Threads, just a sequence of numbers.

But when I want to discuss further an aspect of a thread (like, I want to pause and give a "sub-thread" consisting of examples), then I need to Branch off a slip to talk about some specific aspect. A single slip may have zero or more branches. I indicate a branch by writing a letter. Usually "a" is reserved for examples (unless there is a critical remark necessary to clarify something), then "b" is reserved for theorems about a concept.

For the example given, there is some fibbing here. That is to say, the ID number should really be "6.1.2/3b1". Why? Because letters act like different "flavors" of decimal separations. There shouldn't be any privilege to them (e.g., "a" is not more important than "b", there is no sense of priority, etc.). I haven't added any further theorems concerning "6.1.2/3", so I lazily truncated the ID number of the example to "6.1.2/3b".

Title. The title is usually of the form:

⟨title⟩ ::= ⟨register⟩ ". " ⟨summary⟩

As a mathematician, I habitually write "definition", "theorem", "proof", "remark", "example", etc., in an abbreviated form ["defn", "thm", "pf", "rmk", "ex", respectively]. Then I write a human readable summary of the result, like "Convergent sequences are bounded".

I previously mentioned introducing a "code" register when writing literate programs, and discussed the content of such notes.

Recently, I have also been experimenting with writing notes on history, and I have found registers of little use there. So your mileage may vary.

Figure 4. Another example of a permanent note [front side]. NOTE: all links are written in red ink (using non-red ink was a bad idea).

Also worth mentioning, the first "folder" is about my Zettelkasten's conventions. An example slip from this "folder" is given in Figure 4.

Content of a permanent note. This is pretty subjective, and varies considerably. For mathematics, I have a great luxury here since every "register" is a separate slip.

I have found it useful for propositions [lemmas, theorems, corollaries, etc.] to give a "proof sketch" below the theorem statement. If there is some justification for a step, it's a branch off the slip.

Examples are just special kinds of propositions, and the emphasis of their content is on the proof.

I can give some concrete "tricks" for writing permanent notes; both boil down to relating new material to content already in the Zettelkasten.

Example: Creating examples. A useful exercise is to go back to examples already in my Zettelkasten, and try to use them as examples of the new material. For instance, I have the "symmetries of the tetrahedron" as an example of a group. When reading figure 1, I may pose the question: can we express the symmetries of the tetrahedron as a universal arrow? (Not in any obvious way, but there are other gadgets which may be a universal arrow, so I keep looking.) When I find an example [call it a "widget"], I create a new permanent note entitled "Ex: widget (Universal Arrow)", whose first sentence is "A widget [link] is a universal arrow." The "[link]" refers back to the original slip which first discussed the "widget".

Usually this process is more "subconscious" than actually flipping through every slip in my Zettelkasten. With the universal arrow example, I usually know an example would be an algebraic gadget, so I can restrict my focus to the "Abstract algebra" folder; even then, I can restrict attention to a few "subfolders".

Example: Translating content. Another way to approach creating permanent notes is to "translate" old material from a different domain to whatever I just read. For example, I learned the Krull-Schmidt theorem for rings and modules. But currently I am studying finite group theory. Could I frame an analogous theorem for groups?

Convention: formal proof sketches. I try to write the mathematical proofs using Wiedijk's Mathematical Vernacular, which is a synthesis of formal proof steps while leaving the terms and formulas informal. I also write "big step proofs", or what Wiedijk calls formal proof sketches.
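To illustrate (my own rough imitation of the style, not Wiedijk's exact syntax), a formal proof sketch of "convergent sequences are bounded" keeps the skeleton of the reasoning while leaving the formulas informal:

```text
theorem  every convergent sequence of reals is bounded
proof
  let a be a sequence of reals converging to some limit L;
  consider N such that |a(n) - L| < 1 for all n >= N;
  take M = max(|a(1)|, ..., |a(N)|, |L| + 1);
  then |a(n)| <= M for all n;
  hence a is bounded;
end
```

Each step is a formal inference, but nothing forces me to spell out the low-level justifications on the slip; those can live on branches.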

Back of permanent notes. On the back side of a permanent note is a list of references which were used for the slip. They record where one could find the same result in the literature I've read, and refer to entries in my "bibliography apparatus".

Integration into the Zettelkasten. Usually, I study a book or article and write a number of permanent notes without any ID number. Only after I have extracted all the information I need from my source do I begin to think about how to integrate the new permanent notes into my Zettelkasten.

Sometimes the order of material in a book is "backwards" (it happens a lot in physics), so the corresponding permanent notes end up in reverse order. Less frequently, the material needs to be partitioned into separate parts of the Zettelkasten. There has never been a time when the material was "completely disconnected", in the sense of the notes being added to entirely unrelated threads.

How big will the classification of finite simple groups be?

Gorenstein, Lyons, and Solomon (and friends) have been writing a consolidated proof for the classification theorem for finite simple groups...since 1994. In fact, it's still a work in progress. How big will it be?

Thus far, nine volumes have been published:

Volume    First page    Bibliography page    Number of pages
  1           22               159                 137
  2           14               216                 202
  3           18               420                 402
  4           18               350                 332
  5           14               476                 462
  6           14               532                 518
  7           12               352                 340
  8           14               498                 484
  9           12               530                 518

At present, four additional volumes are expected (according to David Roberts's answer on MathOverflow). Performing a linear regression of the number of pages on the volume number, we can make a prediction. Here's the R code doing this:

df <- data.frame(vol = 1:9, 
      start=c(22,14,18,18,14,14,12,14,12), 
      bib=c(159,216,420,350,476,532,352,498,530));
df$pgs <- df$bib - df$start;
lm0 <- lm(pgs ~ vol, data=df);
summary(lm0);

# Call:
# lm(formula = pgs ~ vol, data = df)
# 
# Residuals:
#     Min      1Q  Median      3Q     Max 
# -118.29  -53.62  -14.82   84.78  105.84 
# 
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)   
# (Intercept)   174.56      62.68   2.785   0.0271 * 
# vol            40.53      11.14   3.639   0.0083 **
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# Residual standard error: 86.27 on 7 degrees of freedom
# Multiple R-squared:  0.6542,	Adjusted R-squared:  0.6048 
# F-statistic: 13.24 on 1 and 7 DF,  p-value: 0.008296

predict(lm0, data.frame(vol=c(10,11,12,13)))
#        1        2        3        4 
# 579.8889 620.4222 660.9556 701.4889 

sum(df$pgs)
# [1] 3395

sum(predict(lm0, data.frame(vol=c(10,11,12,13)))) + sum(df$pgs)
# [1] 5957.756

Tl;dr: the classification of finite simple groups is expected to be around 5957 pages, but could ostensibly reach up to 8243 pages.

Addendum

We can improve the estimate by dropping the constant term, and estimating the "additional pages" compared to the first volume:

df <- data.frame(vol = 1:9, 
      start=c(22,14,18,18,14,14,12,14,12), 
      bib=c(159,216,420,350,476,532,352,498,530));
df$pgs <- df$bib - df$start;
df2 <- data.frame(vol = 1:9,
      pgs=df$pgs - df$pgs[1]);
lm0 <- lm(pgs ~ 0 + vol, data=df2);
summary(lm0)

# Call:
# lm(formula = pgs ~ 0 + vol, data = df2)
# 
# Residuals:
#     Min      1Q  Median      3Q     Max 
# -122.24  -37.17  -24.70   92.68  125.61 
# 
# Coefficients:
#     Estimate Std. Error t value Pr(>|t|)    
# vol   46.463      4.901    9.48 1.26e-05 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# Residual standard error: 82.75 on 8 degrees of freedom
# Multiple R-squared:  0.9183,	Adjusted R-squared:  0.908 
# F-statistic: 89.86 on 1 and 8 DF,  p-value: 1.263e-05

predict(lm0, data.frame(vol=c(10,11,12,13))) + df$pgs[1]
#        1        2        3        4 
# 601.6316 648.0947 694.5579 741.0211 

sum(predict(lm0, data.frame(vol=c(10,11,12,13))) + 137) + sum(df$pgs)
# [1] 6080.305

This gives us a better fit (adjusted R-squared of 0.908) with a better error estimate (with 95% probability, it will be 6080 pages give or take 40 pages).
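As a sanity check (a Python sketch, not part of the original analysis), the through-origin fit and the resulting total can be reproduced by hand using the closed-form least-squares slope:

```python
# Pages per volume, taken from the table above.
pgs = [137, 202, 402, 332, 462, 518, 340, 484, 518]
vols = range(1, 10)

# Subtract the first volume's page count, then fit y = slope * vol
# through the origin: slope = sum(x*y) / sum(x*x).
ys = [p - pgs[0] for p in pgs]
slope = sum(v * y for v, y in zip(vols, ys)) / sum(v * v for v in vols)

# Predict volumes 10-13 and add the nine published volumes.
predicted = [slope * v + pgs[0] for v in (10, 11, 12, 13)]
total = sum(predicted) + sum(pgs)

print(round(slope, 3))   # 46.463
print(round(total, 3))   # 6080.305
```

This matches the R output for the slope (46.463) and the grand total (6080.305).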

Tl;dr2: the classification of finite simple groups can be expected to be around 6080±40 pages long.

Wednesday, March 23, 2022

Zettelkastens and Literate Programs

I've been re-reading John Harrison's Handbook of Practical Logic and Automated Reasoning (a wonderful book), with an eye towards writing notes for my Zettelkasten.

What I have done, out of habit, is I've reworked the code in Standard ML (as opposed to OCaml). I've added a "register" (mathematicians would recognize them as "theorem environments" in LaTeX) for source code, and the template for writing such notes on a 5.5 inch by 4.25 inch (A6) slip of paper looks like:

[id number]    Code [name of chunk]

[English summary of the code, or the algorithm, or...]
------------------------------------------------------
⟨[name of chunk]⟩≡
  [Standard ML code here]

Just replace the bracketed italicized text (including the brackets) with your own text. I do write a horizontal line separating the "code block" from the "text block", and underline the "Code" part of the title.

I follow the general heuristics Knuth suggests for literate programming, even using the names of code chunks in angle brackets as if they were statements. Following some of the early handwritten documents on Standard ML, I underline keywords like "if", "then", "else", "case", "fun", etc., in the code block.

While reading Harrison's book, I add unit tests, write up examples, etc. This isn't done "in the abstract", I actually type the code up on a computer (e.g., here is my implementation of most of the propositional logic code).

One advantage to this is that we can prove properties about the code snippet in branches off the code zettel. Another advantage: giving a good human-level summary of the code snippet helps greatly with Harrison's book. The techniques from chapter 3 make another appearance in chapter 6 (i.e., 300 pages later), and when I forget some of the nuances surrounding the implementation ("Is skolem called before or after nnf?"), I can just glance at the snippets for guidance.

Another useful aspect of using a Zettelkasten for studying theorem provers is that it facilitates blending notes from multiple books seamlessly. I noticed this accidentally, when I was unsatisfied with Harrison's description of Herbrand universes and started reading the description of Herbrand universes in Chang and Lee's Symbolic Logic and Mechanical Theorem Proving (1973).

There are other aspects of the Zettelkasten which seem uniquely suitable for studying theorem provers, but I digress.

I'm experimenting with using a Zettelkasten for literate programming "by hand" more generally, which is a bizarre but fascinating experience, especially when exploring monads in Standard ML.

Anyways, I just wanted to make note of this quirky note-taking system for literate programming.

Monday, December 6, 2021

On Structured Proofs and the Declarative Proof Style

1. Suppose you have successfully formalized the classification theorem for finite simple groups in your favorite theorem prover (Lean, HOL, Coq, Mizar, whatever). In a century's time (or several centuries from now), do you think your theorem prover will still be around and backwards compatible? If not, what good would the formalization be?

1.1. Remark (Concerns have been partially realized). We stress, despite the concerns raised being hypothetical and distant, they have already been partly realized. The Automath project's translation of Landau's Foundations of Analysis would have been lost to history but for Freek Wiedijk's heroic efforts. Or, hitting closer to home for many, Lean 3 broke compatibility with Lean 2; and, as I understand it, there still remains a large quantity of Lean 2 libraries not yet ported. Worse, Lean 4 broke compatibility with Lean 3, and the Xena project's library has not yet been ported to Lean 4. The idea that we would encode mathematical knowledge in one single theorem prover seems riskier than encoding it in natural language. Why not try to have the best of both worlds?

2. Readable Proofs. One of the roles of proofs in mathematics is communication. Hence we need something analogous to the notion of "readable code" in programming: readable proofs. Consider the following proof and tell me what it does (no cheating!):

prove
 ('!p q. p * p = 2 * q * q ==> q = 0',
  MATCH_MP_TAC num_WF THEN REWRITE_TAC[RIGHT_IMP_FORALL_THM] THEN
  REPEAT STRIP_TAC THEN FIRST_ASSUM(MP_TAC o AP_TERM 'EVEN') THEN
  REWRITE_TAC[EVEN_MULT; ARITH] THEN REWRITE_TAC[EVEN_EXISTS] THEN
  DISCH_THEN(X_CHOOSE_THEN 'm:num' SUBST_ALL_TAC) THEN
  FIRST_X_ASSUM(MP_TAC o SPECL ['q:num'; 'm:num']) THEN
  POP_ASSUM MP_TAC THEN CONV_TAC SOS_RULE);;

Or the following:

Proof.
  rewrite EL2 with (p_prime := p_prime) (p_odd := p_odd) by (omega || auto).
  rewrite EL2 with (p_prime := q_prime) (p_odd := q_odd) by (omega || auto).
  rewrite m1_pow_compatible. rewrite <- m1_pow_morphism.
  f_equal. rewrite <- QR6. unfold a. unfold b. auto.
  rewrite <- Zmult_0_r with (n := (p - 1) / 2).
  apply Zmult_ge_compat_l. omega. omega.
Qed.

I honestly wouldn't guess the first example had to do with proving ${\sqrt{2}\notin\mathbb{Q}}$, nor would I have imagined the second had to do with the law of quadratic reciprocity. (The first is in HOL Light, the second in Coq.) What do you suppose mathematicians of the 25th century would think of these proofs? Would our efforts formalizing mathematics in a theorem prover yield fruit akin to Euclid's Elements, or would it be more indecipherable than Russell and Whitehead's Principia?

Hence, I would like to assess the following question:

3. Puzzle. What qualities make a proof "readable"?

4. Sources. We're not in the wilderness on this matter. Some very smart people have thought very hard about this question. But we haven't progressed much in the past few decades, compared to designing programming languages. We all seem to draw water from the same well.

Andrzej Trybulec created Mizar [10] specifically for readability. Over the years, Mizar went from an abstract idea to a concrete realization, iteratively improving its input language to make it resemble a working mathematician's proofs. A cynical critic may deride Mizar as merely natural deduction in Pascal syntax, but that concedes the point: it's readable and understandable on its own, without relying on the computer to tell us the "proof state" and remaining goals in a proof. (Empirically, it turns out that Mizar reflects how mathematicians write, based on a cursory glance at 40k articles; see Josef Urban's email to the Mizar mailing list.)

(For our earlier examples, I personally think that Mizar's proof of the theorem of quadratic reciprocity, Th49 of int_5.miz, while long, is clear and readable...even if I don't fully understand Mizar. Similarly, the proof that $\sqrt{2}\notin\mathbb{Q}$ is the first theorem in irrat_1.miz. It's longer than I would find in a textbook, as are all formal proofs, but I have no difficulty making sense of it.)

The other major source of inspiration, I think, is Markus Wenzel's Isar front-end for Isabelle [13] (and the enlightening discussion of a prototype [12]). Here the system emerged bottom-up, as discussed in §3.2 of Wenzel's thesis [13]. Other theorem provers attempted to simply copy/paste Isar commands, but apparently they never caught on. I suspect this is because copying the superficialities, rather than drawing upon the underlying principles, produced an awkward input language.

There are other sources worth perusing, like Zammit's "Structured Proof Language" [17] among others. We also mention Isar inspired many prototypes to adapt Mizar-style proofs to other theorem provers (Corbineau's Czar [2], Giero and Wiedijk's MMode [3], Harrison's Mizar Mode for HOL [4]). Also worth mentioning is ForTheL [11] which emerged from the Soviet SAD theorem prover [9], possibly a future input language for Lean.

I especially want to point out ForTheL [11] as a stand-out, partly because it was designed behind the Iron Curtain, and partly because it's more of a controlled language which resembles natural language.

5. Declarative proofs. One bit of terminology, John Harrison [5] calls the structured proof style "declarative style proofs", which seems fine. Some fanatics of the procedural camp dislike the term. Giero and Wiedijk [3] point out the differences between procedural and declarative styles are:

  1. a procedural proof generally runs backwards (from the goal to the assumptions), whereas a declarative proof runs mostly forwards (from the assumptions to the goal);
  2. a procedural proof generally does not contain the statements which occur in the proof states, whereas a declarative proof does;
  3. a procedural proof has few proof cuts, whereas a declarative proof has nearly one cut for each proof step.

It turns out these qualities do make a difference on the readability of a proof script. But this is a bit like trying to learn an author's writing style by examining the statistics of the author's writing, like "Average number of words per sentence" or "Average number of syllables per word".

6. Natural deduction, vernacular. A key insight worth discussing further is that declarative/structured proofs emerge from the natural deduction calculus. Curiously, Jaśkowski [6] (who invented natural deduction independent of, and earlier than, Gentzen) does this in his original paper. Wiedijk [15] considers this avenue using a formal grammar for the proof steps, leaving notions like "term" and "formula" [i.e., the heart of a proposition/theorem] informal. So far, so good; but this has been predominantly first-order logic. How do we articulate readable proofs for type theory?

7. We construct proof steps for a declarative proof style by assigning "keywords" to rules of inference in natural deduction.
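To make this concrete, here is how Mizar-style keywords line up with natural deduction rules (the pairing below is my informal summary, not an official table):

```text
assume        <->  implication introduction (discharge a hypothesis)
let           <->  universal introduction (fix an arbitrary object)
take          <->  existential introduction (exhibit a witness)
consider      <->  existential elimination (obtain a witness)
per cases /
  suppose     <->  disjunction elimination (case analysis)
thus / hence  <->  conclude (part of) the current goal
```

A declarative proof is then, roughly, a sequence of such keyword-marked steps, each justified by the previous ones.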

The problem with trying to find a readable proof format from natural deduction for type theory is, well, there are a lot more inference rules in the calculus of constructions (as opposed to, say, first-order logic).

The Lean4 developers have unwittingly started the road to reinventing lambda-typed lambda calculus, which has fewer rules and ostensibly could be encoded into a declarative proof style input.

8. One linguistic avenue may be found in de Bruijn's "mathematical vernacular" [1], which then evolved into WTT [7].

8.1. Remark (Soft typing). What's worth noting is de Bruijn [1] states in §1.17, "In particular we have a kind of meta-typing (the 'high typing', see Section 3.6) in the language, whereas most other systems would have such things in the metalanguage." We now call such 'high typing' either 'soft typing' [16], or 'semi-soft typing' [8].

9. Another avenue would be to weaken existing structured proof formats, so they become independent of foundations (e.g., instead of writing "${x\in\mathbb{N}}$", we would write "${x}$ be a natural number" — type theory provers would translate this into "${x{:}\mathbb{N}}$", and set theory provers would translate this into "${x\in\mathbb{N}}$" as before).

If GNU Hurd's failures have taught us anything about engineering, it's that "a bird in the hand is worth two in the bush." That is to say, it's wiser to take an existing working framework, and modify it slowly until it becomes what we want.

10. Conclusion? If this post seems less-than-satisfying, that's probably because I've only identified a problem. No solutions have been offered. That's because this is a hard problem somewhere between language design (very hard) and mathematical logic. Since we lack a robust framework for designing languages, this is far more of an art than science.

Although we have fewer than a half-dozen examples of declarative proof styles (really, maybe 3?), we have some avenues of exploration. But the design space may be much larger than we realize: we just lack the tools to describe it adequately.

One closing thought I'd pass off, though, is that type theorists encode logic in a rather protracted manner. It may be that structured proofs for type theory would be forced to "compile down" into many more steps, or maybe we just haven't adequately found a way to connect type theory to working mathematics.

Bibliography

  1. Nick de Bruijn, "The Mathematical Vernacular". In Proceedings of the workshop on programming logic (eds. Peter Dybjer, Bengt Nordstrom, Kent Petersson, et al.), 1987. https://pure.tue.nl/ws/portalfiles/portal/4318923/599592.pdf

    See also F.3 of Selected Papers on Automath for a revised version of this manuscript.
  2. Pierre Corbineau, "A Declarative Language For The Coq Proof Assistant". In Marino Miculan, Ivan Scagnetto, and Furio Honsell, editors, Types for Proofs and Programs, International Conference, TYPES 2007, Cividale des Friuli, Italy, May 2-5, 2007, Revised Selected Papers, volume 4941 of Lecture Notes in Computer Science, pages 69–84. Springer, 2007. https://kluedo.ub.uni-kl.de/frontdoor/deliver/index/docId/2100/file/B-065.pdf
  3. Mariusz Giero and Freek Wiedijk, "MMode: A Mizar Mode for the proof assistant Coq". Technical report, ICIS, Radboud Universiteit Nijmegen, 2004. https://www.cs.ru.nl/~freek/mmode/mmode.pdf
  4. John Harrison, "A Mizar Mode for HOL". In Proceedings of the 9th International Conference on Theorem Proving in Higher Order Logics, TPHOLs '96, volume 1125 of LNCS, pages 203–220. Springer, 1996. https://www.cl.cam.ac.uk/~jrh13/papers/mizar.html
  5. John Harrison, "Proof Style". https://www.cl.cam.ac.uk/~jrh13/papers/style.html
  6. Stanisław Jaśkowski "On the Rules of Suppositions in Formal Logic". Studia Logica 1 (1934) pp.5–32. https://www.logik.ch/daten/jaskowski.pdf
  7. Fairouz Kamareddine and Rob Nederpelt, "A refinement of de Bruijn's formal language of mathematics". Journal of Logic, Language, and Information 13, 3 (2004), pp.287–340
  8. Cezary Kaliszyk and Florian Rabe, "A Survey of Languages for Formalizing Mathematics". https://arxiv.org/abs/2005.12876
  9. Alexander Lyaletski, "Evidence Algorithm Approach to Automated Theorem Proving and SAD Systems". http://ceur-ws.org/Vol-2833/Paper_14.pdf
  10. Roman Matuszewski and Piotr Rudnicki, "Mizar: The First 30 Years". Mechanized Mathematics and Its Applications 4, 1 (2005) pp.3–24. http://mizar.org/people/romat/MatRud2005.pdf
  11. Andrei Paskevich, "The syntax and semantics of the ForTheL language". Manuscript dated December 2007, http://nevidal.org/download/forthel.pdf
  12. Markus Wenzel, "Isar — a Generic Interpretative Approach to Readable Formal Proof Documents". In Y. Bertot, G. Dowek, A. Hirschowitz, C. Paulin, L. Thery, editors, Theorem Proving in Higher Order Logics, TPHOLs'99, LNCS 1690, (c) Springer, 1999. https://www21.in.tum.de/~wenzelm/papers/Isar-TPHOLs99.pdf
  13. Markus Wenzel, "Isabelle/Isar — a versatile environment for human-readable formal proof documents". PhD thesis, Institut für Informatik, Technische Universität München, 2002. https://mediatum.ub.tum.de/doc/601724/601724.pdf
  14. Markus Wenzel and Freek Wiedijk, "A comparison of the mathematical proof languages Mizar and Isar". Journal of Automated Reasoning 29 (2002), pp.389–411. https://www21.in.tum.de/~wenzelm/papers/romantic.pdf
  15. Freek Wiedijk, "The Mathematical Vernacular". Undated manuscript (available since at least 2004), https://www.cs.ru.nl/~freek/notes/mv.pdf
  16. Freek Wiedijk, "Mizar's Soft Type System". In: K.Schneider and J.Brandt (eds.), Theorem Proving in Higher Order Logics 2007, LNCS 4732, 383–399, 2007. https://cs.ru.nl/F.Wiedijk/mizar/miztype.pdf
  17. Vincent Zammit, (1999) "On the Readability of Machine Checkable Formal Proofs". PhD Thesis, Kent University, 1999. https://kar.kent.ac.uk/21861/