[HN Gopher] Mathematical proof is a social compact
       ___________________________________________________________________
        
       Mathematical proof is a social compact
        
       Author : digital55
       Score  : 101 points
       Date   : 2023-08-31 16:58 UTC (6 hours ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | reso wrote:
       | By late-undergrad, it was intuitive to me that "proof" means "all
        | mathematicians who read this agree with it". Mathematics is
       | unique in that, mostly, the field can achieve consensus on
       | results, which we then call "proofs".
       | 
        | But similarly, it makes sense that, even if a result is "true"
       | in a universal, objective sense, if the mathematician cannot
       | communicate this in a convincing fashion to the rest of the
       | mathematics world, I don't think we can call that result
       | "proved".
        
         | [deleted]
        
         | SleekEagle wrote:
         | Not that you were, but I don't quite understand why people get
         | so caught up on this fact. There are objective facts about the
         | nature of reality, and we are all (or at least competent
          | practitioners in the field) thoroughly convinced that we
         | have identified a subset of these facts.
         | 
         | These presumed facts have helped us do things like go to the
         | moon and build skyscrapers, but then someone comes along with
         | the old "but how do you _actually_ know " argument of a college
         | freshman, and then we get into a conversation about the
         | potential social relativism of math.
         | 
          | All the while, people will see a half-assed psychology study
          | with a questionable procedure, statistics that are weak at
          | best and erroneous at worst, and therefore tenuous
          | conclusions at best, and this study is taken to be "true" and
          | might legitimately impact notable institutions. Yet when
          | we're talking about extremely complicated topics that exist
          | on the edge of the horizon of human intuition, no matter how
          | obvious the impact, some people just refuse to accept things
          | as objective simply because they fail to intuitively
          | understand them.
         | 
         | Foundational fields like mathematics and physics are as
         | objective as we can get. If you don't accept that, your belief
         | about what is objectively true ends at cogito ergo sum and
         | that's that. This has always been such a pointless conversation
         | in my mind.
        
           | joelfried wrote:
           | We communicate with words, and people as a whole are used to
            | being lied to and gaslit regularly, especially by those in
           | power. It's true that mathematics and the hard sciences have
           | mechanisms for understanding that are on a different scale
           | than, say, ethics and morality. However, it takes time for
           | people -- especially those currently engaged in questioning
           | the nature of their reality[1] -- to accept that in this
           | specific instance lying and gaslighting are a lot harder[2].
           | 
           | The people who eventually accept and internalize the
           | distinction around things that can be objectively shown to be
            | true are those who, by and large, have done some of the work
           | to understand these things themselves. Godel's Incompleteness
           | Theorem is beautiful but it takes work to understand and if
           | it didn't, it wouldn't be much of a meaningful breakthrough.
           | Nobody is proving that 3+5=8 and then 4+5=9.
           | 
            | So what the average person sees is a high-level language
            | they can't speak, with people being absolutely positive that
            | this thing is special and true and incontrovertible. That
            | raises
           | red flags when you're dealing with folks talking about normal
           | everyday stuff, doesn't it? It's a lot harder to say "but I
           | don't understand" and a lot easier to say "but what if you're
           | wrong" socially.
           | 
            | [1] As all college first years do, right?
            | 
            | [2] Let's face it, lying to people is never impossible, it's
            | just harder to be successful when you can be fact checked.
        
           | smif wrote:
           | > There are objective facts about the nature of reality
           | 
           | This is a pretty bold claim and you would have to do a bit of
           | work to make it more convincing. Besides, it's not really how
            | science works. Different theories wax and wane all the
            | time. What we're really doing with science is coming up
           | with ever better models that give us greater predictive
           | power.
           | 
           | You could argue that at the macro scale we're asymptotically
           | approaching some kind of objective truth with the whole
           | endeavor of science, but you can't simply tunnel across that
           | gap and make the leap to say that we know there is an
           | objective truth.
           | 
           | The best that we can do is probably say something along the
           | lines of "these are the best methods of getting closer to the
           | truth that we have available - if anyone claims to have
           | better methods, they are very likely wrong", but you still
           | need to have the humility to accept that even the best models
           | that we have to date are not infallible. They do not give us
           | the objective truth, nor do they promise to, but they are the
           | most effective tools that we have available to us at the
           | current time. This is critically not the same as claiming
           | that they give us the objective truth.
           | 
           | You could say that for all intents and purposes/everyday
           | usage, "sure, these are objective facts about reality" - but
           | I don't actually think that helps anyone and it serves to
           | foment mistrust towards science and scientists. I think the
           | best chance we have at preserving the status of science and
           | scientists in our society starts by being honest about what
           | it actually is giving us - which is quite a lot, but let's
            | not oversell it for the sake of convenience or whatever.
        
             | alphanumeric0 wrote:
             | Is it a bold claim?
             | 
             | On that account, do you lean more towards flat earth
             | theory?
        
           | reso wrote:
           | This fact is interesting because many people grew up with an
           | idea of mathematics as having discovered fundamental truth,
           | and some of us, when we get deep into the field, realize that
           | Math is made up of human ideas, exactly the same as a very
           | complex boardgame we play in our heads, and that the belief
           | that these boardgames represent something fundamental about
            | reality is its own leap of faith.
        
           | BeetleB wrote:
            | > All the while, people will see a half-assed psychology
            | study with a questionable procedure, statistics that are
            | weak at best and erroneous at worst, and therefore tenuous
            | conclusions at best, and this study is taken to be "true"
           | 
           | ...
           | 
           | > Yet when we're talking about extremely complicated topics
           | that exist on the edge of the horizon of human intuition, no
            | matter how obvious the impact, some people just refuse to
           | accept things as objective simply because they fail to
           | intuitively understand them.
           | 
           | I think the intersection of these two groups is the null set.
           | 
           | > Foundational fields like mathematics and physics are as
           | objective as we can get.
           | 
           | Objective? Yes. But objective does not equate to "true"[1].
           | One requires data, and the other lives only in the mind. It
           | is not at all problematic to ponder over whether mathematics
           | is "true" - most mathematicians have an opinion one way or
           | another, and they are not unanimous in their opinions.
           | 
            | [1] True w.r.t. reality, not true in the logic sense.
        
       | hackandthink wrote:
        | > What AI can do that's new is to verify what we believe to be
       | true
       | 
       | This is not AI.
       | 
       | Combining Theorem Provers with AI is promising:
       | 
       | https://leandojo.org/
        
       | Tainnor wrote:
       | I find his scepticism about proof assistants like Lean a bit
       | weird. Of course, there is never absolute certainty, but there
        | are degrees. A proof in Lean is a quite strong guarantee: you'd
       | basically have to have a bug in Lean's core for it to be wrong,
       | which is possible but less likely than a flaw in a proof that
       | hasn't seen much scrutiny because it's rather unimportant (of
       | course, "big" results get so much scrutiny that it's also very
       | unlikely that they were wrong).
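        | 
        | To make concrete what is being trusted, here is a toy Lean 4
        | proof (Nat.lt_succ_self is a real core lemma; the theorem name
        | is mine):
        | 
        |     -- Every natural number has a strictly larger one. The
        |     -- small trusted kernel rechecks this proof term, so
        |     -- accepting a bogus proof would require a bug in that
        |     -- kernel, not merely in a tactic or in user code.
        |     theorem exists_larger (n : Nat) : ∃ m, n < m :=
        |       ⟨n + 1, Nat.lt_succ_self n⟩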
        
         | svat wrote:
         | I wouldn't characterize him as being sceptical about Lean; this
         | is what he says:
         | 
         | > _Some terrific developments have happened with proof
         | verification. Like [the proof assistant] Lean, which has
         | allowed mathematicians to verify many proofs, while also
         | helping the authors better understand their own work, because
         | they have to break down some of their ideas into simpler steps
         | to feed into Lean for verification. [...] I'm not saying that I
         | believe something like Lean is going to make a lot of errors.
         | [...] They can be a very valuable tool for getting things right
         | -- particularly for verifying mathematics that rests heavily on
         | new definitions that are not easy to analyze at first sight.
         | There's no debate that it's helpful to have new perspectives,
         | new tools and new technology in our armory._
         | 
         | In between (the "[...]"s above), he says:
         | 
         | > _But is this foolproof? Is a proof a proof just because Lean
         | agrees it's one? In some ways, it's as good as the people who
         | convert the proof into inputs for Lean. Which sounds very much
         | like how we do traditional mathematics. [...] I'm just not sure
         | it's any more secure than most things done by humans. [...] I'm
         | afraid I have a lot of skepticism about the role of computers.
         | [...] what I shy away from is the concept that we're now going
         | to have perfect logical machines that produce correct theorems.
         | You have to acknowledge that we can't be sure things are
         | correct with computers. Our future has to rely on the sense of
         | community that we have relied on throughout the history of
         | science: that we bounce things off one another. That we talk to
         | people who look at the same thing from a completely different
         | perspective. And so on._
         | 
          | The latter part is less about correctness and more about
          | community: the main goal of mathematics is not merely "true"
          | theorems, but (human) mathematical understanding. I wrote an
          | answer about this on math.SE in 2014 (mostly quoting Thurston
          | and Rota):
          | https://math.stackexchange.com/questions/632705/why-are-math...
        
           | [deleted]
        
           | epgui wrote:
           | I agree very much with you... but it seems to me like theorem
           | provers / computers can be a really helpful tool in building
           | trust and understanding between people or between groups of
           | people, in ways that printed paper can't quite match.
        
           | Tainnor wrote:
            | I agree that a big part of proving things is understanding
            | intuitively "why" they are true. That's part of the reason
            | why I don't think we'll just write proofs in Lean in 50 years
            | and leave it at that (the other reason is that, at least for
            | now, those systems are very "stupid" and need a lot of
            | handholding to prove facts that any maths undergraduate would
            | understand to be true).
           | 
           | But when he says that he doesn't believe that a proof in Lean
           | is "more secure than most things done by humans" I
           | fundamentally disagree.
        
             | svat wrote:
             | My reading is that for an individual theorem or proof,
             | having it checked by computer may well be more "secure".
             | But a culture where all mathematics is done (only) by
             | computer would be worse, because you'd have outsourced the
             | thinking and quest for understanding which was the point in
             | the first place.
        
               | Tainnor wrote:
               | The way theorem provers work right now, it's still a
               | human that has to come up with the proof, the theorem
               | prover just verifies it.
               | 
               | But yes, I wouldn't want computer proofs to "replace" the
               | proofs we have now.
        
         | jeremyjh wrote:
         | I read it more as him saying that at the end of the day a human
         | is inputting symbols into Lean, and a human is interpreting the
         | meaning of the proof that Lean has verified. Not that the
          | mechanics of the proof checking are in doubt, but that the
          | meaning of it all is still a social consensus.
        
         | epgui wrote:
         | > A proof in Lean is a quite strong guarantee, you'd basically
         | have to have a bug in Lean's core for it to be wrong
         | 
         | That, or an error in the specification (ie.: not the
         | implementation).
         | 
         | (I very much am with you that his attitude / way of framing it
         | is a bit weird)
        
           | Jweb_Guru wrote:
           | I will go further (since there are certainly bugs in the
           | kernels of proof assistants, more frequently than you might
            | imagine). _Even if there is a bug in Lean's kernel_, it will
           | almost never invalidate a Lean proof, because bugs in the
           | Lean kernel (or the kernels of other proof assistants) are,
           | just like with normal programs, almost always bugs dealing
           | with edge cases testing the boundaries of mathematics or
           | syntax, not things that show up in everyday proofs. Coq has
           | fixed numerous proofs of False and, to my knowledge, these
           | fixes have never ended up invalidating a "real" proof that
           | wasn't specifically designed to test these kinds of edge
           | cases. Proofs are far more often broken by changes to the
           | surface syntax (e.g. inference algorithms or the tactic
           | language) than by changes to the kernel.
           | 
           | Obviously, this is not foolproof against malicious
           | "adversarial" mathematicians, so knowing something was proved
           | in Lean still requires some degree of checking. But it does
           | make it pretty extraordinarily unlikely that a false proof in
           | a proof assistant was arrived at by accident, which is far
           | from the case with the "old" way of doing mathematics.
           | 
           | (I'd also add that the concern about getting the
           | specification of a design wrong is something that rarely
           | lasts longer than attempts to actually use it to prove
           | anything nontrivial, since the proofs won't go through, but
           | there at least can be a lot of debate about how to interpret
           | specifications in some fields [e.g. computer science] or
           | whether certain specs are "actually" equivalent to the
           | statements someone wants to prove, especially for stuff like
           | category theory where manual proofs have a good deal of
           | handwaving... but these are minor technical issues compared
           | to the viewpoint of the article).
        
           | Legend2440 wrote:
           | It's the same "computers will never take MY job" attitude
           | that you see from programmers, artists, doctors, etc.
        
         | davidgrenier wrote:
          | I think his argument was restricted to a human-produced
          | mathematical result being ported to a Lean program, where one
          | would be just as likely to commit a mistake. However, I
          | disagree as well: I recall the difficulty of expressing what I
          | wanted in Coq being a barrier to expressing it incorrectly.
        
           | Tainnor wrote:
           | Yeah it's likely he hasn't worked much with Lean and may have
           | some misconceptions around it.
        
           | SkiFire13 wrote:
            | The thing about proof checkers is that they won't accept
            | invalid proofs (assuming their internal axioms are
            | consistent...), so if you make a mistake there it will
            | simply reject your proof. The only place where it won't
            | catch mistakes is in the proposition you want to prove,
            | because the only way for it to know what you want to prove
            | is for you to tell it, but that's much easier for a human
            | to verify.
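            | 
            | A Lean 4 sketch of that asymmetry (Nat.le_succ is a real
            | core lemma; the scenario is made up):
            | 
            |     -- Suppose we meant "n's successor is strictly larger"
            |     -- but typed ≤ instead of <. The proof is checked and
            |     -- accepted; the mistake lives entirely in the stated
            |     -- proposition, which a human still has to read.
            |     theorem succ_larger (n : Nat) : n ≤ n + 1 :=
            |       Nat.le_succ n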
        
       | joe__f wrote:
       | What is the meaning of the word 'Compact' in the title here?
        
         | math_dandy wrote:
         | An agreement, basically.
         | 
         | https://legal-dictionary.thefreedictionary.com/Compact
        
         | passion__desire wrote:
          | One metaphor that I have seen as ubiquitous in nature is
          | Consensus Reality. Fiat currency, quantum systems agreeing on
          | each other's value, the social contract, etc. It's everywhere,
          | and now the social concept of proof.
          | 
          | Here is Daniel Dennett talking about it:
          | https://youtu.be/32u12zjgJww
          | 
          | I like the no-miracles argument in favour of science. If our
          | scientific theories don't track reality, how are they
          | successful in predicting the future?
         | 
          | Similarly, the social aspect of mathematical proofs can be
          | replaced with requirements like being in congruence with
          | established facts (i.e. explanatory power), predictive power,
          | and efficient representation (i.e. elegant compression), which
          | means faster programs / fewer computational steps required.
        
       | davidgrenier wrote:
        | Good teacher; his Number Theory book felt really good, though I
        | have nothing comparable in Number Theory to judge it against. I
        | must say Number Theory and Combinatorics are the most difficult
        | topics I got acquainted with in undergrad.
        
         | Tainnor wrote:
         | Number Theory is really weird because the definitions and
         | theorems are so often straightforward and easy to understand
         | (you could explain Fermat's Last Theorem to a bright school
         | kid), but the proofs can be devilishly complicated.
        
       | epgui wrote:
       | I mean... you can abuse language and math notation in ways that
       | you can't do for computer code. Math notation is actually
       | terribly and surprisingly informal compared to code. I'd just
       | argue that many[*] of these "social disagreements" would go away
       | within a computational framework. I think the future of
       | mathematics is in computer proofs.
       | 
       | [*] Note that the claim is "many" (ie.: a subset), not "all".
        
       | catskul2 wrote:
       | My brain does not like the phrase "social compact" probably
       | because I've heard "social contract" so often, and rarely if ever
       | hear "compact" used in this way. On the other hand I hear "pact"
       | much more often.
        
         | Tainnor wrote:
         | It's especially weird because "compact" has a very specific
         | meaning in mathematics.
        
         | droptablemain wrote:
         | Construct seems more fitting. I also read the headline and
         | thought it sounded like a mistake.
        
           | [deleted]
        
           | SamBam wrote:
           | I read a "social compact" as more of a "social agreement,"
            | which gets to the bottom of the consensus that he's talking
            | about.
           | 
           | A "social construct" is quite different, that's something
           | entirely constructed by society, like manners or nationalism.
        
             | epgui wrote:
             | I made the same comment as you in response to parent
             | commenter (now deleted)... But then I looked at some
             | definitions of "social construct" and it seems to have
             | broader semantics than I had thought.
        
       | qsort wrote:
       | > But is this foolproof? Is a proof a proof just because Lean
       | agrees it's one? In some ways, it's as good as the people who
       | convert the proof into inputs for Lean.
       | 
       | Having studied CS in school I'm sort of triggered by this. Might
       | be the editorialization, but this is a statement I have a problem
       | with.
       | 
       | I am not one of those who think that computers will save us all.
       | My point of view is that computers are meaningless machines that
       | do meaningless operations on meaningless symbols unless proven
        | otherwise. This is what gets drilled into your head in any semi-
       | decent CS program and it's a point of view I came to agree with
       | completely.
       | 
       | But we have proven otherwise.
       | 
       | Proof checkers like Lean, Coq, Matita, Isabelle and the like are
       | not like normal code, they are more similar to type-systems and
       | code expressed in their formalisms is directly connected to a
       | valid mathematical proof (see the Curry-Howard isomorphism).
       | 
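        | A minimal Lean 4 illustration of that correspondence (a toy
        | example of mine, not from the article):
        | 
        |     -- Under Curry-Howard, a proposition is a type and a proof
        |     -- is a program of that type; modus ponens is literally
        |     -- function application.
        |     theorem modus_ponens (A B : Prop) : A → (A → B) → B :=
        |       fun a f => f a
        | 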
        | They are usually constructed around a tiny, hand-proven core,
        | with the rest built in their own formalism, ensuring that what
        | is exposed to the end user is perfectly grounded.
       | 
       | If a program is accepted by the proof checker, then _by
       | construction_ it must be a valid mathematical proof.
       | 
       | Of course, computers are physical machines that can potentially
       | make all sorts of mistakes. Hardware failures, cosmic rays,
       | whatever. But the probability of that happening on a large scale
       | is the same probability that seven billion people collectively
       | hallucinated elementary theory and it's in fact not true that
       | there are infinitely many prime numbers.
       | 
       | edit: Just a clarification: it's only this particular statement I
       | have a problem with. The article is very much worth your time. Do
       | not get derailed by the title, it's not some sort of "math is
       | racist" nonsense.
        
         | cscheid wrote:
         | I'm with you almost everywhere, but it's worth pondering this:
         | 
         | > If a program is accepted by the proof checker, then by
         | construction it must be a valid mathematical proof.
         | 
         | I'd phrase it as "If a program is accepted by a _correct_ proof
         | checker, then ...". That makes it clear that we can't escape
         | having to consider how and why the proof checker is considered
         | correct.
         | 
          | Edit: this is not entirely academic. From what I understand,
          | the unsoundness of Hindley-Milner in the presence of
          | polymorphic references was not immediately known
          | (https://en.wikipedia.org/wiki/Value_restriction#A_Counter_Ex...).
        
           | markisus wrote:
           | This is a great point. In a computer proof system, you
            | produce judgements like "the term x has type T", where the
           | type "T" encodes the statement of a theorem and "x" is the
           | proof. We must be certain that whenever the type checker
           | verifies that "x has type T", it means that the theorem "T"
           | is actually true. Such a meta-proof must be done on pen and
           | paper, before any code is written at all. Any bug in this
           | meta-proof would destroy our ability to rely on the computer
           | proof system.
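            | 
            | In Lean 4 syntax, for instance (a toy example of mine):
            | 
            |     -- The statement is the type, the proof is the term.
            |     theorem two_plus_two : 2 + 2 = 4 := rfl
            | 
            |     -- Asking the kernel for exactly the judgement "x : T":
            |     #check two_plus_two   -- two_plus_two : 2 + 2 = 4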
        
             | cscheid wrote:
             | > Such a meta-proof must be done on pen and paper, before
             | any code is written at all.
             | 
             | Not so fast; it could be done on a _different automated
             | system_ (say, a smaller one) than the one you're building,
             | in which case you're now relying on that system's
             | correctness. Or it could even not be proven at all. This is
             | not too different from what mathematicians do when they
             | (often) say "assuming P != NP", or "assuming GRH", or
             | "assuming FLT", etc etc. It's simply that it's worth being
             | careful and precise.
        
           | wk_end wrote:
           | To be ruthlessly, uselessly pedantic - after all, we're
           | mathematicians - there's reasonable definitions of "academic"
           | where logical unsoundness is still academic if it never
           | interfered with the reasoning behind any proofs of interest
           | ;)
           | 
           | But: so long as we're accepting that unsoundness in your
           | checker or its underlying theory are intrinsically deal
           | breakers, there's definitely a long history of this, perhaps
           | somewhat more relevant than the HM example, since no proof
           | checkers of note, AFAIK, have incorporated mutation into
           | their type theory.
           | 
           | For one thing, the implementation can very easily have bugs.
           | Coq itself certainly has had soundness bugs occasionally [0].
           | I'm sure Agda, Lean, Idris, etc. have too, but I've followed
           | them less closely.
           | 
           | But even the underlying mathematics have been tricky.
           | Girard's Paradox broke Martin-Lof's type theory, which is why
           | in these dependently typed proof assistants you have to deal
           | with the bizarre "Tower of Universes"; and Girard's Paradox
           | is an analogue of Russell's Paradox which broke more naive
           | set theories. And then Russell himself and his system of
           | universal mathematics was very famously struck down by Godel.
           | 
           | But we've definitely gotten it right this time...
           | 
           | [0] https://github.com/coq/coq/issues/4294
        
             | tshaddox wrote:
             | > there's reasonable definitions of "academic" where
             | logical unsoundness is still academic if it never
             | interfered with the reasoning behind any proofs of interest
             | ;)
             | 
                | I wouldn't really say so. If you were relying on the type
             | system to tell you about any errors you made in your
             | program, the revelation that your type system was unsound
             | is not "only academic" simply because your program
             | coincidentally didn't have errors. Just like it wouldn't be
             | "only academic" if you discovered that your car's airbags
             | weren't functional for the last 10 years even though you
             | never had any accidents.
        
               | llm_thr wrote:
               | [dead]
        
         | Ericson2314 wrote:
         | Yeah any statement that goes back and forth a bit too fast
         | between Lean and ChatGPT raises my suspicions.
        
         | staunton wrote:
         | The thing that these systems offer is "having to check less
         | code". In some situations, that's huge. In others, it's not
         | worth the effort it takes to get there and maybe doesn't
         | actually reduce the complexity of checking that it all makes
         | sense.
         | 
         | In other words, maybe you proved something beyond reasonable
         | doubt, but that's completely useless if you misunderstand _what
         | it was_ that you proved.
        
         | cvoss wrote:
         | > the same probability that seven billion people collectively
         | hallucinated elementary theory
         | 
         | Just for fun, it's not unprecedented that the entire human race
         | is disposed to mislearn basic truths about the universe which
         | have to be unlearned when you get to the point where it
         | matters. See quantum mechanics.
         | 
         | One hopes that conscientious mathematicians are careful to
         | select axiomatizations (such as ZF set theory or the type
         | system of Lean) which faithfully reflect reality. And, within
         | such a system, we can be extremely confident in what we're
         | doing. But the distinct possibility remains that the
         | axiomatization doesn't actually match our universe and, say,
         | admits a proof of false or implies something not physically
         | realizable, such as that a certain computation should
         | terminate, when no real life physical machine obeying the laws
         | of physics can ever do so.
        
           | mcguire wrote:
           | Be careful with things like "faithfully reflect reality". :-)
           | 
           | Euclidean geometry is intuitively good and seems to work on
           | relatively small areas of the Earth's surface, but non-
           | Euclidean geometries are inconsistent with it and more
           | faithfully reflect a reality that lacks infinite flat planes.
        
             | mcphage wrote:
             | Or even Newtonian Physics, which faithfully reflects
             | reality--as long as you're not too large, or you're not
             | traveling too fast. And then it starts to break down.
        
         | User23 wrote:
         | It's unfortunate that semiotic has become a joke word. C.S.
         | Peirce did seminal work tying it to logic.
        
         | oulipo wrote:
          | I think his argument is exactly that: those formal systems can
          | suffer from errors (even though they've been checked), because
          | the compiler could have errors, the processor could have
          | errors, etc.
          | 
          | So in the end, it's always a question of "community" and
          | "belief that things seen by sufficiently many people are
          | right", e.g. that after 10 years of using a CPU, we've
          | identified and fixed most bugs, and so the program is even
          | more likely to be valid.
        
         | mcguire wrote:
         | Once upon a time, the professor of a class on logic I was
         | taking pointed out that, if you proved some famous open
         | conjecture (I think Fermat's was the example) using some simple
         | proof that everyone had just missed so far, mathematicians
         | would pat you on the head, maybe give you an award or
         | something, and you (and the conjecture) would become another
         | forgotten footnote.
         | 
         | Major mathematical advances come from difficult problems when a
         | new branch of mathematics is invented to solve them. Think of
         | calculus, non-Euclidean geometry, and Turing machines (and
         | their equivalents).
         | 
         | The problem I see with Coq, the proof checker I've worked with
         | the most, and the others I've seen is that the proofs they
         | result in are neither good, understandable programs nor good,
          | understandable proofs. They are as complicated as complex
          | programs, yet written in something that isn't a halfway clean
          | programming language; nor are they written as a
          | traditional proof. That goes back up to the article's point
         | that the purpose of a proof is to convince and enlighten
         | others.
         | 
         | Sure, I'd have to say a proposition proven in Coq is _true,_
         | but how do you get from that to the ideas in the proof letting
         | you go on to other problems?
        
           | sebzim4500 wrote:
           | I guess the hope is that eventually there will be a language
           | which is roughly as easy to read/write as normal proofs in
           | English but can be machine checked.
           | 
           | Lean comes closer than Coq, but is still far from this ideal.
           | It may be that AI will need to be involved (as in, there
            | could be a tactic that generates proof terms using a
            | transformer or something).
        
             | hgsgm wrote:
             | Like Cobol?
             | 
             | Coqobol?
        
           | passion__desire wrote:
            | There is something similar that Nima Arkani-Hamed says about
            | Feynman equations during his amplituhedron lecture: pages and
            | pages of equations being reduced to a simple representation.
            | 
            | @56:26 in https://youtu.be/sv7Tvpbx3lc
        
           | madsbuch wrote:
            | I don't think the point is to have either good math or good
            | code.
            | 
            | When you _develop software_ in Coq and the likes, it is
            | library code. It is high-impact code that is guaranteed to
            | be correct. It is written once and not maintained -- you
            | would never build the new Twitter in Coq.
            | 
            | When you _develop math_ in Coq, you build abstractions. You
            | just need to know the theorems that you are working on. Then
            | you pull in all the lemmas in Coq to help you solve that
            | theorem. You do not need to understand the lemmas Coq uses
            | to help you prove your theorem -- Coq does that for you. So
            | your math is extremely concise. This is in opposition to
            | normal math, where you need to be able to explain the
            | hierarchy your theorem works on top of.
        
             | hgsgm wrote:
             | Normal math also concisely cites pre-existing theorems
        
         | markisus wrote:
         | I think the quote is about the fallibility of the humans who
         | convert theorems into statements of type theory. You could end
         | up with a valid theorem, but not the theorem you meant to
         | prove. This would be a bug in the statement of the theorem, not
         | the proof.
         | 
         | For example, you might want to prove that a certain sorting
          | algorithm is correct. You formalize the specification as "for
          | all indices i <= j, output[i] <= output[j]" and prove that
         | outputs of the algorithm satisfy this spec. However, this is
         | not a correct characterization of sorting, since the algorithm
         | might return the empty list.
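          | 
          | A sketch of that trap in Lean 4 (the names are mine;
          | List.Pairwise is a real library predicate):
          | 
          |     -- The weak spec: the output is pairwise ordered. It says
          |     -- nothing about what the output must contain.
          |     def WeakSortSpec (sort : List Nat → List Nat) : Prop :=
          |       ∀ xs, (sort xs).Pairwise (· ≤ ·)
          | 
          |     -- A "sorter" that throws everything away satisfies it:
          |     theorem nil_meets_spec : WeakSortSpec (fun _ => []) :=
          |       fun _ => List.Pairwise.nil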
        
           | Tainnor wrote:
           | 1. Sure, but verifying a spec is still _much_ easier than
           | verifying a whole proof (as we traditionally do).
           | 
            | 2. In the example you gave, you would have additional
            | evidence: if you used your system and every sort returned an
            | empty list, you'd probably notice it quite quickly. You can
            | also do manual or automated testing to see that it does the
            | right thing for example inputs. If you then consider it
            | unlikely that someone writes code that works only for the
            | example inputs and satisfies the specification in the
            | dumbest possible way, then you get some really good evidence
            | that your algorithm is actually correct. 100% certainty is
            | never possible, but there are still degrees.
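            | 
            | For what it's worth, the usual fix to that spec is about as
            | short as the weak one (a Lean 4 sketch, assuming the
            | library's List.Perm permutation predicate):
            | 
            |     -- Tighter spec: the output is ordered AND a permutation
            |     -- of the input, so "return []" no longer qualifies.
            |     def SortSpec (sort : List Nat → List Nat) : Prop :=
            |       ∀ xs, (sort xs).Pairwise (· ≤ ·) ∧ (sort xs).Perm xs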
        
             | [deleted]
        
             | markisus wrote:
              | Agreed. I think the interviewee is a little pessimistic
              | when he says, about computer assistance, "I'm just not
              | sure it's any more secure than most things done by
              | humans." If
             | the ABC conjecture were to be proven in Lean, I'm pretty
             | sure it would be accepted as true by an overwhelming
             | majority of the mathematical community.
        
               | hgsgm wrote:
               | The ABC conjecture "proof" is rejected because it uses
               | definitions and arguments no one understands.
               | 
               | It would not necessarily be hard to program in those
               | incoherent definitions and get them to prove a result.
               | That wouldn't mean much.
               | 
               | This is different from just programming in the conjecture
               | and letting Lean find a complete proof.
        
           | crvdgc wrote:
           | It's an important problem and people in formal verification
           | are aware of it. One example of tackling this is that the
           | Lean LTE team accompanies the proof with "several files
           | corresponding to the main players in the statement" and
           | "[they] should be (approximately) readable by mathematicians
           | who have minimal experience with Lean [...] to make it easy
           | for non-experts to look through the examples folder, then
           | look through the concise final statement in challenge.lean,
           | and be reasonably confident that the challenge was
           | accomplished".
           | 
            | Details in this blog post:
            | https://leanprover-community.github.io/blog/posts/lte-exampl...
        
       | dcre wrote:
       | I will always remember when I first got into serious proof-based
       | math in the first semester of college, and I had to work hard to
       | develop my sense of what counts as a sufficient proof. I would
       | read the proofs in the textbook and hit the QED at the end and
       | not understand why it was enough. Eventually I came to understand
       | something like the framing here, which is that a proof is about
       | persuasion, persuasion is about judgment, and judgment (by
       | definition, maybe) can't be pinned down to clear rules.
       | 
       | The University of Chicago has a special math class format called
       | Inquiry-Based Learning designed around this idea, where you work
       | _together_ in class to put proofs together and work out a shared
        | understanding of what is sufficient. I didn't take it but I wish
       | I had. You can read some people's experiences with it here[0].
       | 
       | [0]:
       | https://www.reddit.com/r/uchicago/comments/i1id9e/what_was_y...
        
         | pjacotg wrote:
         | I recently read this book review [0] where a mathematical proof
         | was described as a dialogue between two semi-adversarial but
         | collaborative actors, the Prover and the Skeptic, who together
          | aim to acquire mathematical insight.
         | 
         | I thought it was an interesting perspective.
         | 
          | [0] https://jdh.hamkins.org/book-review-catarina-dutilh-novaes-t...
        
           | hgsgm wrote:
            | This is also what philosophical discussion/debate is.
        
       | sctb wrote:
       | > Moreover, one could play word games with the mathematical
       | language, creating problematic statements like "this statement is
       | false" (if it's true, then it's false; if it's false, then it's
       | true) that indicated there were problems with the axiomatic
       | system.
       | 
       | I hope this isn't too far off topic, but can someone clarify
       | exactly how this problem indicts axioms? As an uninformed and
       | naive musing, it occurs to me that an issue with the statement
       | "this statement is false" is _this_. The whole of the statement,
       | that is, the thing having truth or falsehood, cannot be addressed
       | by one of its components.
        
         | Tainnor wrote:
         | You're precisely right that it's the self-referentiality that
         | gets us in trouble. Systems which don't have that don't suffer
         | from Godel-related problems (e.g. propositional logic, or
         | arithmetic without multiplication).
         | 
         | Unfortunately, what Godel showed is that any logical system
         | that is strong enough to capture addition and multiplication
         | over natural numbers - and _nothing more_ - includes self-
         | referentiality.
         | 
         | That includes the language of arithmetic - you can write down
         | formulas that refer to themselves - and Turing machines (via
         | the recursion theorem, any Turing Machine is equivalent to a
         | machine that knows its own source code). The constructions are
         | a bit technical, but it's possible.
        
           | finnh wrote:
            | Yeah, I always thought that the construction of Godel numbers
            | was the weakest part of the proof / the biggest leap of faith
            | / the part your prof would just hand-wave as being a valid
            | move.
           | 
           | Of course once you get into Turing machines it all flows more
           | naturally, what with all of us being accustomed to "code is
           | just data".
        
             | Tainnor wrote:
             | I agree that Turing Machines feel more natural to
             | programmers than first-order logic (although "natural"
             | doesn't necessarily mean "rigorously proven"), but there
             | are no leaps of faith involved in Godel's construction.
             | 
             | You can write down "P is provable" as a first-order
             | sentence of arithmetic (which involves some number
             | theoretic tricks), and you can also do the diagonalisation
             | trick that gives you self-referentiality. That's really all
             | you need.
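              | 
              | Schematically: with Prov(x) defined as "there is a y that
              | encodes a proof of the sentence encoded by x", the
              | diagonalisation trick yields a sentence G such that
              | arithmetic proves
              | 
              |     G ↔ ¬Prov(⌜G⌝)
              | 
              | i.e. G "says" of itself that it is unprovable.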
        
         | rmzz wrote:
         | [dead]
        
         | kmod wrote:
         | IANA mathematician, but I read "axiomatic system" broadly as
         | including not just the axioms but also the logic in which they
         | are based. My understanding is that a common interpretation is
         | that ZFC axioms are a list of 10 strings of symbols, which only
         | have some sort of meaning when you pick a logic that gives
         | meaning to these symbols. But I think also that this particular
         | understanding of what axioms are ("formalism") is just one way
         | of understanding truth in mathematics, and there are others.
         | https://en.wikipedia.org/wiki/Philosophy_of_mathematics
         | 
         | As for this particular issue I think this wikipedia page is
         | relevant: https://en.wikipedia.org/wiki/Impredicativity
        
           | thorel wrote:
           | This idea of giving "meaning" to a set of axioms is precisely
           | captured by the notion of "interpretation" in logic [1]. The
           | rough idea is to map the symbols of the formal language to
           | some pre-existing objects. As you say, this gives one way of
           | formalizing truth: a sentence (string of symbols that respect
           | the syntax of your language) is true if it holds for the
           | objects the sentence is referring to (via a chosen
           | interpretation). This notion of truth is sometimes referred
           | to as _semantic_ truth.
           | 
           | An alternative approach is purely syntactic and sees a
            | logical system as a collection of valid transformation rules
           | that can be applied to the axioms. In this view, a sentence
           | is true if it can be obtained from the axioms by applying a
           | sequence of valid transformation rules. This purely syntactic
           | notion of truth is known as "provability".
           | 
           | Then the key question is to ask whether the two notions
           | coincide: one way to state Godel's first incompleteness
           | theorem is that it shows the two notions do not coincide.
           | 
           | [1] https://en.wikipedia.org/wiki/Interpretation_(logic)
        
             | Tainnor wrote:
             | > Then the key question is to ask whether the two notions
             | coincide: one way to state Godel's first incompleteness
             | theorem is that it shows the two notions do not coincide
             | 
             | It's even more subtle than that. They do coincide in a
             | sense, which is proven by Godel's _completeness_ theorem
             | (well, at least in First-Order Logic). That one just says
              | that a sentence is provable from some axioms iff it's
              | true in every interpretation that satisfies the axioms.
             | 
             | So one thing that Godel's first incompleteness theorem
              | shows is that it's impossible to uniquely characterise
             | even a simple structure such as the natural numbers by some
             | "reasonable"[0] axioms - precisely because there will
              | always be sentences that are true in some
             | interpretations but not in others.
             | 
             | Unless you use second-order logic - in which case the whole
             | enterprise breaks down for different reasons (because
             | completeness doesn't hold for second order logic).
             | 
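              | In symbols, completeness says
              | 
              |     Γ ⊢ φ   iff   Γ ⊨ φ
              | 
              | while the first incompleteness theorem gives, for any
              | consistent "reasonable"[0] set of axioms extending basic
              | arithmetic, a sentence G that is neither provable nor
              | refutable; by completeness, such a G is true in some
              | interpretations satisfying the axioms and false in others.
              | 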
             | [0] reasonable basically means that it must be possible to
             | verify whether a sentence is an axiom or not, otherwise you
             | could just say that "every true sentence is an axiom"
        
               | thorel wrote:
               | Agreed, thanks for the clarifications. Another result
               | worth mentioning, which also shows that you cannot hope
               | to uniquely characterize a structure by "reasonable"
                | axioms, is the Lowenheim-Skolem theorem, which predates
               | Godel's incompleteness (although the history of these
               | results is somewhat convoluted).
               | 
                | There, the obstacle is in some sense of the simplest
                | nature: if your set of axioms admits a countably infinite
                | model, then it admits models of all infinite
                | cardinalities. In
               | other words, it shows that there is something
               | fundamentally impossible in trying to capture an infinite
               | structure (like numbers) by finite means (e.g.
               | recursively axiomatizable).
        
               | Tainnor wrote:
               | Lowenheim-Skolem is such a mind-blowing result. It's a
               | shame it's not talked about more often.
        
               | ginnungagap wrote:
               | > In other words, it shows that there is something
               | fundamentally impossible in trying to capture an infinite
               | structure (like numbers) by finite means (e.g.
               | recursively axiomatizable).
               | 
               | Let me just point out for other readers that Lowenheim-
               | Skolem applies to ANY first order theory (in a countable
               | language, or also in an uncountable language if stated in
               | the form that a theory with an infinite model with
               | cardinality at least that of the language has infinite
               | models in all cardinalities at least as big as that of
               | the language), it doesn't care about how complex the
                | axioms are from a computability point of view.
        
               | Tainnor wrote:
               | You're correct, but I think the spirit of what GP wrote
               | is still true.
               | 
               | You can't capture an infinite structure fully by
               | "finitary" methods, either you use FOL and then you run
               | into L-S, or you use higher-order logic (for which L-S
               | doesn't apply) but then you don't have a complete proof
               | system anymore.
               | 
               | To tie it all together, L-S and incompleteness are about
               | different flavours of "not being able to capture
               | something". L-S is about models of different
               | cardinalities. These models do still all satisfy exactly
               | the same sentences. Incompleteness is about different
               | models actually satisfying different sentences.
        
         | qsort wrote:
         | This is probably the editorialization.
         | 
          | He's alluding to the fact that, unlike what one might naively
          | intuit, it's impossible to formulate a set of axioms from
          | which all mathematical truths can be formally derived.
         | 
         | Because of Godel's incompleteness theorems, any formal system
         | that's sufficiently powerful to formulate basic arithmetic (see
         | for instance Robinson arithmetic) and consistent (that is, it
         | cannot prove false statements) is both incomplete (meaning that
         | it's possible to formulate a wff within that system that the
         | system itself can neither prove nor disprove) and can't prove
          | its own consistency.
         | 
         | This fact is often portrayed in popular media as being a "bug"
         | or a "missing foundation" of mathematics. That is inaccurate --
         | it's just a property of how logical systems work -- but it does
         | prove that the search for the holy grail of a grand unified
         | system of axioms for all of mathematics is destined to remain
         | fruitless.
         | 
         | Modern mathematics is most often implicitly assumed to be
         | formulated in a formal system called ZFC, but there are
         | alternatives.
        
           | Tainnor wrote:
           | I think the second theorem is the "worse" result for
           | mathematics. It shows that it's completely impossible to use
            | a weaker system (one that is hopefully self-evidently
            | sound) to prove the consistency of a stronger but weirder
            | system like ZFC.
           | 
           | Hilbert wanted to show that, even if you don't "believe" in
           | ZFC, we could at least try to convince you that it doesn't
           | lead to contradictions, but that fundamentally didn't work.
        
             | qsort wrote:
             | Oh this is interesting, I didn't know about this historical
             | detail.
             | 
              | It makes sense: Godel's first could be mostly seen as a
             | version of the halting problem, while the second is "game
             | over" for a Hilbert-like foundation program.
             | 
             | p.s. Just wanted to point out that it's funny how we wrote
             | _two_ mostly identical sets of comments. Great minds think
             | alike? :)
        
               | Tainnor wrote:
               | > Oh this is interesting, I didn't know about this
               | historical detail.
               | 
               | That's the characterisation I got from Peter Smith's book
               | about Godel's theorems. I didn't verify original sources
               | or anything, but it sounds very plausible to me.
               | 
               | And it also kind of answers the question about why we
               | should care: if a system can't prove its own consistency,
               | well that's not terribly interesting (even if it could,
               | we would have to believe the system a priori to trust its
               | own consistency proof). But if it also can't prove a
               | stronger system consistent, then that's much more
               | interesting.
               | 
               | > Great minds think alike?
               | 
               | That would be a bit too self-aggrandizing for me. :D I'm
               | very curious about logic (and maths in general), that's
               | why I know some stuff about it, but I'm no maths genius.
        
           | SleekEagle wrote:
           | Is the fact that these systems cannot prove their own
           | consistency actually a feature of the incompleteness theorem?
           | I thought it effectively boiled down to "you can keep either
           | consistency or completeness, not both". It's been a while
           | since my metamathematics course ...
        
             | qsort wrote:
              | It depends on exactly what you mean by those terms.
             | 
             | When you say "you can keep consistency or completeness but
             | not both" you are essentially stating Godel's first.
             | 
             | Morally speaking, Godel's first is "no system is complete",
             | but there are two exceptions: if the system isn't powerful
             | enough to formulate the Godel sentence then the proof
             | doesn't work, and any inconsistent system is trivially
              | complete because _ex falso quodlibet sequitur_, i.e. any
              | wff is provable.
             | 
             | The part about a system not being able to prove its own
             | consistency is Godel's second theorem.
             | 
             | But the theorem only does what it says on the tin: the
             | system not being able to prove its own consistency doesn't
              | mean that it is inconsistent! [1] This is the case for
             | ZFC, for example. We can't prove ZFC's consistency within
             | ZFC: that would violate Godel's second, and we know that
             | _either_ ZFC is inconsistent (that is, you can derive an
              | absurdity from the axioms) _or_ ZFC is incomplete (that is,
             | there exists a well-formed formula of ZFC that cannot be
             | proved or disproved within ZFC).
             | 
             | We don't know which one it is.
             | 
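              | In symbols, Godel's second reads: if ZFC is consistent,
              | then ZFC ⊬ Con(ZFC), where Con(ZFC) is the arithmetized
              | sentence "no contradiction is derivable from ZFC's
              | axioms".
              | 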
             | [1] This implication and similar ones are often made in
             | pop-culture presentations of the foundation problem. I
             | generally vigorously object to them because they're not
             | only imprecise -- that's understandable -- but they leave a
             | layman with a completely wrong impression.
        
         | thorel wrote:
          | The article is oversimplifying a bit in summarizing the
          | axiomatic crisis as being a problem with sentences like "this
          | statement is false".
         | 
         | This being said, your intuition is absolutely correct, the crux
         | of the issue is with 'this'. What mathematicians realized is
         | that if you are not careful with your choice of axioms, the
         | resulting logical system becomes too "powerful" in the sense
         | that it becomes self-referential: you can construct sentences
         | that refer to themselves in a self-defeating manner.
         | 
          | As others have mentioned, this is the idea underlying Godel's
          | incompleteness theorem but also, to some extent, Russell's
          | paradox that came before and is what the article is referring
          | to. In Russell's paradox, the contradiction comes from
          | constructing the set of all sets that do not contain
          | themselves.
        
           | Tainnor wrote:
           | Maybe in analogy to Russell's paradox and how you can "fix"
           | it by distinguishing sets and classes, you can "fix" a Godel
           | sentence by adding it as an axiom, but then you'll just get a
           | new Godel sentence... and so on.
        
         | [deleted]
        
       | pelorat wrote:
       | Math is the description of how one number relates to another
       | number.
       | 
       | I'm not a math person but this Shinichi Mochizuki sounds like a
       | hack job.
        
         | pphysch wrote:
         | Mochizuki is an extreme case, but the problem of "bullshitting"
          | is omnipresent in academia, particularly in (pure)
          | mathematics, which is a) perceived as consequential and b)
          | impossible for
         | laypeople to verify.
        
         | brazzy wrote:
         | The vast majority of academic math has little or nothing to do
         | with numbers.
        
       | simonh wrote:
       | Whether a task is meaningful or meaningless depends on how we
       | think about meaning. I tend to think about it in terms of
       | correspondences between structures. Such structures are usually
       | what we think of as information, so this comment has meaning when
       | interpreted using the corresponding knowledge of English in your
       | brain. A computer simulation of the weather has meaning to the
       | extent that it corresponds to actual weather, and its behaviour.
       | The environmental mapping data structures in a Roomba's memory
        | have meaning to the extent they correspond to the environment
        | it is navigating.
       | 
       | So meaning is about correspondences, but also about achieving
       | goals. We communicate for many reasons, maybe in this case to
       | help us understand problems better. The weather simulation helps
       | us plan activities. The Roomba's map helps it clean the room. So
       | there's also an element of intentionality.
       | 
       | What is the intention behind these mathematical proofs? What are
       | their correspondences to other information and goals?
        
       | pphysch wrote:
       | > Initially, Newton and Leibniz came up with objects called
       | infinitesimals. It made their equations work, but by today's
       | standards it wasn't sensible or rigorous.
       | 
        | It would be great if mathematics were widely presented and
        | taught in this messy, practical, human, truthful way rather
        | than in the purist, mystical, mythical manner.
       | 
       | Calculus wasn't "discovered", it was painstakingly _developed_ by
       | some blokes that were trying to solve physical problems.
        
       | svat wrote:
       | Nice interview / thoughts. There was a great article almost
       | exactly 10 years ago, revolving around the same example
       | (Mochizuki's claimed proof of the abc conjecture) that this
        | article starts with:
        | http://projectwordsworth.com/the-paradox-of-the-proof/
        | (It's still worth reading, even as background to the posted
        | article.)
        
       | [deleted]
        
       | mcguire wrote:
       | Catarina Dutilh Novaes has been arguing much the same thing. She
       | has a 2021 book out, _The Dialogical Roots of Deduction,_ which
        | is on my list but I haven't gotten there yet.
       | 
       | I also haven't watched this video, but I'm linking it because I
       | can't find an earlier one where she points out that the basis of
       | logic is in rhetoric.
       | 
       | https://youtu.be/0IOhYneseiM?si=vEfJ-JFCPF05zzNO
        
         | zodzedzi wrote:
         | Off-topic question: I've been seeing this "si" URL parameter
         | pop up a lot in shared youtube links lately. Which app exactly
         | pulled the URL for you?
        
       | nuc1e0n wrote:
        | Seems that Lean could be considered a programming language for
        | mathematical proofs.
        
       ___________________________________________________________________
       (page generated 2023-08-31 23:00 UTC)