Hacker News | new | past | comments | ask | show | jobs | submit | login

This is a very good thought. Some current projects are trying to develop computable mathematical foundations in a more structured way. Homotopy type theory (http://homotopytypetheory.org/) is one example that has a lot of buzz around it just now, but automated theorem proving has been trying to work with higher-order concepts for ages now.

In the classical approach of "compiling" everything into sets/logic/etc., you end up with just the assembly language problem that's being discussed, where all the high-level structure vanishes. In order to do your bottom-up approach instead, one of the things that needs to happen is to make the theory really compositional, so that once you've defined some abstraction or higher-level concept, you can use it in constructions and proofs without having to break the abstraction. You don't need to know - and in fact you shouldn't be able to find out - just how the natural numbers were constructed, as long as they work by the right rules. This motivates the use of type theory to describe mathematical objects, and say which operations are allowed. We want to be able to add two numbers and get another number, but we don't want to be able to intersect two numbers as if they were sets, even if they happen to have been built out of sets.
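The "allowed operations" point can be sketched in code. Here is a minimal, illustrative Python sketch (the names Nat, succ, and value are invented for this example): a number type built out of sets internally, but whose interface exposes only arithmetic, so a client can add two numbers yet has no way to intersect them.

```python
class Nat:
    """A natural number; the internal von Neumann set encoding is private."""

    def __init__(self, elems=frozenset()):
        self._elems = elems          # representation detail, never exposed

    @property
    def value(self):                 # observable behaviour only
        return len(self._elems)

    def succ(self):
        # von Neumann successor: n + 1 = n ∪ {n}
        return Nat(self._elems | {self._elems})

    def __add__(self, other):
        n = self
        for _ in range(other.value):
            n = n.succ()
        return n

zero = Nat()
two = zero.succ().succ()
three = two.succ()
print((two + three).value)  # 5
# There is deliberately no way to ask for "two ∩ three": the set
# structure never leaks past the class boundary.
```

This is only an analogy for what a type theory enforces at the level of the logic itself, rather than by programmer discipline.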

So I think you are right - or at least, there are plenty of people who agree with you that this is a good idea. It is difficult to actually do, of course, but that's life.



> We want to be able to add two numbers and get another number, but we don't want to be able to intersect two numbers as if they were sets, even if they happen to have been built out of sets.

Can't we do this in current mathematics?! I mean, no physicist or engineer ever thinks of numbers as sets, not even the kind of physicist who reads and understands mathematical proofs.


The vast majority of working mathematicians work in set-theoretic foundations, in which numbers are sets.


Right, this is how mathematics really works. But formalizations of mathematics may suffer from leaky abstractions. If we prove facts about numbers by compiling them into sets, and then using set-theoretic axioms, we might accidentally make it possible to prove things about numbers that are incorrect or meaningless.
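A concrete illustration of the leak (a hedged sketch, using the standard von Neumann encoding 0 = {}, n+1 = n ∪ {n}): once numbers literally are sets, set-theoretic "facts" about them become provable even though they are number-theoretically meaningless, and they change if you swap encodings.

```python
def von_neumann(n):
    """Encode n as a von Neumann ordinal (a frozenset)."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}          # n + 1 = n ∪ {n}
    return s

one, two, three = von_neumann(1), von_neumann(2), von_neumann(3)

print(one in three)          # True: "1 is an element of 3"
print(two & three == two)    # True: "2 intersect 3 equals 2"
# Under Zermelo's alternative encoding (n + 1 = {n}), "1 ∈ 3" is
# false, so these "theorems" are artifacts of the encoding, not
# facts about numbers.
```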


Isn't this an abstraction problem that you solve by simply providing an "interface", or an equivalent concept like OOP access specifiers (private/protected)? All other modules that use the number module for applied math would just see an "interface" (let's call it GeneralNumber; as far as I know there are a few alternate ways of defining numbers besides sets, right?), never the particular implementation of numbers as sets.

For more abstract algebras or who knows what, the "numbers module" might also implement another, more advanced interface that exposes more of the implementation, say a "SetsNumber" interface. If you now have a proof that uses this interpretation of numbers, tied to one particular "implementation", then there is nothing incorrect about it leading to weird or "meaningless" results: they would be correct for SetsNumber but not for GeneralNumber (or someone might need to take a close look and see whether they can be made to work for GeneralNumber too).
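The interface idea above can be made concrete in Python terms (a hedged sketch; the names GeneralNumber, IntNat, and SetNat are invented for illustration): one abstract interface, two interchangeable implementations, and client code that sees only the interface.

```python
from abc import ABC, abstractmethod

class GeneralNumber(ABC):
    @abstractmethod
    def add(self, other): ...
    @abstractmethod
    def to_int(self) -> int: ...

class IntNat(GeneralNumber):
    """Numbers implemented directly on machine integers."""
    def __init__(self, n):
        self.n = n
    def add(self, other):
        return IntNat(self.n + other.to_int())
    def to_int(self):
        return self.n

class SetNat(GeneralNumber):
    """Numbers implemented as von Neumann ordinals (frozensets)."""
    def __init__(self, elems=frozenset()):
        self.elems = elems
    def add(self, other):
        result = self
        for _ in range(other.to_int()):
            result = SetNat(result.elems | {result.elems})
        return result
    def to_int(self):
        return len(self.elems)

def set_nat(n):
    """Build a SetNat for n by iterating the successor."""
    s = SetNat()
    for _ in range(n):
        s = SetNat(s.elems | {s.elems})
    return s

def double(x: GeneralNumber) -> int:
    # Client code sees only the interface, never the representation.
    return x.add(x).to_int()

print(double(IntNat(4)))   # 8
print(double(set_nat(4)))  # 8
```

A proof that only uses `GeneralNumber` methods holds for both implementations; a proof that reaches into `SetNat.elems` is tied to that one implementation.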

(I know, the words are all wrong, it probably sounds either "all wrong" or like gibberish to mathematicians who don't also happen to be programmers... someone should figure out more appropriate terms :) )

And about leaky abstractions, I think they happen a lot in software because of the tradeoffs we make, like "but we also need access to that low-level stuff to tweak performance", "but we need it done yesterday, so there's no time to think it through and find the right model", or "our model has contradictions and inconsistencies, but it's good enough at delivering usable tools to the end user, so we'll leave it 'wrong' because we want to focus on something that brings more business value right now", etc. Also, there's a biggie: for some problems, using no abstraction is not good enough (initial development/prototyping speed is just too slow), but if you figure out the right abstraction it may end up being understandable only by people with "IQ over n" or "advanced knowledge of hairy theoretical topic x", and you can't hire only those kinds of people to maintain the product, so you knowingly choose something that's leaky but works and can be maintained by mediocre programmers, hopefully even outsourced :)


Yes, the interface idea is basically the right thing, but there are technical difficulties which aren't immediately obvious. For example, we'd like to be able to prove that two different implementations of the natural numbers are equivalent (one can do this mathematically, so if our system is going to handle general mathematics then it has to be capable of doing this). So we have to think quite hard about what this "equivalence" actually means. It's not enough to say that they satisfy the same axioms, because in general there can be all kinds of models for some set of axioms. It's closer to say that all number-theoretic statements about numbers-v1 are true of numbers-v2, and vice versa: but you can see that this is starting to get a bit hairy in terms of computable proofs.
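One weak but checkable notion of "equivalence" is a translation between the two implementations that commutes with the operations. A hedged Python sketch (unary vs. binary naturals; all names invented here): a real system would demand a proof for all values, while code can only spot-check a sample.

```python
def unary(n):
    """n as a Peano-style tuple of successors, e.g. 3 -> ('S','S','S')."""
    return ('S',) * n

def unary_add(a, b):
    return a + b              # tuple concatenation is unary addition

def binary(n):
    """n as a bit string."""
    return bin(n)[2:]

def binary_add(a, b):
    return bin(int(a, 2) + int(b, 2))[2:]

def phi(u):
    """The translation from unary numerals to binary numerals."""
    return binary(len(u))

# Homomorphism check: phi(a + b) == phi(a) + phi(b), on a sample.
for m in range(10):
    for n in range(10):
        lhs = phi(unary_add(unary(m), unary(n)))
        rhs = binary_add(phi(unary(m)), phi(unary(n)))
        assert lhs == rhs
print("phi commutes with addition on the sampled range")
```

The hard part the comment alludes to is turning this kind of spot check into a once-and-for-all proof covering every operation and every statement, not just addition on small inputs.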

A related problem, which speaks to the leaky abstraction issue, is "proof irrelevance". Typically, if I've proved something, it shouldn't matter exactly how I did it. But it turns out to be tricky to make sure that the proof objects in the system don't accidentally carry too much information about where they came from. Sure, you can define a way to erase the details, but you still have to prove that erasing doesn't mess up the deductive system.
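For what it's worth, some systems bake proof irrelevance into the logic itself. In Lean 4, for example, the universe `Prop` is definitionally proof-irrelevant, so any two proofs of the same proposition are equal by reflexivity (a minimal sketch):

```lean
-- Lean 4: Prop is definitionally proof-irrelevant, so any two
-- proofs of the same proposition are equal by `rfl`.
example (P : Prop) (h₁ h₂ : P) : h₁ = h₂ := rfl
-- Data (in Type) is not proof-irrelevant: how a value was built
-- can matter, which is exactly the distinction being drawn here.
```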

None of this is insurmountable, but it's a glimpse into the reasons why encoding mathematics computationally is not trivial.



