I'm forever baffled by our industry's complete inability to see, or to care, that our most precious resource is time.
One thing is true: if you're not proving correctness of your code -- formally or informally -- then you are living in entropy and at very high risk of inefficiently delivering value through software. Knowing how to call "correct" on code is paramount.
And also -- yes, static type systems allow for (partial) machine verification of these proofs.
The missing piece is the innate -- and immense -- cost of expressing these proofs formally in a machine-checkable way.
Static type enthusiasts typically downplay these costs, but they are simply wrong. I am not talking about the cost of learning to code in a statically typed system -- that should never be factored in. I am talking about the innate costs of formal verification (and strong static typing) that even the expert static typers pay. I have seen these guys work, and they are delivering sub-optimally in time compared to the alternatives. Period.
I have been around the block with both static and dynamic type systems, and the latter by far optimizes for delivery throughput over time.
Formally proving the correctness of your program has the upfront cost of formalizing the proof (to the degree required by the verification system), and it also crystallizes your code in its current representation, which makes it more difficult to (re-)factor for future uses.
Some of the (more reasonable) strong static type enthusiasts will concede that this kind of machine/type-proving is better done once the domain and code stabilize. My hat's off to these people for at least being honest about things.
However, the next realization is that once the code and domain stabilize, the need for (and value of) machine-proving correctness in typical business/data applications drops substantially, for obvious reasons.
So the pragmatic value of strong type systems and formal verification is far lower than their proponents would have you believe. Of course we've known this truth forever, but our industry forgets pretty quickly. Haskell and its variants are rising in popularity; but make no mistake: if you are optimizing for overall delivery throughput over time, even experts are swimming upstream with these languages.
Of course, every time I point this out on HN I get downvoted -- but it kills me to think that the next generation of programmers is being misled down a path of formal purity with misrepresentative claims about the cost of using these tools in real business applications.
Just to dispel any idea that what I'm saying is philistine: I am a mathematician/academic first, I enjoy category theory, I have written more than my share of academic proofs (including novel results), and I find these tools immensely fascinating.
But having been in industry for 20+ years now, shipping web-scale and distributed data systems for businesses (what 90+% of us are doing, I imagine), where time is the most precious resource, I know with certainty that leaning on formal verification techniques (including strong static type systems) is an enormous tax compared to the alternative: these tools work against fast-paced, iterative development.
It has also become evident to me that there is a vanguard of static type enthusiasts who are not admitting (or perhaps do not understand) the relative cost of the pursuit. They will point to a few null pointer errors (which, mind you, could be eliminated or reduced by defensive coding techniques other than formal proofs) and use these to justify the herculean cost of their formal system.
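To make the defensive-coding point concrete, here's a minimal sketch in C (find_user_name and the ids are invented for illustration): instead of proving non-nullness at the type level, you guard at the point of use, where the check is explicit, local, and cheap to change as requirements move.

    #include <stdio.h>

    /* Hypothetical lookup that may legitimately return NULL. */
    static const char *find_user_name(int id) {
        return (id == 42) ? "alice" : NULL;  /* stand-in for a real lookup */
    }

    /* Defensive guard at the boundary, rather than a type-level proof. */
    static void greet(int id) {
        const char *name = find_user_name(id);
        if (name == NULL) {
            fprintf(stderr, "no user with id %d\n", id);
            return;
        }
        printf("hello, %s\n", name);
    }

    int main(void) {
        greet(42);  /* prints: hello, alice */
        greet(7);   /* prints: no user with id 7 */
        return 0;
    }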
If you're a junior or on the fence about static type systems, at least code in a weakly typed language in which you can lean one way or the other. If business outcome/throughput is what you value first and foremost, I guarantee you will gravitate more and more toward dynamic evaluation -- especially as you realize that real business delivery produces constantly changing requirements, and that carrying out delivery in the face of purist formal modeling and proofs will be a substantial drag on what you can do, for little relative benefit downstream.
I love the collaborative features of Google Docs and Google Sheets.
The thing that's missing from "Google Docs" is a decent collaborative outliner called "Google Trees", that does to "NLS" and "Frontier" what "Google Sheets" did to "VisiCalc" and "Excel".
And I don't mean "Google Wave"; I mean a truly collaborative, extensible, visually programmable, spreadsheet-like outliner with expressions, constraints, absolute and relative XPath-like addressing, and scripting like Google Sheets, but with a tree instead of a grid. One that eats, drinks, scripts, and shits JSON, XML, or any other structured data.
Of course you should be able to link and embed outlines in spreadsheets, and spreadsheets in outlines, but "Google Maps" should also be invited to the party (along with its plus-one, "Google Mind Maps").
More on Douglas Engelbart's NLS and Dave Winer's Frontier:
For some time John Ousterhout was part of Sun Labs, and Sun invested quite a bit of resources in Tcl and in bridges between Tcl and Java. The original idea was that Java would be the universal systems language and Tcl the universal scripting language. That was scrapped at some point because the messaging was too complex, and it ended up being "Java for everything". Arguably we would have been better off with Java + Tcl than with the Java + JavaScript we ended up with.
When that happened, Ousterhout left to found Scriptics, but it never got the traction it should have (it didn't help being branded as "parasites" by Stallman).
For anyone wondering about pricing, here's our approach.
- For new Stripe customers, this is free up to the first (lifetime) $1M of payments.
- For existing customers, there's no pricing change. You just get more functionality than before for free. This is what we generally try to do: we want Stripe to continually become better value for you over time, as you get more functionality for the same price.
- What we've seen over Stripe's history is that customers handling large amounts of revenue have been forced to pay substantial amounts for expensive third-party systems. So, we've decided to build something that we think will be better and cheaper -- and that will, over time, increase the net revenue of businesses built on Stripe.
I'm sorry about any confusion in our communication around this!
Just in case people were wondering: the site seems to have been overwhelmed for the past 10-15 minutes. But there's a YouTube demo of the tech: https://news.ycombinator.com/item?id=16275040
I guess an oversimplified description would be that this is like a Jupyter Notebook specifically for JavaScript. Libraries like D3 are pre-loaded and immediately accessible. I'm definitely interested in hearing the details about what it's built with and the medium- to long-term plans for the service.
Note that the Jupyter Notebook service generally requires you to install and run Python etc. on your own computer. Jumping into an Observable notebook is as easy as opening your browser and signing in via GitHub.
It actually is not the extra function call that is the big hit: if you think about it, objc_msgSend also does two calls (the call to objc_msgSend itself, which at the end tail-calls the imp). The dynamic instruction count is also roughly the same.
In fact objc_msgLookup actually ends up being faster in some microbenchmarks, since it plays a lot better with modern CPU branch predictors: objc_msgSend defeats them by making every call site jump to the same dispatch function, which then makes a completely unpredictable jump to the imp. By using objc_msgLookup you essentially decouple the branch source from the lookup, which greatly improves predictability. Also, with a "sufficiently smart" compiler it can be a win because it allows you to do things like hoist the lookup out of loops, etc. (essentially really clever automated IMP-caching tricks).
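To illustrate the loop-hoisting idea, here's a hand-rolled sketch of IMP caching against the public Objective-C runtime C API; the "process" selector and the same-class assumption are invented for the example, and since objc_msgLookup is a compiler-facing entry point rather than something you'd call by hand, the manual version uses class_getMethodImplementation.

    #include <objc/runtime.h>
    #include <objc/message.h>

    /* Baseline: full dynamic dispatch on every iteration. Every call
       funnels through objc_msgSend, which then jumps to the imp. */
    static void process_all(id *objects, int count) {
        SEL sel = sel_registerName("process");  /* hypothetical selector */
        for (int i = 0; i < count; i++) {
            ((void (*)(id, SEL))objc_msgSend)(objects[i], sel);
        }
    }

    /* Hand-rolled IMP cache: do the lookup once, outside the loop, then
       call the imp directly -- the manual version of what a smart
       compiler could automate via objc_msgLookup. */
    static void process_all_cached(id *objects, int count) {
        if (count == 0) return;
        SEL sel = sel_registerName("process");
        Class cls = object_getClass(objects[0]);
        void (*imp)(id, SEL) =
            (void (*)(id, SEL))class_getMethodImplementation(cls, sel);
        for (int i = 0; i < count; i++) {
            imp(objects[i], sel);  /* assumes all objects share one class */
        }
    }

A real cache would have to re-check the class (and be invalidated on method swizzling); the point is just that the lookup and the call become separable, which objc_msgSend's monolithic entry point prevents.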
There are also a number of minor regressions: now you are doing some of the work on a stack frame (which might require spilling if you need a register, versus avoiding spills by using exclusively non-preserved registers in an assembly function that tail-calls). In the end what kills it is that the profiles of most Objective-C code are large flat sections that do not really benefit from the compiler tricks or the improved prediction, while the added call-site instructions increase binary size and hurt the CPU's i-cache.