Strange piece. Nobody claimed that dynamic type systems are simpler, they're just... well, dynamic. Type system complexity and dynamism are orthogonal. What the dynamic and static distinction is about is when the program decides to look at what your types actually are. At runtime or compile time, nothing more.
This has some implications for simplicity, but not in the sense the article argues. The real benefit of dynamic languages is not that you don't have types, it's that your program becomes more flexible at runtime. This is riskier by definition, but it also has the advantage of not eliminating the countless valid programs that a static language rejects, or only makes possible through fairly complicated constructs.
>"[...]One day you open that module which you haven’t touched in six months, and you see a function call where the third argument is null. You need to remember what kinds of variables you can pass to that third argument[...]"
You don't need to remember that at all. The issue here is that the author applies the static mindset to the dynamic paradigm. The correct behaviour here is to not 'expect' or take for granted any particular input, but for the object/function in question to on its own deal with whatever input it gets. In Smalltalk style OO you would say it's the object that is solely responsible for the message it receives. That's not surprising but the very essence of late-binding.
Depending on the power of the type system: heterogeneous lists. They are hard in statically typed languages, if possible at all, but trivial in dynamically typed languages.
You can definitely have heterogeneous lists in typed languages. They're almost always an anti-pattern, but you can do it in a few ways.
In Java, you could have a LinkedList&lt;Object&gt;. You'll need to cast to do any operations on the objects, but it's a heterogeneous list anyway, so why would you?
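A minimal sketch of that (class and variable names are mine): anything goes into the list, but any type-specific operation needs an instanceof check and a cast.

```java
import java.util.LinkedList;
import java.util.List;

public class Hetero {
    public static void main(String[] args) {
        List<Object> items = new LinkedList<>();
        items.add("a string");
        items.add(42);    // autoboxed to Integer
        items.add(3.14);  // autoboxed to Double

        // The static type of every element is just Object, so doing
        // anything type-specific means checking and casting:
        for (Object o : items) {
            if (o instanceof String) {
                System.out.println(((String) o).toUpperCase());
            } else {
                System.out.println(o);
            }
        }
    }
}
```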
There could be a reason, though.
Let's say they all have the same function, then, but they're different objects. Well, for that, you could write wrapper classes that all implement the same interface, say:
interface Clicker { void click(); }

public class ClickerWidget implements Clicker {
    private final Widget w;

    public ClickerWidget(Widget w) { this.w = w; }

    @Override
    public void click() { w.click(/* with parameters */); }
}
Alternatively, Go supports "use the same function" in a different way by having an interface just define a set of functions, no matter their parent class, and as long as the type has that function, it works.
Please read my comment again. I didn't say no statically typed languages support them.
But your Java example isn't a true heterogeneous list: very close, but not really. You cannot put instances of value types into it.
And even if it were a true heterogeneous list, you have given up on static typing with such a list and deferred it to runtime, which is exactly the dynamic typing approach.
If you want to do tricky pointer stuff, the Java hack around that is an array that holds a pointer to the value instead, and autoboxing allows ints into it. It's a heterogeneous list.
The List&lt;Object&gt; cannot hold int, double etc. It can only hold the boxed variants. Autoboxing takes some of the pain away, but it's still not a heterogeneous list, which must be able to hold values of any type.
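That distinction is easy to demonstrate. In this sketch the add call compiles, but only because the compiler silently boxes the primitive; what the list actually stores is a java.lang.Integer:

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    public static void main(String[] args) {
        List<Object> xs = new ArrayList<>();
        int n = 7;
        xs.add(n);  // compiles via autoboxing, not because int fits in the list
        // The element's runtime class is the boxed wrapper, not the primitive:
        System.out.println(xs.get(0).getClass().getName());  // java.lang.Integer
    }
}
```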
I'd wager that the up-and-coming inline classes in Java also can't be put into a List&lt;Object&gt;, but maybe I'm wrong.
No, I don't think so. The main requirement of a heterogeneous list is that it can hold any type. In addition, in a statically typed language I'd say that to truly qualify as a heterogeneous list it should actually be typesafe. Java fulfills neither requirement.
Seems like the rub is "The main requirement of a heterogeneous list is that it can hold any type." when I think I would colloquially say, and use as a working definition, that they need to hold different types.
Also the rub is that "type" means two things: what a variable can have OR a class (that inherits from `Object`).
Like, chances are, the individual is going to say that Ruby provides that power; but, by the time you've reached Ruby levels of abstraction, the odds of any argument about performance and data marshalling go out the window.
It's a difference without a meaning, as far as I can tell.
> “Not having to” write types but having to think about them anyway is like doing math “not having to” write anything down and doing all calculations in your head. How is it an advantage to not use pen and paper to track down your train of thought as you do a complex calculation, and instead be restricted to do it only in your head?
If dynamic type systems are like not having pen and paper, then static type systems are like a teacher forcing you to "show your work" by writing every little step on pen and paper, no matter how obvious. What's interesting is finding good middle grounds.
So: a typechecker that can infer types so well that it feels almost dynamic but is in fact static, like OCaml (vs. something like C#, where I have to write type signatures that are completely obvious to infer).
Statically typed languages are a much less cumbersome beast now than when I first encountered them, and that history can't help but prejudice the debate. Python was an enormous breath of fresh air at the time.
Type inference, generics and variable shadowing have since covered almost all the pain points. I don't have to declare what's obvious, I don't have to cover irrelevant differences, and I don't have to consider the artefacts of types past.
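For a concrete sense of the first two of those, here is a small sketch in Java (names are mine): the generic method needs no type annotation at the call site, and `var` spares you declaring what the right-hand side already says.

```java
import java.util.List;

public class InferenceDemo {
    // Generic method: T is inferred from the argument at each call site.
    static <T> T first(List<T> xs) {
        return xs.get(0);
    }

    public static void main(String[] args) {
        var names = List.of("ada", "grace");  // local type inference (Java 10+)
        String f = first(names);              // T inferred as String, no annotation
        System.out.println(f);                // ada
    }
}
```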
Still think generics are static typists admitting the shortcomings of a type system, without actually doing so, though.
> Still think generics are static typists admitting the shortcomings of a type system, without actually doing so, though.
Wait, what? Any static type person will readily say that type systems have shortcomings (though I'd probably prefer limits or limitations as a word with fewer negative connotations). It's just a matter of which limitations.
If Dan Luu's article on static vs dynamic types [0] is to be believed, there's no significant difference between the two paradigms.
This largely jibes with my personal experiences using Python, TypeScript, Clojure and Scala. It also jibes with casual observations of the product landscape, where for every good product built in something like Python, you can find a good product built in Java.
The major problem with dynamic typing is performance. Not many of the type systems retrofitted onto existing dynamic languages made the language faster. More secure, yes. Better documentation via typed signatures, yes. Better error messages. But only a few gained speed by using the available type optimizations. Racket complained, Python complained, Perl 5 complained, Ruby complained, everybody complained. I think only PHP and my cperl gained speed with added dynamic types.
The compiler overhead adds to the runtime. With static types you don't care: your compiler and typechecker don't need to be rocket science. With dynamic languages they do, and the paired runtime optimizations, like typed arrays, eliminated bounds checks, and optimized int ops, must be worth it. You don't need a JIT with static types.
Can someone help me understand why the author is making it sound like
def foo(bar, baz):
    if bar == "thing1":
        baz.methodFromTypeX()
    elif bar == "thing2":
        baz.methodFromTypeY()
is something super complicated? Like I get that in some respects the type signature for this method is funky and seems nigh impossible to encode in a type system (which I guess is the point) but this kind of thing is not at all hard to grok as a human.
I think if you settle for the args being combined into a tuple, you could essentially create a tagged union, and the type checker should be able to refine off the comparison.
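In Java terms (a hypothetical sketch, with the `Command`/`Thing1`/`Thing2` names mine), the same idea looks like a small hierarchy where the tag and the payload travel together: the instanceof test plays the role of the string comparison on `bar`, and the checker refines the type inside each branch via pattern matching (Java 16+).

```java
interface Command {}

class Thing1 implements Command {
    String methodFromTypeX() { return "X"; }
}

class Thing2 implements Command {
    String methodFromTypeY() { return "Y"; }
}

public class Dispatch {
    // The runtime check narrows cmd to the concrete type, so only the
    // method that actually exists on that type can be called.
    static String foo(Command cmd) {
        if (cmd instanceof Thing1 t) {
            return t.methodFromTypeX();
        } else if (cmd instanceof Thing2 t) {
            return t.methodFromTypeY();
        }
        throw new IllegalArgumentException("unknown command");
    }

    public static void main(String[] args) {
        System.out.println(foo(new Thing1()));  // X
        System.out.println(foo(new Thing2()));  // Y
    }
}
```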