In the Python version, you can usually understand someone else's code right off the bat, even as a largely non-technical reader.
The details (e.g. that list indexes and ranges start at 0 by default and are half-open) are consistent and predictable after just a tiny bit of experience.
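For readers who haven't internalized those conventions yet, a quick sketch of what "0-based and half-open" means in practice:

```python
# Python list indexes start at 0, and slices/ranges are half-open:
# the start is included, the stop is excluded.
letters = ["a", "b", "c", "d", "e"]

print(letters[0])      # first element: "a"
print(letters[1:3])    # half-open slice: ["b", "c"] (index 3 is excluded)
print(list(range(5)))  # [0, 1, 2, 3, 4] -- 5 itself is excluded

# A handy consequence of half-open slices: splitting at any index k
# reassembles the original list exactly, with no overlap and no gap.
assert letters[:2] + letters[2:] == letters
```

Once you've seen this pattern a couple of times, it applies uniformly to lists, strings, tuples, and `range`, which is exactly the consistency being claimed.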
"Understanding" entails knowledge of the semantics. This means understanding the meaning of syntax, and the behaviour of the functions being called. This is true of both code fragments presented, so if you find one more intuitive than the other, that's your bias and not necessarily some objective feature of the syntax and semantics.
Maybe most people share your bias, and so it could qualify as a reasonable definition of "intuitive for most humans", but there's little robust evidence of that.
The first (and most important) level of “understanding” is understanding the intended meaning. Programming, like natural language, is a form of communication. The easier it is for someone to go from glancing at the code to understanding what it is intended to do, the more legible.
I claim that it is easier to achieve this level of understanding in Python than most other programming languages. (And not just me: this has been found to be true in a handful of academic studies of novice programmers, and is a belief widely shared by many programming teachers.)
Using words that the reader is already familiar with, sticking to a few simple patterns, and designing APIs which behave predictably and consistently makes communication much more fluent for non-experts.
There are deeper levels of understanding, e.g. “have carefully examined the implementation of every subroutine and every bit of syntax used in the code snippet, and have meditated for decades on the subtleties of their semantics”, but while helpful in reading, writing, and debugging code, these are not the standard we should use for judging how legible it is.
Languages that dump crap into the namespace on import are doing the wrong thing. Python has from x import * and every style guide says to never use it. Swift has a lot of other nice features, but the import thing is really a bungle. It is worse for everyone, beginners and experienced users alike. It is even bad for IDE users because you can't type imported-thing.<tab> to get the autocomplete for just the import. You're stuck with the whole universe of stuff jamming up your autocomplete.
>Is it? Does that range start at 0 or 1 or some other value? Does it include 15 or exclude it?
This is like reading a novel that says, "And then Jill began to count." and then asking the same questions. A non-technical reader does not need to know these details. The smaller details are not required to grok the bigger picture.
>Doesn't it? Is that UTC or local time? Or maybe it's CPU ticks? Or maybe the time since the start of the program?
When is the last time someone asked you, "Know what time it is?" and you responded with, "Is that UTC or local time?" Same thing: these details do not and should not matter to a non-technical reader.
Keep in mind, the audience is non-software-engineers: from people who barely know how to code, to people who do not know how to code but still need to be able to read at least some of it.
> Does that range start at 0 or 1 or some other value?
What does range mean? Is it English? Attacking every single possible element of a language is not compelling. The de facto standard is 0-indexing; the exceptions index at 1.
> - start = time.time(): doesn't need any explanation
People often oversimplify the concept of time. Still, on average, the cognitive load for Python is lower than for most languages, and certainly lower than Swift's. In the Python case I would look up what time.time actually did; in the Swift case, I would throw that code away and work in another language with less nonsensical functions, like PHP. /s
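For anyone who does look it up, here is roughly what you'd find (a sketch, not an endorsement of either language's API):

```python
import time

# time.time() returns seconds since the Unix epoch (1970-01-01 00:00:00 UTC)
# as a float. It is wall-clock based, so it is independent of your local
# timezone, but it can jump if the system clock is adjusted.
start = time.time()
# ... work being measured ...
elapsed = time.time() - start
print(f"elapsed: {elapsed:.6f} s")

# For measuring durations, time.monotonic() is usually the better choice:
# it never goes backwards, even if the system clock changes under you.
start = time.monotonic()
elapsed = time.monotonic() - start
assert elapsed >= 0.0
```

Which rather supports both sides of this thread: `start = time.time()` is instantly legible as "note the current time", yet the precise semantics (epoch seconds, clock adjustments) really do take a trip to the docs.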
> The de facto standard is 0-indexing; the exceptions index at 1.
"Defacto standards" are meaningless. The semantics of any procedure call are completely opaque from just looking at an API let alone a code snippet, especially the API of a dynamically typed language, and doubly-so if that language supports monkey patching.
So the original post's dismissive argument claiming one sequence of syntactic sugar and one set of procedure calls is clearer than another is just nonsense, particularly for such a trivial example.
> However, on average, the cognitive load for Python is lower than most.
Maybe it is. That's an empirical question that can't be answered by any argument I've seen in this HN thread.
No, they aren't. A mismatch between what is expected and what actually happens, within a specific context, contributes to cognitive load. "Intuitive" is a soft term with a basis in reality. The only language (Quorum) that has made an effort to do these analyses was largely ignored. Usability differences between languages exist, with or without the numbers you wish for. Swift is less usable than some languages and more usable than others.
Is it? Does that range start at 0 or 1 or some other value? Does it include 15 or exclude it?
> - start = time.time(): doesn't need any explanation
Doesn't it? Is that UTC or local time? Or maybe it's CPU ticks? Or maybe the time since the start of the program?
You've basically just demonstrated the assumptions you're used to, not any kind of objective evaluation of the code's understandability.