
IMO the first three lines of the program basically explain why academics and data programmers are never going to use Swift:

Python:

  import time
  for it in range(15):
     start = time.time()
Swift:

  import Foundation
  for it in 0..<15 {
     let start = CFAbsoluteTimeGetCurrent()
This is why people like Python:

- import time: clearly we are importing a 'time' library and then we clearly see where we use it two lines later

- range(15): clearly this is referring to a range of numbers up to 15

- start = time.time(): doesn't need any explanation

This is why academics and non-software engineers will never use Swift:

- import Foundation: huh? Foundation?

- for it in 0..<15 {: okay, not bad, I'm guessing '..<' creates a range of numbers?

- let start = CFAbsoluteTimeGetCurrent(): okay, I guess we need to prepend variables with 'let'? TimeGetCurrent makes sense, but wtf is CFAbsolute? Also, where does this function even come from? (Probably Foundation? But how would you know that without a specially-configured IDE?)

EDIT: Yes everyone, I understand the difference between exclusive and inclusive ranges. The point is that some people (maybe most data programmers?) don't care. The index variable you assign it to will index into an array of length 15 the way you would expect. Also in this example the actual value of 'it' doesn't even matter, the only purpose of range(15) is to do something 15 times.
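To make that last point concrete, here is the common Python idiom for "do something 15 times" with the index ignored (a small sketch):

```python
# The conventional name for an unused loop variable is "_":
count = 0
for _ in range(15):
    count += 1
assert count == 15

# And when the index is used, range(15) -- values 0 through 14 --
# indexes a length-15 list without ever going out of bounds:
items = [0] * 15
for i in range(15):
    items[i] = i * i
assert items[-1] == 14 * 14
```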



> - range(15): clearly this is referring to a range of numbers up to 15

Is it? Does that range start at 0 or 1 or some other value? Does it include 15 or exclude it?

> - start = time.time(): doesn't need any explanation

Doesn't it? Is that UTC or local time? Or maybe it's CPU ticks? Or maybe the time since the start of the program?

You've basically just demonstrated the assumptions you're used to, not any kind of objective evaluation of the code's understandability.


In the Python version, you can mostly understand someone else’s code right off the bat, even as a mostly non-technical reader.

The details (e.g. that list indexes and ranges start at 0 by default and are half-open) are consistent and predictable after just a tiny bit of experience.
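A sketch of that consistency: ranges and slices share the same half-open convention, so the pieces compose predictably once you've seen it once:

```python
s = "abcdef"

# Both range() and slicing exclude the upper bound:
assert list(range(2, 5)) == [2, 3, 4]
assert s[2:5] == "cde"

# A handy consequence: s[a:b] always has length b - a,
# and s[:k] + s[k:] reassembles s for any k.
assert len(s[2:5]) == 5 - 2
assert s[:3] + s[3:] == s
```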


"Understanding" entails knowledge of the semantics. This means understanding the meaning of syntax, and the behaviour of the functions being called. This is true of both code fragments presented, so if you find one more intuitive than the other, that's your bias and not necessarily some objective feature of the syntax and semantics.

Maybe most people share your bias, and so it could qualify as a reasonable definition of "intuitive for most humans", but there's little robust evidence of that.


The first (and most important) level of “understanding” is understanding the intended meaning. Programming, like natural language, is a form of communication. The easier it is for someone to go from glancing at the code to understanding what it is intended to do, the more legible.

I claim that it is easier to achieve this level of understanding in Python than most other programming languages. (And not just me: this has been found to be true in a handful of academic studies of novice programmers, and is a belief widely shared by many programming teachers.)

Using words that the reader is already familiar with, sticking to a few simple patterns, and designing APIs which behave predictably and consistently makes communication much more fluent for non-experts.

There are deeper levels of understanding, e.g. “have carefully examined the implementation of every subroutine and every bit of syntax used in the code snippet, and have meditated for decades on the subtleties of their semantics”, but while helpful in reading, writing, and debugging code, these are not the standard we should use for judging how legible it is.


I also agree that the Swift version is clearer, but only because it seems very much "this is what is happening; go look up what you don't know."

Disclaimer: I use Swift, but I have also used Python.


Languages that dump crap into the namespace on import are doing the wrong thing. Python has from x import * and every style guide says to never use it. Swift has a lot of other nice features, but the import thing is really a bungle. It is worse for everyone, beginners and experienced users alike. It is even bad for IDE users because you can't type imported-thing.<tab> to get the autocomplete for just the import. You're stuck with the whole universe of stuff jamming up your autocomplete.


`import func Foundation.CFAbsoluteTimeGetCurrent` imports just that function.


I don't necessarily think Swift is more clear, just that the original argument was unjustified in claiming Python was "clearly" superior.


>Is it? Does that range start at 0 or 1 or some other value? Does it include 15 or exclude it?

This is like reading a novel that says, "And then Jill began to count." and then asking the same questions. A non-technical reader does not need to know these details. The smaller details are not required to grok the bigger picture.

>Doesn't it? Is that UTC or local time? Or maybe it's CPU ticks? Or maybe the time since the start of the program?

When is the last time someone asked you, "Know what time it is?" and you responded with, "Is that UTC or local time?" Same thing, these details do not and should not matter to a non-technical reader.

Keep in mind, the audience is non-software engineers: from people who barely know how to code to people who do not know how to code at all but still need to be able to read at least some of it.


> Does that range start at 0 or 1 or some other value?

What does "range" mean? Is it English? Attacking every single possible element of a language is not compelling. The de facto standard is 0-indexing; the exceptions index at 1.

> - start = time.time(): doesn't need any explanation

People often oversimplify the concept of time. However, on average, the cognitive load for Python is lower than most languages', and certainly lower than Swift's. In one case I would look up what time.time actually did; in the case of Swift, I would throw that code away and work in another language with less nonsensical functions, like PHP. /s


> The de facto standard is 0-indexing; the exceptions index at 1.

"De facto standards" are meaningless. The semantics of any procedure call are completely opaque from just looking at an API, let alone a code snippet, especially the API of a dynamically typed language, and doubly so if that language supports monkey patching.

So the original post's dismissive argument claiming one sequence of syntactic sugar and one set of procedure calls is clearer than another is just nonsense, particularly for such a trivial example.

> However, on average, the cognitive load for Python is lower than most.

Maybe it is. That's an empirical question that can't be answered by any argument I've seen in this HN thread.


> "De facto standards" are meaningless.

No, they aren't. A mismatch between what is expected and what actually happens, within a specific context, contributes to cognitive load. "Intuitive" is a soft term, but it has a basis in reality. The only language that has made an effort to do such analyses (Quorum) was largely ignored. Usability in languages exists, with or without the numbers you wish for. Swift is less usable than some languages and more usable than others.


The use of `let` to declare immutable values is well-established in programming languages. Academics have no problem with this (and, indeed, prefer it -- at least, everybody I've talked to about it in the PL research community seems to prefer it). The same or a similar form is used in Scala, OCaml, JavaScript, Lisp, Scheme, etc. Some of these languages provide mutable contrasting forms, such as `var`. Tracking mutability so explicitly allows for more advanced static analyses.

Using `..<` and `...` is pretty simple to figure out from context. The former produces an exclusively-bounded interval on the right, while the latter is an inclusive interval. This is functionality that more languages could stand to adopt, in my opinion.

I agree that the names themselves are not very transparent. However, they become less opaque as you learn the Swift ecosystem. Admittedly, this makes them not as immediately user-friendly as Python's simple names, but it's not as though they're some gigantic obstacle that's impossible to overcome.

Personally, I like Swift a lot (even though I never use it). It has a syntax that has improved on languages like Java and Python, it's generally fast, it's statically typed, and it has a large community. The fact that implicit nullable types are discouraged directly by the syntax is phenomenal, and the way Swift embraces a lot of functional programming capabilities is also great. If it weren't so tied to Apple hardware, I would likely recommend it as a first language for a lot of people. (I know that it runs on non-Apple hardware, but my understanding is that support has been somewhat limited in that regard, though it's getting better.)


> However, they become less opaque as you learn the Swift ecosystem

IMO that's essentially the problem. Most people* don't want to have to learn the ecosystem of a language because it's not their focus.

The other issue is that when you start googling for information about the Swift ecosystem, you're not going to find anything relevant to academic, mathematical, or data-science programming. All the information you will find will be very specific to enterprise-grade iOS and macOS development, which will be a huge turn-off to most people in this community.

EDIT: *academics


Writing off a language/syntax/toolset because you couldn’t be bothered doing < 5 minutes of searching to figure out something that will probably yield net-benefits in the future is an incredibly myopic thing to do in my opinion.


> you're not going to find anything relevant to academic, mathematical, or data-science programming

Yet.

The question is whether Google and other Swift enthusiasts can change that over time.


Like you said: people in PL research. They specifically work on researching programming languages. But that is just a tiny fraction of what academic world has.


> The use of `let` to declare immutable values is well-established in programming languages.

JavaScript, Swift, and VBA have let.

C, C++, Java, C#, PHP, Python, and Go don't have it.

I'm also willing to bet that if you haven't studied math in English, let is a non-obvious keyword.


As a 10 year old child in the early 80s:

    10 LET A$ = "Hello world"
    20 PRINT A$
    30 GOTO 10


Are academics born with Python knowledge? You still need to learn that range(10) is exclusive of the number ten, and that 'time' itself is not a function. Julia, for example, is much further from 'natural language' programming and seems quite popular.

It's more important that the language can accurately and succinctly represent the mental model for the task at hand, and the whole point of this article is that Swift can offer a syntax that is _more_ aligned with the domain of ML while offering superior performance and unlocking fast development of the primitives.


Julia is similar to matlab by design, which makes it easier for science and engineering folks who are already familiar with it.

I think functional programming advocates underrate simplicity of procedural languages. Programming is not math, algorithms are taught and described as a series of steps which translate directly to simple languages like Fortran or Python.

I think ML is great, but I’m skeptical if it is a big win for scientific computing.


Are algorithms and theory behind them not math themselves?


They are proven with math, but their implementation in code certainly isn't. If it were that simple, we would be using languages like Coq and TLA+ for writing software. But we usually don't, because math does not cleanly translate into usable programs; it needs a human to distill it into the necessary steps the computer must follow.


No, really: they are math themselves. Algorithms have nothing to do with implementation. The whole CLRS book's algorithms are written in pseudocode. By your logic, Turing machines and many other models of computation are not math. Just because something is imperative doesn't mean it's not mathematics.


This is a pretty pedantic definition.

Plenty of excellent programmers are not mathematicians. How would that work if programming were just math? That’s like saying physics is just math while ignoring all of the experimental parts that have nothing to do with math.


Range is a concept from mathematics, so an academic should know it regardless if they know Python or not.

Most of the concepts in Python come from academics and mathematics, so it's an easy transition. I don't think math has a time concept in a straightforward way, so time is an edge case in Python.


Have you ever come across a bug where range(10) doesn't get to 10? Even if it is assumed knowledge, it doesn't seem to me to even approach the level of assumed knowledge of time coming from a 'Foundation' library rather than... you know... a time library.


CFAbsoluteTimeGetCurrent is a long deprecated API, so I'm not sure where that's coming from.

A current and more readable way of expressing this would be

  let start = Date().timeIntervalSinceReferenceDate
If you don't need exact seconds right away, you can simplify further to just:

  let start = Date()
which is easily as simple as the Python example.


This is not the same thing.

CACurrentMediaTime() / CFAbsoluteTimeGetCurrent() are first of all not deprecated (just check CFDate.h / CABase.h) but return a time interval since system boot so they are guaranteed to be increasing. It's just a fp64 representation of mach_absolute_time() without needing to worry about the time base vs seconds.

Date() / NSDate returns a wall clock time, which is less accurate and not guaranteed to increase uniformly (ie adjusting to time server, user changes time etc)


Oops, you're right on the deprecation point. CFAbsoluteTimeGetCurrent is not itself deprecated but every method associated with it is [1].

Also CFAbsoluteTimeGetCurrent explicitly calls out that it isn't guaranteed to only increase. CACurrentMediaTime is monotonic though.

CFAbsoluteTimeGetCurrent also returns seconds since 2001 and is not monotonic, so there's really no reason to use it instead of Date().timeIntervalSinceReferenceDate. The most idiomatic equivalent to the Python time method is definitely some usage of Date(), as time in Python doesn't have monotonic guarantees either.

[1] https://developer.apple.com/documentation/corefoundation/154...


> CFAbsoluteTimeGetCurrent is not itself deprecated but every method associated with it is.

Because they deal with calendar-related stuff that is better accessed through Date.


”CACurrentMediaTime() / CFAbsoluteTimeGetCurrent() […] are guaranteed to be increasing.”

That is true for CACurrentMediaTime, but that clock stops when the system sleeps (https://developer.apple.com/documentation/quartzcore/1395996... says it calls mach_absolute_time, and https://developer.apple.com/documentation/driverkit/3438076-m... says "Returns current value of a clock that increments monotonically in tick units (starting at an arbitrary point); this clock does not increment while the system is asleep.")

Also (https://developer.apple.com/documentation/corefoundation/154...):

Repeated calls to this function do not guarantee monotonically increasing results. The system time may decrease due to synchronization with external time references or due to an explicit user change of the clock.


Python's time.time() call is also going to be affected by system time changes and thus not guaranteed to increase uniformly. So Date() in Swift and time.time() in Python are the same in that regard.


Correct, the appropriate function is time.monotonic().
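A minimal sketch of the difference: time.time() reports wall-clock time, which can jump backwards, while time.monotonic() is guaranteed never to decrease, making it the right choice for measuring durations:

```python
import time

# time.monotonic() can never go backwards, so it is safe for measuring
# durations even if the system clock is adjusted mid-measurement.
start = time.monotonic()
time.sleep(0.01)                  # stand-in for the work being timed
elapsed = time.monotonic() - start

assert elapsed > 0                # a monotonic clock never yields a negative duration
```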


Python's time() call returns the Unix-epoch wall-clock time. Newbies (and most engineers, TBH) are not going to know the subtleties and reasons why you'd use a monotonic clock, or even think to use one over the other.

So for this comparison, it is better to use Date().


CFAbsoluteTimeGetCurrent() returns wall clock time, as far as I can tell the exact same thing as -[NSDate timeIntervalSinceReferenceDate].

https://developer.apple.com/documentation/corefoundation/154...

https://developer.apple.com/documentation/foundation/nsdate/...


"long deprecated" as in 20 years long; the CF APIs exist mostly for compatibility with Mac OS 9. The only time you really would need to use those functions nowadays is for interfacing with a few system services on Apple platforms, like low-level graphics APIs and whatnot.


CoreFoundation is in no way deprecated.


You're right; I quoted the parent comment even though "deprecated" was not an accurate word choice here, sorry. CF is not deprecated because it is needed on occasion when programming for Apple platforms, but the APIs are largely obsolete.


What's amusing is that -[NSDate timeIntervalSinceReferenceDate] is actually the older of the two, going back to NeXTStep's Foundation introduced with EOF 1994 and later made standard with OPENSTEP 4.0.


0..<15 is a range of numbers from 0, up to but not including 15. 0...15 is the corresponding range including 15.

I find this notation slightly clearer than the python version. It took me some time to remember whether range(15) includes 15 or not.


Also, does it start with 0 or 1 (or -3023 for that matter)? As a programmer you would assume of course it starts at 0, but since this thread talks about "non-programmer" academic types I think it's worth mentioning. What if I want a range of 1-15, or 20-50, can I still use range()? I can't tell from the Python example but I can tell exactly what I would need to change with the Swift one to make it work exactly how I'd want.
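To answer the question directly: Python's range() does take explicit start and step arguments; it just isn't visible from the one-argument form. A quick sketch:

```python
# One argument: start defaults to 0, stop is exclusive.
assert list(range(4)) == [0, 1, 2, 3]

# Two arguments: range(start, stop), still exclusive of stop,
# so "1 through 15" is spelled range(1, 16).
assert list(range(1, 16))[0] == 1
assert list(range(1, 16))[-1] == 15

# A third argument adds a step: 20 through 50 in tens.
assert list(range(20, 51, 10)) == [20, 30, 40, 50]
```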


Very true, and this is especially important in data science, where the majority of languages other than Python are 1-indexed (Matlab, Julia, R, Fortran).


CS student here, so not much of a highly valued input.

Once I knew these two facts, it didn't add much confusion.

1. Indexing starts from 0

2. Thus, range(x) can be thought of as "from 0 up to one before x"; x here would be 15.

And I learned this pretty early and did not get confused later on.


You learned it for one language. Now imagine that you're working with a handful of languages regularly, some of which have 1-based indexing, some 0-based, some of which may have closed ranges, others half-open ranges.

If you're anything like me, you'll end up spending quite a bit of time looking up the documentation to the range operator to remind yourself how this week's language works again.


You're kind of proving you've never used Swift before. The real problem with Swift has nothing to do with the syntax or API. It has to do with the segmentation in documentation, training materials, best practices, and design patterns. The journey from 0 to "best way to do x" is awful with Swift compared to other languages. It's pretty damn telling that the best way to learn iOS dev is still piecing together random Ray Wenderlich blogs (written across different versions of Swift!).


The Swift manual is actually pretty good, and the documentation around Foundation is fairly complete, although a bit sparse. But yeah... UIKit and the other libraries used for creating iOS applications are really not very well documented. For the last few years I've been copying code from WWDC demos to learn about new stuff. I tried to learn how to capture RAW data when the API was just out, so no Stack Overflow answers yet. It was hard as hell.

But anyway that's not a Swift problem. Swift itself is pretty easy to get into.


Yeah, it is UI-bound (though you'd think that's where a lot of the documentation would be!). I'd also say JSON handling and other data-handling aspects are poorly documented. Would you agree?


I agree Codable is one of the worst ways of dealing with JSON, apart from manually parsing through dictionaries. I mean, It Just Works on a very clean and consistent API, but if people start to mix snake_case with PascalCase, return "1", or write any other garbage that people produce when the only thing they have to care about is JS clients, then you're typing a lot of unreadable boilerplate.

Since we have custom attributes, I will soon investigate whether there's a nice framework around that can make it work a bit like the usual JSON libraries in, for example, C#.


> - start = time.time(): doesnt need any explanation

Python's no saint when it comes to time stuff either. I had some code using time.strptime() to parse time strings. It worked fine. Then I needed to handle data from a source that included time zone information (e.g., "+0800") on the strings. I added '%z' to the format string in the correct place--and strptime() ignored it.

Turns out that if you want time zones to work, you need the strptime() in datetime, not the one in time.

BTW, there is both a time.time() and a datetime.time(), so even that line that needs no explanation might still cause some confusion.
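For anyone hitting the same wall, the working variant is the one in datetime; a minimal sketch:

```python
from datetime import datetime, timedelta, timezone

# datetime.strptime honors %z and returns a timezone-aware datetime:
dt = datetime.strptime("2020-06-01 12:00 +0800", "%Y-%m-%d %H:%M %z")
assert dt.tzinfo is not None
assert dt.utcoffset() == timedelta(hours=8)

# Converting to UTC then works as expected: 12:00 at +0800 is 04:00 UTC.
assert dt.astimezone(timezone.utc).hour == 4
```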


Python isn’t alone in that. Designing a date&time library is so tricky that, even after having seen zillions of languages fail and then replace their date&time library with a better one, most new languages still can’t do it right the first time.

I think the main reason is that people keep thinking that a simple API is possible, and that more complex stuff can initially be ignored, or moved to a separate corner of the library.

The problem isn’t simple, though. You have “what time is it?” vs “what time is it here?”, “how much time did this take?” cannot be computed from two answers to “what time is it?”, different calendars, different ideas about when there were leap years, border changes can mean that ‘here’ was in a different time zone a year ago, depending on where you are in a country, etc.

I guess we need the equivalent of ICU for dates and times. We have the time zone database, but that isn’t enough.


I use Python a few times a month for some simple scripting usually. Every time I have to look up how to use `range()` correctly, usually because I forgot if it's inclusive or exclusive. Academics that are used to Matlab or Julia will also have to look up if it starts at 0 or 1.

Furthermore, it's obvious what `time()` does in this context, but if I was writing this code I would _absolutely_ have to look up the correct time function to use for timing a function.


Maybe, but by that metric nobody would ever be using Java.


That's exactly my point. The only people who use Java are professional software engineers mostly working at very large companies with teams in the hundreds. Almost nobody in academia uses Java.


None of them use Python either. A lot just use MATLAB.


> None of them [academics] use Python either.

Here’s a paper by Travis Oliphant describing SciPy that has >2500 citations. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=pyt...

In many fields of science Python is already the dominant language, in others (like neuroscience), the writing is on the wall for Matlab. Approximately all the momentum, new packages, and new student training in systems neuroscience that I’ve seen in the last 5 years is in Python.


I apologize :-/ I should have been clearer. What I was referring to was a little broader context than just data science or ML. Many of the engineering and math PhDs I work with typically use MATLAB or Mathematica.


It really depends heavily on the engineering field. I do work in optical/physical engineering (photonics, nonlinear optics, quantum computing) and essentially operations research (optimization theory), and almost everything we use is Python (as a C + CUDA wrapper/HDL-type thing) and Julia (which I'm trying to introduce for code reusability, even if it is only marginally slower than the former).

At least in my university, most people really do use Python + C and Julia for many, many cases and MATLAB and such are used mostly in mechanical and civil engineering, some aero-astro (though a ton of people still use Python and C for embedded controllers), and Geophysics/Geophysical engineering (but, thanks to ML, people are switching over to Python as well).

I think even these fields are slowly switching to open versions of computing languages, I will say :)


Yeah I know what you mean. I'm mechanical engineering (Controls) and the vast majority of them still use MATLAB, but they are slowly moving towards more open computing languages. I can only consider this a great thing! :)

The issue I see is with the undergraduate curriculum in many Universities. This is where I see the legacy use of MATLAB is really hurting the future generation of students. Many still don't know proper programming fundamentals because MATLAB really isn't set up to be a good starting point for programming in general. To me, MATLAB is a great tool IF you know how to program already.


Oh yeah, it’s a killer I’m not going to lie. I have the same problem with some classes here (though I haven’t taken one in years) and it’s quite frustrating since students are forced to pay for mediocre software in order to essentially do what a normal calculator can do anyways (at least at the undergrad level).


I work in a massive research institution with a lot of medical doctors. They almost all use R if they can program. I try to encourage the use of Python to help them slowly pick up better programming fundamentals so they don't miss out on whatever the next wave is in a decade. Learning R doesn't teach you much about other languages, but IMO learning Python can help you move between languages.


> Many of the Engineering and Math PhDs I work with typically use MATLAB or Mathematica.

Yes, and the government still needs COBOL programmers.

Going forward, I believe Python has far more momentum than either MATLAB or Mathematica. I think far more MATLAB and Mathematica users will learn Python than the other way around in the future, and far more new scientific programmers will learn Python than either of those.


I really believe so too! I just hope that goes downstream to the undergrads in the fields too.


MATLAB's foothold in academia is due to legacy familiarity, cheap (but not free) academic licensing, a nice IDE, and good toolboxes for certain niches (Simulink, Control Toolbox). I used MATLAB for 12 years in academia and would consider myself an advanced user.

However, when I left academia (engineering) 8 years ago, its use was already declining in graduate level research, and right before I left most professors had already switched their undergrad instructional materials to using Python and Scilab. I observed this happening at many other institutions as well. Anecdotally, this trend started maybe 10 years ago in North America, and is happening at various rates around the world.

I'm in industry now and MATLAB usage has declined precipitously due to exorbitant licensing costs and just a poor fit for productionization in a modern software stack. Most have switched to Python or some other language. My perception is that MATLAB has become something of a niche language outside of academia -- it's akin to what SPSS/Minitab are in statistics.


I'm not denying any of this and agree with your analysis about MATLABs use. I'm just saying that it's still used a lot more than people on Hacker News like to think.

The university I work at still teaches MATLAB to new engineering students.


Oh I understand, I was more responding to your original statement "None of them use Python either. A lot just use MATLAB" which would be an unusual state of affairs in this day and age, though I have no doubt it is true in your specific situation. It's just that your experience seems circumscribed and uncommon in academia today (well insofar as I can tell -- I don't know the culture at every university).


...nobody in academia uses python? I would strongly disagree. The whole point of this Swift library is to provide an alternative to PyTorch which is clearly very popular in the community.


Being in academia myself, I have to disagree as well. Academia has its own languages and tools it prefers. They have just recently started warming up to Python.


MATLAB is used by a lot of engineers, Mathematica is used by mathematicians/physics theorists, Python is used widely by a lot of different fields.


MATLAB and Mathematica are the primary tools I see used by people at my uni. People are just starting to warm up to Python.


In bioinformatics / computational biology, Python is absolutely ubiquitous.

Same for any other field that uses ML extensively.


Academics and data programmers are not known for using Java.


Data engineers do use Java a lot, though, with Hadoop, Kafka, GIS libraries, Hive, etc.


CS academics definitely use Java.


Well, Java is not intended to replace Python for TF.


Seeing 0..<15 without knowing the language, I think: hmm, a range from 0 to 14.


That's exactly what it is, and so is python's range(15):

    >>> list(range(15))
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
The important thing with syntax is to avoid the illusion of understanding. That's when the language user is confident that the syntax means one thing when it actually means something else. If the user is not sure what something means, they'll look it up in docs or maybe write a few toy examples to make sure it does what they think it does. Python's range() is ambiguous enough that I did this when I was learning the language. I was pretty sure it would create a range from 0 to 14, but I wanted to make sure it wasn't inclusive (0-15).

Examples of the illusion of understanding abound. These aren't all true for everyone, and HN users have been writing software long enough to have internalized many of them, but every language has them:

- Single equals as assignment. Almost every newbie gets bitten by it. They see "=" and are confident that it means compare (especially if it's in a conditional).

- x ^ y means xor, not "raise x to the power of y"

- "if (a < b < c)" does not do what newbies think it does.

- JavaScript's this.

Sometimes syntax can make sense on its own but create the illusion of understanding when combined with another bit of syntax, e.g. Python newbies will write things like "if a == b or c" thinking that it will be true if a is equal to either b or c.
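The "a == b or c" trap is easy to demonstrate: the expression parses as (a == b) or c, so any truthy c makes the condition pass. A quick sketch:

```python
a, b, c = 1, 2, 3

# What the newbie means: "is a equal to b or to c?"
# What Python evaluates: (a == b) or c  ->  False or 3  ->  3, which is truthy.
assert (a == b or c) == 3
assert bool(a == b or c) is True

# What they should write instead:
assert (a == b or a == c) is False
assert (a in (b, c)) is False
```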

The illusion of understanding is the cause of some of the most frustrating troubleshooting sessions. It's the thing that causes newbies to say, "Fuck this. I'm going to do something else with my life."


>Examples of the illusion of understanding abound. These aren't all true for everyone, and HN users have been writing software long enough to have internalized many of them, but every language has them

>The illusion of understanding is the cause of some of the most frustrating troubleshooting sessions. It's the thing that causes newbies to say, "Fuck this. I'm going to do something else with my life."

About 14 years ago (give or take up to 4 years), I read about a study done at a prestigious CS university. Tests were given to entering CS students at the beginning of the course to see who was okay with arbitrary logic and syntax and who was not. IIRC, about 40% of the class would get hung up on "but why" and "it doesn't make sense" and would end up failing, while the ones who were able to cope with the arbitrary nature of things would graduate; the others would end up dropping out or changing studies.

About every couple of months I wish I could find that damn paper / study again.

NOTE: my memories of this study might also have been faded by the years, so...


Are you thinking of The Camel has Two Humps[1]? I don't think it was ever published in a journal and the author later retracted it.[2]

It seems like the conclusions of the study were overstated, but the general idea is correct: Those who apply rules consistently tend to do better at programming than those who don't. This is true even if their initial rules are incorrect, as they simply have to learn the new rules. They don't have to learn the skill of consistently applying rules.

1. http://www.eis.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf

2. http://www.eis.mdx.ac.uk/staffpages/r_bornat/papers/camel_hu...


I guess it is; it certainly seems like it, although my memory was that their main claim was that the ability to handle arbitrary mental models (not completely logical ones) was the differentiator between those who succeeded and those who didn't.

And embarrassingly this thing I've gone around believing for the last 14 years isn't so.


I think where JavaScript's 'this' is concerned, it's different from the others - the others are just little tricky bits of syntax, while 'this' is its own little problematic area of deep knowledge in the language.

It's more like saying people don't understand all of how pointers work in C.


To be honest my thought is "what the fuck is this? It's probably 0-14, with steps of 1, but I have no idea what I would change to get steps of 0.1."
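For what it's worth, Python's range() has the same wrinkle: it only accepts integer steps, so a 0.1 step needs a workaround anyway (a quick sketch, not from the thread):

```python
# Integer steps are straightforward:
print(list(range(0, 15, 3)))  # [0, 3, 6, 9, 12]

# range() raises TypeError on float steps, so 0.1 steps are
# usually spelled via integers:
tenths = [i * 0.1 for i in range(5)]
print(tenths)  # 0.0, 0.1, 0.2, 0.3, 0.4 (up to float rounding)
```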


I haven't read the OP yet, but I don't see the issue here. I honestly think what you see as issues is perhaps down to your lack of exposure to a wider range of languages?

CF is clearly a prefix for a library. I'll take an educated guess that it means Core Foundation? It's a pretty common pattern for naming things, give or take. And once you've seen it, it's just there, and you know precisely what it means. So ten minutes of your life to learn what CFxxx() means.

Let. I like lets. Some don't. Surely we can coexist?

x..y is also not unique to Swift. It has a nice mathematical look to it, is more concise.

Btw, is that 'range' in Python inclusive or exclusive? It isn't clear from the notation. Must I read the language spec to figure that out? .. /g
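(For the curious: you don't need the spec, a quick REPL check settles it.)

```python
r = range(15)
print(list(r)[:3])  # [0, 1, 2] -- starts at 0
print(15 in r)      # False -- the end is exclusive
print(len(r))       # 15 -- values 0 through 14
```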


> CF clearly is a prefix for a library. I'll take an educated guess it means Core Foundation?

It does, and the prefix is only there because C's namespacing isn't great.


`let` is not a terribly hard keyword to understand, especially if you've had exposure to functional programming. Most academics I knew actually started out programming the functional way, rather than OO. So I'm not sure I agree 100% with what you're saying.


It's not terribly difficult to understand `let` if you have a background in math, given that nearly every formal proof defines variables with `let x:=`


I think only academics with a background in CS will typically be familiar with functional programming.

For everyone else, they will have used something simpler like Excel, C, R, Python.


The "Foundation" and "CFAbsoluteTimeGetCurrent" are very easily fixable surface level details.

"range(15)" vs "0..<15" could go either way.

"let" vs "var" in Swift is indeed something that adds verbosity relative to Python, and adds some cognitive load with the benefit of better correctness checking from the compiler. Very much a static vs dynamic typing thing. That's where you'll see the real friction in Swift adoption for developers less invested in getting the best possible performance.


The "Python is slow" argument just shows complete ignorance about the subject (and there may be good arguments for not using Python).

First of all, if you are doing "for i in range(N)" then you are already doing it wrong; for ML and data analytics you should be using NumPy's np.arange(). NumPy's arange doesn't even run in "Python" - it's implemented in C - so it may even be faster than Swift's '..<'. Let me know when you can use Swift with Spark.
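To illustrate the point (sizes and names are mine, not from the thread): the usual NumPy idiom replaces the interpreter-level loop with a single call into compiled C code, echoing the timing style of the original benchmark.

```python
import time
import numpy as np

a = np.random.rand(1_000_000)

start = time.time()
total_loop = 0.0
for x in a:        # each iteration runs in the Python interpreter
    total_loop += x
loop_secs = time.time() - start

start = time.time()
total_np = a.sum()  # one call; the loop runs in compiled C
np_secs = time.time() - start

print(f"loop: {loop_secs:.3f}s  numpy: {np_secs:.3f}s")
```

On typical machines the vectorized version comes out one to two orders of magnitude faster.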


This is actually one of the most frustrating parts of using Python. You can't write normal Python code that performs well. Instead you have to use the NumPy DSL, which I often find unintuitive and which too often leaves me consulting Stack Overflow. This is very frustrating because I know how I want to solve the problem, but the limitations of the language prevent me from taking the path of least resistance and just writing nested loops.
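A concrete toy example of that friction (my own, not from the thread): a conditional update that is natural as a loop has to be rephrased in NumPy's vocabulary.

```python
import numpy as np

a = np.arange(10)

# The loop you'd naturally write: double the even entries.
out = []
for x in a.tolist():
    out.append(x * 2 if x % 2 == 0 else x)

# The vectorized spelling of the same thing -- fast, but you have
# to know that np.where is the branching construct:
out_np = np.where(a % 2 == 0, a * 2, a)

print(out)  # [0, 1, 4, 3, 8, 5, 12, 7, 16, 9]
print(out_np.tolist())
```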


My point is that the benchmark is deceiving; again, if you are doing data analytics or ML then you are already using numpy/pandas/scipy, so that's not a valid argument.


But it is. A good compiler could unroll my loop and rewrite it with the appropriate vector ops. But that isn’t possible with just python right now.


The way a range is defined in Swift looks scary to a programmer but immediately looks very natural as soon as you imagine you've forgotten programming and only know math.

time.time() (as well as datetime.datetime.now() and other stuff like that) always looked extremely ugly to me. I would feel better writing CFAbsoluteTimeGetCurrent() - it seems tidier and makes much more sense once you calm down and actually read it.


Python is great for scripting or rapid prototyping because of this, but I can definitely understand why someone would want a more literal language like Swift. Even in your example you can glean more information from the Swift code.


Been writing Swift for years, this is a very weak argument :)


”start = time.time(): doesnt need any explanation”

So, is that evaluated when the statement is run, when the value is first read (lazy evaluation, as in Haskell), or every time ‘start’ gets read? For example, Scala has all three:

  val start = time.time()
(evaluates it once, immediately),

  lazy val start = time.time()
(evaluates it once, at first use), and

  def start = time.time()
(creates a parameterless function that evaluates time.time() every time it is called)
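For comparison, the closest Python spellings of those three forms might look like this (a sketch; the "lazy" variant leans on functools caching, which is my choice, not something the comment proposes):

```python
import time
from functools import lru_cache

start = time.time()        # like Scala's `val`: evaluated once, right now

@lru_cache(maxsize=None)   # like `lazy val`: evaluated once, at first call,
def start_lazy():          # then the cached value is reused
    return time.time()

def start_def():           # like `def`: re-evaluated on every call
    return time.time()
```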


A lot of academics write C



