Don't you end up with a bunch of incompatible packages when you pin to specific fixed version numbers without indicating compatible ranges? I guess this is only helpful if you don't plan to reuse your code in another project.
Good question. The approach I proposed works for production applications that require an easy-to-replicate, deterministic environment, but I wouldn't recommend it if you're trying to build, say, Python packages or frameworks meant to be used in diverse environments.
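For illustration, the difference looks something like this (package names and version numbers below are made up examples, and the exact file layout depends on your packaging setup):

```text
# requirements.txt for an application: every version pinned exactly,
# so the environment is deterministic and easy to replicate.
numpy==1.24.3
requests==2.31.0

# Dependency spec for a reusable library: compatible ranges instead of
# exact pins, so it can coexist with other packages' requirements.
#   numpy>=1.21,<2.0
#   requests>=2.25
```

The usual compromise is to keep loose ranges in the package metadata and generate a fully pinned lock file (e.g. via `pip freeze` or a lock tool) only for deployed applications.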
I'm not sure it's strictly necessary to have the tunnel be low pressure. It's nice for the sake of less resistance, but the speed of sound is actually lower at lower pressures, so a vehicle would more readily produce a sonic boom.
It’s worth noting that this isn’t necessarily that big of an issue for China. A lot of their HSR is grade-separated and not infrequently up on viaducts. In my mind, it’s not a particularly big step from that to enclosing it.
That's not an excuse. You can build infrastructure without violating people's rights. It may be more difficult, but it's definitely doable; look at Japan or Europe. This is a political issue rooted in the lack of a long-term agenda.
There is no citation; they seem to have changed the wording of the paper to avoid a one-to-one copy of the text. E.g. they changed 'label distribution model' to 'lesion distribution model'. Their chapter 'Learning Image-Text Mapping Model' basically describes the method of the first paper with minor but sloppy changes, and it is not really self-contained. They claim they simply forgot to cite it. To me, it doesn't look like forgetting a citation. I wanted to hear other opinions.
I haven’t looked deeply into it, but I see one paper about a method and another about using a method, which may or may not be that same method, for detecting aneurysms (I couldn’t find ‘aneurysm’ in the first paper).
The two papers share (part of) a figure that may or may not have been original in either paper.
I haven’t checked which paper was published first, but the second one likely came later (it cites a 2017 paper, while the first is from a 2015 conference).
Python is a scripting language, so its strong point is being used as glue, and half of its standard library is implemented in C anyway.
Plus, the "C implementations" he mentions are available as readily usable modules (numpy), or as semi-transparent JIT/AOT compilers needing only a few annotations (Cython, numba), not actual C you have to write.
Besides, isn't the whole point to finish your project fast in the language you know, using whatever it makes available to easily speed up your code?
As opposed to: "be a purist and not use wrapped libs written in another language".
Who cares about that? Even if it comes up, it's about avoiding the hassle of dealing with an additional language, setup, etc., which for numpy, Cython, etc. is almost non-existent (as you don't need to actually deal with C).
And of course, despite the purity of Julia's "single language" approach, one should also consider the hassle of moving to a totally different language that few people use, that is not yet stable in syntax or compiler, and that has fewer libraries...
The specific purpose of the benchmark, though, is to compare implementations of the same algorithm natively in the language itself, as explained explicitly on the Julia website just under the table of benchmark results (see quote below).
As such, I do think the article misses the point somewhat. Of course, if there's a numpy function that does what you want, you'd use it in real life. But what if there isn't? The nice thing about Julia is that the function can be written in Julia itself, and fast.
> It is important to note that these benchmark implementations are not written for absolute maximal performance (the fastest code to compute fib(20) is the constant literal 6765). Rather, all of the benchmarks are written to test the performance of specific algorithms implemented in each language. In particular, all languages use the same algorithm: the Fibonacci benchmarks are all recursive while the pi summation benchmarks are all iterative; the “algorithm” for random matrix multiplication is to call the most obvious built-in/standard random-number and matmul routines (or to directly call BLAS if the language does not provide a high-level matmul), except where a matmul/BLAS call is not possible (such as in JavaScript). The point of these benchmarks is to compare the performance of specific algorithms across language implementations, not to compare the fastest means of computing a result, which in most high-level languages relies on calling C code.
> Of course, if there's a numpy function that does what you want, you'd use it in real life. But what if there isn't?
I have been in this exact situation: a numerical algorithm that was missing from Numpy, while the rest of the project was in Python.
The solution is:
1. Write a Python function that operates on numpy arrays,
2. Add a few Cython type declarations to loop variables,
3. Mark the source file as "compile with Cython at runtime", which seamlessly turns the Python function into a C library.
The end result was a 1000x speedup compared to pure Python, very close to numpy built-in functions working on similarly sized arrays. And it needed only about 5 lines of setup code and type declarations for a few variables - all the code could still be Python and use all of Python even in the compiled files.
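The pattern described above can be sketched roughly like this (the kernel and all names are made-up examples; the Cython type declarations are shown as comments, since the same file runs as plain Python before compilation):

```python
# Step 1: a numerical kernel written as ordinary Python over a sequence.
# With Cython, you would add type declarations such as
#     cdef int i, j
#     cdef double acc
# (or @cython.locals annotations in pure-Python mode), and the hot loops
# below are exactly what gets compiled down to tight C loops.

def moving_average(values, window):
    """Naive moving average; the nested loops are the 1000x-speedup
    candidates once the loop variables are given C types."""
    n = len(values)
    out = []
    for i in range(n - window + 1):   # cdef int i
        acc = 0.0                     # cdef double acc
        for j in range(window):       # cdef int j
            acc += values[i + j]
        out.append(acc / window)
    return out

print(moving_average([1.0, 2.0, 3.0, 4.0, 5.0], 2))  # [1.5, 2.5, 3.5, 4.5]

# Step 3 would then be something like:
#     import pyximport; pyximport.install()
# before importing the module, so it is compiled with Cython at runtime.
```

The appeal of this workflow is that the file stays valid Python throughout: you only pay the compilation setup cost once, and the untyped parts of the module keep full access to the Python language and libraries.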
>The specific purpose of the benchmark, though, is to compare implementations of the same algorithm natively in the language itself, as explained explicitly on the Julia website just under the table of benchmark results (see quote below).
But then they go and write their own sort instead of using the language-provided ones when available. All this shows is that Julia is apparently faster than incredibly unidiomatic Python written by someone who clearly doesn't write Python. Okay. That's neat.
Numpy is such an essential library for any type of scientific computing in Python that ignoring it would be missing the point, if anything. The library infrastructure is part of the appeal of a programming language and Numpy is the default for anything compute-heavy in Python.
>shifting all the runtime heavy computation to C-implementations... this article is missing the whole point
The author understands your perspective but he's deliberately using a different one. The idea is that a data scientist user would realistically use NumPy/SciPy optimized C libraries instead of writing raw loops in "pure Python" to walk pure Python lists that model matrices. Therefore comparing pure Python code (interpreted by the canonical CPython interpreter) to Julia is the opposite of his goal.
The article's title is: "How To Make Python Run As Fast As Julia"
The author wanted to write about: "How To Make Python _Projects_ Run As Fast As Julia"
But many readers insist that the article should have been: "How To Make Pure Python Code Run As Fast As Julia"
(The 2nd type of article is also interesting, but the author didn't write it and didn't claim to.)
The article's comment permalink doesn't seem to jump to his exact comment, so I'll copy-paste the text here:
>There is indeed a disagreement about the purposes of the benchmarks. I see at least two purposes at stake here.
>1. A user point of view, which is to see how to best accomplish things in a given language. It is the result of various tradeoffs, including this: balance the time and effort to code something with the efficiency you get. That's the view of most Python users reacting to my post. We don't mind using Python libraries, even if they aren't written in 'pure' Python. Actually, the massive set of existing Python libraries is probably one key reason for its success.
>2. A language implementer point of view, which focuses on how elementary language operations perform. That's the purpose of Julia micro benchmarks I think.
>If people do not agree on the yardstick they use, then the discussion is not going to be fruitful. This disagreement explains most of the comments I saw until now.
> The author understands your perspective but he's deliberately using a different one.
In this case, the way the author shows it isn't the best one: he modifies Python code to be more realistic - that's ok, but doesn't he do the same thing for Julia? Obviously, writing a recursive fibonacci function isn't the best way to implement it. Obviously, using caching can improve performance. But why not apply these changes to both implementations?
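To make the caching point concrete, here is a minimal Python sketch; the same memoization could be applied to the Julia version just as easily, which is exactly why changing only one side skews the comparison:

```python
from functools import lru_cache

# Naive recursive Fibonacci: exponential number of calls,
# the variant used in the original benchmark.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Same algorithm with caching: each value is computed only once.
@lru_cache(maxsize=None)
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(20))  # 6765, same result as the uncached version
```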
>he modifies Python code to be more realistic - that's ok, but doesn't he do the same thing for Julia?
Yes, I agree he didn't rewrite the Julia fibonacci examples the same way as the Python ones.
My comment was speaking more to the author's use of "optimized C libraries" in his benchmarks being appropriate for his particular goal, i.e. his overall goal of showing optimized C libraries instead of pure Python for various scenarios. (This was in response to poster hojijoji's objection to C implementations.) Using C libs is not an invalid benchmark if one understands why the author used them.
Yes, given that the author didn't change both the Python AND Julia fibonacci examples in exactly the same 1-for-1 manner, it does detract from his overall message because it invites nitpicking. (The nitpicking is reasonable if you're hyperfocused on that fibonacci example.)
Based on your other responses in this thread, you seem to want him to write Python-vs-Julia benchmarks suitable for the benchmarksgame[1]. You have a valid perspective, but that's not the article he claimed to write.
My question is then: why bring up Julia at all? Of course there will be nitpicking when you pit two languages against each other in a benchmark written for a specific purpose, and then start modifying the implementation for only one of the languages. It seems like the goal of the blog post would just as well be achieved by saying "here are some ways of speeding up a function in Python".
Because he wasn't writing about Python in a vacuum. In his very first paragraph[1], one can see that the article was a response to Julia's benchmark.
Your question could be reversed for the authors of the julialang.org website: they could've restricted themselves to saying "here are some ways of writing functions in Julia" without bringing up Python at all.
But the Julia folks didn't do that because ... people like to write comparisons to other things!
[1] see 1st paragraph that begins and ends with: "Should we ditch Python and other languages in favor of Julia for technical computing? [...] did the Julia team wrote Python benchmarks the best way for Python?"