It's been a long time since I've heard about Julia. It seems to be having a hard time picking up steam... Any news? (yeah, I check the releases and the juliabloggers site)
Yeah, it didn’t have the explosive success that Rust had. Most probably due to a mixture of factors: the niche/academic background, and the fact that (at the beginning) it wasn’t really a language to look at unless you did numeric computing, so it stayed out of the mouths of many developers on the internet.
And also some aspects of the language being a bit undercooked. But, there’s a but: it is nonetheless growing. As you probably know from reading the releases, the new big thing is the introduction of AOT compilation. And there’s even more cooking now: a strict mode for stronger static guarantees at compile time, language-syntax evolution mechanisms (think Rust editions), cancellation, and more I can’t recall at the moment.
Julia is an incredibly ambitious project (one might say too ambitious), and it shows: the polish is still not there after all this time, yet the language is also starting to flex its muscles. The speed is real, and the development cycle is something that has really spoiled me at this point.
The problem with MATLAB is that idiomatic MATLAB style (every operation returns a fresh matrix) can easily become very inefficient: it leads to countless heap memory allocations of new matrices, resulting in low data-access locality, i.e. your data is needlessly copied around in slow DRAM all the time, rather than being kept in the fastest CPU cache.
Julia's MATLAB-inspired syntax is at least as nice, but the language was designed from the ground up to enable you to write high-performance code. I have seen numerous cases where code ported from MATLAB or NumPy to Julia performed well over an order of magnitude faster, while often also becoming more readable at the same time. Julia's array-broadcast facilities, unparalleled in MATLAB, are one reason for that. The ubiquitous availability of in-place variants of standard-library methods (recognizable by a trailing !) is another.
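The same allocation-avoidance idea can be sketched in NumPy terms (a minimal illustration of the "fresh matrix per operation" vs. in-place contrast, not a benchmark):

```python
import numpy as np

a = np.ones(1_000_000)
b = np.full(1_000_000, 2.0)

# Allocating style (MATLAB-like): each operation returns a fresh array,
# so this line creates intermediate temporaries on the heap.
c = a * b + 1.0

# In-place style (the idea behind Julia's bang functions like mul!):
# write results into a preallocated buffer instead of allocating new arrays.
out = np.empty_like(a)
np.multiply(a, b, out=out)   # product written directly into `out`
out += 1.0                   # in-place add, no new allocation
```

In a hot loop the preallocated buffer can be reused across iterations, which is exactly what keeps the working set in cache instead of churning through DRAM.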
In our group, nobody has been using MATLAB for nearly a decade, and NumPy is well on its way out, too. Julia simply has become so much more productive and pleasant to work with.
The function on that slide is dominated by the call to rand, which uses quite different implementations in Julia and Python, so may not be the best example.
Julia is compiled, and for simple code like that example it will have performance on par with C, Rust, etc.
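One way to sanity-check how much of the runtime is just RNG overhead is to time the random draws in isolation (a rough sketch using the stdlib `timeit` module; absolute numbers will vary by machine and interpreter):

```python
import timeit

# The Monte Carlo loop does two random draws per iteration, so time
# 2*n calls to random.random() separately from the arithmetic.
n = 1_000_000
rng_time = timeit.timeit("random.random()", setup="import random", number=2 * n)
print(f"{2 * n} draws of random.random(): {rng_time:.3f}s")
```

Comparing that against the full loop's runtime gives a feel for how much of the benchmark is really measuring the RNG implementation rather than the language.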
I tested how PyPy performs on that. Just changing the implementation of Python drops the runtime from ~16.5s to ~3.5s on my computer, approximately a 5x speedup:
xxxx@xxxx:~
$ python3 -VV
Python 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0]
xxxx@xxxx:~
$ pypy3 -VV
Python 3.9.16 (7.3.11+dfsg-2+deb12u3, Dec 30 2024, 22:36:23)
[PyPy 7.3.11 with GCC 12.2.0]
xxxx@xxxx:~
$ cat original_benchmark.py
#-------------------------------------------
import random
import time
def monte_carlo_pi(n):
    inside = 0
    for i in range(n):
        x = random.random()
        y = random.random()
        if x**2 + y**2 <= 1.0:
            inside += 1
    return 4.0 * inside / n
# Benchmark
start = time.time()
result = monte_carlo_pi(100_000_000)
elapsed = time.time() - start
print(f"Time: {elapsed:.3f} seconds")
print(f"Estimated pi: {result}")
#-------------------------------------------
xxxx@xxxx:~
$ python3 original_benchmark.py
Time: 16.487 seconds
Estimated pi: 3.14177012
xxxx@xxxx:~
$ pypy3 original_benchmark.py
Time: 3.357 seconds
Estimated pi: 3.14166756
xxxx@xxxx:~
$ python3 -c "print(round(16.487/3.357, 1))"
4.9
I changed the code to take advantage of some basic performance tips commonly given for CPython (use the standard library: itertools, math; prefer comprehensions/generator expressions to plain for loops), and was able to make the CPython numbers improve by ~1.3x. But then the PyPy numbers took a hit:
xxxx@xxxx:~
$ cat mod_benchmark.py
#-------------------------------------------
from itertools import repeat
from math import hypot
from random import random
import time
def monte_carlo_pi(n):
    inside = sum(hypot(random(), random()) <= 1.0 for i in repeat(None, n))
    return 4.0 * inside / n
# Benchmark
start = time.time()
result = monte_carlo_pi(100_000_000)
elapsed = time.time() - start
print(f"Time: {elapsed:.3f} seconds")
print(f"Estimated pi: {result}")
#-------------------------------------------
xxxx@xxxx:~
$ python3 mod_benchmark.py
Time: 12.998 seconds
Estimated pi: 3.14149268
xxxx@xxxx:~
$ pypy3 mod_benchmark.py
Time: 12.684 seconds
Estimated pi: 3.14160844
xxxx@xxxx:~
$ python3 -c "print(round(16.487/12.684, 1))"
1.3
I tested staying in CPython but JIT-compiling the main function with Numba (no code changes other than adding the jit decorator with the expected type signature, plus the same JIT warm-up call before the benchmark that the Julia version uses), and it's about an 11x speedup. Code:
import random
import time
from numba import jit, int32, float64
@jit(float64(int32), nopython=True)
def monte_carlo_pi(n):
    inside = 0
    for i in range(n):
        x = random.random()
        y = random.random()
        if x**2 + y**2 <= 1.0:
            inside += 1
    return 4.0 * inside / n
# Warm up (compile)
monte_carlo_pi(100)
# Benchmark
start = time.time()
result = monte_carlo_pi(100_000_000)
elapsed = time.time() - start
print(f"Time: {elapsed:.3f} seconds")
print(f"Estimated pi: {result}")
Base version (using the unmodified Python code from the slide):
$ python -m monte
Time: 13.758 seconds
Estimated pi: 3.14159524
The pre-compilation speed/caching performance ("time to first plot") has practically been solved since 2024, when Julia 1.10 became the current LTS version. The current focus is on improving the generation of reasonably-sized stand-alone binaries.
I've heard that for 10 years; I gave it too many chances. Each time it was either solved, or going to be solved in a new release right around the corner, again and again. Maybe it is now; I don't care anymore.
Julia is a very powerful and flexible language. With very powerful tools you can get a lot done quickly, including shooting yourself in the foot. Julia's type system lets you easily compose different elements of Julia's vast package ecosystem in ways that were possibly never tested, or even intended or foreseen, by the authors of those packages. If you don't do that, you may have a much better experience than the author. My own Julia code generally does not feed the custom type of one package into the algorithms of another package.
One can hardly call using the canonical autograd library and getting incorrect gradients or using arrays whose indices aren't 1:len and getting OOB errors “shooting oneself in the foot” — these things are supposed to Just Work, but they don't. Interfaces would go a long way towards codifying interoperability expectations (although wouldn't help with plain old correctness bugs).
With regard to power and flexibility, homoiconicity and getting to hook into compiler passes does make Julia powerful and flexible in a way that most other languages aren't. But I'm not sure if that power is what results in bugs — more likely it's the function overloading/genericness, whose power and flexibility I think is a bit overstated.
Zygote hasn’t been the “canonical” autodiff library for some time now; the community recognized its problems long ago. Enzyme and Mooncake are the major current efforts, and both have a serious focus on correctness.
In this debate there seems to be a pervasive impulse to point out "that specific problem doesn't exist anymore," and to their credit the developers are generally good at responding to serious problems.
However, the spirit of the original post was about the lack of safeguards and of a cohesive community direction to preempt such errors. It's not an easy problem to solve, since Julia's composability and flexibility add complexity not encountered in other languages. The current solution is 'users beware', while a few people work on ways to enforce correct composability. I think it's best to acknowledge that this is an ongoing issue, rather than to claim it's no longer a problem just because the specific instances pointed out have been fixed.
One of the basic marketing claims of the language developers is that one author's algorithm can be composed with another author's custom data type. If that's not really true in general, even for some of the most popular libraries and data types, maybe the claims should be moderated a bit.
> one author's algorithm can be composed with another author's custom data type
This is true, and it's a powerful part of the language - but you can get it wrong when you compose elements that expect certain properties from the custom data type. There is no way to formally enforce those expectations, so you can end up with correctness bugs.
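Julia specifics aside, the failure mode can be sketched in Python terms: a generic algorithm silently assumes an interface (here, integer indexing) that a custom type doesn't actually provide. The names are hypothetical, chosen only to illustrate the mismatch:

```python
def running_max(xs):
    # A generic "algorithm" that implicitly assumes integer indexing,
    # not just iteration - an interface expectation nothing enforces.
    result = [xs[0]]
    for i in range(1, len(xs)):
        result.append(max(result[-1], xs[i]))
    return result

class CountingStream:
    """A custom container implementing only __iter__ and __len__."""
    def __init__(self, data):
        self._data = list(data)
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

running_max([3, 1, 4])                    # works: lists support indexing
# running_max(CountingStream([3, 1, 4]))  # TypeError: not subscriptable
```

Here the failure is at least a loud TypeError; the nastier Julia-flavored version is when the composition runs but quietly computes the wrong answer, because the type's actual semantics (e.g. index offsets) differ from what the algorithm assumed.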
My experience is that it is true, if you thoroughly implement the interfaces that your types are supposed to respect. If you don't, well, that's not the language's fault.
What exactly does he technically want to preserve? Does he really care about amplitude modulation itself? Or does he care about the frequency band (medium wave, HF) and its propagation properties? Or does he care about the geographic reach of these stations?
Amplitude modulation is a historically important technology, because it was technically very simple to receive in the early history of radio, and because it was more bandwidth-efficient than FM. But it remains utterly ill-suited for mobile reception, because it is highly sensitive to multipath interference (unlike FM).
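To put rough numbers on the bandwidth claim (standard textbook figures, not taken from the post): a double-sideband AM signal carrying audio up to f_m occupies about

```
B_AM ≈ 2·f_m                (e.g. ~5 kHz audio → ~10 kHz channel)
B_FM ≈ 2·(Δf + f_m)         (Carson's rule; broadcast FM with Δf = 75 kHz
                             deviation and f_m = 15 kHz → ~180 kHz)
```

which is why AM broadcast channels are spaced 9-10 kHz apart, while broadcast FM channels need on the order of 200 kHz.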
We now have far better modern digital modulation schemes, including DAB and DVB-T2 for VHF and DRM for long-, medium-, and short-wave transmission. Thanks to OFDM, they provide much better audio quality and interference resistance than the old analogue modulation schemes, and they are also far more power-efficient, which substantially reduces the enormous electricity bills of transmitter stations. They are also very bandwidth-efficient and can be used in single-frequency networks.
Agreed about digital modulation, but I recall hearing multipath a lot more on FM, or perhaps it was just more obnoxious there. Traveling around large buildings or under a metal bridge would bring that familiar rapid flutter as the car moved through places where the reflections would reinforce and cancel. (You can drive through a lot more wavelengths per unit time for FM than AM.)