WML/WAP got a bad rap, I think, largely because of the way it was developed and imposed/introduced.
But it was not insane, and it represented a clarity of thought that then went missing for decades. Several things that were in WML are quite reminiscent of interactions being designed with web components today.
Instead of offloading batch computations to a proprietary cloud, it would be better to actually optimize the incredibly slow and unstable computational kernel.
In any case, that's not the happy path: Mathematica gets stuck in symbolic computations for ages. My FFT-based research in Mathematica slowed to a crawl, with tens of minutes of waiting, even with 90% of the code compiled to binary. MATLAB finishes the same task in milliseconds.
No, no... I know better than to put too much work into something before poking the core devs and seeing if it's something they'd be interested in.
If they don't want code written by a robot, then what do I care? Mostly I wanted to see how well the daffy robots could work in an established code base, and I chose one I was familiar with to experiment on. They were less than receptive, so, their loss, I suppose...
Unfortunately, this characterizes the entire project: "cool" examples with no practical utility. Meanwhile, the language itself is incredibly strange (defining functions via pattern matching is one example of an odd design choice), extremely slow, and very unstable.
In short, it's developing in the wrong direction.
I switched from Mathematica to MATLAB in my work; it was the best investment of time in the entire project.
No, the M series is a system on a chip (SoC); that's why it can run local LLMs in a size range impossible for other laptop brands: VRAM == RAM, unified memory shared at full speed between the CPU and GPU.
I run Qwen models on an MBA M4 with 16 GB and an MBP M2 Max with 32 GB. The MBA handles models up to its shared-memory capacity (with external cooling), e.g. Qwen3 Embedding 8B (not 1B!), but inference is 4x-6x slower than on the MBP; I suspect the weaker SoC.
Anyway, the Apple SoC in the M series gives huge leverage thanks to shared memory: VRAM size == RAM size, so if you buy an M chip with 128+ GB of memory you can pretty much run SOTA models locally, and the price is significantly lower than dedicated AI GPU cards.
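For a rough sense of what fits in that memory, here is a back-of-the-envelope sketch in Python; the 4-bit quantization and ~1.2x overhead factor are illustrative assumptions, not measurements from my setup:

    # Rough estimate of RAM needed for a quantized model (assumed numbers).
    def approx_model_memory_gb(params_billions, bits_per_weight=4.0, overhead=1.2):
        # Weight storage at the given quantization, plus KV cache / runtime buffers.
        weight_bytes = params_billions * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 1e9

    print(round(approx_model_memory_gb(8), 1))   # ~4.8 GB -> fits in 16 GB unified memory
    print(round(approx_model_memory_gb(70), 1))  # ~42 GB  -> needs a much larger configuration

That is why an 8B model is workable on a 16 GB MBA, while the 128 GB configurations open up much larger models.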