Hacker News | past | comments | ask | show | jobs | submit | zero_bias's comments

It’s called WML/WAP

I think we can do better than a 15x15 text window

WML/WAP got a bad rap I think, largely because of the way it was developed and imposed/introduced.

But it was not insane, and it represented a clarity of thought that then went missing for decades. Several things that were in WML are quite reminiscent of interactions designed in web components today.
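For context, here is what a minimal WML 1.1 deck looked like (a sketch from memory of the spec; details approximate). The deck/card model, built-in navigation history, and declarative `<do>` actions are the kind of interaction design that feels familiar from web components today:

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- One file ("deck") holds several screens ("cards"). -->
  <card id="menu" title="Menu">
    <p>
      <a href="#news">News</a>
    </p>
  </card>
  <card id="news" title="News">
    <p>Headlines go here.</p>
    <!-- Declarative soft-key binding: "back" pops the history stack. -->
    <do type="prev"><prev/></do>
  </card>
</wml>
```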


Gopher today (and even more so Gemini) can do almost anything WAP did, but without being a dead platform.

Have you read the WML 1.x spec? Let alone WML 2.x, which never really happened. It had a much more interesting scope than Gemini does.

Gemini is not a good or sensible design. It's reactionary more than it is informed.


Instead of offloading batch computations to a proprietary cloud, it’s better to actually optimize the incredibly slow and unstable computational kernel.

In any case, that's not the happy path: Mathematica gets stuck in symbolic computations for ages. My FFT-based research code in Mathematica slowed to a crawl (tens of minutes of waiting), even with 90% of the code compiled to binary. MATLAB finishes the same task in milliseconds.
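To make the "milliseconds" point concrete, a quick sketch in Python with NumPy's compiled FFT kernel (illustrative only, not the actual research code; the array size is made up): a million-point transform is a sub-second operation when the kernel is a real compiled routine.

```python
# Timing a large FFT with a compiled kernel (numpy.fft wraps pocketfft).
# The size here is hypothetical; the point is the order of magnitude.
import time
import numpy as np

n = 2**20  # ~1M points
x = np.random.default_rng(0).standard_normal(n)

t0 = time.perf_counter()
X = np.fft.fft(x)
elapsed = time.perf_counter() - t0

print(f"FFT of {n} points took {elapsed * 1e3:.1f} ms")

# Round-trip sanity check: ifft(fft(x)) recovers x.
assert np.allclose(np.fft.ifft(X).real, x)
```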


The universe could be a probability-based GoL (Game of Life) simulation; a basic Turing machine cannot handle that.


Is this your pull request?


No, no... I know better than to put too much work into something before poking the core devs and seeing if it's something they'd be interested in.

If they don't want code written by a robot, then what do I care? Mostly I wanted to see how well the daffy robots could work in an established code base, and I chose one I was familiar with to experiment on. They were less than receptive, so, their loss, I suppose...


Unfortunately, this characterizes the entire project: "cool" examples with no practical utility. Meanwhile, the language itself is incredibly strange (pattern-based function definitions are one example of an odd design choice), extremely slow, and very unstable.

In short, it's developing in the wrong direction.

I switched from Mathematica to MATLAB in my work; it was the best time investment of the entire project.


This function is user contributed. It's not official.


No, the M series is a system on a chip (SoC); that's why it's able to run local LLMs in a range impossible for other laptop brands: VRAM == RAM, unified memory shared at full speed by both CPU and GPU.


Strix Halo has the same unified RAM with no CPU/GPU separation.

Sadly it's not in many laptops; probably the easiest way to get it is the Framework Desktop or a mini PC.


I run Qwen models on an MBA M4 16 GB and an MBP M2 Max 32 GB. The MBA can handle models up to its VRAM capacity (with external cooling), e.g. qwen3 embedding 8B (not 1B!), but inference is 4x-6x slower than on the MBP. I suspect the weaker SoC.

Anyway, Apple's M-series SoC is a huge leverage thanks to shared memory: VRAM size == RAM size, so if you buy an M chip with 128+ GB of memory, you're pretty much able to run SOTA models locally, and the price is significantly lower than AI GPU cards.
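Back-of-envelope arithmetic behind the "VRAM == RAM" point (a rough sketch: quantized weight sizes only, ignoring KV cache and activation overhead):

```python
def weights_gb(params_billions, bits_per_weight):
    """Approximate size of model weights in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at 4-bit quantization needs on the order of 35 GB just for
# weights: far beyond a typical 24 GB discrete GPU, but comfortable in a
# 128 GB unified-memory machine. An 8B model at 4-bit is about 4 GB.
print(weights_gb(70, 4))  # 35.0
print(weights_gb(8, 4))   # 4.0
```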


Safari on iOS supports uBlock Origin too


Python is ubiquitous in ML, often you have no choice but to use it


When primordial black holes formed, there was no matter that could clump around them, as matter at that time had a very high temperature

