
It's interesting that the virtual machine is neither very fast nor memory efficient. If you really want max speed plus portability, it's hard to beat a restricted subset of C, especially since almost every platform has a highly optimized C compiler. Something like:

"Code must conform to C89 with -nostdlib, and can only link to libuxn (that we wrote). And use uxn_main() instead of main(), as libuxn defines main. No binary dependencies allowed, all source files must be in project directory, and only uxn build system can be used"

Then the authors would only need to write libuxn for each platform they support, which is certainly easier and faster than writing a whole emulator.

But I am guessing this solution did not satisfy other criteria, perhaps things like "playful" and "build from first principles". It's a pity though - distributing apps in source code form instead of as emulator binaries would make them much more modifiable by end users.



One of the authors talks about using C with SDL or libdraw here: https://news.ycombinator.com/item?id=31715080

Since they were on a boat with a Raspberry Pi and not much battery power or internet, in addition to execution speed and portability, they were probably also concerned with:

- speed of compilation and linking (on small machines like Raspberry Pi)

- binary size of anything that might have to be updated or shared over the internet

- having to troubleshoot emergent issues without the ability to look up documentation or download updates or tools from the internet

In this situation, a simpler language on a simpler VM is probably going to be faster to develop with than compiling and linking a subset of C. And after the initial implementation of the VM, it presents less opportunity for an unintended interaction of leaky abstractions in your libuxn and your toolchain to ruin your day - on a day when you don't have internet connectivity to check Stack Overflow or to update some buggy dependency.


Interesting comment, thanks for finding it!

I have a Raspberry Pi 2 (one of the first ones, quite low-end) and tried compiling orca.c on it with gcc. It took 7 seconds and produced a 38 KB executable. This is more than I expected, but it's still pretty reasonable. However, I can see how this could be annoying with very rapid iteration cycles.

(Btw, compiling alone (no linking) takes 5.4 seconds, so compilation time dominates. And an optimized build (-O3) took over a minute - not something to do as part of development.)

But then I remembered the lighter compilers and tried tcc. It was substantially faster - just 0.6 seconds for the whole process! I think this is very reasonable, and on newer Pis it would be even faster.

Also note that my idea was to forgo the standard libraries and link only to libuxn (and include only uxn.h). This theoretical libuxn would have the same role as the uxn virtual machine - written once, then frozen in stone, with no updates. That takes care of large binary sizes: if you are only linking to libuxn, there are no updates to download. And there is no need to look up documentation or consult Stack Overflow, as you are only allowed to link to libuxn, no third-party libraries.
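
To make that concrete, such a frozen uxn.h could loosely mirror the devices the uxn/Varvara machine already exposes (console, screen, controller, mouse). Every name below is invented for illustration - a sketch of the shape, not a real API:

    /* uxn.h - the one header applications may include, frozen forever */
    #ifndef UXN_H
    #define UXN_H

    /* console */
    void uxn_console_write(const char *s);

    /* screen: fixed-function drawing, roughly like the Varvara Screen device */
    void uxn_screen_size(int *w, int *h);
    void uxn_screen_pixel(int x, int y, int color);

    /* controller and mouse: polled input */
    int  uxn_key_down(int key);
    void uxn_mouse_pos(int *x, int *y);

    /* entry point applications implement instead of main() */
    void uxn_main(void);

    #endif /* UXN_H */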

Would such a limited environment be inconvenient? Somewhat, but less so than a full-blown VM. Would it take effort to write and maintain libuxn on all platforms? Yes, but less effort than a full-blown VM. Won't libuxn have some bugs requiring updates? Likely, but I bet it would have fewer bugs than a full-blown VM.

As for other things (updating binaries over the internet, sharing


There's an effort to port uxn to DuskOS (https://duskos.org/). The goal there isn't to be maximally performant or memory efficient (though at this point, running uxn on DuskOS is faster than the mainline uxn implementation on certain architectures).

Rather, these are computing platforms to maximize usefulness in the event of a societal collapse.

That is why there is a handbook of uxn opcodes that includes hand gestures (https://wiki.xxiivv.com/site/uxntal_opcodes.html), so that computing, and the transmission of computing knowledge, can continue even after the loss of the hardware or documentation.


DuskOS badly needs a Z-machine interpreter. The amount of software running on DuskOS would skyrocket, from a Tetris implementation to tons of libre IF (and games) such as SpiritWrak, All Things Devours, Reversi, ...

https://jxself.org/git/


I don't get it... DuskOS says "it runs plenty fast on an old Pentium 75 MHz with 16mb of RAM" - that's a lot! I'd expect something geared for "civilization collapse" to be compatible with smaller embedded micros, like the 80 KB of RAM on an ESP8266 or the 500 KB of RAM on an ESP32.

I remember programming on 286 and 386 machines with 33 MHz and 2-4 MB of RAM, and it was perfectly usable with Pascal and even C (although C was annoyingly slow, a few seconds per build). If your idea of an old machine is a Pentium with _whole megabytes_ of RAM, or even more (gasp!), you don't need to pay the Forth penalty - you can have normal languages with good ergonomics.


The smaller embedded micros are the target of CollapseOS, which is folded into DuskOS. The stage DuskOS is targeting is when we lose the ability to make new computers and new chips, and start looking to salvage the old ones lying around. There is also recovering knowledge from disks.


Infocom also took the VM approach and I can still play their games years later, and it worked out for them in the short term too.


Simple C code from 1990 still works, as long as it does not depend on non-standard libraries.

Imagine how cool it would be if old games were distributed in source code form, so that anyone could modify and "remix" them as much as they want with a simple text editor.

This was impossible back in the 1990s, but today we have a C compiler (tcc) that takes less than a megabyte of space and compiles+links a game in less than a second. As long as you don't depend on too many third-party libraries, you can ship C code, compile the game on each start, and the user won't even notice!
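
A launcher for that could be a few lines of C. This sketch leans on tcc's -run flag, which compiles and executes a source file in one step; game.c is just a placeholder path:

    /* launcher.c - sketch of "compile the game on every start" */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* tcc -run compiles to memory and executes immediately */
        int status = system("tcc -run game.c");
        if (status != 0)
            fprintf(stderr, "failed to build or run game.c\n");
        return status;
    }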


Imagine compiler errors.


Infocom's games require little CPU by their nature, and most of the memory is spent on graphics.



