zfs.rent is in the wrong location and I can't see anything about zfs send/receive support on rsync.net. What kind of VPS product has multiple redundant disks attached? Aren't they usually provided with virtual storage?
You can consider something like syncthing to get the important files onto your NAS, and then use ZFS snapshots and replication via syncoid/sanoid to do the actual backing up.
Or install ZFS on the end devices as well, and do ZFS replication to the NAS, which is what I do. I have ZFS on my laptop, snapshot my data every 30 minutes, and replicate the snapshots. Those snapshots are very useful, as sometimes I accidentally delete data.
With ZFS, the whole filesystem is replicated, so the backup is consistent, which is not the case with file-level backup. With the latter, you also have to worry about lock files, permissions, etc. Restores are also more natural and quicker with ZFS.
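As a concrete sketch of that setup: sanoid handles the snapshot schedule and pruning, and syncoid does the replication. The pool and dataset names below are made up, and the retention numbers are just an example, so adjust to taste:

```ini
# /etc/sanoid/sanoid.conf -- hypothetical dataset names
[tank/home]
        use_template = production
        recursive = yes

[template_production]
        frequent_period = 30   # sub-hourly snapshots every 30 minutes
        frequently = 4         # keep the last 4 of them
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Replication is then a cron job running something like `syncoid -r tank/home root@nas:backup/home`, which sends the snapshots incrementally to the NAS.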
I can't speak to ZFS, but I don't find btrfs snapshots to be a viable replacement for borgbackup. To your filesystem-consistency point: I snapshot, back the snapshot up with borg, and then delete the snapshot. I never run borg against a writable subvolume.
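A minimal sketch of that snapshot-then-borg cycle, with made-up paths (it assumes an existing borg repository and a btrfs subvolume):

```shell
# Hypothetical wrapper for the snapshot -> borg -> delete cycle described above.
backup_btrfs_subvol() {
  subvol=$1   # writable subvolume to protect
  snap=$2     # path for the temporary read-only snapshot
  repo=$3     # existing borg repository

  # Freeze a consistent, read-only view of the data.
  btrfs subvolume snapshot -r "$subvol" "$snap" || return 1
  # Archive the frozen snapshot, never the live subvolume.
  borg create --stats "$repo::{hostname}-{now}" "$snap" || return 1
  # Snapshot has served its purpose; drop it.
  btrfs subvolume delete "$snap"
}

# usage (made-up paths):
# backup_btrfs_subvol /data /data/.snap-backup /mnt/backup/borg
```

Because borg only ever sees the read-only snapshot, the archive reflects a single point in time regardless of how long the backup takes.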
This is strictly about how much memory LuaJIT can address on that platform. There are no “significant performance issues”. If your workload fits inside 2 GiB (IIRC), it will be fast. LuaJIT is a world-class JIT.
A new garbage collector has long been an open issue for LuaJIT; it would fix this particular limitation and make the language even faster. Last I checked, it was actively being worked on.
> If your workload fits inside 2 GiB (IIRC) it will be fast.
This has nothing to do with the issue being mentioned here.
You are understating the severity of this issue and seem to be confusing it with a different issue that is no longer relevant in 2.1.
Lua without JIT is one of the fastest interpreted languages. In fact it might just be the fastest. LuaJIT is also one of the most advanced JIT compilers out there, frequently beating V8 and others.
There are very valid complaints about Lua. It lacks useful quality of life features. Some of its design decisions grate on users coming from other languages. Its standard library is barebones verging on anemic, and its ecosystem of libraries does not make up for it.
But after using Lua in many places for well over a decade, I’ve gotta say, this is the first time I’ve heard someone claim it’s slow. Even without JIT, just interpreter to interpreter, it’s consistently 5-10x faster than similar languages like Python or Ruby. Maybe you’re comparing it to AOT systems languages like C and C++? That’s obviously not fair. But if you put LuaJIT head to head with AOT systems languages you’ll find it’s within an order of magnitude, just like all the other high quality JITs.
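Claims like this are easy to sanity-check locally. A hypothetical micro-benchmark sketch, assuming `lua` (PUC-Lua) and `python3` are on your PATH; the file paths and recursion depth are arbitrary, and absolute timings will vary by machine:

```shell
# Write the same naive recursive fib in both languages (illustrative only).
cat > /tmp/fib.lua <<'EOF'
local function fib(n)
  if n < 2 then return n end
  return fib(n - 1) + fib(n - 2)
end
print(fib(30))
EOF

cat > /tmp/fib.py <<'EOF'
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
print(fib(30))
EOF

# Time both; skip the Lua run gracefully if it isn't installed.
if command -v lua >/dev/null; then time lua /tmp/fib.lua; fi
time python3 /tmp/fib.py
```

A recursion-heavy toy like this flatters interpreter dispatch speed and says little about real workloads, which is part of why such cross-language numbers should be taken with a grain of salt.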
Vanilla Lua traditionally doesn’t show any substantial difference from Python or Ruby. You’re probably thinking of LuaJIT in interpreter mode (jit off).
> Lua without JIT is one of the fastest interpreted languages.
Maybe I should say it is "still slow". Of course you can find examples of 5-10x over Python performance (not that this is saying much), but this is not the norm. There's just no way: PUC-Lua is still fundamentally dynamic hash-table lookups driven by a big switch statement, and there's a limit to how fast that can go.
Let's just say there are plenty of game devs who have found that Lua quickly becomes too slow for the task it has been pressed into, which then becomes an exercise in exporting more and more stuff to C++ or whatever. There is Luau, of course, which sort of proves this point.
These interlanguage comparisons are mostly meaningless anyway. Python may be slow, but you do the heavy lifting in other ways, and that ecosystem is huge. Since the Lua ecosystem is small, though, you are usually on your own. LuaJIT FFI is sorta cool, but wrapping can still be a PITA.
> Maybe you’re comparing it to AOT systems languages like C and C++? That’s obviously not fair.
Or C#/.NET/Java/Kotlin/JVM. Which you can often use as part of a plugin system.
> this is the first time I’ve heard someone claim it’s slow.
Quite ironic to say this, because I'm pretty sure this is why Mike Pall created LuaJIT in the first place. For example http://lua-users.org/lists/lua-l/2011-02/msg00742.html (but there are many other great in depth essays in the archive)
> LuaJIT is also one of the most advanced JIT compilers out there, frequently beating V8 and others.
No it's not these days. LuaJIT is awesome and may have been alien technology in 2005 but has a lot of missing functionality compared to the most advanced JIT implementations.
There were of course many examples of LuaJIT beating V8 on particular microbenchmarks through the years, and many of these are rehashed from over a decade ago and are outdated. I wouldn't say this means it "frequently" beats V8, especially in a practical sense.
Anyone who has much experience with LuaJIT and has stared at a few traces is aware of "NYI": there is a ton of stuff that is not implemented in the JIT and forces a fallback to the interpreter, including even pedestrian uses of closures: https://github.com/tarantool/tarantool/wiki/LuaJIT-Not-Yet-I.... There is a ton of stuff that current V8 does to optimize real-world larger applications that LuaJIT's JIT will bail on.
LuaJIT does well when you write code in a particular style that exploits its strengths. It's not quite as amazing for general purpose use.
Elsewhere you suggest that a GC rewrite is actively being developed, but LuaJIT development has slowed tremendously, the mailing list is not very active and I wouldn't be holding my breath for anything major (this new GC has been talked about for well over a decade).
I expected at least a little more substance and insight, but the whole article basically boils down to, “if Intel’s fab splits off, it risks becoming another GlobalFoundries.”
Which, yeah, I don’t think that’s much of a revelation to anyone discussing Intel’s future. There’s no argument put forward that it’s not still the best path for Intel.
Every extant Unix has been rewritten since the original AT&T code, Ship of Theseus style. We still consider them members of the Unix family, because they can trace their lineage directly. One could build a Git repo showing every code change from the original Unix release through the modern-day BSDs, if only we had granular commit info going back that far.
We could in principle do something similar for Darwin (if we had enough of the historical code), which is the core of MacOS, which is based on NeXT, which was based on BSD with a new kernel. That makes MacOS every bit as much a member of the Unix/BSD family as FreeBSD is.
Mac OS X was essentially a continuation of NeXTSTEP, which is BSD with a novel kernel. In fact, if you look into the research history of the Mach kernel at the core of XNU, it was intended as a novel kernel _for_ BSD. NeXT went and hired one of the key people behind Mach (Avie Tevanian), and he became one of the core systems guys who designed NeXTSTEP as a full OS around Mach.
Early in the proliferation of the Unix family, member systems went in one of two directions -- they based their OS on upstream AT&T Unix, or they based it on Berkeley's BSD, and added their own features on top. NeXT was one of the latter. Famously, the original SunOS was too.
While Sun would eventually work closely with AT&T to unify their codebase with upstream, NeXT made no such change. NeXTSTEP stayed BSD-based.
The other extant BSDs like FreeBSD and NetBSD were also based directly on the original BSD code, through 386BSD.
If I have my history correct, Apple would later bring in code improvements from both NetBSD and FreeBSD, including some kernel code, and newer parts of the FreeBSD userland, to replace their older NeXT userland, which was based on now-outdated 4.3BSD code. I think this is where the confusion comes in. People assume MacOS is only "technically" a Unix by way of having borrowed some code from NetBSD and FreeBSD. They don't realize that it's fully and truly a BSD and Unix by way of having been built from NeXT and tracing its lineage directly through the original Berkeley Software Distribution. The code they borrowed was replacing older code, itself BSD-derived.
> Ah, yes. Drew DeVault. The expert in ... developing 30M LOC OS kernels with billions of dollars on R&D investment in a few years with a small team in an experimental language. "Just make Linux 2", it's so simple, why didn't we think of this?!
This is an unusual take considering Drew DeVault actually does have experience developing new kernels [1] in experimental languages [2].
Drew's own post [3] (which the linked article references) doesn't downplay the effort involved in developing a kernel. But you're definitely overplaying it. 30M SLOC in the Linux kernel is largely stuff like device drivers, autogenerated headers, etc. While the Linux kernel has a substantial featureset, those features comprise a fraction of that LOC count.
Meanwhile, what Drew's suggesting is a kernel that aims for ABI compatibility. That's significantly less work than a full drop-in replacement, since it doesn't imply every feature is supported.
Not to mention, some effort could probably be put into mechanisms for porting Linux device drivers and features over to such a replacement kernel, using a C interface boundary that lets the original unsafe code run as a stopgap.
> This is an unusual take considering Drew DeVault actually does have experience developing new kernels in experimental languages.
Not exactly; a new toy kernel is different from reimplementing Linux!
> Meanwhile, what Drew's suggesting is a kernel that aims for ABI compatibility.
Drew and you are missing the point, which is: we don't want to wait years for a new OS that may or may not come. "Hey, go fork it!" is a fine response to plenty of feature requests, but not this one. The point here explicitly was that in-tree support is/was better, so everyone got to watch the development of the thing.[0]
Moreover the actual problem is bad behavior by C kernel devs, to which a leader would say: "This is something we are doing. This project can fail for lots of reasons but it won't fail because kernel C devs are too toxic to work with. You'll apologize to Wedson. You said -- you're not learning Rust. Guess what? Your contributions are paused, while you learn Rust, and assist in building the ext2 reimplementation. Perhaps, as you do, you can make clearer your concerns to the Rust for Linux team, and learn what theirs are as well."
> 30M SLOC in the Linux kernel is largely stuff like device drivers, autogenerated headers, etc.
And? These drivers matter; otherwise you get Haiku, Redox, and other OSs that seem doomed to an "eternal" beta status because they have to reimplement all these drivers.