Hacker News | HaroldCindy's comments

We need to develop new etiquette around submitting AI-generated code for review. Using AI for code generation is one thing, but asking other people to review something that you neither wrote nor read is inconsiderate of their time.


I'm getting AI-generated product requirements that they haven't read themselves. It is so frustrating. Random requirements like "this service must have a response time of 5s or less" or "a retry mechanism must be present", when we already have a specific SLA for response time and the designs don't have a retry mechanism built in.

The bad product managers have become 10x worse because they just generate AI garbage to spray at the engineering team. We are now writing an AI review process for our user stories to counter the AI generation from the product team. I'd much rather spend my time building things than fighting AI wars between teams.


Oof. My general principle is "sending AI-authored prose to another human without at least editing it is rude". Getting an AI-generated message from someone at all feels rude to me, kind of like an extreme version of "dictated but not read" in a letter in the old days.


Wow, this describes _exactly_ what I've started to see from some PMs.


I expect this is / was a very common problem for people porting 32-bit game code to newer compilers. I work on a fairly old codebase that forces use of x87 for a handful of code paths that don't work correctly otherwise. GCC will default to x87 if you do an i386 compile, but will default to SSE for 64-bit builds, so you have to be careful there too.


I'm the contractor responsible for SL's Luau VM integration; I appreciate the kind words!

We're still figuring out our async strategy for user-facing APIs, to be honest, so these references are super helpful. We already have preemptive scheduling of execution, but it's most likely to be some kind of wrapper around `coroutine.create()`, where an event loop is responsible for driving execution flow and internal `coroutine.yield()`s let you specify what you're `await`ing.

We'll likely have an RFC for how that will all work within the year, but several users have written their own bespoke `async` / `await` implementations for SL in Lua already.
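Purely as a sketch of that pattern (not SL's actual design), here's the shape in Python, with generators standing in for Lua coroutines; `sleep`, the tick counter, and the round-robin queue are all invented for illustration:

```python
from collections import deque

def sleep(ticks):
    # The yielded value tells the event loop what this task is awaiting.
    return ("sleep", ticks)

def run(tasks):
    """Round-robin event loop: resume each task until its next yield."""
    queue = deque((task, 0) for task in tasks)  # (task, wake_at_tick)
    tick = 0
    while queue:
        task, wake_at = queue.popleft()
        if wake_at > tick:
            queue.append((task, wake_at))       # not ready yet; requeue
        else:
            try:
                kind, arg = next(task)          # resume until next yield
                if kind == "sleep":
                    queue.append((task, tick + arg))
            except StopIteration:
                pass                            # task finished
        tick += 1

log = []

def blinker(name, interval, times):
    for _ in range(times):
        log.append(name)
        yield sleep(interval)

run([blinker("a", 1, 2), blinker("b", 2, 2)])
# log is now ["a", "b", "a", "b"]
```

The point is just that the loop, not the script, decides when a yielded task resumes, which is what makes an `await`-style API possible on top of plain coroutines.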


> bespoke `async` / `await` implementations for SL in Lua already.

I did see these, as well as some event-loop-like wrappers, but a built-in implementation would be great so each script doesn't need to ship its own.

How do you plan on migrating data from Mono's VM to Luau? Off the top of my head I can't think of any method that would be 100% reliable.


> How do you plan on migrating data from Mono's VM to Luau?

It helps a lot that you're only dealing with LSL that was compiled to .NET CIL by a single compiler and transformed into a state machine via an internal tool that predates `async` / `await` in .NET. Luckily you don't need a strategy that works for arbitrary .NET assemblies.

We can inspect those assemblies and the saved script state is stored in an LL-defined serialization format that includes everything on the stack / reachable via the heap. That could be converted to the script state serialization scheme we created for Luau.

The biggest complication would be that .NET CIL presents a stack-based bytecode whereas Lua(u) bytecode is register-based. There's prior art there: IIRC Android's Dalvik bytecode format is register-based and isn't generally compiled to directly; you compile stack-based Java bytecode, and the Android devkit has tools that convert it to register-based Dalvik. We could use a similar scheme to convert the limited subset of CIL we need into Luau bytecode, possibly with some Dalvik-like extensions that allow use of "extended" registers for cases where we'd run into Luau's 255 register limit.
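As a toy illustration of that kind of conversion (nothing like the real tooling): in verified stack bytecode the stack depth at every instruction is statically known, so stack slot `i` can be mechanically renamed to register `r<i>`. The opcode names here are made up:

```python
def stack_to_register(ops):
    """Translate a toy stack bytecode into register form.

    Because the stack depth is statically known at every instruction
    (as in verified CIL), stack slot i simply becomes register ri.
    """
    depth, out = 0, []
    for op in ops:
        if op[0] == "push_const":   # push a constant onto the stack
            out.append(("loadk", f"r{depth}", op[1]))
            depth += 1
        elif op[0] == "add":        # pop two operands, push their sum
            depth -= 1
            out.append(("add", f"r{depth - 1}", f"r{depth - 1}", f"r{depth}"))
        elif op[0] == "ret":        # return the top of the stack
            out.append(("ret", f"r{depth - 1}"))
    return out

# (1 + 2) + 3 in stack form:
prog = [("push_const", 1), ("push_const", 2), ("add",),
        ("push_const", 3), ("add",), ("ret",)]
regs = stack_to_register(prog)
```

Real CIL adds branches (where depths at join points must agree), calls, and locals, but the core renaming idea is the same.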

I'd like to eventually open-source SL's existing internal tooling for Mono so that people can get a better sense of the problem space and how that conversion would work. It really should have been public from the outset, and I believe the original author of SL's Mono integration wanted it to be.

Migrating existing Mono scripts onto Luau is a bit far out though, since we're still working on the core VM stuff.


Speaking of script state serialisation, is there any improvement to the size of those states when being stored/transferred?

IIRC one of the biggest failure points in region crossings is that the source simulator has to serialise and send the state of all scripts attached to an agent to the target simulator; if this fails, the crossing fails and the user logs out (and in many cases the script will get marked as not running).


> Speaking of script state serialisation, is there any improvement to the size of those when being stored/transferred?

They're about the same as before, though they weren't terribly large to begin with. From what I've seen, the region crossing issues aren't caused by script state serialization but by hard-to-track-down edge cases in object handoff that are outside the scope of my contract.


One thing that was immediately apparent upon switching VMs was that a lot of the existing overhead was in scheduling, context switching and the implementation of the actual library functions like `llDoWhatever()`.

We haven't even used Luau's JIT at all yet, but preemptive scheduling of what's typically trivial glue code is much cheaper and easier with a VM that supports it as a natural consequence of its design versus transforming everything into a state machine at the AST or bytecode level for a VM that doesn't.

> Actually, the biggest problem is that each idle program uses about 1us per frame, which adds up.

More scheduler overhead to resolve :)


(For those not familiar with Second Life, every object that does something has a little program in it. Every chair you can sit on, every door you can open, and every clothing item where you can change colors has a small program written by some user. Most of those programs are almost always idle, but there's a tiny amount of CPU time consumed on each frame for each idle program, about 1us to 2us in the Mono implementation. A region can have 10,000 little programs, each eating 1us on each simulation cycle, 45 times a second. This adds up.)
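Concretely, the arithmetic in that parenthetical (10,000 scripts, ~1us each, 45 frames/second) works out like this:

```python
scripts = 10_000
per_script_s = 1e-6                 # ~1us of overhead per idle script
frames_per_s = 45

frame_budget_ms = 1000 / frames_per_s            # ~22.2 ms per frame
idle_cost_ms = scripts * per_script_s * 1000     # 10 ms spent on idle scripts

share = idle_cost_ms / frame_budget_ms           # 0.45: nearly half the frame
```

So at the 1us figure, idle scripts alone can eat roughly 45% of the simulator's frame budget before any of them does useful work.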


The number of heavily custom-scripted HUDs required to do things in Second Life seemed pretty insane the last time I checked: one for a head, one for a body, one for animations, etc. I'm surprised the viewer doesn't have a way to "dock" HUDs so they can be activated/deactivated with one click when not in use in viewer-managed regions.


Agreed. I think an underappreciated aspect of choosing a script VM, in the space Roblox is in (user-generated content where scripts are content), is that your product is at the mercy of whoever controls your scripting implementation.

The scripting engine is an integral part of your product, and you need to "own" it end-to-end. Any bugs that creep into new versions of your scripting engine, and any API breakage or design changes that impact your use case, are things that you are responsible for. Roblox owns the entire toolchain for Luau, and it's relatively small compared to the set of libraries required to compile to and execute WASM in a performant way.

The nuances of your typical JITing WASM runtime or V8 are pretty hard to learn compared to a simpler VM like Luau, which is a big reason why I've used Luau in my own projects.


To be fair, both `Analysis` (the type-checker, not necessary at runtime or compile time) and `CodeGen` (the optional JIT engine) have no equivalent in PUC-Rio Lua.

If you look purely at the VM and things necessary to compile bytecode (AST, Compiler and VM) then the difference in code size isn't as stark.

Having worked with both Lua 5.1 and Luau VM code, Luau's codebase is a heck of a lot nicer to work on than the official Lua implementation even if it is more complex in performance-sensitive places. I have mixed feelings on the structural typing implementation, but the VM itself is quite good.


Further, these extra components are easy to omit if you don't want to use them.

The REPL that we offer in the distribution doesn't include any of the analysis logic, and it's just 1.7MB once compiled (on my M1 MacBook). I'm not sure how much smaller it gets if you omit CodeGen.

Luau can be pretty small if you need it to be.


Did you say just 1.7MB? For the REPL alone? Or is that with a bunch of heavyweight libraries?

The Lua REPL executable on my cellphone is 0.17 megabytes.


Also for the first ten years I used computers I was using all kinds of REPLs on computers that didn't have 1.7MB of RAM. On my first computer, which had one floppy drive and no hard drive, 1.7MB would have been 17 floppy disks, or 26 times the size of its RAM. So I'm kind of unconvinced by this stance that 1.7MB is a small REPL.

I mean, it's smaller than bash? But even your mom is smaller than bash.


> If you look purely at the VM and things necessary to compile bytecode (AST, Compiler and VM) then the difference in code size isn't as stark.

I suspected as much, but I didn't want to guess since I'm not familiar with either codebase. Thanks for the info!


I wasn't aware that single-file Java without a top-level static class was possible now, that + JBang seems quite useful for small tasks.

One nit:

> Python programmers often use ad-hoc dictionaries (i.e. maps) to aggregate related information. In Java, we have records:

In modern Python it's much more idiomatic to use a `typing.NamedTuple` subclass or `@dataclasses.dataclass` than a dictionary. The Python equivalent of the Java example:

    import dataclasses

    @dataclasses.dataclass
    class Window:
        id: int
        desktop: int
        x: int
        y: int
        width: int
        height: int
        title: str

        @property
        def xmax(self) -> int: return self.x + self.width

        @property
        def ymax(self) -> int: return self.y + self.height


    w = Window(id=1, desktop=1, x=10, y=10, width=100, height=100, title="foo")


This is obviously valid, but it's definitely more common in a language like Python to just dump data into a dict. In a dynamic language that's a far more flexible structure: it's the equivalent of `HashMap<? extends CanBeHashed, LiterallyWhatever>`, which is obviously a double-edged sword when it comes to consuming the API. Luckily, more rigid structures are becoming more popular at the API boundary.


That's deranged; just use a namedtuple and some functions. Even decorators for something this simple are a code smell.

What do you do when another module needs ymin, inheritance?

OO is dead, leave it buried unmourned.
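For reference, the namedtuple-and-functions version of the same hypothetical `Window` record (fields as in the dataclass example above) would be:

```python
from collections import namedtuple

Window = namedtuple("Window", "id desktop x y width height title")

def xmax(w):
    # Free function instead of a property/method on the class.
    return w.x + w.width

def ymax(w):
    return w.y + w.height

w = Window(id=1, desktop=1, x=10, y=10, width=100, height=100, title="foo")
# xmax(w) == 110, ymax(w) == 110
```

A module needing `ymin` would just define another free function rather than touching the type.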


There's at least one more to add to the pile: Google's Fuchsia is primarily written in Rust and aims to support the Linux ABI through "starnix".

See https://fuchsia.dev/fuchsia-src/concepts/components/v2/starn... and https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/s...


No, just download the source and check. It's 14M of C/C++ and 4M of Rust. There's another 3.3M in Go and 1.1M in Dart. It's a usual trope that Rust is just about to replace C++, but in fact more and more gets written in C++.

curl -s "https://fuchsia.googlesource.com/fuchsia/+/HEAD/scripts/boot..." | base64 --decode | bash


Fuchsia is written in C++, not Rust.


It's half-and-half. The kernel is C++, but it is small relative to the overall OS, which is predominantly user space. Growth in Rust far outpaces C++, so in a few years C++ will likely be a much smaller fraction. Also, notably, starnix is entirely written in Rust.




A lot of what you're seeing is third-party dependencies, not code Fuchsia developers have written. A lot of that third-party code is also dead and isn't compiled into anything (like Mesa). If you do a more detailed analysis of what ends up in an actual Fuchsia image, it'll look a bit more like 50/50 (and that's close to what you get if you just remove the third_party/ directory). It would be strange on Fuchsia to start a new project and choose C++ over Rust these days. Most critical components of the OS, including the filesystems, network stack, Linux emulation layer, init system, etc., are all Rust. Things which are not are likely to be rewritten in Rust eventually.

Go and Dart are basically gone from the system as well. Go is only used in the legacy netstack, which has been displaced by one written in Rust, and is otherwise only used in host tools for building Fuchsia. I believe Dart is almost gone, with a few remnants left in the form of build tools. Flutter does support Fuchsia (and that is Dart), but that support is not maintained in a Fuchsia repo.

Source: I work on fuchsia.


Thanks for the clarification, I was definitely under the misapprehension that Fuchsia was basically 100% C++.


>> the VM assumes that the bytecode was generated by the Luau compiler (which never produces invalid/unsafe bytecode)

Yep, to that end they also have a basic bytecode verifier (only used in debug mode / when asserts are enabled) that validates that the compiler only outputs valid bytecode, and I believe they continuously fuzz the compiler to make sure those asserts can't be triggered. See https://github.com/luau-lang/luau/blob/0d2688844ab285af1ef52...

It's fairly robust (and Luau bytecode isn't _that_ complex), but they made the right decision disallowing direct bytecode execution.


Any upstreaming was unlikely to ever be acceptable to the PUC-Rio folks.

PUC-Rio Lua uses C (with a tiny bit of C++ for stack unwinding on errors, if folks don't want to use `longjmp()`); Luau is strictly C++. Luau rewrites quite a lot of the core structures, and they aren't compatible. Further, Luau is a fork of PUC-Rio Lua _5.1_ (the current version is 5.4, I think?) and intentionally picks and chooses features from later Lua versions. Even things like improvements to the core VM interpreter loop wouldn't be upstreamable, because Lua 5.2+ has a fundamentally different model to allow for yielding in metamethods. PUC-Rio would not accept changes other than fixes for critical bugs in Lua 5.1.

I like both PUC-Rio Lua and Luau, but a hard fork with no upstreaming was the only option here. The architectural differences of the VMs are so large now that it would amount to PUC-Rio adopting Luau. Supposing there is some tiny bit that PUC-Rio is interested in, they can cherry-pick it out of Luau, since they use the same license.

