> There is no interpreter or VM, all of the code is converted directly into native machine code. This means you can expect better efficiency than Lua. However, this also means that Nelua cannot load code generated at runtime. The user is encouraged to generate code at compile-time using the preprocessor.
Sounds like this is less like Typescript (take the "base" language, add typechecking, and then "transpile" back into the base language) and more like Crystal (take the "base" language, and create a separate language that's similar to it but with static types and AOT compilation)
If you're looking for something more in the Typescript "transpiler" vein, there's Teal (https://github.com/teal-language/tl), written by the guy who created LuaRocks (and htop, interestingly). It still seems a little "early" in its life; it supports most everything you'd need, including some support for Typescript-style "type declaration files" for third-party libs, but it lacks the nice distribution mechanism Typescript grew over time for shipping/installing those declaration files (right now just have to "vendor" them yourself).
- Destructors are avoided for reasons, but then a full GC is wedged in. And if you disable it via a flag, you have to do all memory management manually, making code non-portable. Why on earth wouldn't you just use reference counting with destructors? Unless I'm missing something, that's better in every case.
- Avoids LLVM because 'C code works everywhere', then doesn't support MSVC.
- 1-indexed in libraries copied from Lua, 0-indexed elsewhere. That's just about the only worse thing you could do than 1-indexing everywhere.
- Preprocessor model instead of the features being a more integrated component of the language. Zig has proved a function-based approach can work well and C's legacy has proved that preprocessors absolutely blow.
- Aforementioned preprocessor directives get seriously wedged. You need them for polymorphism, varargs, etc.
- No closures. This one seems the most baffling to me because Lua has the __call metamethod, which is exactly what you'd use for that, and they're 99% of the point of anonymous or inner functions.
> 1-indexed in libraries copied from Lua, 0-indexed elsewhere. That's just about the only worse thing you could do than 1-indexing everywhere
I'll do you one worse: in APL, you can set the index origin to 1 or 0 at will using the special variable ⎕IO. APL also uses dynamic scoping, so if you're in the habit of passing functions around, zaniness will result. (Thankfully this is not common.)
There was even a practice at one point of writing code that would work correctly with any value of ⎕IO.
I'd suggest you submit a pull request, but then you'd get this response from the PR parser:
"Nelua is open source,
but not very open to contributions in the form of pull requests,
if you would like something fixed or implemented in the core language
try first submitting a bug report or opening a discussion instead of doing a PR.
The authors prefer it this way, so that the ideal solution is always provided,
without unwanted consequences on the project, thus keeping the quality of the software."
Also, GC by default and a systems programming language just don't mix. Rust tried to have GC at an early stage of development, but the developers realized it had to be removed for Rust to compete with C/C++.
RAII in general is avoided because supporting it brings many unwanted consequences, increasing the language's complexity and going against its simplicity goals.
> Unless you enable a particular flag, then you have to do all memory management manually, making code non-portable.
You can write portable code that works with or without the GC; the standard libraries do this. Of course it's more work to support both, but you usually don't need to: choose your memory model depending on your problem's requirements (realtime, efficiency, etc.) and stick with one.
> Why on earth wouldn't you just use reference counting with destructors?
Reference counting is not always ideal because it has overhead, and the language aims not to be slower than C; relying on reference counting would hurt that a lot. The user can still do manual reference counting if he needs to, like some do in C. Also, reference counting requires some form of RAII, which is out of the goals, as already explained.
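That manual pattern, as some C programmers practice it, can be sketched roughly like this (a toy Python illustration; the `RcBox` name and fields are made up, not anything from Nelua):

```python
class RcBox:
    """Toy reference-counted box: each owner calls incref/decref
    explicitly, mirroring what C programmers do with a count field
    in a struct and matching incref()/decref() helper functions."""
    def __init__(self, value):
        self.value = value
        self.count = 1          # the creator holds the first reference
        self.freed = False

    def incref(self):
        self.count += 1

    def decref(self):
        self.count -= 1
        if self.count == 0:
            self.freed = True   # in C this is where free() would run

# Usage: two owners share the box; it is "freed" when the last lets go.
box = RcBox("payload")
box.incref()        # second owner takes a reference
box.decref()        # first owner done, box still alive
box.decref()        # second owner done, box released
```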
> Avoids LLVM because 'C code works everywhere', then doesn't support MSVC.
MSVC-Clang can be used; native MSVC just isn't really supported directly from the Nelua compiler, but the user can grab the C file and compile it himself with MSVC. Better MSVC support is not provided simply due to lack of time and interest. But Nelua supports many C compilers, like GCC, TCC and Clang. Supporting multiple backends rather than just LLVM is still better than not officially supporting MSVC.
> 1-indexed in libraries copied from Lua, 0-indexed elsewhere.
1-indexing is used in just a few standard library functions, for compatibility with Lua-style APIs. In daily use this usually matters little. You can make your code 1-indexed in Lua style or 0-indexed in systems programming style; it's your choice. The language itself is agnostic to 0/1-indexing; only some libraries providing Lua-style APIs use 1-indexing, and you can ignore them entirely, or even skip the standard libraries and bring your own (like in C).
> Preprocessor model instead of the features being a more integrated component of the language. Aforementioned preprocessor directives get seriously wedged. You need them for polymorphism, varargs, etc.
The language specification and grammar can remain more minimal by having a capable preprocessor, instead of accumulating extra syntax, semantics and rules.
Also, Nelua's preprocessor is not really just a preprocessor: it exposes the context of the compiler itself while compiling, which gives the language some powerful capabilities that are best understood by actually metaprogramming with it. Calling it a preprocessor does not do it justice, but the name is used for lack of a better one.
> No closures. This one seems the most baffling to me because Lua has the __call metamethod, which is exactly what you'd use for that, and they're 99% of the point of anonymous or inner functions.
The language is under development and not officially released yet; this feature is not available yet but is on the roadmap. Nevertheless, people can code fine without closures; many code in C without closures, for example.
Having some very widely used things be 0-indexed and others 1-indexed kind of points to a problem in the language, regardless of whether it's technically a language syntax issue:
At the end of the day you're still going to have a ton of time wasted by devs having to check, and eventually there will be bugs from things not being converted, or double-converted, between the two index schemes.
Hello, are you the language author? I've a question, the page says
> Safe
> Nelua tries to be safe by default for the user by minimizing undefined behavior and doing both compile-time checks and runtime checks.
And
> Optional GC
> Nelua uses a garbage collector by default, but it is completely optional and can be replaced by manual memory management for predictable runtime performance and for use in real-time applications, such as game engines and operational systems.
But of course, if one prefers manual memory management, then the code will be unsafe, right? Because use-after-free might occur.
(More specifically, free() is always unsafe in every low level lang, unless you have some static checking like Rust's borrow checker or ZZ and ATS compile-time proofs, which I think nelua doesn't have.)
You are correct: when not using the GC and doing manual memory management, the code will be unsafe, similar to C, but the language generates some runtime checks (which can be disabled) to minimize undefined behavior. Nelua does not provide safety semantics like Rust because they increase the language's complexity a lot, and are thus out of the goals. Nelua is not another safe language, and the compiler will not stand in your way when doing unsafe things; the user can do unsafe things like in C.
Even after many years of not using terralang I still cannot forget what a good idea it is.
Nelua seems like a more pragmatic implementation of similar ideas, but it generates C code instead of embedding LLVM, and doesn't generate code at runtime. But still, things like exotypes should be possible.
It will be interesting to play with the compile-time Lua scripting. Also, as mentioned in the other comments, I'm not sure about the GC. But there seems to be a manual memory management option.
But still, looks great, kudos to the author, keep up the great work.
PS: If I had implemented it, I would deviate a bit from Lua and replace local with let. It's highly subjective, but I think it would make code "prettier" (whatever that means)
> PS: If I had implemented it, I would deviate a bit from Lua and replace local with let. It's highly subjective, but I think it would make code "prettier" (whatever that means)
The idea behind it is to have the lowest possible syntax barrier for Lua developers, so they can migrate from Lua to Nelua without breaking a sweat.
You could change the language grammar through the preprocessor to accept "let" as an alias for "local". But I recommend people get used to Lua syntax because the metaprogramming will be done in Lua anyway, thus both programming and metaprogramming contexts have similar syntax.
As long as we're bikeshedding, I've often thought that Lua should warm up to using `my` as a keyword in place of `local`. `let` doesn't convey that it's a scope keyword (LISP heritage notwithstanding), while `local` is long and `loc` is an eyesore.
Eyesore is a precise word for the `local` keyword. From what I see, variable names in scripts are usually short, and the `local` keyword seems to draw more attention than the variable names themselves.
Of course this can be alleviated with syntax highlighting that slightly mutes the `local` keyword. And people who use Lua every day probably learn to ignore the keyword automatically.
This looks really fucking cool. I've long said that Lua is a neat language, but hobbled by a few warts/footguns. I think this project corrects many of those issues. Does this project provide a package manager with automated dependency resolution? That was an issue in Lua, IIRC.
About a package manager: I don't think it has one yet (at least not officially, that I know of), but feel free to join the Discord [1] and ask the core developers about it.
Off by one errors are also the easiest thing in the world to test. I never think much about whether I should do `n` or `n - 1`, I just write a test and then fix it if needed.
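For instance, a trivial test settles the `n` vs `n - 1` question immediately (a hypothetical helper, just for illustration):

```python
def last_chunk(data, size):
    """Return the final `size` elements. Should the start index be
    len(data) - size, or len(data) - size - 1? A test answers instantly."""
    return data[len(data) - size:]

# Write the test first; if it fails, flip the n / n - 1 and move on.
assert last_chunk([1, 2, 3, 4, 5], 2) == [4, 5]
assert last_chunk("abcdef", 3) == "def"
```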
Very unfortunate choice of terminology in this context. Most of the math textbooks I used in college used "natural numbers" to mean the positive integers. Others used it to mean the nonnegative integers. So the term "natural numbers" presents the same uncertainty to a reader as indexing in an unknown programming language does :-)
When you casually program in a language, being consistent with what people are familiar with is important to avoid introducing subtle bugs.
A lot of times, languages like Lua get introduced by enthusiasts in a way that others have to touch when things break but not enough to master. Think of shops that have those one off Perl scripts and the challenge with touching them.
This seems to get posted a lot in 0-vs-1 threads as a way to end the conversation with an appeal to authority, but for argument's sake, Roberto Ierusalimschy (of Lua) has explained his design choice:
> When we started Lua, the world was different, not everything was C-like. Java and JavaScript did not exist, Python was in an infancy and had a lower than 1.0 version. So there was not this thing when all the languages are supposed to be C-like. C was just one of many syntaxes around.
> And the arrays were exactly the same. It’s very funny that most people don’t realize that. There are good things about zero-based arrays as well as one-based arrays.
> The fact is that most popular languages today are zero-based because of C. They were kind of inspired by C. And the funny thing is that C doesn’t have indexing. So you can’t say that C indexes arrays from zero, because there is no indexing operation. C has pointer arithmetic, so zero in C is not an index, it’s an offset. And as an offset, it must be a zero — not because it has better mathematical properties or because it’s more natural, whatever.
> And all those languages that copied C, they do have indexes and don’t have pointer arithmetic. Java, JavaScript, etc., etc. — none of them have pointer arithmetic. So they just copied the zero, but it’s a completely different operation. They put zero for no reason at all — it’s like a cargo cult.
I have to confess that this EWD on 0/1-indexing is one of my pet peeves. Whenever the topic comes up, someone will invariably link to it uncritically, even though Dijkstra's argument is biased in favor of 0-based indexing.
For example, he talks about how (a < i ≤ b) is unnatural, but then fails to mention that this happens with 0-based indexing when we have to iterate backwards.
I guess it's because there isn't a huge difference but 0 definitely is slightly more elegant than 1. And when you're used to programming with the elegant system it's really jarring to switch to an unusual and slightly worse system.
Elegance is a term developers love to throw around to get out of having to define why something is or is not good. Your answer is nothing more than "I like it more".
Nonsense. Are you saying the concept of elegance doesn't exist? I didn't explain why 0 is more elegant but that doesn't mean I'm not saying that it is more elegant.
One example is when you're processing arrays or matrices in blocks. Using right-open intervals (also the most elegant option) you end up with intervals like [0, 4) [4, 8) [8, 12). Pretty trivial to calculate those bounds! It's just 4N.
With 1-based you end up adding and subtracting 1 all over the place.
Another example is indexing into flattened arrays. To get to the relevant element it's just (row_len * col + row). With 1-based indices it's (row_len * (col - 1) + row).
Basically I've done a lot of 1-based programming (in Matlab), and you definitely end up adding and subtracting 1 in places where you wouldn't need to with 0-based programming.
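The arithmetic in the two examples above can be sketched concretely; this follows Matlab's column-major layout from the comment, with made-up names like `n_rows`:

```python
def block_bounds_0(n, block=4):
    """Right-open [start, end) bounds of the n-th block, 0-based:
    the bounds are just block * n, no +/-1 adjustments."""
    return (block * n, block * (n + 1))

def flat_index_0(row, col, n_rows):
    """0-based, column-major flattened-matrix lookup (Matlab's layout)."""
    return n_rows * col + row

def flat_index_1(row, col, n_rows):
    """The same lookup with 1-based rows/cols: the -1 shuffling appears."""
    return n_rows * (col - 1) + row

# The first three blocks come out as [0, 4), [4, 8), [8, 12):
assert [block_bounds_0(n) for n in range(3)] == [(0, 4), (4, 8), (8, 12)]
# Both conventions address the same element of the flattened storage:
assert flat_index_1(2, 3, n_rows=3) == flat_index_0(1, 2, n_rows=3) + 1
```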
> Are you saying the concept of elegance doesn't exist?
No...You can feel like something is elegant. Others may not feel that way. Saying "This is elegant" doesn't explain why, and since it's totally subjective it's just kind of a meaningless thing to say. Similar to measuring something by how "interesting" it is.
The rest of your post just verifies that your original post should've been "I just like it more."
> > I guess it's because there isn't a huge difference but 0 definitely is slightly more elegant than 1.
I don't see any claim by me that I was explaining why it is more elegant.
> > "0 definitely is slightly more elegant than 1."
> Totally subjective. You like it more.
Would you say the same about beauty?
> "Margot Robbie is more beautiful than Susan Boyle"
> Totally subjective. You like her more.
Let me clear things up for you:
1. Just because you don't explain something (why 0-based is elegant) doesn't make it any less true or more subjective.
2. Lots of things like beauty and elegance are PARTIALLY subjective. There is room for debate. But that doesn't mean that everyone completely disagrees with each other and there's no shared consensus.
3. Elegance doesn't just mean "you like it more". Beauty and elegance are reasons to like things more.
If by "clear things up", you mean behave with hostility towards someone pointing out your bullshit, sure, you sure did.
> I don't see any claim by me that I was explaining why it is more elegant.
Yes, from _my_ post.
> Elegance is a term developers love to throw around to get out of having to define why something is or is not good. Your answer is nothing more than "I like it more".
> Would you say the same about beauty?
Philosophical debates are cute and this is whataboutism. You're using a subjective term to compare two concepts as though that term is shared universally by others. It's not. To quote me: You like it more.
> 1. Just because you don't explain something (why 0-based is elegant) doesn't make it any less true or more subjective.
Not don't, can't. Or more directly, it's subjective. You like it more. You saying why you think something is more elegant doesn't make any other person agree with you. Because it's subjective.
> 2. Lots of things like beauty and elegance are PARTIALLY subjective. There is room for debate. But that doesn't mean that everyone completely disagrees with each other and there's no shared consensus.
Cute whataboutism.
> 3. Elegance doesn't just mean "you like it more". Beauty and elegance are reasons to like things more.
Subjective reasons to like something more.
> This is a weird debate.
It would be a debate if it were possible to have debates about things that were subjective, but it's not, which is my whole point. You like it more.
It can definitely create problems when translating an algorithm from one to the other. I know it was a frustration when I translated algorithms from Matlab to C++.
Something I'd like to see more of are GC and memory allocation guarantees, not just an option to disable it.
Most of the time GC is fine as long as it's low enough latency. Where it matters, you don't really want any memory management at all, which isn't workable, since the memory needs to be managed somewhere in the code. For that reason the pattern isn't to switch to a non-GC'd language because it lacks GC per se, but because you can guarantee all the memory is allocated up front and that no freeing or allocation happens in the tight loops.
That's not actually sufficient though. Even in production code bases accidental allocation can slip through. What you'd really like to see is a `no-alloc:` block which forbids operations that may allocate or free memory - GC'd or otherwise - and force the programmer to allocate before entering the block and free outside it.
Nim's new garbage collector was designed with embedded systems in mind [1]. It uses non-atomic/non-locking reference counting combined with C++11 move semantics. The use of move semantics enables both performance increases and a way to ensure memory is only "owned" by one thread (which is what allows non-atomic reference counting). There's optional cycle collection support, but I prefer to keep embedded code acyclic.
On your second point, you can use Nim's effect system to ensure no allocations occur in a given function or its callees. Currently no one has done this, and it might be tricky with the reference counting, but not too difficult either.
I used Nim with ARC for a latency sensitive embedded project. To avoid allocation on the tight reading loop I pre-allocated the values needed, or made them global variables. A few extra integer ops didn't matter, but allocation is sloooow. However, I'd eventually like to add allocation tracking to the effect system. Rather, make the existing effects more granular for this purpose.
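The pre-allocation pattern described above looks roughly like this (a sketch in Python for illustration; the buffer size and names are invented):

```python
# Everything the hot loop needs is allocated once, up front, so the
# loop itself only does reads, arithmetic, and in-place writes.
BUF_SIZE = 1024
results = [0] * BUF_SIZE      # pre-sized output storage, never grown

def hot_loop(samples):
    """Process samples using the pre-allocated storage: no list growth
    or buffer allocation happens inside the loop, only in-place writes."""
    n = min(len(samples), BUF_SIZE)
    for i in range(n):
        results[i] = samples[i] * 2   # write in place, no append()
    return n
```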
That's not really what I meant. The example is showing the problem, the transition to manual memory management just makes it a bit easier to write a program without memory operations in critical sections.
What I'm talking about is a hypothetical language that makes memory operations in critical sections forbidden and considered a compilation error.
This is possible in Nim using the "soft real-time" GC with "GC_step Mode" see https://nim-lang.org/docs/gc.html. You can even tell the GC how much time it can pause.
> In fact, I see many similarities between Nelua and Nim.
I am the Nelua author, and I've used Nim for a reasonable amount of time before creating Nelua, thus Nim served as one of the inspirations for the project and they share similarities. But while Nim resembles Python, Nelua resembles Lua. Also when comparing both, Nelua tries to be more minimal and generate more readable and compact C code.
I've thought about something like this...that if a compiler could tell that code uses only stack-based allocation, it could turn off GC. Of course, that's very limiting, which is why we need more complicated things like Rust's type system to avoid GC.
Java does that, both the HotSpot JIT and the AOT compiler (Substrate VM) I believe will do it. Now it can't always tell that data is local to the stack, but when it can, it won't allocate it on the Heap and will automatically switch to stack allocation (and no GC for it).
It's not production-ready and isn't a GC'd language, but Zig's approach might be to your taste:
> Zig programmers must manage their own memory, and must handle memory allocation failure.
> This is true of the Zig Standard Library as well. Any functions that need to allocate memory accept an allocator parameter. As a result, the Zig Standard Library can be used even for the freestanding target.
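Zig's allocator-as-parameter convention can be mimicked as a sketch in any language; here is a toy Python version (the names and API are hypothetical, not Zig's actual interface):

```python
class CountingAllocator:
    """Toy allocator passed explicitly to any code that needs memory,
    mirroring Zig's convention of threading an allocator parameter."""
    def __init__(self, budget):
        self.budget = budget      # bytes this allocator may still hand out

    def alloc(self, n):
        if n > self.budget:
            return None           # allocation failure the caller must handle
        self.budget -= n
        return bytearray(n)

def read_all(chunks, allocator):
    """Library code never allocates on its own: it asks the caller's
    allocator and reports failure instead of aborting."""
    buf = allocator.alloc(sum(len(c) for c in chunks))
    if buf is None:
        return None               # propagate allocation failure
    pos = 0
    for c in chunks:
        buf[pos:pos + len(c)] = c
        pos += len(c)
    return bytes(buf)
```

Because the allocator is an explicit argument, the same `read_all` works with a fixed pre-allocated pool, an arena, or anything else the caller supplies.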
While I don't begrudge anyone from creating a new language, I'm trying to figure out what this brings to the table that languages like Nim, Zig, Ecl, Crystal, V, Forth, Gambit, etc... lack. The only thing I can really find is "syntax". Lua isn't really my cup of tea, but to each their own. Regardless, looks like a fun project.
Well, you can still use Lua 5.4 from inside Nelua, since it's integrated as part of the backend mechanism for the metaprogramming (and more), so I don't think it breaks portability... unless I did not fully understand you correctly.
Can you please clarify your concern a bit more?
It seems like global is a requirement for newindexing on _G (or if _G is even a thing in Nelua) from the documentation unless I'm reading it incorrectly.
You are correct, but this is an enhancement to get better compile-time type checking and runtime efficiency. In the future, a _G table with similar Lua semantics is being considered, for better compatibility with Lua code. But using the "global" keyword will always be recommended; that way the compiler can do better type checking and generate more efficient code.
Arrays are 0 based, when tables are implemented they will be indexed from 1. I believe one of Nelua's design goals is that unaltered Lua code will be valid Nelua code.
>Arrays are 0 based, when tables are implemented they will be indexed from 1
Really?? I can see all the "off by one" bugs already!
I wonder if a tool to convert Lua to Nelua wouldn't be a better idea than making the language incoherent.
Or maybe always make the programmer select the offset like Ada does.
Actually, Nelua has been tested to work with Arduino, AVR, RISC-V and some other embedded devices. You can write freestanding code with the language if you follow some rules (avoid APIs that use libc, and use a custom allocator).
>Although copying Lua syntax and semantics with minor changes is a goal of Nelua, not all Lua features are implemented yet. Most of the dynamic parts, such as tables (...)
Ah, I guessed so. The concept of Lua tables probably doesn't map to efficient machine code. A real shame, because that's what I consider the "killer feature" of Lua as a language.
Is there a way, when using these kinds of new languages on a given platform (let's say iOS), to use a modern set of libraries to perform common tasks (such as HTTP networking, JSON parsing, websockets, basic multimedia features, etc.)?
Or would I be doomed to rebuild it all myself from scraps and random C libs I'd have to gather myself?
In Nelua you can create new specialized types using compile time parameters, they are called "generics", you can use this for example to create a specialized vector type.
> and typeclasses there?
Yes, but with some metaprogramming. In Nelua you can create a "concept", that is just a compile time function called to check whether a type matches the concept.
A good macro system is a must, indeed; that's very nice. Though a macro system alone is not enough.
In order to be useful, a language should support a combo of HKTs, typeclasses (better implemented with implicits) _and_ macros with implicit materialization. Scala has that (and I'm using Scala), but it has some quirks and problems, and I would be really happy to see something with these features but with a fast compiler and separate from the JVM.
HKTs are something like generics for generics, so no.
E.g. a method takes some input and returns some output that is a list that can contain integers or strings, so you use generics instead of the concrete integer/string type.
But what if your input type is not a list of CONCRETE_TYPE but instead a WRAPPER_TYPE of CONCRETE_TYPE? E.g. say you have a method that can sort not only lists but also sets or trees, where each of these collections can contain an arbitrary (but comparable) concrete type like integer or string.
This WRAPPER_TYPE which can be parametrized with one or more other types (either concrete types or other wrapper types of arbitrary shapes) encapsulates the concept of a higher kinded type (where a simple type like integer or string would be a low-kinded type in this lingo).
In that case, basic generics are unfortunately not sufficient to express this.
In practice this is used mostly in libraries by library authors to provide users flexibility. It is especially helpful to help with compatibility between libraries that don't know of each other in advance. But it is also very useful for application developers to model different implementations of services/abstractions that have different properties.
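To make the limitation concrete: plain generics let you write sort for one container, but not abstract over the container itself. Python cannot express real HKTs either, so this sketch only shows the shape of the problem, passing the "rebuild the container" step explicitly where an HKT-capable language would let the type system carry it:

```python
def sorted_into(container, rebuild):
    """Sort any container's elements and pour them back into a container
    of the caller's kind. `rebuild` is passed by hand here; an HKT F[_]
    would let the type system abstract over the container instead."""
    return rebuild(sorted(container))

# One function covers lists, tuples, sets, etc., but only because each
# call site supplies the "how to rebuild F" knowledge that HKTs would
# carry in the type: roughly `def sorted_into[F[_], A](c: F[A]) -> F[A]`.
assert sorted_into([3, 1, 2], list) == [1, 2, 3]
assert sorted_into((2, 1, 3), tuple) == (1, 2, 3)
```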
Yep, thanks. Though, unfortunately, that's very far from what I can have in Scala. Maybe I'm wrong but seems like that I can't express even basic primitives like functor or monoid using this language. That's sad.
You want this new language to implement a particular set of features in a similar way to a specific, named feature from Scala which they themselves are moving away from in 3.0?
> which they themselves are moving away from in 3.0?
Nope, they are not.
> specific, named feature from Scala
HKTs and typeclasses are not specific to Scala. Implicits and incoherence are somewhat Scala-specific, though they seem a lot, lot better than a coherent implementation, and that's why they stay in 3.0.
> You want this new language to implement a particular set of features
Yes, in 2021 it should be obvious that HKTs and typeclasses are a must for engineers' productivity. Also, I can't comprehend how virtually everyone ignores the insanely productive concept of implicit materializers. It's very unfortunate that language designers prefer to stick with old dead ideas and don't want to learn from Scala or Haskell.
Both Lua and Ruby are optimized for aesthetically pleasing syntax that is easy to parse for humans, even for people new to programming. I think "end" works better for that goal than curly braces which most people don't even know how to type and that are rarely used in English outside programming.
Personally I vastly prefer keywords to special characters. My brain has more trouble parsing symbols than words. I have real trouble reading mathematical notation, but if I see an algorithm written out with words I am able to grok it quite easily. I think lots of people are the other way round, not sure. Might be because I was a heavy reader in my youth.
That said I switch between C-like and Lua/Ruby style syntaxes back and forth all the time and it is really not an issue.
Terra was an inspiration to create Nelua in its metaprogramming aspects. Terra's famous Brainfuck example (on the front page of Terra) is available in Nelua's examples folder for comparison of meta programming.
Teal adds type annotations and type checking to Lua and transpiles to Lua, while Nelua is a new systems programming language with optional type annotations that compiles to C.
@ubertaco also made a good comparison in his comment with Teal.
Interesting. If you want to write a whole application in a Lua-like language, this looks very nice. However, one would need some framework to work with it to make, say, a simple game.
nice if you are brave enough to use a language which hasn't been released yet and installs from `master`.
God, does no one have respect for things in development anymore? This place shits on anything that doesn’t have documentation and an ecosystem with the equivalent of 10 years of free labor out of the gate and it pisses me off.
It’s a total turn off to share something with a development community made up of mostly people who don’t actually make things themselves or people who think they know better than people with actual experience in a given discipline.
It’s comments like this and the one below calling languages shit that make this site a shithole to read sometimes.
I found Lua a rather quirky language, a bit like JS or PHP. None of these have, to my taste (since we're talking about flavour), a very nice syntax, nor do they demonstrate to me a clear vision in their design. To me, Lua's flavour tastes like shit.
Lua certainly has quirks (as all languages do), but I've not heard people complaining about it lacking vision or consistency. In fact, you'll hear a lot of people mention Lua's very _deliberate_ design choices. Changes happen slowly, and it's not afraid to break backwards compatibility for the betterment of the language. That's in stark contrast to JS and PHP which add whatever feature is popular at the time and can't purge or change APIs for fear of "break the web".
E.g., I felt the same "taste" about your comment. It doesn't add anything objective to the discussion. And it sounds a lot like what my 4-year-old son says about Brussels sprouts.