Hacker News | Raphael_Amiard's comments

> Today, the criticism about complexity seems naive, because many later languages have become much more complex than Ada

I don’t think you really understand what you’re saying here. I have worked on an Ada compiler for the better part of a decade. It’s one of the most complex languages there is, up there with C++ and C#, and probably Rust.


Mind you, that suggests that the sentence is at least half-true even if "much more complex" is a big overstatement, since Rust, "modern" C++ and the later evolutions of C# are all relatively recent. (What would have compared to Ada in complexity back in the day? Common Lisp, Algol 68?)

As a matter of general interest, what features or elements of Ada make it particularly hard to compile, or compile well? (And are there parts which look like they might be difficult to manage but aren't?)


You're right on the first part. Ada 83 is less complex than modern C++ or Rust. However, Ada kept evolving, and a lot of complexity was added in later revisions, such as Ada 95, which added a kind of bastardized and very complex Java-style object model layer.

Features of Ada that are hard to compile are common throughout the language. It is generally a language that is hard to compile to efficient code, because its rules were conceived around an abstract notion of what safety is. In general, Ada is an extremely over-specified language, which leaves very little room for interpretation. You can check the Ada reference manual if you want; it is a huge, 1000-page book (http://www.ada-auth.org/arm.html).

* Array types are very powerful and very complicated.

* Tasking & threading are specified in the language, which seems good on paper, but the abstractions are not very efficient and are of tremendous complexity to implement.

* Ada's generic model is very hard to compile efficiently. It was designed so that it would be possible to compile generics down both to a "shared implementation" approach and to a monomorphized approach. Mistakes were made along the way in the specification of generics that made compiling them to shared generics almost impossible, which is why some compiler vendors didn't support some features of the language at all.

* Ada's scoping & module system is of immense complexity.

* The type system is very vast. Ada's name & type resolution algorithm is extremely complex to implement: functions can be overloaded on both parameters & return types, and the enclosing context determines which overloads are used in the end. On top of that you have preference rules for some functions & types, subtyping, derived types, etc.

This is just what comes to mind on a late Friday evening :) I would say that the language is so complex that writing a new compiler is one of those herculean efforts that reaches similar heights to writing a new C++ compiler. That's just a few examples.


And despite all that complexity, you make it work very well (I've used GNAT since about 2002).

What do you mean by Ada's complexity? E.g. C++ is really complex because it has a lot of features which interoperate badly with each other. Is this true for the Ada language/compiler? Or do you mean the sheer complexity of the ideas included in Ada, the way a proof of the Poincaré conjecture is complex for an unprepared person?

"Is this true for the Ada lang/compiler"

Yes, Ada has a lot of the same kind of fractal complexity that C++ has, which derives from unforeseen interactions of some features with others.

On top of that, as I said in another comment, features are extremely over-specified. The standard specifies what has to be done in every edge case, often with a specification that is not very practical to implement efficiently.


I imagine Swift is also a very difficult language to compile.

and Julia

There is none as far as affine types go, even if there is a parallel to be made with limited types, but they don't serve the same purpose.

The way Ada generally solves the same problem is by allowing much more in terms of what you can give a stack lifetime to, return from a function, and pass as parameters to functions.

It also has the regular "smart pointer" mechanisms that C++ and Rust have, also with relatively crappy ergonomics.


The very obvious flaw with that argument is that flying is defined by, you know, moving in the air, whereas intelligence tends to be defined with the baseline of human intelligence. You can invent a new meaning, but it seems kind of dishonest


I love systems programming languages and have worked on the Ada language for a long time. I find Zig to be incredibly underwhelming. Absolutely nothing about it is new or novel; the closest is comptime, which is not actually new.

Also highly subjective but the syntax hurts my eyes.

So I’m kind of interested in an answer to the question this article fails to answer: why do you guys find Zig so cool?


It’s hard to do something that is truly novel these days. Though I’d argue that Zig's upcoming approach to async IO is indeed novel on its own. I haven’t seen anything like it in an imperative language.

What’s important is the integration of various ideas, and the nuances of their implementation. Walter Bright brings up D comptime in every Zig post. I’ve used D. Yet I find Zig's comptime to be more useful and innovative in its implementation details. It’s conceptually simpler yet - to me - better.

You mention Ada. I’ve only dabbled with it, so correct me if I’m wrong, but it doesn’t have anything as powerful as Zig's comptime? I think people get excited about not just the ideas themselves, but the combination and integration of the ideas.

In the end I think it’s also subjective. A lot of people like the syntax and combination of features that Zig provides. I can’t point to one singular thing that makes me excited about Zig


Scala did async io in a very similar way over a decade ago except it was far more ergonomic, in my opinion, because the IO object was implicit. I am not convinced by either scala or zig that it is the best approach.


As someone who still thinks one should write C (so as a completely uncool person), what I like about Zig is that it is a no-nonsense language that just makes everything work as it is supposed to, without unnecessary complications. D is similar, except that it fell into the trap of adding too many features.

So, no, I do not really see anything fundamentally new either. But to me this is the appealing part. Syntax is ok (at least compared to Rust or C++).

Having said this, I am still skeptical about comptime for various reasons.


We've recently adopted Zig for a few systems at our company, but I think maybe "cool" or "new" is the wrong metric?

I view Zig as a better C, though that might be subjective.


It gets hyped by a few SV influencers.


Came here to say that. It’s important to remember how biased Hacker News is in that regard. I’ve just come out of ten years in the safety-critical market, and I can assure you that our clients are still a long way from being able to use those. I myself work in low-level/runtime/compilers, and the output from AIs is often too erratic to be useful.


>our clients are still a long way from being able to use those

So it's simply a matter of time

>often too erratic to be useful

So sometimes it is useful.


Too erratic to be net useful.


Even for code reviews/test generation/documentation search?


Documentation search I might agree, but that wasn’t really the context, I think. Code reviews is hit and miss, but maybe doesn’t hurt too much. They aren’t better at writing good tests than at writing good code in the first place.


> wasn't the context

yeah, I'm just curious about the vibe in general

> good tests

are there any downsides to adding "bad tests" though? as long as you keep generated tests separate, it's basically free regression testing, and if something meaningfully breaks on a refactor, you can promote it to not-actually-slop


I would say that the average Hacker News user is negatively biased against LLMs and does not use coding agents to their benefit. At least what I can tell from the highly upvoted articles and comments.


SPARK allows you to formally prove that your code is correct according to a given specification. It can thus provide much stronger guarantees than what Rust would be able to provide.

Similar technology exists for Rust, but it is much less advanced than SPARK is (https://github.com/xldenis/creusot)


This is about firmware, nothing to do with the performance of GPUs...


Firmware and drivers have a massive impact on the performance of GPUs. It's not just hardware.


The article states they had no performance hit from switching to SPARK.


It’s rare to see a thread where everyone is simultaneously correct but talking past each other.

None of you are mistaken.


I noticed this happening and just stopped replying :p



Yes, and security has a large performance impact.

Just look at the performance costs of bounds-checking array access in C++ code.

Or, more macro, the performance impacts of AV tools or Windows Defender on your system.


> Yes, and security has a large performance impact.

Not necessarily. The linked blog talks about SPARK, which is about running your code through theorem provers to formally verify, mathematically, that your code does the correct thing _in all instances_.

Once you have passed this level of verification, you can disable assertions and checks in the release version of the application (while of course having the option of keeping them enabled in development builds).


>Just look at the performance costs of bounds-checking array access in C++ code.

If your compiler can prove you don't need bounds checking, it will remove the check and the performance will be the same. Hence, if your program has been proven to have no runtime errors, you don't need them.


> If your compiler can prove you dont need bounds-checking it will remove the check and the performance would be the same

and in practice that is a very big "if"


Wouldn’t the performance costs of bounds checking on arrays be the same if the computer was doing it or if your code was doing it?

By that logic C/C++ doing no bounds checking speeds your code up?


> Wouldn’t the performance costs of bounds checking on arrays be the same if the computer was doing it or if your code was doing it?

It depends. The C programmer can choose to do the bounds checking in a for loop by just checking once before the loop begins, or once per iteration even if an array is accessed multiple times in the loop, or the safe language might have more overhead than a simple if statement in the C code. This can, of course, go the opposite direction (the safe language has verified the loop bounds, but the C programmer is checking before every array access). It's a battle between the C programmer and the designer and/or implementer of the safe language.

One of the reasons I like C is it gives you more control. This can be a good or a bad thing. This can lead to some really performant code you couldn't do in most languages or it can lead to some gnarly security problems. Maybe both in the same spot of code.

I use C to write mostly pet projects at home. I use it at work without having a choice in the matter.


Yes, which is why compiling with different build settings will turn bounds checking on or off in C++.


Well, yes, it does. Whether or not that’s a good tradeoff is a different question.


Not defending Russia in general, or the Russian government in particular. As a French person whose national media are completely taken over by multinationals (source: https://www.monde-diplomatique.fr/cartes/PPA), not only was the ban a bit laughable in terms of banning propaganda, but RT France's perspective and journalism were also refreshing. It was presenting a skewed version of the world, but if you believe as I do that we won't attain information via objectivity (which is a nebulous concept anyway) but via plurality, then RT France's disappearance is a net loss for the French media landscape, and I'm a bit alarmed at the black-and-whiteness of the views I see here and in other places.


>if you believe as I do that we won't attain information via objectivity, which is a nebulous concept anyway, but via plurality, RT France's disappearance is a net loss for the French media landscape

I urge you to reconsider this philosophical standpoint, as it fails under adversarial conditions. It is possible for me to distort your view of the world - make it less accurate - even while telling you only true things. Selective truth can convey negative information. I merely need to have an idea of your pre-existing beliefs, and only correct some of them. If your terminal value is letting people be better informed, as defined by allowing them to make better predictions about the world, then permitting state propaganda outlets to tinker with their minds is a net loss.

Note that the Russian state does not share your viewpoint about free information flow. Can you really consider RT to be a good-faith participant in the public dialogue? If you're looking for a "fair" principle behind banning RT, I would argue that that is a good one - if you censor media, expect your media to be censored likewise.


Adding 5 propaganda sources to 1 actual news source doesn't improve anything; it dilutes it. Plurality in that case only makes things worse. RT and Sputnik are garbage; they weren't great before, but now they're absolute garbage, rivaling "Infowars" in the USA. However, I think banning RT is bad as well, and it encourages politicians to ban other news sources that are reliable but opinionated.


To play devil’s advocate: if you could share the name of any actual news source you deem credible, and then give me a minimum number of objectively verifiable examples of incorrect reporting or misinformation you would need to see from them in order to discredit them as an actual news source, I would gladly take up the challenge. Genuine offer.

My point is not to claim that disreputable sources should be treated as legitimate news. Instead it is to point out that some of the most credible news media have been caught spreading fake news to manipulate public opinion when it was politically convenient.

Historically this has happened most often in the run up to or the early stages of war.

“In a time of war the first casualty is the truth” - some guy on the Internet


> For verification in general, is the expense of verification in this case because of the model needed to verify Ada? For instance, perhaps a language that makes different choices might have a model checker that could scale better.

I don't think so. The SPARK subset has been chosen to aid verification. The problem of proof is inherently computationally hard, and most of the gains you can expect will come from advances in solver technology, both algorithmic and in terms of scaling to multiple cores or, eventually, GPUs.

Just my opinion :) But I work at AdaCore (not on SPARK) so have some familiarity with the subject.


There are also toolchains shipped as part of Alire since 1.1:

https://github.com/alire-project/alire/blob/release/1.1/doc/...

So you have a workflow similar to cargo in Rust:

* Install the package manager

* Let the package manager install toolchains

* ???

* Profit

