
Yeah, if you see something labelling itself "MIL-SPEC", that's grade-A snake oil bullshit.

That said, military-spec stuff is generally a good sign that something is of higher quality than random off-the-shelf garbage, but only if you know there's a specific spec you want it to meet. And most of the time you aren't even looking for a MIL-STD (standard) but rather a MIL-PRF (performance specification).

So if something is just "MIL-SPEC", run. But if you see, say, a spool of fiber that is "MIL-STD-1678 compliant" and, more importantly, "MIL-PRF-49291 compliant" and "MIL-PRF-85054 compliant", that's probably a really good sign that it'll do its job: the former PRF documents performance requirements for the fiber itself, and the latter the cabling/sheath's corrosion and deterioration resistance.

It's the military, so odds are it'll cost extra for that and it'll still kinda suck, but it'll suck in exactly the way they promised.



Anything with "Marine" in its title is usually 5x more expensive, but worth it.

Nothing sucks more than having the engine crap out 150 km offshore because your fuel injection system corroded.


Hell, anything even close to salt water is apt to get eaten. What's funny is you'll see people say they want to retire and get a beach house. No. You. Don't. Blowing sand is hard on stuff, getting into gears and moving parts. But the salt, the salt is like some alien monster that just dissolves things flatlanders would never expect. Get the smallest amount of saltwater flooding in a closet with equipment and things start to corrode away like it's an alien acid world.

Fun fact: the "traditional" way of making it was extracting piperine from black pepper and reacting that with nitric acid. Nowadays it's made in other, more industrially scalable ways.

https://en.wikipedia.org/wiki/Piperidine#List_of_piperidine_...

But yes, the same base precursors (and their siblings) are used to manufacture ADHD meds (ritalin/concerta), antidepressants (paxil), insect repellents (picaridin/bayrepel), hair loss medications (rogaine), allergy meds (claritin), anti-psychotics (haldol), anti-diarrhea meds (imodium), and many others. And also PCP.

So it's non-trivial to prevent. The core of the issue is that the one-pot Gupta method came about in the 2000s and made it extremely easy to manufacture fentanyl from these basic building blocks that so much of the pharma industry relies on. It not only made ingredients easier to source but also took out most of the steps and made the process easy as hell.


That perspective is always such a fundamental misunderstanding of evolution and natural selection. (Yes, I know it's for a comedy bit, but I see this way too often.)

Darwin's theory of evolution and natural selection was never really about directly competing against other members of one's own species. There was certainly a component of that, but natural selection is predominantly about competing against nature itself.

It's all about developing traits that help a given individual or community/ecosystem survive and thrive. And unsurprisingly, in most ecosystems the main pressure isn't competition from peers but rather weather, environmental conditions, and the food chain/predators. So what you see at basically every single level (from plants and microbes, up through insects, birds, mammals, and all stages of human history) is a constant push for mutualistic behaviors.

It's why birds warn their entire ecosystem (including other bird species and non-bird species) about predators and danger. Or, as another bird example, migratory birds will cooperate and share food even when migrating with birds of different species. Anything that can bolster the ability of the community as a whole (and often the entire ecosystem) to survive and thrive ends up driving evolution far more than advantages for a single individual. Doubly so when it comes to punishing adversarial advantages for individuals that end up disproportionately harming the community as a whole.


That's only part of the truth. Animals do cooperate within and even across species, but they also compete, even within a species - wolves, ants, and chimpanzees are all territorial (as are many others), and the latter two are known to engage in war within their own species: https://www.livescience.com/animals/land-mammals/a-decade-lo...

And the competition against nature itself that you mention is often determined by the territory a group is able to claim. Some places get drought, others freeze, and in others food is plentiful. Nature may not be a free-for-all deathmatch, but it's not a pacifist co-op either. At least, most species don't behave that way.


Oh certainly. But that's the thing. Even with species being territorial, that serves a broader purpose in the ecosystem. Territoriality for predators is important to prevent concentration of predators, overpredation, and then depletion of prey species (which has many downstream effects).

And because of that, territoriality tends to be fairly low in most species until the food supply becomes constrained. And even then it's a gradient where hostilities generally only escalate out of desperation rather than innate competition. i.e. competition between individuals or communities tends to occur mainly when they fail to compete against the environment and run out of other options.

But really my point was just about the general sentiment that it's "against evolution" or "against natural selection" to help the weak and that doing so is something that humans do out of a unique sense of love or kindness or whatever.


It's more or less the same thing. CUDA Tile is the name of the IR; cuTile is the name of the high-level DSLs.


CUDA Tile is an open-source MLIR dialect, so it wouldn't take much to write MLIR transforms mapping Tile IR to TOSA, or to the gpu + vector dialects plus some amdgpu or other specialty dialects.

The Tile dialect is pretty much independent of the Nvidia ecosystem, so all it takes is one good set of MLIR transform passes to move anything on the CUDA stack that compiles to Tile out of the Nvidia ecosystem prison.

So if anything, this is actually a massive opportunity to escape vendor lock-in if it catches on in the CUDA ecosystem.
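
To make the shape of that concrete, here is a minimal sketch of such a conversion pass using upstream MLIR's dialect-conversion machinery. The `cutile` dialect class and `populateTileToGpuPatterns` helper are hypothetical placeholders, not real CUDA Tile APIs; only the gpu/vector dialects and the conversion driver are upstream MLIR.

    // Skeleton of a dialect-conversion pass: mark the target dialects
    // legal, the source dialect illegal, and let the driver rewrite
    // the module op by op.
    #include "mlir/Dialect/GPU/IR/GPUDialect.h"
    #include "mlir/Dialect/Vector/IR/VectorOps.h"
    #include "mlir/Pass/Pass.h"
    #include "mlir/Transforms/DialectConversion.h"

    using namespace mlir;

    namespace {
    struct TileToGpuPass
        : PassWrapper<TileToGpuPass, OperationPass<ModuleOp>> {
      void runOnOperation() override {
        ConversionTarget target(getContext());
        target.addLegalDialect<gpu::GPUDialect, vector::VectorDialect>();
        // target.addIllegalDialect<cutile::TileDialect>(); // hypothetical source dialect

        RewritePatternSet patterns(&getContext());
        // populateTileToGpuPatterns(patterns); // hypothetical: one pattern per tile op

        if (failed(applyPartialConversion(getOperation(), target,
                                          std::move(patterns))))
          signalPassFailure();
      }
    };
    } // namespace

The real work is in the rewrite patterns, but the scaffolding is this small, which is why one motivated team could plausibly maintain such an escape hatch.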


Yes, but why would you want to use this over the other MLIR dialects that are already cross platform?


That's not really the point. The point is that Nvidia is updating a lot of their higher level CUDA tooling to integrate with and compile to Tile IR. So this gives an escape hatch for tools built on top of CUDA to deploy outside the ecosystem.


Or it's Nvidia doing an Embrace Extend Extinguish on MLIR.


The Tile IR license means LLVM can just fork it and support it themselves as needed.


I'm super appreciative of posts like this. I love NixOS and Nix in general, but getting cross-compilation to "just work" became a lot less trivial in a post-flake world, and I feel like every time I have to do it I need to relearn the entire process.

Having a clear guide like this to keep as a handy reference is massively appreciated.


There are a lot of things that really jam up recycling.

One of them is plastic grocery bags. They just cause a lot of problems in the mechanisation of recycling so it's very non-trivial to work around them.

Oils and biowaste are of course another issue, especially for corrugated fiberboard (colloquially, cardboard) and the like.

And then it's also hard for machines or line workers to easily differentiate plastics without sufficient market or regulatory pressure. If consumers are already generally sorting by broad category, they take most of the legwork out (leaving the facility to check their work), and those same consumers apply market pressure on manufacturers to make it obvious how their product is expected to be recycled.

And of course there's also the general principle that everyone doing a little at a time to keep things organised from the start makes the entire process an order of magnitude easier and more efficient for everyone downstream.


I'd suppose this really depends on how you're developing your codebase, but most code should probably be using a trailing return type, or an auto (or template) return type with a concept/requires constraint on the return type.

For any seriously templated or metaprogrammed code nowadays, a concept/requires clause is going to make it a lot more obvious what your code is actually doing and give you actually useful errors in the event someone misuses your code.
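
A minimal sketch of that style (`twice` is just an invented example name): the requires-clause states the contract on the input, and the constrained trailing return type documents what kind of thing comes back.

    #include <concepts>

    // Contract on the input via requires, constrained deduced return type.
    template <typename T>
        requires std::integral<T>
    constexpr auto twice(T value) -> std::integral auto
    {
        return value * 2;
    }

    static_assert(twice(21) == 42);
    // twice(1.5) fails with a short concept error, not a wall of template text.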


I don't understand why anyone would use auto and a trailing return type for their functions. The syntax is annoying and breaks too much precedent.


Generally, you don't. I'm not sure why the parent suggested you should normally do this. However, there are occasional specific situations in which it's helpful, and that's when you use it.


It just is an improvement for a bunch of reasons.

1. Consistency across the board (places where it's required for metaprogramming, lambdas, etc). And as a nicety it forces function/method names to be aligned instead of having variable character counts for the return type before the names. IMHO it makes skimming code easier.

2. It's required in certain metaprogramming situations and makes other situations an order of magnitude nicer. Nowadays you can just say `auto foo()`, but if you can constrain the type, either in that trailing return or in a requires clause, it makes reading the code a lot easier.

3. The big one for everyday users is that a trailing return type is looked up with extra names in scope. For example, if the function is a member function/method, the class scope is automatically included, so you can just write `auto Foo::Bar() -> Baz {}` instead of `Foo::Baz Foo::Bar() {}` (sketched below).
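
Point 3 in compilable form (`Matrix`/`Shape` are invented names for illustration):

    struct Matrix {
        struct Shape { int rows, cols; };
        Shape shape() const;
    };

    // Leading return type: lookup happens before the Matrix:: scope opens,
    // so the nested name must be fully qualified:
    //   Matrix::Shape Matrix::shape() const { return {2, 3}; }

    // Trailing return type: the class scope is already in effect.
    auto Matrix::shape() const -> Shape { return {2, 3}; }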


1. You're simply not going to achieve consistency across the board, because even if you dictate this by fiat, your dependencies won't be like this. The issue of the function name being hard to spot is easier to fix with tooling (just tell your editor to color them or make them bold or something). OTOH, it's not so nice to be unable to tell at a glance if the function return type is deduced or not, or what it even is in the first place.

2. It's incredibly rare for it to be required. It's not like 10% of the time; it's more like < 0.1% of the time (the classic decltype case is sketched after this list). Just look at how many functions are in your code and how many of them actually can't be written without a trailing return type. You don't change habits to fit the tiny minority of your code.

3. This is probably the best reason to use it and the most subjective, but still not a particularly compelling argument for doing this everywhere, given how much it diverges from existing practice. And the downside is the scope also includes function parameters, which means people will refer to parameters in the return type much more than warranted, which is decidedly not always a good thing.
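
For reference, the classic case where the trailing form is genuinely required (before C++14 added fully deduced return types): the return type depends on the parameters, which are only in scope after the parameter list.

    // decltype(a + b) can only name a and b in trailing position.
    template <typename A, typename B>
    auto add(A a, B b) -> decltype(a + b)
    {
        return a + b;
    }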


1) Consistency. 2) The scoping is different, and that can make a significant difference.

I have been programming in C++ for 25 years, so I'm so used to the original syntax that I don't default to `auto ... ->`, but I will definitely use it when it helps simplify a complex signature.


Consistency (lambdas, etc.)


It makes function declarations/instantiations much more grep-able.


Most of the relevance of this is limited to C++ library authors doing metaprogramming.

Most of the "ugly" of these examples only really matters for library authors and even then most of the time you'd be hard pressed to put yourself in these situations. Otherwise it "just works".

Basically any adherence to a modicum of best practices avoids the bulk of the warts that come with type deduction or at worst reduces them to a compile error.


I see this argument often. It is valid right up until you get your first multipage error message from code that uses the STL (which is all C++ code, because it is impossible to use C++ without the standard library).


Those aren't the ugly part of C++, and to be entirely honest, reading those messages is not actually hard; it's just a lot of information.

Those errors are essentially the compiler telling you in order:

1. I tried to do this thing and I could not make it work.

2. Here's everything I tried in order and how each attempt failed.

If you read error messages from the top, they make way more sense, and if the top-line error alone doesn't tell you what's wrong, then reading through the list of resolution/type-substitution failures will be insightful. In most cases the first few things it attempted will give you a pretty good idea of what the compiler was trying to do and why it failed.

If the resolution failures are a particularly long list, just ctrl-f/grep to the thing you expected to resolve/type-substitute and the compiler will tell you exactly why the thing you wanted it to use didn't work.

They aren't perfect error messages, and the debugging experience of C++ metaprogramming leaves a lot to be desired, but it is an order of magnitude better than it has ever been, and I'd still take a C++ wall-o-error over the extremely over-reductive and limited errors that a lot of compilers in other languages emit (looking at you, Java).


Most of the time it's as you say!

But "I tried to do this thing" error is completely useless in helping to find the reason why the compiler didn't do the thing it was expected to do, but instead chose to ignore.

Say you hit ambiguous overload resolution and have no idea what actually caused it. Or, conversely, an implicit conversion gets hidden and it helpfully prints all 999 operator<< overloads. Or there's a bug in a consteval bool type predicate, the requires clause fails, and the compiler helpfully dumps a list of functions that take different arguments.

How do you debug consteval, if you cannot put printf in it?

Not everyone can use clang or even latest gcc in their project, or work in a familiar codebase.


> How do you debug consteval, if you cannot put printf in it?

This will hopefully be massively improved in the next std release, since compile-time reflection looks like it's finally shipping.

And of course there exist special constexpr debuggers, but more generally you really should be validating consteval code with good test suites ahead of time and developing your code such that useful information is exposed via a public (even if internal) interface.

And in the worst case, a throw functions as a decent consteval printf, even if the UX isn't great.
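
A minimal sketch of that throw trick (`checked_shift` is an invented example): a throw-expression is never a constant expression, so reaching one inside a consteval function is a compile error, and the diagnostic typically surfaces the failing call chain along with the message string.

    consteval int checked_shift(int value, int amount)
    {
        if (amount < 0 || amount >= 32)
            throw "checked_shift: shift amount out of range"; // compile-time "print"
        return value << amount;
    }

    constexpr int ok = checked_shift(1, 4);
    // constexpr int bad = checked_shift(1, 40); // error: diagnostic carries the message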

But there's more work on improving this coming. There's a technique I saw demoed at one of the conferences this year that lets you expose full traces in failure modes for both consteval and non-const async code, and it seems like it'll be a boon for debugging.

----

> Not everyone can use clang or even latest gcc in their project, or work in a familiar codebase.

Sure, not everyone is that lucky, but it's increasingly common. AFAIK TI moved entirely to clang-based toolchains a few years back, and Microchip has done the same. The old guard are finally picking up and moving to modular toolchain architectures to reduce maintenance load, and that comes with upstream std support.


To be fair to C++, the only languages with actually decently debuggable metaprograms are Lisp and Prolog.

Modern C++ in general is so hostile to debugging I think it's astounding people actually use it.


And I see this argument often. People make too much fuss about the massive error messages. Just ignore everything but the first 10 lines, and 99.9% of the time the issue is obvious. People really exaggerate the amount of time and effort you spend dealing with these error messages. They look dramatic, so they're very memeable, but it's really not a big deal. The percentage of hours I've spent deciphering difficult C++ error messages in my career is a rounding error.


Do you also consider knowing type deduction unnecessary for fixing those errors, unless you are writing a library? Because that is not my experience (a C++ "career" can involve such wildly different codebases that it's hard to imagine what others must be dealing with).


> it is impossible to use c++ without standard library

Citation needed. This is common for embedded applications, since why would anyone implement an STL for that?


There are actually multiple standard libraries for embedded applications, and a lot of the standard library from C++11 on was designed with embedded in mind in particular.

And with each std release, the particularly nasty parts of std get decoupled from the rest of the library. So it's at the point nowadays where you can use all the commonly used parts of std in an embedded environment: containers, iterators, ranges, views, smart/RAII pointers, and smart/RAII concurrency primitives. On the bleeding edge you can even get coroutines, generators, green threads, etc. in an embedded environment with "pay for what you use" overhead. Intel has been pushing embedded stdlib really hard over the past few years, and both they and Nvidia have been spearheading the senders-and-receivers concurrency effort. Intel uses S&R internally for handling concurrency in their embedded environments internal to the CPU and elsewhere in their hardware.
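
A small sketch of the allocation-free slice of std this describes, with invented names: fixed storage plus non-owning views, no heap and no exceptions on this path.

    #include <array>
    #include <numeric>
    #include <span>

    // Fixed-capacity storage lives in .data/.bss or on the stack.
    constexpr std::array<int, 8> samples{3, 1, 4, 1, 5, 9, 2, 6};

    int average(std::span<const int> window)
    {
        // <numeric> algorithms work on any contiguous buffer, no allocation.
        return std::accumulate(window.begin(), window.end(), 0) /
               static_cast<int>(window.size());
    }

    int average_of_first_half()
    {
        return average(std::span{samples}.first(4)); // a view, not a copy
    }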

(Also, fun side note, but STL doesn't "really" stand for "standard template library". Some projects have retroactively decided to refer to it that way, but that's not where the term STL comes from. STL stands for the Adobe "Software Technology Lab", where Stepanov's STL project was formed, and prior to being proposed to the committee the project was named after the lab.)


AFAIK Stepanov only joined Adobe much later. I think he was at HP during the development of the STL, but moved to SGI shortly after (possibly during standardization).

The other apocryphal derivation of STL I have heard is "STepanov and Lee".


AFAIK he started work on the STL at HP and then SGI, based on his and Musser's work on generic programming, but it didn't get the name STL until he was at the lab.

This of course comes from Sean Parent, who was, and AFAIK still is, quite close with Stepanov. Sean Parent is of course famous in his own right, but he's also notable for bringing Stepanov to Adobe and for being the one to push Stepanov to propose the STL to WG21 (the C++ ISO standards committee).


Gladly! Since deep-linking a draft PDF from a phone is hard, here is the next best thing: https://en.cppreference.com/w/cpp/freestanding.html

Freestanding requires almost all of the std library. Please note that -fno-rtti and -fno-exceptions are non-conformant; the C++ standard does not permit either.

Also, std:: members such as initializer_list, type_info, etc. are directly baked into the compiler, and the stuff in the header must exactly match the internals, making the std library a part of the compiler implementation.


> freestanding requires almost all std library.

Have you actually read the page you linked to? None of the standard containers are there, nor <iostream> or <algorithm>. <string> is there, but marked as partial.

If anything, I would expect more headers like <algorithm>, <span>, <array>, etc. to be there, as they mostly do not require any heap allocation or exceptions for most of their functionality. And in fact they are available with GCC.

The only bit I'm surprised by is that <coroutine> is there, as coroutines normally allocate, but I guess there is full support for custom allocators, so it can be made to work on freestanding.


> Please note that -fno-rtti and -fno-exceptions are non-conformant, c++ standard does not permit either.

I did not know that.

My understanding was that C does not require standard library functions to be present in freestanding mode. The Linux kernel famously does not build in freestanding mode, since then GCC can't reason about the standard library functions that they do want. This means they need to implement stuff like memcpy themselves and pass -fno-builtin.
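
For illustration, the sort of definition such a build supplies itself (written in C++ here for consistency with the rest of the thread); GCC's docs, quoted downthread, say freestanding environments must still provide memcpy/memmove/memset/memcmp. A naive byte-at-a-time sketch; real kernels ship tuned, architecture-specific implementations.

    #include <cstddef>

    extern "C" void* memcpy(void* dst, const void* src, std::size_t n)
    {
        auto* d = static_cast<unsigned char*>(dst);
        const auto* s = static_cast<const unsigned char*>(src);
        while (n--)
            *d++ = *s++;
        return dst; // returns the destination, as the C contract requires
    }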

Does that mean that freestanding C++ requires the C++ standard library, but not the C standard library? How does that work?


Honestly? No idea how the committee is thinking. When, say, gamedev people write a proposal, ask for a feature, and explain that it is important and something they depend on, it gets shot down on a technicality. Then someone turns around and produces some insane feature that rips everything east to west (like modules), and suddenly the vote goes positive.

The "abstract machine" C++ assumes in the standard is itself a deeply puzzling construct. Luckily, compiler authors seem much more pragmatic and reasonable, I do not fear -fno-exceptions dissapearing suddenly, or code that accesses mmapped data becoming invalid because it didn't use start_lifetime_as


So your understanding that

> freestanding C++ requires the C++ standard library, but not the C standard library

is true?

> The "abstract machine" C++ assumes in the standard is itself a deeply puzzling construct.

I find the abstract machine to be quite a neat abstraction, but I am also more of a C guy.


Looking here:

https://eel.is/c++draft/compliance

One of the required headers in freestanding, <cstdlib>, is labelled "C standard library", but it is not <stdlib.h>. Something similar holds for the other <csomething> headers.

This kinda implies the C library is required, if I read it correctly, but maybe someone else can correct me: https://eel.is/c++draft/library.c


So from the GCC 10 documentation:

> The ISO C standard defines (in clause 4) two classes of conforming implementation. A "conforming hosted implementation" supports the whole standard including all the library facilities; a "conforming freestanding implementation" is only required to provide certain library facilities: those in '<float.h>', '<limits.h>', '<stdarg.h>', and '<stddef.h>'; since AMD1, also those in '<iso646.h>'; since C99, also those in '<stdbool.h>' and '<stdint.h>'; and since C11, also those in '<stdalign.h>' and '<stdnoreturn.h>'. In addition, complex types, added in C99, are not required for freestanding implementations.

> The standard also defines two environments for programs, a "freestanding environment", required of all implementations and which may not have library facilities beyond those required of freestanding implementations, where the handling of program startup and termination are implementation-defined; and a "hosted environment", which is not required, in which all the library facilities are provided and startup is through a function 'int main (void)' or 'int main (int, char *[])'. An OS kernel is an example of a program running in a freestanding environment; a program using the facilities of an operating system is an example of a program running in a hosted environment.

> GCC aims towards being usable as a conforming freestanding implementation, or as the compiler for a conforming hosted implementation. By default, it acts as the compiler for a hosted implementation, defining '__STDC_HOSTED__' as '1' and presuming that when the names of ISO C functions are used, they have the semantics defined in the standard. To make it act as a conforming freestanding implementation for a freestanding environment, use the option '-ffreestanding'; it then defines '__STDC_HOSTED__' to '0' and does not make assumptions about the meanings of function names from the standard library, with exceptions noted below. To build an OS kernel, you may well still need to make your own arrangements for linking and startup. *Note Options Controlling C Dialect: C Dialect Options.

> GCC does not provide the library facilities required only of hosted implementations, nor yet all the facilities required by C99 of freestanding implementations on all platforms. To use the facilities of a hosted environment, you need to find them elsewhere (for example, in the GNU C library). *Note Standard Libraries: Standard Libraries.

> Most of the compiler support routines used by GCC are present in 'libgcc', but there are a few exceptions. GCC requires the freestanding environment provide 'memcpy', 'memmove', 'memset' and 'memcmp'. Finally, if '__builtin_trap' is used, and the target does not implement the 'trap' pattern, then GCC emits a call to 'abort'.

So the last paragraph means that my remark about the Linux kernel might be wrong.

So the required headers are all about basic constants for types, the types themselves (bool), and basic language features like stdarg, iso646 or stdalign. Sounds sensible to me. Not sure what C++ does with that.

This matches with https://en.cppreference.com/w/c/language/conformance.html . Since C23, stdlib.h is also required for dynamic allocation, string conversions and some other things.

This also actually matches the links provided by you. In https://eel.is/c++draft/cstdlib.syn you see that not all declarations are actually marked for freestanding implementations.


Depends on the type of LIDAR. LIDAR rated for vehicle use is at a wavelength opaque to the eyes, so it hits the surface and fluid of your eye and reflects back rather than going through to your cones and rods.

It isn't, however, opaque to optical glass (the LIDAR has to shine through optical glass in the first place), so it hits your camera lens, goes straight through, and slams into the sensor.


You seem to be implying that all automotive lidars are 1550 nm, but that's not true. While there are lots of 1550 nm automotive lidars (Luminar on Volvo, Seyond on NIO), there are also plenty of 850 nm to 940 nm lidars used in cars (Hesai, Robosense, etc). Those can pass through water and get focused onto your retina, but they are also a lot lower power, so they do not damage cameras.


Also, although energy at wavelengths longer than 1400 nm is generally absorbed by the cornea and lens, it is still energy, and the eye is not a hard bandpass filter per se. Safety is relative at higher wattages.


NGL, I thought sub-1550 nm LIDAR had already been banned for use in new automotive applications? I'm clearly mistaken, but I had thought that was the case.


Not banned. In addition to the Chinese lidars I mentioned, the Valeo Scala on Audi cars is 905 nm, and then there are also Ouster (865 nm), Innoviz (905 nm), Livox (905 nm) etc. The large spinning lidar on top of the Waymo Jaguar I-Pace is also purportedly 905 nm, although in the past they also had a swivelling 1550 nm lidar in the dome of the Chrysler Pacifica cars (situated just underneath a smaller spinning 905 nm one).

The eye safety threshold for 850/905 nm is a lot lower than 1550 nm, so they output way less power, but the much better sensitivity of silicon sensors makes up for it partially. You can also squeeze out more range using clever signal processing and a large optical aperture (which allows you to output more light, but since the light is spread out across the aperture, the intensity doesn't exceed the threshold). Typically, the range of 850/905 nm lidars is less than that of 1550 nm lidars though.

On the bright side, due to the lower power, there haven't been any instances (to my knowledge) of 850 nm and 905 nm lidars damaging cameras, whereas at least two different 1550 nm lidars have been known to destroy cameras (Luminar and AEye).

On the Luminar lidar website [1] they proudly advertise "1,000,000x pulse energy of 905nm".

[1] https://www.luminartech.com/technology


During the presentation, the Rivian speaker specifically said it is safe for your camera sensors. Check the YouTube video of their presentation.


Ah, theirs may be, then. In which case they are probably using a different wavelength and different glass.

I was just speaking in terms of the commonplace LIDAR solutions for road use.

