I avoid using exceptions myself, so I wouldn't be surprised if I misunderstand them :) I love to learn and welcome new knowledge and/or corrections of any misunderstandings if you have them.
I'll add that the inspiration for the article came about because it was striking to me how Bjarne's example, which was supposed to show a better way to manage resources, introduced so many issues. The blog post goes over those issues and talks about possible solutions, none of which are great. I think, however, that these problems with exceptions don't manifest into bigger issues because programmers just kinda learn to avoid exceptions. So the post was trying to go into why we avoid them.
RAISI is always wrong, because the whole advantage of the try block is that you can write a unit of code as if it can't fail, so that what the unit is intended to do when no errors occur stays very local.
If you really want to handle an error coming from a single operation, you can create a new function or immediately invoke a lambda. That removes the need to break RAII and avoids making your class more brittle to use.
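For example, here is a minimal sketch of the immediately invoked lambda approach; the function and its parsing task are made up for illustration:

```cpp
#include <optional>
#include <stdexcept>
#include <string>

// Hypothetical example: only std::stoi's failure is handled here, so the
// surrounding code can keep treating the result as if parsing can't fail.
std::optional<int> parse_port(const std::string& text) {
    return [&]() -> std::optional<int> {
        try {
            return std::stoi(text);
        } catch (const std::exception&) { // invalid_argument / out_of_range
            return std::nullopt;
        }
    }();
}
```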
You can be exhaustive with try/catch if you're willing to lose some information, whether that's catching a base exception or using the catch-all block.
If you know all the base classes your program throws, you can centralize your catch-all and recover some information using a Lippincott function.
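A minimal sketch of the Lippincott pattern, assuming the program only throws types derived from std::exception plus the occasional unknown; the error codes and messages are made up:

```cpp
#include <cstdio>
#include <exception>
#include <stdexcept>
#include <system_error>

// Central handler: rethrows the in-flight exception and translates it.
int translate_current_exception() noexcept {
    try {
        throw; // rethrow the currently active exception
    } catch (const std::system_error& e) {
        std::fprintf(stderr, "system error: %s\n", e.what());
        return 2;
    } catch (const std::exception& e) {
        std::fprintf(stderr, "error: %s\n", e.what());
        return 1;
    } catch (...) {
        std::fprintf(stderr, "unknown error\n");
        return -1;
    }
}

// Call sites forward everything to the central handler.
int do_work_or_report() noexcept {
    try {
        // ... work that may throw ...
        return 0;
    } catch (...) {
        return translate_current_exception();
    }
}
```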
I've done my own exploring in the past with the thought experiment: what would a codebase that only uses exceptions for error handling look like, and can you reason about it? And I concluded you can; there's just a different mentality in how you look at your code.
No offense, but why did you decide to write an instructional article about a topic that you "wouldn't be surprised that you misunderstand"? Why are you trying to teach others what you admittedly don't have a very solid handle on?
None taken :) I think sharing our thoughts and discussing them is how we learn and grow. The best people in their craft are those who aren't afraid to put themselves out there, even if they're wrong; how else would you find out?
Obviously the errno should have been obtained at the time of failure and included in the exception, maybe using a simple subclass of std::exception. Trying to compute information about the failure at handling time is just stupid.
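A minimal sketch of such a subclass; the class name is made up and std::strerror is used only to produce a readable message:

```cpp
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Captures errno at the point of failure and carries it in the exception,
// so the handler never has to reconstruct it later.
class posix_error : public std::runtime_error {
public:
    explicit posix_error(const std::string& what_arg, int err = errno)
        : std::runtime_error(what_arg + ": " + std::strerror(err)),
          err_(err) {}

    int code() const noexcept { return err_; }

private:
    int err_;
};

// Throw site: errno is read immediately after the failing call, e.g.
//   if (::open(path, O_RDONLY) < 0) throw posix_error("open failed");
```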
Note that taking a 'const' by-value parameter is very sensible in some cases, so it is not something that could be detected as a typo by the C++ compiler in general.
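For illustration, a small sketch of the distinction (function names made up):

```cpp
#include <cstddef>
#include <string>

// A const by-value parameter: the function works on its own copy, and the
// const only promises that the copy isn't modified inside the body. This
// is perfectly legitimate, so the compiler can't flag it as a typo.
std::size_t count_spaces(const std::string text) {
    std::size_t n = 0;
    for (char c : text)
        if (c == ' ') ++n;
    return n;
}

// The (often intended) const-reference version: no copy is made.
std::size_t count_spaces_ref(const std::string& text);
```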
Right. Copying is very fast on modern CPUs, at least up to the size of a cache line. Especially if the data being copied was just created and is in the L1 cache.
If something is const, whether to pass it by reference or value is a decision the compiler should make. There's a size threshold, and it varies with the target hardware.
It might be 2 bytes on an Arduino and 16 bytes on a machine with 128-bit arithmetic. Or even as big as a cache line.
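A rough sketch of the trade-off being described; the types and the idea of a threshold are illustrative, not something the standard specifies:

```cpp
// Small and trivially copyable: comfortably below any plausible threshold,
// so passing by value is cheap and often ends up in registers.
struct Point {
    float x, y;
};
float dot(Point a, Point b) { return a.x * b.x + a.y * b.y; }

// Far above the threshold on any common target: pass by const reference.
struct Matrix4 {
    double m[16]; // 128 bytes
};
double trace(const Matrix4& m) {
    return m.m[0] + m.m[5] + m.m[10] + m.m[15];
}
```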
That optimization is reportedly made by the Rust compiler. It's an old optimization, first seen in Modula 1, which had strict enough semantics to make it work.
Rust can do this because the strict affine type model prohibits aliasing. So the program can't tell if it got the original or a copy for types that are Copy. C++ does not have strong enough assurances to make that a safe optimization. "-fstrict-aliasing" enables such optimizations, but the language does not actually validate that there is no aliasing.
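A minimal example of the aliasing hazard, assuming the optimization in question is silently passing a by-value argument by reference:

```cpp
#include <cstdio>

int global = 1;

// If the compiler silently turned this by-value parameter into a
// reference, the write to 'global' would alias 'v' and change the result.
int add_twice(int v) {
    global += 10;      // does not affect the local copy 'v'
    return v + global; // 1 + 11 with by-value semantics
}

int main() {
    std::printf("%d\n", add_twice(global)); // prints 12, not 22
}
```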
If you are worried about this, you have either used a profiler to determine that there is a performance problem in a very heavily used inner loop, or you are wasting your time.
> if an argument fits into the size of a register, it's better to pass by value to avoid the extra indirection.
Whether an argument is passed in a register or not is unfortunately much more nuanced than this: it depends on the ABI calling conventions (which vary depending on OS as well as CPU architecture). There are some examples where the argument will not be passed in a register despite being "small enough", and some examples where the argument may be split across two or more registers.
For instance, in the x86-64 ELF ABI spec [0], the type needs to be <= 16 bytes (despite registers only being 8 bytes), and it must not have any nontrivial copy / move constructors. And, of course, only some registers are used in this way, and if those are used up, your value params will be passed on the stack regardless.
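A sketch of both cases, as far as I understand the System V x86-64 rules; the struct names are made up:

```cpp
#include <cstdint>

// 16 bytes, trivially copyable: classified as two INTEGER eightbytes and
// passed in two registers (rdi/rsi), even though no single register fits it.
struct Span {
    std::uint64_t ptr;
    std::uint64_t len;
};
std::uint64_t end_of(Span s) { return s.ptr + s.len; }

// Same size, but the user-provided copy constructor makes it non-trivial
// for the purposes of calls: the caller materializes a temporary and
// passes its address instead (an "invisible reference").
struct Tracked {
    std::uint64_t ptr;
    std::uint64_t len;
    Tracked(std::uint64_t p, std::uint64_t l) : ptr(p), len(l) {}
    Tracked(const Tracked& other) : ptr(other.ptr), len(other.len) {}
};
std::uint64_t end_of(Tracked t) { return t.ptr + t.len; }
```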
> Is it because I made hundreds decisions like that? Yes.
Proof needed. Perhaps your overall program is designed to be fast and avoid silly bottlenecks, and these "hundred decisions" didn't really matter at all.
But do you have actual proof for your first claim? Isn't it possible that the "constant vigilance" is optimizing that ~10% that doesn't really matter in the end?
For example, C++ can shoehorn you into a style of programming where 50% of the time is spent in allocations and deallocations, even if your code is otherwise optimal.
The only way to get that back is to not use STL containers in "typical patterns" but to write your own containers, up to a point.
If you didn't do that, you'd see in the profiler that heap operations take 50% of the time, but there is no obvious hotspot.
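As a purely illustrative sketch of the kind of pattern being described (the 50% figure obviously depends entirely on the workload):

```cpp
#include <cstddef>
#include <string>
#include <string_view>
#include <vector>

// "Typical pattern": every call builds a fresh vector of strings, so each
// call pays for many small heap allocations that a profiler smears across
// the whole program instead of showing one obvious hotspot.
std::vector<std::string> split_naive(const std::string& line, char sep);

// One way to get that time back without writing a full custom container:
// reuse a caller-owned buffer and hand out views into the original data.
void split_into(std::string_view line, char sep,
                std::vector<std::string_view>& out) {
    out.clear(); // keeps previously allocated capacity across calls
    std::size_t start = 0;
    for (std::size_t i = 0; i <= line.size(); ++i) {
        if (i == line.size() || line[i] == sep) {
            out.push_back(line.substr(start, i - start));
            start = i + 1;
        }
    }
}
```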
Technologies: Modern C++ (C++11/14/17/20/23), and the usual hodgepodge of languages/tools every senior engineer has encountered in ~15 years of real world projects
I'm Vittorio, a passionate C++ expert with over a decade of professional and personal experience. My expertise covers library development, high-performance financial backends, game development, open-source contributions, and active participation in ISO C++ standardization.
As the coauthor of "Embracing Modern C++ Safely" and a speaker at over 25 international conferences, I bring real-world insights to empower you to use C++ to its full advantage.
I offer specialized training, mentoring, and consulting services to help companies & individuals leverage the full potential of C++.
I am also open to fully remote C++ development/teaching positions.
I think this is a new paradigm shift only just getting underway within the past decade.
In the past we could look forward to the future solving the obvious limitations of the technology of the day. An example is how limited and expensive it was to capture photos on rolls of film. Within the past 20 years we have become able to take effectively unlimited photos digitally, on a device that can do much more than just take photos, and that limit has been abolished forever.
It is this forever that is starting to loom over us. Most of us can't imagine a life without Facebook, smartphones, addictive feeds and the like, even if we don't directly use them. It is not possible to go back to a state of life untainted by this technology. So now a fancy new technology that promises to paint your end products for you comes out, and in the span of just a few years it threatens to change, forever, the whole landscape of art that has repeated in cycles for thousands of years. It is only natural that some would loudly object.
But the same wheels driving human progress that removed the limitations of the disposable camera will not slow down at the stage of generative AI either. I don't see how this would happen given our intelligence has already gotten us far in many other domains. Progress is like a wildfire that eats up dry bushes. If enough of the medium is there it will spontaneously occur and not much can be done to prevent it. Except with technology, it is not dry timber but "what ifs." "What if art doesn't have to be defined by the journey to get there, but by a satisfying end product?" "What if a computer program could replicate the motions of a paintbrush, and create art indistinguishable from a human's?" Any one of us can come up with the next "what if."
If you believe that art could be generative, think twice. It's not about how it is done, but about what the purpose of it is. What is the point of making art? To express yourself? To give observers a new point of view? To share an experience? Also, art is beyond digital pixels.
The paradigm is shifting just as it did when the first camera was invented. The obsession with reproducing reality was abandoned in favor of a shift toward all kinds of -isms. Some artists (e.g. Mucha) used the new technology to improve their creative process. Some believed that photography stole a part of our soul, trapped in the taken picture. It repeats, just with different technologies.
I'm honestly very interested in how we, as humans, will deal with it and how the paradigm will evolve.
Most of these features have been used by countless C++ developers for the past decades -- I really don't see the point in adopting a language that's mostly C++ but without some of the warts. Either pick C++ or something like Rust.
I generally like C++, but I would trade anything to make it faster to compile, and most of the time I just use a small subset of C++ that I feel okay with.
Modules support was added recently, but I don't think most libraries or CMake fully support it yet, and I don't really see tutorials about good practices for modules, especially when it comes to speeding up compilation.
Also, modules apparently do not speed up compilation that much, or at least I have not seen benchmarks, maybe because modules are not well supported yet?
Modules are great in theory, but I am not sure they are usable in 100% of cases, especially with all the existing code that is out there.
C++ is just slow to compile. With the standard library it is much worse. The problem is that with C++ you're not getting as much encapsulation as you would in C unless you do extra work that also has a performance hit (pimpl). This means that C++ code often has to recompile a whole lot more than C code does when doing incremental compilation in my experience.
This is just not true. There's nothing that makes C++ inherently slow to compile.
PImpl doesn't need to have a performance hit, as you can implement it with a local fixed-size buffer that's not heap-allocated.
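A sketch of that "fast pimpl" variant; the class, the buffer size, and the alignment choice are assumptions for illustration:

```cpp
// widget.h -- the implementation type is still opaque, but it lives in an
// in-place buffer instead of a separate heap allocation.
#include <cstddef>

class Widget {
public:
    Widget();
    ~Widget();
    void draw();

private:
    struct Impl;                                   // defined in widget.cpp
    static constexpr std::size_t kSize = 64;       // must fit sizeof(Impl)
    alignas(std::max_align_t) unsigned char storage_[kSize];
    Impl* impl_ = nullptr;                         // points into storage_
};

// widget.cpp (sketch, needs <new> for placement new)
// struct Widget::Impl { /* real members */ };
// Widget::Widget() {
//     static_assert(sizeof(Impl) <= kSize, "grow the buffer");
//     impl_ = new (storage_) Impl{};
// }
// Widget::~Widget() { impl_->~Impl(); }
```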
You can also design your C++ codebase exactly as you would in C, so there's literally no reason why you'll need to recompile more in one language compared to the other.
A quick google of "c++ grammar" will give you clues that C++ is not your average language. Templates are also Turing complete, and probably not trivial to parse.
Of course I am not talking about C++98, but C++14, 17, etc, which add significant stuff.
C3 benefits from focusing more on the problem at hand than language complexities.
There are definitely advantages to simpler tools: you can streamline development and make people productive more quickly. Compare that scenario to C++, where you first have to agree on the features you're allowing and then have to police that subset on every PR.
Personally, when I initially learned C++ back in 1993, with Turbo C++ 1.0 for MS-DOS, I hardly saw a reason to keep using C instead of C++, other than being required to do so.
What language are you using? For a small number of objects, it should be completely insignificant to performance to recompute the whole A* algorithm every frame without any form of caching. I'm surprised...
And this is coming from someone that dislikes exceptions.