
Give the demo a try. I compared the CPU usage to the win10 file explorer, and they look identical (both are minuscule); it's super impressive.


Yet these "lower-level" APIs, DX12 and Vulkan, often don't significantly outperform DX11, and in many cases DX11 performs better. I put "lower-level" in quotes because those APIs bake assumptions about the hardware into the API itself, assumptions that shouldn't be there to begin with because they frequently prevent drivers from getting the most out of the hardware.


If there were an award for post of the year, this would be a contender. Amazing work.


Thank you!


> I use OG vim mainly because I perceive it as more performant.

I feel the same way. I am not sure what performance comparisons have been made between the two projects, but I stick with vanilla vim (gVim typically) over vanilla nvim due to the responsiveness.


I'd be really surprised if there's much difference at all in speed/responsiveness if you run them without plugins.


Function pointers, typically.


I find it sad that this paper mentions two valid critiques of OOP[0][1], yet makes no effort to engage with their specific criticisms. Also, citing Microsoft's COM object system as _the_ example of OOP's great success must mean that the authors are simply ignorant of how bad COM is and why Microsoft has ceased anything beyond legacy support for it.

Ultimately, I find the thesis of the paper fundamentally wrong for presuming that OOP is somehow more beneficial than other approaches to dynamic dispatch. Moreover, the use of interfaces in the paper is more restrictive to code structure and, in turn, more likely to hurt code reuse than help it.

Interfaces are an inherently weaker form of the age-old operation-code-and-data-packet paradigm. Take the Widget interface the paper gives as an example (page 5). This interface has now been set in stone. Any addition that would be useful for the widget interface, say OnClick(), will require a breaking change, because all code using the old interface has to be updated and recompiled to satisfy the new interface requirements (even if a widget won't functionally change from adopting it). Meanwhile, for code using opcodes, a new opcode value is defined and nothing in the old code is required to change. (This assumes the old code performs a no-op for the new opcode, which historically has been the case for systems that use this method.)

In fact, Win32 used exactly this opcode-and-data-packet protocol for its message loop, to great long-term success; see the sketch below. Microsoft has regularly extended the existing messages without breaking backwards compatibility with older versions. The longevity of code in typical OO systems pales in comparison.
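
As a rough illustration of that protocol (a minimal sketch, not production code): every Win32 message is an integer opcode plus two packed parameters, and anything a window procedure does not recognize falls through to the default handler, so messages invented years later simply take the default (often no-op) path in old code.

    #include <windows.h>

    // Minimal sketch of a Win32 window procedure: `msg` is the opcode,
    // `wParam`/`lParam` are the packed data.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_LBUTTONDOWN:
            // Handle a click; the cursor position is packed into lParam.
            return 0;
        case WM_CLOSE:
            DestroyWindow(hwnd);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        default:
            // Unknown opcode: defer to the default behaviour, so newly
            // introduced messages don't break this code.
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }
    }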

[0] https://harmful.cat-v.org/software/OO_programming/why_oo_suc...

[1] http://stlport.org/resources/StepanovUSA.html


> why Microsft has ceased anything beyond legacy support for it

That is not true at all. DirectX is COM, UWP was COM-based, WinRT is still COM, and WinUI and the Windows App SDK are again based on COM. C++, C#, Python, and Rust are all supported programming languages for the Windows App SDK thanks to (you guessed it) COM. The C++, C#, and Rust language projections for it are still being constantly updated: https://github.com/microsoft/xlang

When they started Project Reunion (the code name for the Windows App SDK) back in 2021, just before Windows 11's release, they decided to double down on COM. Like, hard.

https://github.com/microsoft/WindowsAppSDK/blob/main/docs/fa...

> Practically speaking, any language & runtime that can handle COM objects can support Windows App SDK


And, I believe, the entire .NET CLR infrastructure is essentially an extension of COM technologies.


Not really. You can use and author COM components in .NET, but most developers use the platform more like Java.


> Interfaces are an inherently weaker form of the age old Operation Code and data packet paradigm.

Only if the language in question doesn't allow something like `interface MyNewInterface extends MyOldInterface { ... }`.
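
A rough C++ equivalent of that idea, with hypothetical names (C++ has no `interface` keyword, so abstract classes stand in):

    // Hypothetical sketch: the old interface stays untouched, the new one extends it.
    struct Widget {                       // the original interface
        virtual void Draw() = 0;
        virtual ~Widget() = default;
    };

    struct ClickableWidget : Widget {     // the extended interface
        virtual void OnClick() = 0;
    };

    // Code written against Widget keeps compiling unchanged; only code that
    // needs OnClick() has to ask for a ClickableWidget.
    void Render(Widget& w) { w.Draw(); }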

I'm not too clued up on state-of-the-art for OO languages, but my understanding is that this is a practical approach in the mainstream languages that support OO.

For your [0] link, I'm familiar with those arguments, but I believe that this is very subjective. For example, in a modern context, I'd probably use interfaces, which makes all those arguments, except for the state argument, moot.

For the [1] link, I hadn't read that before. It is a very long interview, and I disagree with its general thrust that the argument is "generic programming" vs "OO patterns". If all the facts were forced into that framing, then sure, there might be a point to the criticism, but the fact is that it is possible and simple to use both at the same time.

IOW, they are not in competition with each other, so presenting one as superior to the other is pointless.


> This interface has now been set in stone. Any additions that would be useful for the widget interface, say OnClick(), will require a breaking change

This is the expression problem[1]. There is an inherent tension between simplicity of extending the number of operations and simplicity of extending the number of types acted on.

[1] https://en.wikipedia.org/wiki/Expression_problem
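
To make the tension concrete, here is a rough C++ sketch (hypothetical types, not from the linked article): with virtual interfaces, adding a new type is trivial but adding a new operation touches every existing type; with an opcode/variant style, adding a new operation is trivial but adding a new type touches every switch.

    // Interface style: easy to add a new shape, but adding Perimeter()
    // would force every existing shape to change.
    struct Shape {
        virtual double Area() const = 0;
        virtual ~Shape() = default;
    };
    struct Circle : Shape {
        double r = 1.0;
        double Area() const override { return 3.14159 * r * r; }
    };

    // Opcode/variant style: easy to add a new operation (just another free
    // function), but adding a new kind means revisiting every switch.
    enum class ShapeKind { Circle, Square };
    struct ShapeData { ShapeKind kind; double size; };
    double Area(const ShapeData& s) {
        switch (s.kind) {
            case ShapeKind::Circle: return 3.14159 * s.size * s.size;
            case ShapeKind::Square: return s.size * s.size;
        }
        return 0.0;
    }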


> This interface has now been set in stone. Any additions that would be useful for the widget interface, say OnClick(), will require a breaking change.

The idiomatic way to deal with this is to add a new interface with the extensions. This not only prevents old code from breaking but also gives you compile-time assurance that your clients and objects know how to communicate with each other.
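
A sketch of that idiom in C++, with hypothetical names: the original interface never changes, the extension lives in its own interface, and clients that need it say so in their signatures, which is where the compile-time assurance comes from.

    // Hypothetical names; shipped interfaces stay frozen.
    struct IWidget {                       // original interface, never changes
        virtual void Draw() = 0;
        virtual ~IWidget() = default;
    };

    struct IClickable {                    // extension added later as its own interface
        virtual void OnClick() = 0;
        virtual ~IClickable() = default;
    };

    // Old clients keep taking IWidget& and are unaffected by the extension.
    void Render(IWidget& w) { w.Draw(); }

    // New clients state the extra requirement in the signature, so the compiler
    // checks that whatever is passed in actually implements it.
    void Dispatch(IClickable& c) { c.OnClick(); }

    // A widget supporting both simply implements both interfaces.
    struct Button : IWidget, IClickable {
        void Draw() override {}
        void OnClick() override {}
    };

COM takes the same approach at runtime: extensions are exposed as additional interfaces that clients discover via QueryInterface, rather than by changing interfaces that have already shipped.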


>> This interface has now been set in stone. Any additions that would be useful for the widget interface, say OnClick(), will require a breaking change.

> The idiomatic way to deal with this is to add a new interface with the extensions.

I also found that particular criticism weird in the GP, considering that the paper itself, with the widget example cited by the GP, performs the `extend` on an interface to add to existing interfaces without breaking existing code.

IOW, it uses the widget example, quoted by GP, to demonstrate how to add to an interface without breaking changes.


> I think that object orientedness is almost as much of a hoax as Artificial Intelligence

Oof, killing two hypes with one stone!


That interview is a couple of decades old; Stepanov is, of course, referring to GOFAI.


> age old Operation Code and data packet paradigm

Can you provide more info what you mean by this? Something message-passing like, where the receiver ignores unknown opcodes?


You can take a look at this discussion[0], where Casey Muratori designed something like that: communication is done with opcodes and packets instead of vtables. Note: search for "raw_device_operation" if you want to get straight to the code, but reading the whole thing is well worth it.

[0] https://github.com/cmuratori/misc/blob/main/cleancodeqa-2.md


Hmm...what's your definition of "object oriented"?

I made up the term 'object-oriented', and I can tell you I didn't have C++ in mind

-- Alan Kay, OOPSLA '97

https://www.youtube.com/watch?v=oKg1hTOQXoY&t=634s

While I also don't know what this "packets and opcodes" thing is, it certainly sounds a lot like, for example, Smalltalk-72. And vtables are certainly not a requirement, more of an anti-pattern for object-orientedness.


and the very next sentence Alan Kay said was:

"So, the important thing here is that I have many of the same feelings about Smalltalk

Part of the message of OOP was, that as complexity starts becoming more and more important, architecture's always going to dominate material.

I have apologized profusely over the past 20 years for making up the term object-oriented because as soon as it started to be misapplied I realized I should have used a much more process-oriented term for it.

If you had to pick one cause of both particular difficulty in our field, and also general difficulty in the human race, it's taking single points of view and committing to them like they're religions. This happened with Smalltalk."


Yes, I tend to quote that second part as well. In fact, I keep that link ready as well:

https://www.youtube.com/watch?t=653&v=oKg1hTOQXoY

Which is why I question the "OO means it has a vtable" stance that seems to be promulgated here. While "packets and opcodes" is a bit non-specific, it does not appear to be incompatible with OO, and, for example, Smalltalk-72 seems very similar to "packets and opcodes"... though, again, it's not really specific enough to say with any certainty.


Fitting, because Stroustrup didn't have Smalltalk in mind. C++ descends from Simula (in addition to C, of course).


“In fact, Win32 did exactly this opcode and data packet protocol for its message loop to great long term success”

Win32 is not exactly easy to work with though.


> but critically, it needs to adapt to its users – to people!

In principle, I want this to be true. But, in practice, I think products change because the teams that built a product need to justify their continued existence within a corporation. Slack's last few rounds of UI changes, for instance, have been a net negative for me, the user. Why can't I split several channels in the same window anymore? Why did they add a thick vertical toolbar that I don't use and can't remove? Not for my benefit, that's for sure.

P.S. Kelley, not Kelly, is the correct spelling of his name.


Are you sure those changes were not beneficial for someone else, in a way where your single opinion is simply outvoted?


I'm sure it worked lovely for whatever biased focus group Slack Inc hired to review the changes. But where I work the redesign is pretty much universally hated.

I would've thought a company centered around developer culture would understand the concept of not fixing what isn't broken. I guess pencil pushers have now fully taken them over.


I think that it’s very important to remember that most software, by count, is not some deep-pocketed VC-funded / big tech company’s baby.

If HN people stopped working for HN companies then they’d see that most of the behaviours they complain about, on HN, are not universal inevitabilities.


For those not familiar with the author, this is Daniel Lemire, a high-performance-software expert. He has done a lot of work on squeezing as much performance out of a CPU as possible; see his work on SIMD JSON parsing[0]. I imagine he wrote this out of frustration with the software industry's seeming inability to identify who the actual experts in the field are. It doesn't take much work to see how much books like Design Patterns and Clean Code have negatively shaped the industry.

[0] https://lemire.me/blog/2021/09/25/new-release-of-the-simdjso...


I am currently finishing my PhD and will start next month as a software architect. Among others, I have also looked into these books. What is the problem with them, and what would you recommend instead?


This lecture is a decent criticism of "clean code" and other non-expert advice.

https://youtu.be/7YpFGkG-u1w


The issue with those books is that they don't offer any concrete data affirming that they are worth following. And, in fact, that style of code is largely to blame for why modern software feels so sluggish; it often reduces performance by 10x, if not 100x or worse. I like this video on reasons not to follow the SOLID principles[0]. Muratori also has an excellent talk on writing APIs that are flexible and performant[1]. As for books on understanding hardware performance, Computer Systems: A Programmer's Perspective is the best example (not the international edition though, according to the author's recommendation).

I'm not aware of any architecture books recommended by anyone who cares about performance, unfortunately. Most high-performance software is written iteratively, meaning its authors don't assume a code structure from the start. Andreas Fredriksson, a lead engine programmer at Insomniac Games, has an excellent quote on how he writes high-performance software[2]:

> Work backwards from hard constraints and requirements to guide your design. Don’t fall into the trap of designing from a fluffy API frontend before you’ve worked out how the thing will handle a worst case. Don’t be afraid to sketch stuff in while you’re proving out the approach.

> The value is what you learn, never the code. Hack it and then delete the code and implement “clean” or whatever you need. But never start there, it gets in the way of real engineering.

> As an industry we spend millions on APIs, documentation and abstraction wrapping a thing that isn’t very good to start with. Make the thing good first, then worry about fluff.

Casey Muratori has also written blog posts about his programming style[3]. (He also runs a great course about performance at computerenhance.com.) Abner Coimbre has a great article on how NASA approaches writing software[4]. Of course, there is also Mike Acton's famous CppCon talk about Data-Oriented Design[5].

The standard advice usually boils down to this: focus on the problem you actually have to solve, and be aware of how damaging solving the wrong problem can be. It's a good idea to focus on what data your program receives and on handling the worst cases.

Since it is difficult to tell who is worth listening to, I suggest always investigating what actual software the person speaking has written. Those who write real-time software, or software that must not fail under any condition, tend to speak very differently about typical industry practices, for good reason.

[0] https://youtu.be/tD5NrevFtbU?si=Jkg6VKBHns32_IU_

[1] https://youtu.be/ZQ5_u8Lgvyk?si=tMuPFxKbrboKrBFr

[2] https://twitter.com/deplinenoise/status/1782133063725826545

[3] https://caseymuratori.com/blog_0015

[4] https://www.codementor.io/@abnercoimbre/a-look-into-nasa-s-c...

[5] https://youtu.be/rX0ItVEVjHc?si=buLbaqoc3Zugfwr7


> There's no character development to speak of, the plot is secondary, and visual spectacle is placed front and center.

I totally agree with this take, but I think Denis' directing/writing is full of 'telling' instead of 'showing'; it's not that he took "Show, Don't Tell" to the extreme. The first Dune movie is full of instances where the audience is told aspects of the characters and world but isn't shown them:

- We are told the Atreides ruled Caladan, but at no point is the audience shown who the Atreides' subjects are, or how their subjects feel about the Atreides. The only shots the film has on Caladan are beautiful yet empty areas of the Scottish Highlands. Where are the people they rule over? What does their way of life look like? None of this is shown, but it should have been.

- On Arrakis, we are only ever told how strong and powerful the Fremen are, from characters like Duncan Idaho. In fact, the only time we get to see the Fremen fight is at the end of the movie, when Paul, the child _who has never been in a life-or-death fight before_, makes a fool of a supposedly strong Fremen fighter! Denis clearly wants the audience to perceive the Fremen as strong, yet he fails to illustrate their strength on screen. I understand that Denis wants Paul to be seen as powerful too, but the resulting scene undercuts everything the movie has told us about the Fremen's fighting ability.

If it isn't already clear, I don't think Denis Villeneuve is a particularly good filmmaker (though this is not to call him a bad one). He likes large, empty, almost monochrome scenery shots. I find that this makes his imagery striking but, ultimately, boring. For instance, his shots in Blade Runner 2049 mostly depict an empty wasteland that, though striking in its scale, doesn't drive one's imagination. The original Blade Runner's shots are so cluttered with detail and color that they fill every location with a unique character. This is why I think Blade Runner inspired so much other media after it; the audience's imagination can't help but linger on the sets and one-off characters of the original film.


I agree with the original poster. The novel was extremely expositional, with epigraphs, italicized personal thoughts, and shifting points of view. The film is the novel with all the exposition removed. I see the film not just as an adaptation, but as a direct response to Lynch's version, which includes all the exposition you could want.

> - We are told the Atreides ruled Caladan, but at no point is the audience shown who the Atreides' subjects are...

I'm not sure why this is particularly needed, and it isn't really in the novel. The importance of Caladan in the novel is that it is wet (which we do see), and that the way Caladan is ruled (whatever that is) must necessarily be different from how Arrakis is ruled (air/sea power versus desert power). I'll have to rewatch the film to see whether this can be gleaned from it, but it seems largely irrelevant what day-to-day life is like for Caladanians not in the Duke's direct employ, and whether the Duke is particularly loved or hated there.

> - On Arrakis, we are only ever told how strong and powerful the Fremen are...

This is also true of the novel. The main characters know more about the Fremen than anyone, and very little at that. The Duke believes the Fremen are his 'desert power' because of the comparison to the Sardaukar on Salusa Secundus, and because their estimates of the number of Fremen are greater than those of the Harkonnens or the Emperor. Duncan Idaho confirms these suspicions, but the Fremen remain largely mysterious even then. We also have the interaction between the Shadout Mapes and Jessica to hint at their capacity for violence.


Thanks for the thoughtful reply.

> I'm not sure why this is particularly needed and it isn't really in the novel...

I know saying this is sacrilegious to some sci-fi fans, but I think the novel Dune could do with improvement. Neither the book nor the movie spends enough time fleshing out the details of its characters, which in my opinion robs both of the ability to connect more deeply with the audience. The scene where Duke Leto explains to Paul that they will need desert power to rule Arrakis, for instance, did not have to be set with the characters alone on an empty cliff above the shoreline. There could have been a city full of culture on that shoreline, with great boat yards and planes over the sea, to show the audience the empire they are leaving behind on Caladan. I want both the novel and the movie to flesh out the details that make the audience feel that Caladan is a comparative paradise next to the harsh, prison-like planet called Arrakis. It would suck to be forced to leave behind all the great work that the Atreides' forefathers put into Caladan. But, ultimately, both the film and the novel fail to fully engage their audience with these facts, since they don't flesh out the details of the environment and the people the Atreides rule. At least, that is my opinion.

There are a number of things I'd like to change or improve upon if I had the chance to edit Dune (novel or film): the story's allusions to the Cold War fight for oil in the Middle East, the poor decision making by the Harkonnens, Dr. Yueh's murder of Duke Leto, etc. But I don't want to ramble on too much. My point is that I think the film could have improved upon the novel in a number of places, instead of following the novel to its detriment.


Just want to say I agree about the book. I like the book, but I've never felt connected to any of the characters. There's simply no reason given to particularly care about any of them, and many simply appear to fill a role and then disappear just as quickly.

The Villeneuve movie at least gives personality to Stilgar.


> When school districts get rid of advanced offerings in a bid to reduce racial inequality, they end up doing the opposite of what they claim to intend. While wealthier families can move to better school districts or enroll their children in private schools, smart—yet poor—kids end up getting stuck in "equitable" classrooms that leave them under-stimulated and ignored.

I hope the education system pushes for more students from underrepresented groups to partake in advanced offerings instead of getting rid of them. Ideally, the school system should strive to help all students push their intellectual ability. Moreover, school sports and extracurriculars are great places to push for greater diversity, since they tend to lack underrepresented groups.


That’s basically what the SPS says they are doing, by embedding gifted programs in every classroom rather than separating gifted kids out into different classrooms. We will see how it works, but I don’t see how one teacher is going to handle gifted, normal, and special needs kids all at the same time. It sounds futile to me, but we will see. My kid is just 7 in a north Seattle public K-8, and I’m not sure where he is going to be on that spectrum yet, I’ll re-evaluate around 6th grade to see if I have to put him into private school or not.


I went through the Seattle program back in the days of the dinosaurs. You can “push for” anything you want, but those groups aren’t excluded, and in fact are sought after. A key problem is that nobody wants their child to be the lone minority in a class full of kids they can’t connect with socially.

All this bloviating by commentators is just chickens and pigs all over again. Much as those on the outside would like to wring their hands and describe high-minded ideals, the parents and children who are most affected by the situation are making rational choices.

But the optics are bad, really bad, and the reality on the ground is bad too. Go see who sits with whom in the lunchroom.

