That is normal on his blog. He is a brand that he has developed over many years, and he is constantly promoting that brand.
Yes, he has done a lot of good work in the past, but he has put as much effort into self-promotion and landed a series of interesting and well-paying gigs.
I can't blame him for that. It just makes me tired to watch.
What the OP was pointing out is two typical tells for lazy ChatGPT-generated text, right in the intro: the em-dash, and the "it's not just X, it's Y" construction.
Of course that kind of heuristic can have false positives, and not every accusation of AI-written content on HN is correct. But given how much stuff Gregg has written over the years, it's easy to spot-check a few previous posts. This clearly isn't his normal style of writing.
Once we know this blog was generated by a chatbot, why would the reader care about any of it? Was there a Mia, or did the prompt ask for a humanizing anecdote? Basically, show us the prompt rather than the slop.
The funny part is that the Tesla doesn't have a basic dumb cruise control. It only has "traffic-aware" cruise control which uses the cameras and doesn't use the new FSD code so it has lots of glitches and phantom braking and decides for itself what speed to use. My wife just wants to set the speed and have the car go that speed.
Do any cars with assisted cruise control have the ability to step down to dumb cruise control? (I know my car doesn't - the lower trims have dumb cruise control, and you lose that in favour of assisted if you bump up to a higher trim.)
This kind of work is really valuable and much needed. There is often a surprising amount of friction in making contributions to open source projects, and this article did a good job of highlighting some of those difficulties. And improvements to the official documentation are so much better than standalone blog posts that explain confusing concepts.
So, thank you Julia & Marie for persevering and making a solid contribution.
Indeed. When I entered "grandma's cookies" into the shared family instance of 'mealie', I was sure to include a copy of the "original" index card. (Surely a copy.)
https://imgur.com/KMSuUhz
That card has several fun comments and a lot of history for my siblings, who added to its crustiness over the years.
The original card, as I remember it, said to use lard, or margarine if you "haven't slaughtered a hog recently".
The recipe in mealie was modernized and tested more.
Personally I am not sure the live search on this new page saves me time, but it could be useful if you added the ability to show only missing features. For example, if I could specify that I am interested in C++23 and earlier and that I use gcc-14 and clang-16, it would list the features that won't work for me. That would be much more useful than scanning the full list.
Hi, the live search is for developers who want to quickly look up support for a particular feature. A "missing features" view with filtering by compiler versions is something I'm currently working on. Any suggestions are welcome, and thanks for your feedback.
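The requested filter boils down to a simple predicate over a support table. A minimal sketch in Python, where the feature names, version numbers, and the `missing_features` helper are all hypothetical illustrations (not the site's actual data model):

```python
# Hypothetical support table: for each feature, the first compiler
# version that supports it. All entries are made up for illustration.
FEATURES = {
    "deducing this":      {"gcc": 14, "clang": 18},
    "if consteval":       {"gcc": 12, "clang": 14},
    "std::expected":      {"gcc": 12, "clang": 16},
    "multidim subscript": {"gcc": 12, "clang": 15},
}

def missing_features(features, gcc=None, clang=None):
    """Return the features NOT usable with the given compiler versions."""
    missing = []
    for name, support in features.items():
        ok = True
        if gcc is not None:
            # Supported only if gcc knows the feature by the given version.
            ok = ok and support.get("gcc") is not None and support["gcc"] <= gcc
        if clang is not None:
            ok = ok and support.get("clang") is not None and support["clang"] <= clang
        if not ok:
            missing.append(name)
    return sorted(missing)

# A user on gcc-14 and clang-16 would see only the features that
# fail on at least one of their compilers.
print(missing_features(FEATURES, gcc=14, clang=16))
```

The same predicate inverted ("supported by all selected compilers") gives the complementary "works for me" view.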
Dataview can generate all of the same data as Bases (and a whole lot more), but Bases is much easier to use: you build queries in the GUI, and the results come out in a nicely formatted table where you can edit fields directly, rather than opening each item one by one to make changes.
And even after you make changes in the GUI, the query is still stored as nicely formatted, editable YAML.
The important thing to understand about AVX-512 is that the big deal is not the width. AVX-512 adds new instructions and new instruction encodings: it doubles the number of vector registers (16 -> 32) and adds mask registers that remove the special-case tail loop needed when an array's length is not a multiple of the vector width. And there are piles of new permutation and integer operations that make it useful in more cases.
The part Intel struggles with is that in many cases, if they had kept a 256-bit maximum width but added all the new operations, they could have built a machine faster than the 512-bit version (assuming the same code was written for both vector widths), because the ALUs could run faster and you could fit more of them.
I was pleased to read such contagious excitement, from someone who can still see the wonder of the American West and the thrill of trying new experiences.