See the context I added to that comment; this is not about security issues, it's about the Linux CNA's absurd approach to CVE assignment for things that aren't CVEs.
I don't agree that it's absurd. I would say it reflects a proper understanding of their situation.
You've doubtless heard Tony Hoare's line: "There are two ways to write code: write code so simple there are obviously no bugs in it, or write code so complex that there are no obvious bugs in it." Linux is definitely in the latter category; it's now such a sprawling system that determining whether a bug "really" has security implications is no longer a reasonable task compared to just fixing the bug.
The other reason is that Linux is so widely used that almost no assumption you might make to simplify the task above is guaranteed to hold.
I like CVEs, and I think Linux's approach to CVEs is stupid, but it was also never meaningful to compare CVE counts. I guess it's hard to make people stop doing that, though, and that's why Linux does what it does, partly out of spite.
Basically the framework, like Privacy Shield before it, is the Commission trying to show "look, we fixed it".
Sadly, both previous times the ECJ pointed out after the fact that no framework can fix the lack of a data privacy law in the US, and that, as such, the Shield, just like its predecessor, did not allow what it claimed to.
The Framework has not been tested before the ECJ so far, but the US has not significantly altered its laws, so...
Thanks. So basically the new framework hasn't accomplished anything that can be relied upon when architecting a system to reduce the risk of GDPR compliance issues?
Can I recommend looking at the research on experts at work, and on how they manage to do better than they "should" be able to? There are tons of domains looking at this empirically :)
There is work in progress in the high-level inquiry that should produce most of it. Beyond that, you can check the appeal court reports, which are not too bad.
A lot of the cases were handled in the "Post Office courts" (I don't remember the correct name; I'm not British), and there are more or less no records of them. So you need to ask affected people to come forward, find ways to validate their cases, then build a case for quashing the convictions. It is a total FUBAR mess.
Private Eye definitely did a lot of work, but they were not the only ones; multiple freelancers and journalists worked on it "in the shadows" for years.
The Post Office's tactics of calling editors to make threats and play down the stories were also quite influential.
Note that some of this stuff already exists in the compiler for AOT. Using these new specs for AOT optimisations is going to be a far taller order than catching some of the errors.
The problem is that your search space grows really fast, and compilers are really not built to extract that information in a few ms.
Even less so to regenerate only part of it based on partial input, and even less so when the input may not be syntactically correct.
Also, keeping that linkage intact is exactly what this post talks about. How to preserve it through the different steps and transformations in your pipeline, in a way adapted to the kinds of queries you are going to need, is... actually hard, and dependent on the query.
Which means that adding new features to your IDE would regularly need (and actually does need) a new way to store and query that data.
But yes, reusing parts of the Rust compiler (or replacing some of them) in rust-analyzer is already something that happens and that maintainers work on.
It is just not that easy. But yes, C# and Roslyn in general were built with that in mind. TypeScript too.
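To make the "store and query" point concrete, here is a minimal sketch of the revision-based memoization idea behind rust-analyzer's query engine (salsa). All names (`Db`, `set_input`, `line_count`) are illustrative, not the real salsa API, and real salsa tracks per-query dependencies instead of the single global revision used here:

```rust
use std::collections::HashMap;

// Toy "database": inputs plus memoized derived queries.
struct Db {
    revision: u64,
    inputs: HashMap<&'static str, String>,
    // Cached query result together with the revision it was computed at.
    line_count_cache: HashMap<&'static str, (usize, u64)>,
}

impl Db {
    fn new() -> Self {
        Db { revision: 0, inputs: HashMap::new(), line_count_cache: HashMap::new() }
    }

    // Changing an input bumps the revision; caches become stale lazily,
    // nothing is recomputed until someone actually asks.
    fn set_input(&mut self, file: &'static str, text: String) {
        self.revision += 1;
        self.inputs.insert(file, text);
    }

    // A derived query: only recomputed when the cached value is stale.
    fn line_count(&mut self, file: &'static str) -> usize {
        if let Some(&(cached, rev)) = self.line_count_cache.get(file) {
            if rev == self.revision {
                return cached; // cache hit, no recomputation
            }
        }
        let n = self.inputs[file].lines().count();
        self.line_count_cache.insert(file, (n, self.revision));
        n
    }
}

fn main() {
    let mut db = Db::new();
    db.set_input("main.rs", "fn main() {}\n".to_string());
    assert_eq!(db.line_count("main.rs"), 1);
    // Edit the file: the query recomputes only because the revision moved.
    db.set_input("main.rs", "fn main() {\n}\n".to_string());
    assert_eq!(db.line_count("main.rs"), 2);
}
```

The point of the sketch is the shape of the problem: each new IDE feature is a new derived query, and you have to decide what to cache and how to invalidate it per query, which is why adding features keeps requiring new ways to store and query the data.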
If you are interested in writing down more of the problems you have with scripting languages, feel free to shoot me an email. It should be in my profile.
I have been slowly working on a model of the problems I see in this domain and my own ideas to "fix" them (or at least try to), so I would love to see other perspectives.