Some editing by reply follows, since the editor just ate a lot of my work (perhaps my edit straddled the window after posting during which edits are allowed):
The identification of programs as well or badly coded can take place outside the OS, and at a very different time and place, much as antivirus programs work. A bloat-identifying program can do the analysis, judge the programs, and provide identifying bit sequences or hashes (if necessary to prevent evasion) to the OS, which can then use that information to police who gets what priority.
Other efficiency evaluation, including of script behavior, could be done either in nearly real time or over days or months, by the OS or by another program feeding data to it.
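A minimal sketch of the long-horizon version, assuming the monitor samples something like CPU-seconds per unit of work and smooths it with an exponentially weighted moving average (the class name, metric, and smoothing choice are all my assumptions, not the comment's):

```python
class EfficiencyMonitor:
    """Accumulate usage samples for a program over time (hours to
    months) and keep an exponentially weighted moving average the
    OS could consult when assigning priority."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # weight given to the newest sample
        self.score = None    # EWMA of cpu_seconds per task

    def record(self, cpu_seconds_per_task):
        """Fold one new sample into the running score and return it."""
        if self.score is None:
            self.score = cpu_seconds_per_task
        else:
            self.score = (self.alpha * cpu_seconds_per_task
                          + (1 - self.alpha) * self.score)
        return self.score
```

An EWMA fits the "days or months" framing because it needs only constant storage per program while still letting a recently improved release gradually earn back a better score.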
Some of this analysis would be straightforward, looking for known bad patterns; other analyses might require AI, neural nets, or the sort of combination (AI flags, humans confirm/disconfirm) that makes Palantir hum.
Note that this doesn't just produce an immediate effect of shielding good programs from the sluggishness of bad ones. The competition it sets up becomes something of a marketplace promoting efficiency that keeps growing more demanding, a competition in which efficiency once again makes you look very good to your customers, as it did when users could only run one program at a time. Back then, we knew who was feeding us garbage code and who wasn't. This invention lets us get back to something like that standard while keeping our shiny new hardware and multitasking, too.
Who knows, one edit at a time, I may have put a unicorn patent into the comments section of Hacker News as the idea occurred to me. Sliding down a slippery slope to publication.