> I’d like to sit down all university professors who teach compiler courses and teach them a course on what’s relevant.
I don't teach compiler courses, but I'm an academic, and I do keep in touch with industry people to find out what's important for my data analysis course. A couple of big problems with your proposal:
1. Time variation in "what industry wants". This year one thing's hot, the next year it's something else. A bare minimum requirement for any course is that a student taking it in their second year should learn something that's still relevant three years later when they're in the first year of their job.
2. Cross-sectional variation. There's no such thing as "what industry wants". Every place wants something different, with only a small subset common to 80% of employers.
My two cents - what's useful for employers is giving students the confidence to jump into the code and figure it out. By this measure, what's relevant in a compilers course isn't the material itself. It's that the course is structured to force students to figure certain things out for themselves.
There are college hires who need to be spoon fed, and college hires who just need mentorship and direction. Group two are invaluable.
Andy Pavlo of CMU DB fame echoed this in one of the intro video lectures to one of his courses. He said his contacts in industry valued people who can navigate a large codebase productively. Most university courses seem to involve writing a "toy" solution from scratch, which is seldom the case in professional programming.
He designed the course to be an exercise in working in an existing codebase that one needs to understand before adding features.
I guess that is one of the many reasons CMU grads kick open the door into many companies.
I am currently in this class and can confirm that navigating the codebase is a nightmare. The specifications we are given in the relevant assignments are vague or wrong and the code itself is pretty questionable. I spend more time trying to understand the assignment and current code than actually writing anything myself. If that's what he's going for, I guess he succeeded.
Sounds like you got yourself a high quality code base!
It's normal for a code base to be over a decade old, during which time it was worked on by a team of a dozen positions, each of which turned over on average every 3 years. So now you've had a few dozen people, each with their own ideas of what's a clear way to express a problem, trying to solve evolving business requirements under tight deadlines. And there's a 50/50 chance the project originally started as a 'throwaway' proof of concept.
> Most uni courses seem to involve writing a "toy" solution from scratch, which is seldom the case in professional programming.
Without writing a "toy" from scratch, one would never understand professional programming or codebases either. They wouldn't understand why those are oftentimes poorly designed, full of bugs, etc. Retracing your own "from scratch" steps, with self-reflection, is the best way to prepare for "professional" programming. First make and find mistakes yourself; only then will you be ready to find mistakes in, and improve on, other people's work.
>>Without writing a "toy" from scratch, one would never understand professional programming or codebases either. They wouldn't understand why those are oftentimes poorly designed, full of bugs, etc. Retracing your own "from scratch" steps, with self-reflection, is the best way to prepare for "professional" programming.
I disagree. All the toy-from-scratch exercise provides is a way to get the student from zero to a poorly designed solution during the course, which they can forget ever existed as soon as the grade is in.
Meanwhile, you get no experience onboarding onto an unfamiliar codebase, finding and fixing a bug in it, implementing a feature, or refactoring it to reduce technical debt according to some criteria. And that's at least three quarters of a developer's work.
ibains made a specific point about language processing in IDEs being different to that of (traditional) compilers. Presumably there exists a state-of-the-art there just as there's a state-of-the-art for conventional compilers, but academia doesn't give it as much attention.
> 2. Cross-sectional variation. There's no such thing as "what industry wants". Every place wants something different, with only a small subset common to 80% of employers.
I don't think that really holds up. Ultimately you could say this about any small mismatch between desired skill-sets and those available, but at some point we call it hair-splitting.
If you need someone to build a parser for your IDE, you're presumably better off with an expert in traditional compilers, than with someone who knows nothing at all of language processing. You'd be better off still with someone experienced in language processing within IDEs. Less mismatch is better, even if it's impractical to insist on no mismatch at all.
Your IDE parser will be unusable if it goes bananas while you're typing the characters needed to get from one fully, correctly parseable state to the next.
It needs to be able to handle:

    printf("hello");

and also:

    prin

and also:

    printf("He
It also needs to be able to autocomplete function signatures that exist below the current line being edited, so the parser can't simply bail out as soon as it reaches the first incomplete or incorrect line.
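To make that concrete, here's a minimal sketch of the classic "panic mode" trick (entirely my own toy code, with hypothetical token names, not taken from any real IDE): on a parse error, skip ahead to a likely statement boundary and resume, so everything below the broken line still gets parsed:

    /* Toy panic-mode recovery: on error, sync to ';' and keep parsing,
       so statements after the broken line are still seen. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { TOK_IDENT, TOK_LPAREN, TOK_RPAREN, TOK_SEMI, TOK_EOF } TokKind;
    typedef struct { TokKind kind; const char *text; } Token;

    /* Fake token stream standing in for a lexer: "printf(" left
       incomplete mid-edit, followed by a well-formed "foo();". */
    static Token toks[] = {
        { TOK_IDENT, "printf" }, { TOK_LPAREN, "(" }, { TOK_SEMI, ";" },
        { TOK_IDENT, "foo" }, { TOK_LPAREN, "(" },
        { TOK_RPAREN, ")" }, { TOK_SEMI, ";" },
        { TOK_EOF, "" },
    };
    static int pos = 0;

    static Token peek(void)    { return toks[pos]; }
    static void  advance(void) { if (toks[pos].kind != TOK_EOF) pos++; }

    /* A "statement" here is just IDENT '(' ')' ';'. */
    static bool parse_stmt(void) {
        if (peek().kind != TOK_IDENT) return false;
        printf("saw call to: %s\n", peek().text);
        advance();
        if (peek().kind != TOK_LPAREN) return false;
        advance();
        if (peek().kind != TOK_RPAREN) return false;   /* incomplete input lands here */
        advance();
        if (peek().kind != TOK_SEMI) return false;
        advance();
        return true;
    }

    int main(void) {
        while (peek().kind != TOK_EOF) {
            if (parse_stmt()) continue;
            printf("error: incomplete statement, recovering\n");
            /* Panic mode: skip to the next ';' and resume parsing. */
            while (peek().kind != TOK_SEMI && peek().kind != TOK_EOF)
                advance();
            if (peek().kind == TOK_SEMI) advance();
        }
        return 0;
    }

Without the recovery loop in main, the parser would stop at the incomplete printf and never see foo, so there would be nothing to autocomplete from below that point.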
> Good error recovery / line blaming is still an active field of development.
True. But let's get the terminology straight: that's not compiler science, that's parsing science. And it's no more compiler science than parsing a natural language is.
What terminology are you talking about? Neither "compiler science" nor "parsing science" are terms I used, or that the industry or academia use.
Parsing - formal theory like taxonomies of grammars, and practical concerns like speed and error recovery - remains a core part of compiler design both inside and outside of academia.
How can you be sure that a given } is the end of a particular block? This most importantly affects scoping, and in many cases it's ambiguous. IDEs do have rich metadata beyond the source code itself, but then parsers need to be aware of it.
Maybe my wording is not accurate, imagine the following (not necessarily idiomatic) C code:
    int main() {
        int x;
        {
            int x[];
        // <-- caret here
        x += 42;
    }
This code doesn't compile, so the IDE tries to produce a partial AST. A naive approach will match the closing } with the second {, so `x += 42;` ends up inside the inner scope (where x is an array) and causes a type error. But as is noticeable from the indentation, it is more plausible that there was, or will be, a } matching the second { at the caret position, and that `x += 42;` refers to the outer scope.
Yes, of course parsers can account for the indentation in this case. But more generally this kind of parsing is sensitive to the sequence of edits, not just the current code. This makes incremental parsing a much different problem from ordinary parsing, and is also likely why ibains and folks use packrat parsing (which can be easily made incremental). A toy version of the indentation heuristic is sketched below.
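For illustration, here's that heuristic in miniature (my own sketch, assuming 4-space indents, braces at line ends, and no real tokenization - real IDEs do far more):

    /* Toy brace repair: if a line is indented less than the current
       brace depth implies, guess the enclosing block was meant to
       close before it. Assumes a 4-space indent unit. */
    #include <stdio.h>
    #include <string.h>

    static int indent_of(const char *line) {
        int n = 0;
        while (line[n] == ' ') n++;
        return n / 4;                      /* assumed 4-space indent unit */
    }

    int main(void) {
        const char *lines[] = {
            "int main() {",
            "    int x;",
            "    {",
            "        int x[];",
            "    x += 42;",                /* indent says: inner block closed */
            "}",
        };
        int depth = 0;
        for (size_t i = 0; i < sizeof lines / sizeof *lines; i++) {
            int ind = indent_of(lines[i]);
            while (ind < depth && !strchr(lines[i], '}')) {
                printf("guess: missing '}' before line %zu\n", i + 1);
                depth--;                   /* pretend the block closed here */
            }
            if (strchr(lines[i], '{')) depth++;
            if (strchr(lines[i], '}')) depth--;
        }
        return 0;
    }

Run on the snippet above, it guesses the missing } exactly at the caret position, recovering the intended outer-scope reading of `x += 42;`.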
Partial parse state and recovery are critical. You don't want the entire bottom half of a file to lose semantic analysis while the programmer figures out what to put before a closing ).
Packrat parsers are notably faster than recursive descent parsers (also critical for IDE use) and by turning them "inside out" (replacing their memoization with dynamic programming) you get a pika parser which has very good recovery ability.
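For the unfamiliar, the core packrat idea fits in a few lines: memoize each (rule, position) result so that PEG ordered choice with backtracking stays linear time. A minimal sketch of mine, with a made-up two-rule grammar (nothing like a production engine):

    /* Packrat in miniature:  Expr <- Num '+' Expr / Num ;  Num <- [0-9]
       Each parse function returns the position after a match, or -1.
       A full packrat parser would memoize Expr too; memoizing Num is
       enough to show the mechanism. */
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    #define MAXLEN 64
    static const char *input;
    static int memo_num[MAXLEN + 1];       /* -2 means "not yet computed" */

    static int parse_num(int pos) {
        if (memo_num[pos] != -2) return memo_num[pos];   /* memo hit */
        int r = isdigit((unsigned char)input[pos]) ? pos + 1 : -1;
        return memo_num[pos] = r;
    }

    static int parse_expr(int pos) {
        int p = parse_num(pos);            /* alternative 1: Num '+' Expr */
        if (p >= 0 && input[p] == '+') {
            int q = parse_expr(p + 1);
            if (q >= 0) return q;
        }
        return parse_num(pos);             /* alternative 2: Num (memo hit) */
    }

    int main(void) {
        input = "1+2+3";                   /* must fit in MAXLEN */
        for (int i = 0; i <= MAXLEN; i++) memo_num[i] = -2;
        printf(parse_expr(0) == (int)strlen(input) ? "parsed\n" : "failed\n");
        return 0;
    }

The fallback to alternative 2 in parse_expr hits the memo table instead of re-scanning; that's the whole trick. A pika parser keeps the same kind of table but fills it bottom-up, right to left, which is where the recovery ability mentioned above comes from.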
There are varying techniques to improve error recovery for all forms of parsing, but hacked-up recursive descent parsers (certainly the most common kind I still write for my hacked-up DSLs!) have poor error recovery unless you put in the work. Most LR parsers are also awful by default.
When I was in university, most focus was on LL and LR parsers, with no discussion of error recovery and more focus on memory usage/bounds than speed. I also have no idea how common it is to teach combinator-based parser grammars these days; tools like ANTLR and yacc dominated during my studies. This would add another level of unfamiliarity for students going to work on a "real compiler".
> Packrat parsers are notably faster than recursive descent parsers
I think this needs to be qualified. I don't think a packrat parser is going to beat a deterministic top-down parser. Maybe the packrat parser will win if the recursive descent parser is backtracking.
True - the performance of a top-down parser is going to depend on if/how often it backtracks and how much lookahead it requires. But this requires control over your grammar, which you might not have, e.g. when you are parsing a standardized programming language. Practically speaking, unless you want two separate parsing engines in your IDE, the fact that Lisps are actually LL(1) and Python and Java are "almost LL(1)" doesn't get you any closer to parsing C++.
Packrat parsers are equivalent to arbitrary lookahead in either LL or LR grammars.
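The LL side of that claim is easy to see with a toy grammar where the correct choice depends on the very last character (my own minimal sketch, hypothetical grammar):

    /* S <- A 'x' / A 'y' ;  A <- 'a' A / 'a'  (PEG, greedy A)
       Choosing between S's alternatives needs unbounded lookahead for
       an LL(k) parser; PEG ordered choice just retries from the start. */
    #include <stdio.h>
    #include <string.h>

    static const char *in;

    static int parse_A(int pos) {          /* A <- 'a' A / 'a' */
        if (in[pos] != 'a') return -1;
        int p = parse_A(pos + 1);
        return p >= 0 ? p : pos + 1;
    }

    static int parse_S(int pos) {          /* S <- A 'x' / A 'y' */
        int p = parse_A(pos);
        if (p >= 0 && in[p] == 'x') return p + 1;
        p = parse_A(pos);                  /* backtrack: try alternative 2 */
        return (p >= 0 && in[p] == 'y') ? p + 1 : -1;
    }

    int main(void) {
        in = "aaaaay";                     /* decision hangs on the last char */
        printf(parse_S(0) == (int)strlen(in) ? "matched\n" : "no match\n");
        return 0;
    }

No LL(k) parser can pick between S's alternatives for any fixed k, since both can begin with arbitrarily many a's. Ordered choice just backtracks, and a packrat memo table turns the second parse_A call into a lookup instead of a re-parse - arbitrary lookahead at linear cost.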
Tree-sitter is based on LR parsing (see 23:30 in the above video), extended to GLR parsing (see 38:30).
I've had enough of fools on HN posting unverified crap to make themselves feel cool and knowledgeable (and don't kid yourself that you helped me find the right answer by posting the wrong one). Time I withdrew. Goodbye HN.
There is no such variance and no new fashion in compilers every year. There are hardware compilers, and tooling such as language servers, primarily. The move to the cloud requires source-to-source compilers for data and transforms.
> There is no such variance and new fashions in compilers every year.
Indeed. Packrat is exactly an example of a passing fad in parsing. I find it telling that you, an "optimizations expert", are so worried about ways to parse. It shows that "new fashions every year" have reached the compiler industry too.