> I do not believe coding is largely separate from engineering.
I could pick up a cookbook and study how to cook a souffle. I could learn enough about this academically that I could answer basically any questions you might ask. Having this academic knowledge is a good thing, but it doesn't mean I've ever even separated an egg. If you really want to know if I can make a souffle, your best bet is to hand me the stuff and ask me to do it.
> Here I disagree. If you have never used a framework or API, you don't know it. I don't care that you can regurgitate the Javadocs; I care that you know things like its pitfalls, quirks, and runtime oddities.
And here I assert that you're either probing on trivia or probing on things that can be learned without using the framework (actually, probably both). The pitfalls, quirks, and oddities are well documented in thousands of blogs.
> To be clear, I abhor trivia-based interviewing. What I am talking about is practical engineering tradeoffs between one course of action versus another, preferably supported by direct personal experience in the past.
So this is an entirely different thing than pitfalls. Now you're talking about designing systems and dealing with tradeoffs. This isn't coding, and I don't believe you can always discover coding gaps by probing on this.
> Exposing the level of competence a five-minute coding problem can is trivial to do in parallel with a deep engineering discussion. In fact, I think some sort of code should be part of that discussion.
I guess I'm confused about what you're asking in an interview, then. Are your candidates coding or not?
> It should be something two-way, more representative of what the job is like on a daily basis.
I think it's unrealistic to try to get the candidate to solve real-world, day-to-day problems in an hour. This isn't what the job is like. "Design and implement this feature" is not a 45-minute task typically. It's typically days or weeks, so asking a "representative" problem in an interview is infeasible unless you're just doing high-level design, in which case it's not predictive for coding ability.
I think I'm not being clear and may be misunderstanding you.
When I was writing signal processing code, we had four levels of engineering.
- The first was an Algorithm Description Document. This document laid out, in mathematical terms, the algorithms used for various signal processing functions in the system. It was purely conceptual.
- The second was an Algorithm Implementation Document. This document mapped the algorithms to specific parts of the hardware and software system, laying out the logical module structure and data flow. It also specified how the algorithms would be realized in code, since parts of a particular algorithm might need to run in different parts of the system. The AID was still primarily mathematical.
- The third was a series of software design documents. These documents specified the details of module interfaces, code and file layouts, and specifics about what algorithms would be implemented in what functions.
- The fourth was the actual translation of the mathematical algorithms in the AID into actual C code. Most of the engineering at this level dealt with function-level optimization and some platform-specific stuff.
Of those four levels, only the last is what I would consider "coding". It is also the least important. Anyone who can really understand the first three levels and is willing to learn can handle the fourth. As I mentioned previously, there will be differences in efficiency, but not in capability. It would take willful ignorance for this not to be the case. I think this is also what you are considering "coding", but I'm not sure; I also think you might be merging #3 and #4.
Now, most commercial projects do not run this way and, as far as I know, few have dedicated systems engineers to manage the system architecture and APIs. Thus, the software engineers building the system generally perform the work in all four of these levels simultaneously as the system builds out. Despite this, I still only consider the work that fits level 4 to be "coding", and still consider it to be the least important.
What I look for when I interview are engineers who operate very well in levels 1 - 3. If I find that, level 4 is pretty much a given, absent the rare pathological case or someone with a very "academic" attitude. The questions I personally ask deal with levels 2 and 3. Those questions cover some of what might be considered coding, like API design and class layout. I do not have candidates actually generate code, though. Some of the other interviewers do have candidates write small functions in pseudocode or explain existing, uncommented Java code.
I also need to note that we have a take-home test that candidates need to pass before they get an on-site. We have also recently added an open coding challenge which may eventually replace the take-home test. This probably covers your requirement for candidates to code, even though it isn't done in front of one of us.
Yes, I'm generally looking for someone who can do all of those things (though I don't work in DSP, so it's not an exact mapping). I would say that at least part of #3 is coding. If you're actually defining code layout etc., you are basically coding. If you dig deep enough here, you can probably rule out the vast majority of people who can't do #4. I still think it's a more efficient use of time to just ask them to do a bit of coding, though. Then I have certainty on the question rather than an implicit answer.