Ah sorry, I guess I was generalizing from what I was taught and what my teachers (at Université de Montréal) were doing back then :). Weren't reasoning/expert systems usually based on trying to model the human thought process? At least early on? I might be totally wrong.
> Weren't reasoning/expert systems usually based on trying to model the human thought process?
Yes, in the sense of using rules / heuristics in the way that human experts were believed to. One classic architecture involved a blackboard of facts. Rules were triggered when the facts matched their preconditions and could update the blackboard with new facts, and so on. The rules looked like a mass of if-then statements, but the order in which they fired was driven by the contents of the knowledge base and the behaviour of the inference engine, not by the order they were written in.
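A minimal sketch of that architecture, in Python rather than anything period-accurate (the `make_rule` / `run_engine` names and the fact strings are invented here for illustration): rules fire whenever their preconditions are all present on the blackboard, and firing order emerges from the data, not from the program text.

```python
# Toy forward-chaining "blackboard" engine: facts are strings,
# rules fire when all their preconditions are present.

def make_rule(name, preconditions, conclusions):
    """A rule fires when every precondition is on the blackboard,
    adding its conclusions as new facts."""
    return {"name": name, "pre": set(preconditions), "post": set(conclusions)}

def run_engine(rules, blackboard):
    """Repeatedly fire any rule whose preconditions match, until no
    rule can add anything new. Note there is no explicit control flow:
    which rule fires next depends only on the current facts."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule["pre"] <= blackboard and not rule["post"] <= blackboard:
                blackboard |= rule["post"]
                changed = True
    return blackboard

rules = [
    make_rule("birds-fly", {"bird(tweety)"}, {"can_fly(tweety)"}),
    make_rule("flyers-need-space", {"can_fly(tweety)"},
              {"aviary_needs(open_flight_space)"}),
]
facts = run_engine(rules, {"bird(tweety)"})
print(sorted(facts))
```

Real systems added a conflict-resolution strategy to pick among simultaneously eligible rules; this sketch just takes them in list order.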
In my experience, once you reached a certain number of rules / level of complexity, it became harder and harder to add new rules, and the lack of traditional programmatic approaches to structuring and control undermined the purely 'knowledge-based' approach. As a traditional programmer myself (in Lisp), I increasingly ran into situations where I just wanted to call a proper function.

There were also more theoretical issues, such as non-monotonic reasoning: you discover that a previously asserted fact was misleading or incorrect, and you need to retract the assertions derived from it. The comedy example is where you know that Tweety is a bird and use rules to design an aviary for him, then discover that Tweety is a penguin, so a completely different habitat is required. There were also comedy examples where people used a medical expert system to diagnose their car's problems and it determined that the rust was a bad case of measles.
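The retraction problem can be sketched as follows. This is a toy version of the idea behind truth-maintenance systems, not any particular system's API (the `KB` class and its methods are invented here): each derived fact records which facts justified it, so retracting a premise also retracts everything built on it.

```python
# Toy non-monotonic knowledge base: derived facts remember their
# justifications, so retracting a premise cascades to its dependents.

class KB:
    def __init__(self):
        # fact -> set of justifying facts (empty set = asserted directly)
        self.facts = {}

    def assert_fact(self, fact, because=()):
        self.facts[fact] = set(because)

    def retract(self, fact):
        """Remove a fact, then recursively remove every fact that
        was justified by it."""
        self.facts.pop(fact, None)
        dependents = [f for f, just in self.facts.items() if fact in just]
        for d in dependents:
            self.retract(d)

kb = KB()
kb.assert_fact("bird(tweety)")
kb.assert_fact("can_fly(tweety)", because=["bird(tweety)"])
kb.assert_fact("aviary(open_flight_space)", because=["can_fly(tweety)"])

# Tweety turns out to be a penguin: the flight assumption, and the
# aviary design that depended on it, both have to go.
kb.retract("can_fly(tweety)")
print(sorted(kb.facts))
```

In a real system you would also have to handle facts with multiple independent justifications (retract only when the last one goes), which is where the bookkeeping gets genuinely hard.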
I think it did lead to improved understanding of mathematical logic-based systems, but didn't feed back into an understanding of human cognition.