> I simply wanted to see prior work on the subject. ... not based on a single quote/whitepaper or a URL.
For example, start with Olin Shivers' doctoral work on k-CFA, and why he needed a k at all. Then follow the trail of citations forward from there.
Then look at David Van Horn's doctoral work on analysis:
> There is, in the worst case—and plausibly, in practice—no way to tame the cost of the analysis.
> The empirically observed intractability of this analysis can be understood as being inherent in the approximation problem being solved, rather than reflecting unfortunate gaps in our programming abilities.
> As the analyzer works, it must use some form of approximation; knowledge must be given up for the sake of computing within bounded resources.
That approximation is the 'Any' I was talking about.
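To make that concrete, here is a toy sketch of the approximation step the quote describes: an abstract interpreter joining the types a variable can take across branches, collapsing to Any when they disagree. The `ANY` sentinel and the type strings are my own illustrative names, not any real tool's API.

```python
# Toy "join" from abstract interpretation: when the analysis cannot
# keep two facts apart within its resource budget, their least upper
# bound collapses to Any -- knowledge given up, as the quote says.
ANY = "Any"

def join(t1: str, t2: str) -> str:
    """Least upper bound of two inferred types."""
    return t1 if t1 == t2 else ANY

def infer_var(branch_types: list[str]) -> str:
    """Type of a variable assigned in several branches."""
    result = branch_types[0]
    for t in branch_types[1:]:
        result = join(result, t)
    return result

print(infer_var(["int", "int"]))   # int
print(infer_var(["int", "str"]))   # Any
```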
And he's specifically talking about your idea of symbolic execution, and why it doesn't work:
> if we place no limit on the resources consumed by the compiler, it can perfectly predict the future—the compiler can simply simulate the running of the program, watching as it goes.
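The point of that quote can be made concrete with a hypothetical function of my own invention: to report an exact (non-Any) type for `x` below, an analyzer would have to decide whether the loop terminates, i.e. simulate the program.

```python
# To know the exact type of the return value, the analyzer must know
# whether the line `x = "done"` is reachable -- which for this loop
# (the Collatz iteration) means deciding a termination question nobody
# has settled in general.
def mystery(n: int) -> object:
    x: object = None
    while n != 1:                       # does this halt for all n?
        n = 3 * n + 1 if n % 2 else n // 2
    x = "done"                          # reached only if the loop halts
    return x
```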
Get rid of that poetry, please, and give me a single reason why PyCharm's algorithm for inferring types inside Python functions should not work, when in practice it works reasonably well.
As a matter of fact, all the examples in the papers you cite deal with the generalised second-order case of functions accepting functions, which can be resolved simply by allowing those functions to be annotated with types by hand instead of inferring them.
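A minimal sketch of that resolution, with names of my own choosing: the unannotated higher-order function forces an analyzer to trace which callables flow in from every call site (the k-CFA-style search), while a hand-written signature lets each call site be checked locally.

```python
from typing import Callable, TypeVar

T = TypeVar("T")
R = TypeVar("R")

# Unannotated: to type `apply`, an analyzer must track every `f` that
# flows in from every call site -- the search that blows up.
def apply(f, x):
    return f(x)

# Hand-annotated: the signature cuts the search off. Each call site is
# checked locally against Callable[[T], R]; no global analysis needed.
def apply_typed(f: Callable[[T], R], x: T) -> R:
    return f(x)

print(apply_typed(len, "abc"))   # 3
```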