What depth of context are you expecting to have to use to avoid megamorphisation?
The literature tells us more than a couple of levels of context is likely not tractable. That's not going to be enough, is it?
I'm not an expert in Python, but I am in dynamic analysis of other similar languages. Python lets you redefine methods, doesn't it? How do you accommodate the fact that while your routine is running someone could be redefining all your methods on another thread?
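To make that concern concrete, here is a minimal sketch (the class and names are mine, purely illustrative): one thread calls a method in a loop while the main thread rebinds it on the class. The events only make the interleaving deterministic for the demo; in real code the patch can land at any point.

```python
import threading

class Greeter:
    def greet(self):
        return "hello"

results = []
first_call_done = threading.Event()
patched = threading.Event()

def worker(g):
    results.append(g.greet())   # resolved through the class at call time
    first_call_done.set()
    patched.wait()              # block until the main thread has patched
    results.append(g.greet())   # same call site, new behaviour

g = Greeter()
t = threading.Thread(target=worker, args=(g,))
t.start()
first_call_done.wait()
Greeter.greet = lambda self: "goodbye"   # redefine while the worker runs
patched.set()
t.join()
print(results)   # ['hello', 'goodbye']
```

The same source line observes two different methods within one run, which is exactly what a static call-graph has to either forbid or approximate away.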
Nobody has yet managed to do what you're suggesting. Maybe you've got some new idea that nobody's thought of before?
> But your language semantic model does include redefining methods on a concurrent thread. So you need to handle that.
I don't need to handle that at all if I say that "you can only redefine methods during initial startup which is defined as importing all of the necessary modules". At which point I can run my static analysis tool and it will preserve semantics.
If you're doing meta-programming by redefining methods used in other threads in one thread, then you're clearly redefining the whole idea of your programming language.
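One way to picture that "startup only" contract (this is a hypothetical sketch, not an existing tool): patching is allowed while a flag is set during import, and rejected afterwards, which is precisely the invariant a static analysis could then assume.

```python
_STARTUP = True  # flips to False once all modules are imported

class FrozenMeta(type):
    """Reject method redefinition on instances of this metaclass after startup."""
    def __setattr__(cls, name, value):
        if not _STARTUP:
            raise RuntimeError(
                f"cannot redefine {cls.__name__}.{name} after startup")
        super().__setattr__(name, value)

class Service(metaclass=FrozenMeta):
    def handle(self):
        return "v1"

# During startup (module import time) monkey-patching is allowed:
Service.handle = lambda self: "v2"

_STARTUP = False  # startup over; the analyzer's assumptions now hold

try:
    Service.handle = lambda self: "v3"   # too late: raises RuntimeError
except RuntimeError:
    pass

print(Service().handle())   # v2
```

Under this discipline the analysis can run against the post-import state of every class and still preserve semantics.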
> There's no need to be abusive.
People like you (Ph.D in Ruby) have spent too much time doing their doctorate. You went down the rabbit hole of specialisation and now you think you can generalise your knowledge to everything.
I am going to disappoint you:
a. Python doesn't (practically) do multithreading (CPython's GIL serialises bytecode execution)
b. I think you're doing something really wrong if you're trying to do monkey-patching of methods from one thread to another (I don't care which language you're programming in)
c. Plenty of people in this thread have given links to actual tools resolving the problem you're saying is unsolvable
d. I have an impeccably clear picture of how I am going to solve the given problem - the only reason I have asked the question is because I simply wanted to see prior work on the subject.
Yet you're forcing your opinion, an opinion not backed by a single quote, whitepaper, or URL.
So I am not being abusive.
But to be even more clear: I am not trying to resolve the problem in the general case (it's pretty obvious you can't). I am trying to resolve the problem which fits 99.9% of the scenarios I have to deal with on a daily basis. And sadly there are _no_ tools for that at all.
> I simply wanted to see prior work on the subject. ... not based on a single quote/whitepaper or a URL.
For example start with Olin Shivers' doctoral work on k-CFA, and why he needed a k! And then follow the trail of citations from there forward.
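As a toy illustration of why the k matters (my own example, not from Shivers' dissertation): the program below runs trivially, and the comments trace what a k-CFA analysis sees at each call site.

```python
def identity(x):
    return x

a = identity(1)      # site S1: int flows into x
b = identity("s")    # site S2: str flows into x
# 0-CFA (k = 0) merges S1 and S2: x becomes {int, str}, so a and b are
# both approximated as int | str. 1-CFA keeps one level of call-site
# context and separates them: a is int, b is str.

def wrap(y):
    return identity(y)   # site S3: the only context 1-CFA records here

c = wrap(2)     # with k = 1, c and d both reach identity via S3 and merge
d = wrap("t")   # k = 2 separates them; each extra wrapper demands a larger k
```

Every layer of wrapping pushes the required k up by one, which is where the intractability comes from.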
Then look at David Van Horn's doctoral work on analysis:
> There is, in the worst case—and plausibly, in practice—no way to tame the cost of the analysis.
> The empirically observed intractability of this analysis can be understood as being inherent in the approximation problem being solved, rather than reflecting unfortunate gaps in our programming abilities.
> As the analyzer works, it must use some form of approximation; knowledge must be given up for the sake of computing within bounded resources.
That approximation is the 'Any' I was talking about.
And he's specifically talking about your idea of symbolic execution and why it doesn't work:
> if we place no limit on the resources consumed by the compiler, it can perfectly predict the future—the compiler can simply simulate the running of the program, watching as it goes.
Drop the poetry, please, and give me a single reason why PyCharm's algorithm for inferring types within Python functions should not work, when in practice it works reasonably well.
As a matter of fact, all the examples given in the papers you cite deal with generalised second-order cases of functions accepting functions, which can be resolved by simply allowing said functions to be annotated with types by hand instead of inferring those types.
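For instance (a sketch of my own, not drawn from the papers): once the higher-order parameter carries an explicit `Callable` annotation, the checker is told what `f` accepts and returns, and no context-sensitive inference is needed at all.

```python
from typing import Callable

def apply_twice(f: Callable[[int], int], x: int) -> int:
    # The annotation fixes f's type; no k-CFA needed to analyse this body.
    return f(f(x))

def inc(n: int) -> int:
    return n + 1

result = apply_twice(inc, 3)   # checked against the declared signature
print(result)                  # 5
```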