Apologies for the misunderstanding. You said "generalizing from a partial sample of the problem space" and I thought you meant generalisation to unseen data from few examples, which is generally what we would all like to get from machine learning models (but don't).
But, if a neural net can't _extrapolate_ to unseen instances, I don't see how it can solve problems like the one you describe with any useful precision, again unless it's trained on gigantic numbers of examples (which you say is not required). And how does this reduce computational costs compared to hand-coded solvers?
To be clear - I have absolutely no experience in this domain. I'm just speculating.
In the example I gave, everyone agrees that with enough time and processing power you could solve every possible configuration and store the results. Then you could instantaneously "solve" any problem by lookup.
Unfortunately, the problem I describe is a toy problem (too simple to be useful), and yet it would still take way, way too long to solve all the possible configurations.
What if you solved some tiny fraction of the configurations, though? That would be a sampling of the configuration space. A neural network could then use that sampling to interpolate to the cases you haven't solved, which would provide a significant speedup over solving each problem from scratch.
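To make that concrete, here's a toy sketch of what I have in mind (everything here is made up: solve_configuration stands in for whatever expensive exact solver you'd actually have, and I'm just using an off-the-shelf scikit-learn regressor):

    # Hypothetical setup: solve_configuration stands in for an
    # expensive exact solver. Here it's a cheap smooth function
    # just so the example runs end to end.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def solve_configuration(x):
        return np.sin(x).sum()

    rng = np.random.default_rng(0)
    dim = 4                                          # size of a configuration vector
    X_train = rng.uniform(-3, 3, size=(5000, dim))   # tiny sample of the space
    y_train = np.array([solve_configuration(x) for x in X_train])  # pre-solved offline

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    net.fit(X_train, y_train)                        # learn to interpolate the solver

    x_new = rng.uniform(-3, 3, size=(1, dim))        # a configuration never solved
    print(net.predict(x_new))                        # fast approximate "solve"
    print(solve_configuration(x_new[0]))             # slow exact answer, to compare

The expensive part (calling the solver a few thousand times) happens once, offline; after that, every query is just a cheap forward pass through the net.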
So the real question is how densely you need to pre-solve the configuration space to make it work. It definitely depends on what accuracy you need in the solution, as well as how well the interpolation performs. If I said previously that gigantic numbers of examples are not needed, then I misspoke; I am sure they would be needed. "Gigantic" is vague, though - is it the kind of number that can be rented from AWS, or the kind of number that would require civilization-scale resources?
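With the same toy setup as above you could at least probe the density question empirically: re-fit on larger and larger pre-solved samples and watch the error on held-out configurations. On a real, high-dimensional problem I'd expect the error to fall off far more slowly - the curse of dimensionality is exactly the worry here.

    # Same made-up setup as above: vary the sampling density and measure
    # interpolation error on configurations the net has never seen.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def solve_configuration(x):
        return np.sin(x).sum()        # stand-in for the expensive solver

    rng = np.random.default_rng(0)
    dim = 4
    X_test = rng.uniform(-3, 3, size=(1000, dim))
    y_test = np.array([solve_configuration(x) for x in X_test])

    for n in (100, 1000, 10000):
        X = rng.uniform(-3, 3, size=(n, dim))
        y = np.array([solve_configuration(x) for x in X])
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
        err = np.abs(net.predict(X_test) - y_test).mean()
        print(f"{n:6d} pre-solved samples -> mean abs error {err:.4f}")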
I have no idea if the math actually works out to make it a useful approach. All I am saying is that, conceptually, I can see that in some cases it could be possible.
>> So the real question is how densely you need to pre-solve the configuration space to make it work.
Yes, that's the main question. I don't know the answer, of course, but if we're talking about an engineering problem where precision is required then, intuitively, the more the merrier.
The thing is, with neural nets you can do lots of things in principle and many things "in the lab". Taking them into the real world is the tricky bit. Anyway, another poster here is saying we'll see big things in the next five years, so let's hold on to our hats for now.