I had some light contact with Prolog long ago during my studies - I have a rough idea of how it is used and what it can be useful for, but only at a surface level, nothing deep. Since then I keep hearing about Datalog as some amazing thing, but I can't seem to understand what it is - i.e. to grasp the answer to a simple question:
what is it that Datalog improves over Prolog?
Just now I tried to skim the Wikipedia page on Datalog; the vague theory I'm getting from it is that maybe Prolog has relatively poor performance, whereas Datalog dramatically improves performance (presumably allowing much bigger datasets and much more parallelized processing), at the cost of reduced expressiveness and missing features in some other important ways (including making it no longer Turing-complete)? Is that what it's about, or am I completely missing the mark?
From what I know, Prolog looks declarative, in the sense that you just encode relations and it figures out the answers, but in practice the results depend on the order of the rules, and on extra instructions like "cut", which not only prunes wasted computation but can also change the answers you get (see the sketch below).
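A minimal sketch of that last point, using the classic textbook max/3 (my own example, nothing specific from the thread): the cut makes the program depend on how it is called, which a purely declarative reading would not.

    % max/3 with a cut: commit to the first clause once X >= Y succeeds.
    max(X, Y, X) :- X >= Y, !.
    max(X, Y, Y).

    % ?- max(3, 2, M).     % M = 3, as expected.
    % ?- max(3, 2, 2).     % also succeeds! the first head max(X,Y,X) doesn't
    %                      % unify with max(3,2,2), so the cut is never reached
    %                      % and the second clause fires.

Swap the two clauses, or call it with the third argument already bound, and the answers change - exactly the "order and cut affect the results" problem.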
Datalog, on the other hand, is more or less a relational DB with a different syntax.
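To illustrate that framing (hypothetical facts, not from the thread): facts play the role of table rows, a rule is essentially a view/join, and the same text is also a valid Prolog program.

    % "Tables"
    employee(alice, engineering).
    employee(bob, sales).
    located(engineering, berlin).
    located(sales, munich).

    % A rule is a view: works_in/2 is just a join of the two tables above.
    works_in(Person, City) :- employee(Person, Dept), located(Dept, City).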
Datalog is simpler, not Turing-complete, and IIRC uses forward chaining, which has knock-on effects on its performance and memory characteristics. Huge search spaces that are trivial in Prolog are impossible to represent in Datalog because it eats too much memory.
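Roughly what the forward- vs backward-chaining difference looks like on the usual transitive-closure example (hypothetical edge facts; evaluation details vary between systems):

    edge(a, b).
    edge(b, c).

    reach(X, Y) :- edge(X, Y).
    reach(X, Y) :- edge(X, Z), reach(Z, Y).

    % Prolog (backward chaining): ?- reach(a, Q). starts from the query and
    % only follows the edges it actually needs to answer it.
    %
    % Datalog (forward chaining / bottom-up): the engine saturates, i.e. keeps
    % applying the rules until no new reach/2 facts appear, materializing the
    % whole relation before answering - fine for finite databases, hopeless
    % for huge or unbounded search spaces.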
Datalog is a commuter car with a CVT. Prolog is an F1 car. Basically, it's not about improvement. It's about lobotomizing Prolog into something people won't blow their legs off with. Something that's also much easier to implement and embed in another application (though Prologs can be very easy to embed).
If you're used to Prolog, you'll mostly just find Datalog claustrophobic. No call/3? No term/goal expansion? Datalog is basically designed to pull out the lowest-common-denominator feature set of Prolog for use as an interactive database search.
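For anyone who hasn't met call/3: it's the higher-order "apply a goal to extra arguments" primitive, and a quick sketch shows why nothing like it fits in Datalog (no compound terms, no arithmetic, no meta-call):

    double(X, Y) :- Y is X * 2.

    % call/3 appends two arguments to a partially applied goal.
    apply_twice(G, X, Z) :- call(G, X, Y), call(G, Y, Z).

    % ?- apply_twice(double, 3, Z).
    % Z = 12.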
It's easier to write fast Datalog code, but the ceiling is also way lower. Prolog can be written in a way that allows for concurrency, but that's an intermediate-level task that requires understanding your implementation. Guarded Horn Clauses and their derived languages[2] were developed to formalize some of that, but the Japanese advancements over Prolog are extremely esoteric. Prolog performance really depends on the programmer, the implementation being used, and where it's being used. Prolog, like a Lisp, can be used to generate native machine code from a DSL at compile-time.
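The usual hook for that kind of compile-time generation is term_expansion/2 (the term expansion mentioned above); here's a minimal SWI-Prolog-flavoured sketch - whether the generated clauses end up as native code is then up to the implementation:

    :- multifile user:term_expansion/2.

    % Expand a tiny "DSL" declaration like `squares(5).` into plain facts
    % at load time, so the compiler only ever sees square/2 clauses.
    user:term_expansion(squares(N), Clauses) :-
        findall(square(I, S), (between(1, N, I), S is I * I), Clauses).

    squares(5).

    % After loading:
    % ?- square(3, S).
    % S = 9.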
If you understand how the underlying implementation of your Prolog works, and how to write code with the grain of your implementation, it's absolutely "fast enough". Unfortunately, that requires years of writing Prolog code against a single implementation. There's a lot of work on optimizing Prolog compilers out there[3][4], as well as some proprietary examples[5].
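"With the grain" usually starts with first-argument indexing: most WAM-based Prologs pick clauses by the principal functor of the first argument, so the second version below runs deterministically while the first leaves choice points behind (a generic sketch, not tied to any particular system):

    % Poorly indexed: the first argument in every head is a bare variable,
    % so every call tries every clause and leaves choice points behind.
    slow_area(Shape, A) :- Shape = circle(R), A is pi * R * R.
    slow_area(Shape, A) :- Shape = square(S), A is S * S.

    % Indexed: the functor of the first argument selects the clause directly.
    area(circle(R), A) :- A is pi * R * R.
    area(square(S), A) :- A is S * S.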