Why? Many Lisp systems, and Common Lisp in particular, have great hot-reloading capabilities, from redefining functions to UPDATE-INSTANCE-FOR-REDEFINED-CLASS for updating instance state.
It seems that "JIT" is overloaded with at least two meanings.
The canonical definition of JIT is "compilation during execution of a program". Usually, a program is interpreted first, then switches to compiled code in the middle of execution. This is not what this article does.
What this article does is sometimes called on-the-fly AOT, or just on-the-fly compilation. I'd prefer not to overload the term "JIT".
> Usually, a program is being interpreted first, then switches to compiled code in the middle of execution.
I agree what they have isn't JIT compilation, but not for that reason. Tiered execution was never a central part of JIT compilation either; it's a fairly recent invention by comparison.
The reason what they describe isn't JIT compilation is IMO fairly boring: it's not compiling the input program in any meaningful way, but simply writing hard-coded logic into executable memory that it already knows the program intended to perform. Sure there's a small degree of freedom based on the particular arithmetic operations being mentioned, but that's... very little. When your compiler already knows the high-level source code logic before it's even read the source code, it's... not a compiler. It's just a dynamic code emitter.
As to the actual difference between JIT vs. AOT... it may just come down to accounting. That is, on whether you can always exclude the compilation time/cost from the overall program execution time/cost or not. If so, you're compiling ahead of (execution) time. If not, you're compiling during execution time.
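The "dynamic code emitter" described above can be sketched in a few lines. This is a hypothetical minimal example (Linux on x86-64 assumed; the byte sequence and the `add` wrapper are made up for illustration): write hand-rolled machine code into executable memory, then call it through a function pointer.

```python
import ctypes, mmap

# x86-64 machine code for: mov eax, edi ; add eax, esi ; ret
# i.e. a function that adds its two int arguments (System V ABI).
code = bytes([0x89, 0xF8, 0x01, 0xF0, 0xC3])

# Anonymous mapping with read/write/execute permission.
buf = mmap.mmap(-1, len(code),
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Treat the buffer's address as a C function pointer: int f(int, int).
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
add = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)(addr)

print(add(2, 3))  # 5
```

Real emitters usually map the page writable first and flip it to read+execute afterwards (W^X); this sketch skips that for brevity.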
> As to the actual difference between JIT vs. AOT... it may just come down to accounting. That is, on whether you can always exclude the compilation time/cost from the overall program execution time/cost or not. If so, you're compiling ahead of (execution) time. If not, you're compiling during execution time.
Well, this includes what I refer to as "on-the-fly" AOT, like SBCL, CCL, Chez Scheme... Even ECL can be configured to work this way. As I mentioned in another comment, people in those circles do not refer to these as "JIT" at all, instead saying "I wish my implementation was JIT instead of on-the-fly AOT"!
> it's not compiling the input program in any meaningful way, but simply writing hard-coded logic into executable memory that it already knows the program intended to perform.
The program reads the logic from stdin and translates it into machine instructions. I can agree that there is not a lot of freedom in what can be done, but I think that just means the source language is not Turing-complete. I don't believe a compiler needs to deal with a Turing-complete language to claim the title "JIT compiler".
> The program reads the logic from stdin and translates it into machine instructions. I can agree that there is not a lot of a freedom in what can be done, but I think it just means that the source language is not Turing complete. I don't believe that compiler needs to deal with a Turing complete language to claim the title "JIT compiler".
"Not Turing-complete" is quite the understatement.
A "compiler" is software that translates computer code from one programming language into another language. Not just any software that reads input and produces output.
The input language here is... not even a programming language to begin with. Literally all it can express is linear functions. My fixed-function calculator is more powerful than that! If this is a programming language then I guess everyone who ever typed on a calculator is a programmer too.
> A "compiler" is software that translates computer code from one programming language into another language.
Yes, of course.
> Literally all it can express is linear functions.
And why can't a language for expressing linear functions be a programming language?
> My fixed-function calculator is more powerful than that!
But it isn't compiling, it's interpreting, isn't it? So your fixed-function calculator is not a compiler, it is an interpreter. It is irrelevant how powerful it is. There are far more powerful interpreters and far less powerful compilers.
The example we are discussing takes computer code in one language and translates it into machine instructions. Speaking of 'understatement', you are adding extra constraints to this definition, and they are very fuzzy constraints on what counts as a programming language.
A compiler takes some language and translates it into something close(r) to the hardware. And that's what the OP does. And since it compiles in-process and executes the result too, it's JIT, as opposed to AOT.
These terms are not related to the complexity of the problem. The first compilers could only translate formulas, hence FORTRAN.
I always took the distinction to be the executable-memory bit, as opposed to writing an executable and then launching it in the usual way. Of course, high-performance runtimes that contain JITs do typically interpret first in order to get type information and reduce startup latency, but that's more a function of the language not being efficiently AOT-compilable than fundamental to the concept of a JIT.
Can you explain what you mean by a "language not being efficiently AOT-compilable"? I am guessing you are referring to languages that emphasize REPL-based development or languages that are 'image'-based (I think Smalltalk is the most common example).
Like the guesses above, I can understand difficulty with AOT compilation in conjunction with certain use cases; however, I cannot think of a language that, based on its definition, would be less amenable to AOT compilation.
The more the context is narrowed down, the more optimizations can be applied during compilation.
AOT situations where a lot of context is missing:
• Loosely typed languages. Code can be very general, much more general than how it is actually used in any given situation, but without knowing the full situation, all that generality must be compiled.
• Incremental AOT compilation. If modules have been compiled separately, useful context wasn't available during optimization.
• Code whose structure is very sensitive to data statistics or other conditional runtime information. This is the prime advantage of JIT over AOT, unless the AOT compiler works in conjunction with representative data and a profiler.
Those are all cases where JIT has advantages.
A language where JIT is optimal, is by definition, less amenable to AOT compilation.
I should probably have said "not compilable to efficient code" rather than "not efficiently compilable". Basically, I was referring to dynamic typing. Typically such languages are interpreted, although there is probably a way to AOT-compile the same sequence of steps the interpreter would take. When types are not known until runtime, the compiler can't emit efficient code; instead it has to emit code that checks the type on each function invocation, which is basically exactly what the interpreter was already doing. This is where JITs really shine, since they can produce one or more compilations of the same function, specialized to types that have been observed to be used frequently. The interpreter can then call those specialized versions of the functions directly when appropriate.
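The specialization idea described above can be sketched in Python. This is a toy illustration, not a real JIT (the "compiled" fast paths here are just pre-selected functions), but the observe-then-cache shape is the same:

```python
# The generic path checks types on every call, like an interpreter.
def generic_add(a, b):
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    if isinstance(a, str) and isinstance(b, str):
        return a + b
    raise TypeError(f"unsupported: {type(a)}, {type(b)}")

# Stand-ins for "compiled" specializations: direct calls with no checks.
FAST_PATHS = {(int, int): int.__add__, (str, str): str.__add__}
cache = {}

def add(a, b):
    key = (type(a), type(b))
    fast = cache.get(key)
    if fast is None:
        # First time this type pair is observed: "compile" (pick) a
        # specialized version and cache it for subsequent calls.
        fast = FAST_PATHS.get(key, generic_add)
        cache[key] = fast
    return fast(a, b)

print(add(2, 3))      # 5
print(add("a", "b"))  # ab
```

After the first call for a given type pair, later calls go straight to the specialized function and skip the per-call type checks.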
What I meant was dynamic typing, primarily. Think JavaScript or Lua. These are both languages where the choice is between an interpreter and a JIT, rather than between a JIT and AOT. VM-based languages like Java also fall into that category, not because of dynamic typing so much as because of shipping bytecode to the client for portability reasons.
In that sense almost every compiled Lisp/Scheme implementation, GHC, etc., or any other interactive programming system, counts as JIT. But virtually nobody in those circles refers to such implementations as JIT. Instead, people say "I wish our implementation was JIT, to benefit from all those optimizations it enables"!
> 1. The function and variable namespaces have been collapsed into a single namespace.
Lisp-N, the package system, and homoiconic macros are a local optimum (IMO practically much better than Scheme, but I digress) for the variable-capture issue in metaprogramming. Now it's saying let's bring back the footguns, and also you have to write lst instead of list. Please, no.
> 2. ...adds a layer on top of CLOS
How about a library? Why a new standard?
> 3. Common Lisp 3 supports case-sensitive symbols.
This I can relate to.
> 4. Common Lisp 3 supports native threads.
> 5. Common Lisp 3 supports tail recursion elimination.
Practically not a problem for today's CL. There's nothing to fix.
> Meanwhile proper typing should be introduced out of the box, like in Coalton[3], for example.
Are you saying Coalton as an embedded language should be introduced out of the box? I'm afraid it may quickly earn a similar reputation to LOOP and FORMAT. Or are you saying the whole language should adopt Coalton-like typed semantics? Then I don't think it's even possible for a large part of the language, especially when you take interactivity into account. What happens when a function gets redefined with a different type? Worse, what about CHANGE-CLASS and UPDATE-INSTANCE-FOR-REDEFINED-CLASS?
> Also, pattern matching should be part of the language, not some external library [4].
Why not? Common Lisp as a living and extensible language now evolves by adopting de-facto standards (trivia for pattern matching, bt for native threads, usocket for networking, ASDF for the build system, etc.). Why do we need a committee or some other form of authority to prescribe what everyone gets to use, when we have a maximally democratic process?
> Are you saying Coalton as an embedded language should be introduced out of the box?
Not the whole language as-is, but proper algebraic types at least, just like most modern languages have.
> Why not? Common Lisp as a living and extensible language now evolves by adopting de-facto standard (trivia for pattern matching, bt for native threads, usocket for network, ASDF for build system, etc). Why need a committee or other form of authority to prescribe what everyone gets to use when we have a maximally democratic process?
Totally a valid point, but then something like the Compact Lisp proposal, stripping the language down to the bare minimum and extracting everything else into libraries, would make way more sense than the huge and only half-used CL standard we have now.
If you want Scheme, go use Scheme because these are not arguments for Common Lisp. There is tons of value in the CL specification being this big and I'm happy I can still run code I wrote more than 25 years ago (or third party code written more than 50 years ago) without any issues.
Generally, contemporary folks that propose improvements to the CL spec tend to be misinformed / misguided and/or lacking experience to realize why their proposed improvements are bad ideas.
How would algebraic types work with SLIME? If I remove a constructor from my algebraic type, what happens to values of that type, built with that constructor, that are stored in globals?
In the same way that non-hygienic macros in a Lisp-2 with a CL-style package system are a local optimum, many non-obvious design choices in the Common Lisp type system and CLOS make SLIME "just work" in almost every case.
I guess this case is workable, similar to struct redefinition. There can be a condition and a CONTINUE restart which makes instances of the removed constructor obsolete.
In short: I've replaced the Common Lisp loop (which works for Hunchentoot since it opens threads, but doesn't for Woo since it blocks) with a deeper integration into the event loop:
> And that was the main change: looking at the innards of it, there are some features available, like woo.ev:evloop. This was not enough, and access to the libev timer was also needed. After some work with lev and CFFI, the SDK now implements a Node.js-style approach using libev timers via woo.ev:evloop and the lev CFFI bindings (check woo-async.lisp).
This is likely (almost surely) not perfect or even ideal, but it does seem to work, and I've been testing the demo app with 1 worker and multiple clients.
I haven't tried Wookie, since adding Clack+Woo was already a substantial change. Reading https://fukamachi.hashnode.dev/woo-a-high-performance-common... , where it compares with Wookie, I'm not sure it would make a difference: I might be wrong, but it says:
> Of course, this architecture also has its drawbacks as it works in a single thread, which means only one process can be executed at a time. When a response is being sent to one client, it is not possible to read another client's request.
... which for SSE seems to be similar to the issue with Woo. I wrote a bit more about it in https://github.com/fsmunoz/datastar-cl/blob/main/SSE-WOO-LIM... , and it can be more of a "me" problem than anything else, but keeping an SSE stream open doesn't play well with async models. That's why I added a with-sse-response macro that, unlike with-sse-connection, sends events without keeping the connection open.
Wookie is built on cl-async, so my hope is that it's more tractable to write a proper async SSE handler there. But I haven't looked at whether it's possible to keep a connection open asynchronously.
> C++ has method override but it's not the same: you cannot change the behavior of how addition works on two 64-bit integers (such as treating them both as fixed-point numbers).
Wouldn't you just create a 1-field struct/class and override all the arithmetic operators? Or, if you're less fixated on using the same operator (like me, as a Lisper), invent a method called ADD and use that.
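The one-field-wrapper suggestion can be sketched as follows (shown in Python for brevity; a C++ version would be a struct with overloaded operators; the Fixed class and the scale factor of 100 are made up for illustration): a fixed-point number stored as a scaled integer, with addition redefined on the wrapper type.

```python
class Fixed:
    SCALE = 100  # two decimal places

    def __init__(self, raw: int):
        self.raw = raw  # underlying integer; value = raw / SCALE

    def __add__(self, other: "Fixed") -> "Fixed":
        # Fixed-point addition is just integer addition on the raw values.
        return Fixed(self.raw + other.raw)

    def __mul__(self, other: "Fixed") -> "Fixed":
        # Multiplication must rescale, since both operands carry SCALE.
        return Fixed(self.raw * other.raw // Fixed.SCALE)

    def __repr__(self) -> str:
        return f"{self.raw / Fixed.SCALE:.2f}"

a, b = Fixed(150), Fixed(250)  # 1.50 and 2.50
print(a + b)  # 4.00
print(a * b)  # 3.75
```

This is exactly the "treat two integers as fixed-point numbers" case from the quote: the integers are untouched, and the new behavior lives on the wrapper type.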
> Changing addition to work on "bignum"s (numbers that have arbitrarily large precision) is a good usecase of overriding the addition operation.
I don't see this as something unique to Forth compared to other languages, even C++.
World class? Then what am I? I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea. I am impressed by its ability to generate and analyse code, but its code almost never works the first time, unless it's trivial boilerplate stuff, and its analysis is wrong half the time.
It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill to work with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.
It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.
> I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea
This sentence and the rest of the post read like horoscope advice: "It can be good if you use it well, it may be bad if you don't". It's pretty much the same as saying a coin may land on heads or on tails.
They don’t. I’ve gone from rickety and slow excel sheets and maybe some python functions to automate small things that I can figure out to building out entire data pipelines. It’s incredible how much more efficient we’ve gotten.
> Including how it looks at the surrounding code and patterns.
Citation needed. Even with specific examples ("follow the patterns from the existing tests", etc.), Copilot (GPT-5) still insists on generating tests using the wrong methods ("describe" and "it" in a codebase that uses "suite" and "test").
An intern, even an intern with a severe cognitive disability, would not be so bad at pattern following.
Every time a new model or tool comes out, the AI boosters love to say that n-1 was garbage and finally AI vibecoding is the real deal and it will make you 10x more productive.
Except six months ago n-1 was n and the boosters were busy ruining their credibility saying that their garbage tier AI was world class and making them 10x more productive.
Today’s leading world-class agentic model is tomorrow’s horrible garbage tier slop generator that was patently never good enough to be taken seriously.
This has been going on for years, the pattern is obvious and undeniable.
I can obviously only speak for myself, but I've tried AI coding tools from time to time and with Opus 4.5 I have for the first time the impression that it is genuinely helpful for a variety of tasks. I've never previously claimed that I find them useful. And 10x more productive? Certainly not, even if it would improve development speed 10000x I wouldn't be 10x more productive overall since not even half of my time is directed towards development efforts.
Finance people are funny. They are so wrong when you hear their logic and references, but I also realized it doesn't matter. It's trends they try to predict, fuzzy directional signals, not facts of the moment.
Of course not, why would they? They understand making money, and what makes money right now? What would be antithetical to making money? Why might we be doing one thing and not another? The lines are bright and red and flashing.
This seems to be the article's author's own language Bauble[1], "a toy for composing signed distance functions in a high-level language (Janet), compiling them to GLSL, and rendering them via WebGL"[2].
The linked study is utterly unconvincing... textual arguments (am I reading philosophy?) with formulas jumping up out of nowhere, figures showing not measured data but made-up simple linear/inverse-proportional curves... Was this paper written by an LLM?