Hacker News | kscarlet's comments

Why? Many Lisp systems, and Common Lisp in particular, have great hot reloading capabilities, from redefining functions to UPDATE-INSTANCE-FOR-REDEFINED-CLASS for updating state.


Very nice.

p.s. (I hate to point this out), there seems to be a name collision with https://lisperator.net/slip/


It seems that JIT is overloaded with at least 2 meanings.

The canonical definition of JIT is "compilation during execution of a program". Usually a program is interpreted first, then switches to compiled code in the middle of execution. This is not what this article does.

What this article does is sometimes called on-the-fly AOT, or just on-the-fly compilation. I'd prefer not overloading the term "JIT".


> Usually, a program is being interpreted first, then switches to compiled code in the middle of execution.

I agree that what they have isn't JIT compilation, but not for that reason. Tiered execution was never a central part of JIT compilation either; it is a fairly recent invention in comparison.

The reason what they describe isn't JIT compilation is, IMO, fairly boring: it's not compiling the input program in any meaningful way, but simply writing hard-coded logic into executable memory that it already knows the program intended to perform. Sure, there's a small degree of freedom based on the particular arithmetic operations mentioned, but that's very little. When your compiler already knows the high-level source logic before it's even read the source code, it's not a compiler. It's just a dynamic code emitter.

As to the actual difference between JIT vs. AOT... it may just come down to accounting. That is, on whether you can always exclude the compilation time/cost from the overall program execution time/cost or not. If so, you're compiling ahead of (execution) time. If not, you're compiling during execution time.


> As to the actual difference between JIT vs. AOT... it may just come down to accounting. That is, on whether you can always exclude the compilation time/cost from the overall program execution time/cost or not. If so, you're compiling ahead of (execution) time. If not, you're compiling during execution time.

Well, this includes what I refer to as "on-the-fly" AOT, like SBCL, CCL, Chez Scheme... Even ECL can be configured to work this way. As I mentioned in another comment, people in those circles do not refer to these as "JIT" at all, instead saying "I wish my implementation was JIT instead of on-the-fly AOT"!


> it's not compiling the input program in any meaningful way, but simply writing hard-coded logic into executable memory that it already knows the program intended to perform.

The program reads the logic from stdin and translates it into machine instructions. I can agree that there is not a lot of freedom in what can be done, but I think that just means the source language is not Turing complete. I don't believe a compiler needs to deal with a Turing-complete language to claim the title "JIT compiler".


> The program reads the logic from stdin and translates it into machine instructions. I can agree that there is not a lot of freedom in what can be done, but I think that just means the source language is not Turing complete. I don't believe a compiler needs to deal with a Turing-complete language to claim the title "JIT compiler".

"Not Turing-complete" is quite the understatement.

A "compiler" is software that translates computer code from one programming language into another language. Not just any software that reads input and produces output.

The input language here is... not even a programming language to begin with. Literally all it can express is linear functions. My fixed-function calculator is more powerful than that! If this is a programming language then I guess everyone who ever typed on a calculator is a programmer too.


> A "compiler" is software that translates computer code from one programming language into another language.

Yes, of course.

> Literally all it can express is linear functions.

And why can't a language for expressing linear functions be a programming language?

> My fixed-function calculator is more powerful than that!

But it isn't compiling, it is interpreting, isn't it? So your fixed-function calculator is not a compiler, it is an interpreter. It is irrelevant how powerful it is. There are far more powerful interpreters and far less powerful compilers.

The example we see takes computer code in one language and translates it into machine instructions. Speaking of 'understatement': you are the one adding more constraints to this definition, and they are very fuzzy constraints on what counts as a programming language.


A compiler takes some language and translates it into something close(r) to the hardware. And that's what the OP does. And since it compiles in-process and executes the result too, it's JIT, as opposed to AOT.

These terms are not related to the complexity of the problem. The first compilers could only translate formulas, hence FORTRAN, the FORmula TRANslator.


I'm not sure where you got your definition, but I basically copied Wikipedia's.


That's not compilation, merely substitution.


I think if your program starts to execute machine code that wasn't present in the binary, then it counts as having a JIT compiler.

Of course, there are edge cases like embedding libtcc, but I think it's a reasonable definition.


> This is not what this article does.

Discounting books, many other well-written articles on JIT have been shared on HN over the years [0][1][2]; the one I particularly liked, as it introduces the trinity concisely: Compiler, Interpreter, JIT https://nickdesaulniers.github.io/blog/2015/05/25/interprete... / https://archive.vn/HaFlQ (2015).

[0] How to JIT - an introduction, https://eli.thegreenplace.net/2013/11/05/how-to-jit-an-intro... (2013).

[1] Bytecode compilers and interpreters, https://bernsteinbear.com/blog/bytecode-interpreters/ (2019).

[2] Let's build a Simple Interp, https://ruslanspivak.com/lsbasi-part1/ (2015).


I always took the distinction to be the executable-memory bit, as opposed to writing an executable and then launching it in the usual way. Of course, high-performance runtimes that contain JITs do typically interpret first in order to get type information and reduce startup latency, but that's more a function of the language not being efficiently AOT-compilable than fundamental to the concept of a JIT.


Can you explain what you mean by a “language not being efficiently AOT-compilable”? I am guessing you are referring to languages that emphasize REPL-based development or languages that are ‘image’ based (I think Smalltalk is the most common example).

As in the guesses above, I can understand difficulty with AOT compilation in conjunction with certain use cases; however, I cannot think of a language that, based on its definition, would be less amenable to AOT compilation.


The more the context is narrowed down, the more optimizations can be applied during compilation.

AOT situations where a lot of context is missing:

• Loosely typed languages. Code can be very general, much more general than how it is actually used in any given situation; without knowing the full situation, all that generality must be compiled.

• Incremental AOT compilation. If modules have been compiled separately, useful context wasn't available during optimization.

• Code whose structure is very sensitive to data statistics or other conditional runtime information. This is the prime advantage of JIT over AOT, unless the AOT compiler works in conjunction with representative data and a profiler.

Those are all cases where JIT has advantages.

A language where JIT is optimal is, by definition, less amenable to AOT compilation.


I should probably have said "not compilable to efficient code" rather than "not efficiently compilable". Basically I was referring to dynamic typing. Typically such languages are interpreted, although there is probably a way to compile, ahead of time, the same sequence the interpreter would take. When types are not known until runtime, the compiler can't emit efficient code; instead it has to emit code that checks the types on each function invocation, which is basically exactly what the interpreter was already doing. This is where JITs really shine, since they can produce one or more compilations of the same function, specialized to types that have been observed to be used frequently. The interpreter can then call those specialized versions of the function directly when appropriate.


What I meant was primarily dynamic typing. Think JavaScript or Lua. These are both languages where the choice is between interpreter and JIT rather than JIT or AOT. VM-based languages like Java also fall into that category, not so much because of dynamic typing as because of shipping bytecode to the client for portability reasons.


In that sense almost every compiled Lisp/Scheme implementation, GHC, etc., or any other interactive programming system, counts as JIT. But virtually nobody in those circles refers to such implementations as JIT. Instead, people say "I wish our implementation were JIT, to benefit from all those optimizations it enables"!


Do they generate machine code in ram and jump to it? Or do they interpret byte code?

EDIT: at least GHC seems to be a traditional AOT compiler.


They generate native machine code.


I don't find any good ideas in [1].

> 1. The function and variable namespaces have been collapsed into a single namespace.

Lisp-N, the package system, and homoiconic macros are a local optimum (IMO practically much better than Scheme, but I digress) for the variable-capture issue in metaprogramming. Now it's saying let's bring back the footguns, and also you have to write lst instead of list. Please, no.

> 2. ...adds a layer on top of CLOS

How about a library? Why a new standard?

> 3. Common Lisp 3 supports case-sensitive symbols.

This I can relate to.

> 4. Common Lisp 3 supports native threads.

> 5. Common Lisp 3 supports tail recursion elimination.

Practically not a problem for today's CL. There's nothing to fix.

> Meanwhile proper typing should be introduced out of the box, like in Coalton[3], for example.

Are you saying Coalton as an embedded language should be introduced out of the box? I'm afraid it may quickly earn a reputation similar to LOOP's and FORMAT's. Or are you saying the whole language should adopt Coalton-like typed semantics? Then I don't think it's even possible for a large part of the language, especially when you take interactivity into account. What happens when a function gets redefined with a different type? Worse, what about CHANGE-CLASS and UPDATE-INSTANCE-FOR-REDEFINED-CLASS?

> Also, pattern matching should be the part of the language, not some external library [4].

Why not? Common Lisp, as a living and extensible language, now evolves by adopting de-facto standards (trivia for pattern matching, bordeaux-threads for native threads, usocket for networking, ASDF for the build system, etc.). Why do we need a committee or some other form of authority to prescribe what everyone gets to use, when we have a maximally democratic process?


> Are you saying Coalton as an embedded language should be introduced out of the box?

Not the whole language as-is, but proper algebraic types at least. Just like most modern languages have.

> Why not? Common Lisp, as a living and extensible language, now evolves by adopting de-facto standards (trivia for pattern matching, bordeaux-threads for native threads, usocket for networking, ASDF for the build system, etc.). Why do we need a committee or some other form of authority to prescribe what everyone gets to use, when we have a maximally democratic process?

Totally a valid point, but then something like the Compact Lisp proposal, stripping the language to the bare minimum and extracting everything out into libraries, would make way more sense than the huge and only half-used CL standard we have now.


If you want Scheme, go use Scheme, because these are not arguments for Common Lisp. There is a ton of value in the CL specification being this big, and I'm happy I can still run code I wrote more than 25 years ago (or third-party code written more than 50 years ago) without any issues.

Generally, contemporary folks who propose improvements to the CL spec tend to be misinformed/misguided, and/or lack the experience to realize why their proposed improvements are bad ideas.


How would algebraic types work with SLIME? If I remove a constructor from my algebraic type, what happens to values of that type, built with that constructor, that are stored in globals?

In the same way that non-hygienic macros in a Lisp-2 with a CL-style package system are a local optimum, many non-obvious design choices in the Common Lisp type system and CLOS make SLIME "just work" in almost every case.


I guess this case is workable, similarly to struct redefinition: there can be a condition with a CONTINUE restart that makes instances of the removed constructor obsolete.


> Each SSE connection blocks one worker for its entire duration.

Have you tried Wookie? Such an extreme case of blocking the event loop negates any benefit of async processing.


An update: I've spent some time taking a much deeper look, and while I can't guarantee it's perfect, I added a different approach for Clack+Woo, documented here: https://github.com/fsmunoz/datastar-cl/blob/main/SSE-AND-WOO...

In short: I've replaced the Common Lisp loop (which works for Hunchentoot since it opens threads, but doesn't for Woo since it blocks) with a deeper integration into the event loop:

> And that was the main change: looking at the innards of it, there are some features available, like woo.ev:evloop. This was not enough, and access to the libev timer was also needed. After some work with lev and CFFI, the SDK now implements a Node.js-style approach using libev timers via woo.ev:evloop and the lev CFFI bindings (check woo-async.lisp).

This is likely (almost surely) not perfect or even ideal, but it does seem to work, and I've been testing the demo app with 1 worker and multiple clients.


I haven't tried Wookie, since adding Clack+Woo was already a substantial change. Reading https://fukamachi.hashnode.dev/woo-a-high-performance-common... , where it compares with Wookie, I'm not sure it would make a difference. I might be wrong, but it says:

> Of course, this architecture also has its drawbacks as it works in a single thread, which means only one process can be executed at a time. When a response is being sent to one client, it is not possible to read another client's request.

... which for SSE seems similar to the issue with Woo. I wrote a bit more on it in https://github.com/fsmunoz/datastar-cl/blob/main/SSE-WOO-LIM... , and it may be more of a "me" problem than anything else, but keeping an SSE stream open doesn't play well with async models. That's why I added a with-sse-response macro that, unlike with-sse-connection, sends events without keeping the connection open.


Wookie is built on cl-async, so my hope is that it's more tractable to write a proper async SSE handler. But I haven't looked at whether it's possible to keep a connection open asynchronously.


Well written in general. However:

> C++ has method override but it's not the same: you cannot change the behavior of how addition works on two 64-bit integers (such as treating them both as fixed-point numbers).

Wouldn't you just create a one-field struct/class and overload all the arithmetic operators? Or, if you're less fixated on using the same operator (like me, as a Lisper), invent a method called ADD and use that.

> Changing addition to work on "bignum"s (numbers that have arbitrarily large precision) is a good usecase of overriding the addition operation.

I don't see this as something unique to Forth compared to other languages, even C++.


I find it funny that what you said summarizes the publishing process in academia very well. Except that it's much, much worse.


The line right after this is much worse:

> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.

Wow, finance people certainly don't understand programming.


World class? Then what am I? I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea. I am impressed by its ability to generate and analyse code, but its code almost never works the first time, unless it's trivial boilerplate stuff, and its analysis is wrong half the time.

It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill to work with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.

It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.


Is this why AI is telling us our every idea is brilliant and great? Because their code doesn't stand up to what we can do?


Whichever PM sold glazing as a core feature should be ejected into space.


Because people who couldn't code but now can have zero understanding of the ‘path to production-quality code’.

Of course it is mind-blowing for them.


Copilot is easily the worst (and probably slowest) coding agent. SOTA and Copilot don't even inhabit similar planes of existence.


I've found Opus 4.5 in copilot to be very impressive. Better than codex CLI in my experience. I agree Copilot definitely used to be absolutely awful.


Cursor is better than both; I wish this weren't the case, tbph.


> I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea

This sentence and the rest of the post read like horoscope advice. Like "It can be good if you use it well; it may be bad if you don't". It's pretty much the same as saying a coin may land on heads or tails.


saying "a coin may land on head or on tail" is useful when other people are saying "we will soon have coins that always land on heads"


this is doable, you just have to rig the coin


They don’t. I’ve gone from rickety and slow Excel sheets, plus maybe some Python functions to automate small things I could figure out, to building out entire data pipelines. It’s incredible how much more efficient we’ve gotten.


[deleted]


> Including how it looks at the surrounding code and patterns.

Citation needed. Even with specific instructions (“follow the patterns from the existing tests”, etc.), Copilot (GPT-5) still insists on generating tests using the wrong methods (“describe” and “it” in a codebase that uses “suite” and “test”).

An intern, even an intern with a severe cognitive disability, would not be so bad at pattern following.


Do you think smart companies seeking to leverage AI effectively in their engineering orgs are using the $20 slopify subscription from Microsoft?

You get what you pay for.


Every time a new model or tool comes out, the AI boosters love to say that n-1 was garbage and finally AI vibecoding is the real deal and it will make you 10x more productive.

Except six months ago n-1 was n and the boosters were busy ruining their credibility saying that their garbage tier AI was world class and making them 10x more productive.

Today’s leading world-class agentic model is tomorrow’s horrible garbage tier slop generator that was patently never good enough to be taken seriously.

This has been going on for years, the pattern is obvious and undeniable.


I can obviously only speak for myself, but I've tried AI coding tools from time to time and with Opus 4.5 I have for the first time the impression that it is genuinely helpful for a variety of tasks. I've never previously claimed that I find them useful. And 10x more productive? Certainly not, even if it would improve development speed 10000x I wouldn't be 10x more productive overall since not even half of my time is directed towards development efforts.


Ask ChatGPT “is AI programming world class?”


Finance people are funny. They are so wrong when you hear their logic and references, but I also realized it doesn't matter. It is trends they try to predict: fuzzy directional signals, not facts of the moment.


Of course not, why would they? They understand making money, and what makes money right now? What would be antithetical to making money? Why might we be doing one thing and not another? The lines are bright and red and flashing.


Cool language! What language and library is this?


This seems to be the article's author's own language Bauble[1], "a toy for composing signed distance functions in a high-level language (Janet), compiling them to GLSL, and rendering them via WebGL"[2].

[1]: https://ianthehenry.com/posts/bauble/building-bauble/
[2]: https://github.com/ianthehenry/bauble


Looks like a lisp? Here's the library I think they're using (and wrote): https://github.com/ianthehenry/bauble


The linked study is utterly unconvincing: textual arguments (am I reading philosophy?) with formulas jumping out of nowhere, figures showing not measured data but made-up simple linear/inverse-proportional curves... Was this paper written by an LLM?

