Hot take: the whole LLM craze is fed by a delusion. LLMs are good at mimicking human language, capturing some semantics along the way. With a large enough training set, the semantics captured cover a large fraction of what the average human knows. This gives the illusion of intelligence, and humans extrapolate to LLM capabilities like actual coding. Because large amounts of code from textbooks and whatnot are in the training set, the illusion is convincing to people with shallow coding abilities.
And then, while the tech is not mature, running on delusion and sunk costs, it's actually used for production stuff. Butlerian Jihad when?
My sophisticated sentiment analysis (talking to co-workers and other professional programmers and IT workers, reading HN and Reddit comments) seems to indicate a shift--there's a lot less storybook "Ay Eye is gonna take over the world" talk and a lot more distrust, and even disdain, than you'd see even 6 months ago.
Self-plug here, but very related => Robustness and the Halting Problem for Multicellular Artificial Ontogeny (2011)
Cellular automata where the update rule is a perceptron coupled with an isotropic diffusion. The weights of the neural network are optimized so that the cellular automaton can draw a picture, with self-healing (i.e. it rebuilds the picture when perturbed).
Back then, auto-differentiation was not as accessible as it is now, so the weights were optimized with an Evolution Strategy. Of course, using gradient descent is likely to work way better.
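If it helps to picture the setup, here is a minimal NumPy sketch of one update step, under my own assumptions: a tanh perceptron applied per cell, a 4-neighborhood diffusion, and random (not evolved) weights. It is not the paper's actual code.

```python
import numpy as np

def diffuse(state, rate=0.1):
    # Isotropic diffusion: blend each cell with its 4-neighborhood average.
    neighbors = (np.roll(state, 1, axis=0) + np.roll(state, -1, axis=0) +
                 np.roll(state, 1, axis=1) + np.roll(state, -1, axis=1)) / 4.0
    return (1 - rate) * state + rate * neighbors

def step(state, weights, bias):
    # Perceptron update rule applied per cell, then diffusion couples cells.
    # state has shape (H, W, C): per-cell channel values.
    return diffuse(np.tanh(state @ weights + bias))

# Toy usage with random weights; in the paper the weights were optimized
# with an Evolution Strategy so that the pattern converges to (and
# self-heals into) a target picture.
rng = np.random.default_rng(0)
H, W, C = 32, 32, 4
state = rng.normal(size=(H, W, C))
weights = rng.normal(size=(C, C)) * 0.1
bias = np.zeros(C)
for _ in range(100):
    state = step(state, weights, bias)
```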
I fixed my recurrent back pain with a 6 min daily morning routine: plank, side plank, reverse plank, 1 min 30 s each.
Posture muscles are not very well known to the general public. Loss of strength due to aging and a sedentary lifestyle makes standing, sitting, etc. uncomfortable.
Significantly faster compilation means less friction to iterate on ideas and try things, which in the end leads to more polished results.
A nice interface is agreeable, but maybe there are diminishing returns when you pay for it with long compile times. I remember pondering that when working with the Eigen math library, which is very nice but such a resource hog when you compile a project using it.
A nodal real-time video processing tool: put together pre-made "processing boxes" to generate interactive video. It runs on pretty much anything and uses a plugin architecture.
Say, plug in a camera, and it will blend two video streams using a silhouette detected on the camera feed, with various effects. It's very, very early, pre-alpha stuff, but it has already been used for a demo by a customer.
GitHub: pestacle. Be warned, it's undocumented and at a larval stage.
Will it be something like TouchDesigner[1]? I never used TD myself, but I follow a lot of creative types who do, for making music visualizations, art installations, etc.
I can't find it on GitHub though, maybe the repo is private?
Be warned, there is zero documentation, because things are at a larval stage and change often. I will include a couple of demos this week.
In spirit, yes, but targeting different hardware, audiences, and environments.
* It runs on Linux, Mac, and Windows. Bare metal on the RP2040 and RP2350 is planned.
* It is written in C and built with Make.
* It is meant to run on something like a Raspberry Pi, a Latte Panda, etc.
* A setup is a text file, no fancy UI.
* The plan for live parameter fiddling is a web server. The web UI will be tailored to each setup, no one-size-fits-all UI. Typically I pay someone to do the UI.
* For now, it's video only, no sound output.
It will be used for several large interactive LED displays and object tracking systems. It's a way for me to factor out all those projects I was contracted for.
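For anyone unfamiliar with the nodal approach, here is a tiny conceptual sketch in Python of how such a graph can be evaluated once per frame. It is not pestacle's actual code (which is C), and all the node names below are made up.

```python
class Node:
    def __init__(self, fn, inputs=()):
        self.fn = fn            # per-frame processing function
        self.inputs = inputs    # upstream nodes

    def evaluate(self, cache):
        # Evaluate each upstream node once per frame (memoized via cache).
        if self not in cache:
            args = [n.evaluate(cache) for n in self.inputs]
            cache[self] = self.fn(*args)
        return cache[self]

# Hypothetical boxes: a camera source, a second video source, a silhouette
# detector, and a blend node combining all three.
camera = Node(lambda: "camera frame")
video  = Node(lambda: "video frame")
mask   = Node(lambda frame: f"silhouette({frame})", inputs=(camera,))
blend  = Node(lambda a, b, m: f"blend({a}, {b}, mask={m})",
              inputs=(video, camera, mask))

# Main loop: one evaluation pass per frame, with a fresh cache each time.
for _ in range(3):
    print(blend.evaluate({}))
```

The per-frame cache is what lets one node (here the camera) feed several downstream boxes without being evaluated twice.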
It's using the Sun as a (gravitational) lens, with probes at the focal point to gather the image. Because it's a very large lens, that allows a massive zoom on whatever object we are interested in.
But you should be able to use it as a "parabolic" mirror, to send a very directed beam toward the planet. (Assuming diffraction is not a problem.) (Assuming the time delay is handled, because to see the planet you have to look at where it was many years ago, but to send a message you have to aim at where it will be many years in the future.) (Assuming I'm not missing a few more technical problems that are not impossible to solve, but extremely difficult.)
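For scale, a quick back-of-the-envelope check of where the focal region starts, using the standard weak-field light deflection formula (constants rounded):

```python
# Light grazing the Sun at radius r is deflected by theta = 4GM / (c^2 r),
# so parallel rays converge at roughly f = r / theta = r^2 c^2 / (4 G M).
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M  = 1.989e30    # solar mass, kg
c  = 2.998e8     # speed of light, m/s
R  = 6.957e8     # solar radius, m
AU = 1.496e11    # astronomical unit, m

f = R**2 * c**2 / (4 * G * M)
print(f / AU)    # ~550 AU: the minimum distance for the probes
```

That minimum distance, around 550 AU, is several times farther than any probe has traveled so far, which is a big part of why this is still a concept.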
I am curious what the results would be for something like generating a lexer + parser + abstract machine code generator for a made-up language.
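For a sense of the scale of the task, here is a hand-written toy version of that whole pipeline: lexer, recursive descent parser, and code generator targeting a small stack machine. The language, grammar, and instruction set are all made up for illustration.

```python
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def lex(src):
    # Yield ('num', value) or ('op', char) tokens.
    for num, op in TOKEN.findall(src):
        yield ('num', int(num)) if num else ('op', op)

def parse(tokens):
    # Grammar: expr := term (('+'|'-') term)* ; term := number | '(' expr ')'
    tokens = list(tokens)
    pos = 0
    def term():
        nonlocal pos
        kind, val = tokens[pos]; pos += 1
        if kind == 'num':
            return ('num', val)
        assert val == '(', f"unexpected {val!r}"
        node = expr()
        pos += 1  # skip ')'
        return node
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos][0] == 'op' and tokens[pos][1] in '+-':
            op = tokens[pos][1]; pos += 1
            node = (op, node, term())
        return node
    return expr()

def codegen(node, out):
    # Emit stack machine instructions: PUSH n, ADD, SUB.
    if node[0] == 'num':
        out.append(('PUSH', node[1]))
    else:
        op, lhs, rhs = node
        codegen(lhs, out)
        codegen(rhs, out)
        out.append(('ADD',) if op == '+' else ('SUB',))
    return out

def run(program):
    # The abstract machine: execute the instruction list on a stack.
    stack = []
    for instr in program:
        if instr[0] == 'PUSH':
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == 'ADD' else a - b)
    return stack[0]

program = codegen(parse(lex("1 + (2 - 3) + 10")), [])
print(run(program))  # 10
```

Even at toy scale it exercises all three stages, so it could make a decent benchmark prompt.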