Hacker News | marmakoide's comments

There are lots of C compilers (LCC, TCC, SDCC, and an army of hobby-project C compilers) available as open source.

I am curious about what the results would be for something like a lexer + parser + abstract-machine code generator for a made-up language.


It's stochastic monkeys, but enhanced with a really good bias towards coherent prose, built upon a gigantic corpus.


So, it's monkeys specifically good at typing Shakespeare.


Hot take: the whole LLM craze is fed by a delusion. LLMs are good at mimicking human language, capturing some semantics along the way. With a large enough training set, the amount of semantics captured covers a large fraction of what the average human knows. This gives the illusion of intelligence, and humans extrapolate on LLM capabilities, like actual coding. Because large amounts of code from textbooks and whatnot are in the training set, the illusion is convincing for people with shallow coding abilities.

And then, while the tech is not mature, running on delusion and sunk costs, it's actually used for production work. Butlerian Jihad when?


I think the bubble is already a bit past peak.

My sophisticated sentiment analysis (talking to co-workers, other professional programmers and IT workers, HN and Reddit comments) seems to indicate a shift: there's a lot less storybook "Ay Eye is gonna take over the world" talk and a lot more distrust and even disdain than you'd see even 6 months ago.

Moves like this will not go over well.


AI proponents would say you are witnessing the third stage of "First they ignore you, then they laugh at you, then they fight you, then you win".


> Butlerian Jihad when

I estimate two more years for the bubble to pop.


The plan went from AI being a force multiplier to AI being a resource-hungry beast that has to be fed in the hope it's good enough to justify its hunger.


Self-plug here, but very related => Robustness and the Halting Problem for Multicellular Artificial Ontogeny (2011)

Cellular automata where the update rule is a perceptron coupled with an isotropic diffusion. The weights of the neural network are optimized so that the cellular automaton can draw a picture, with self-healing (i.e. rebuilding the picture when perturbed).

Back then, auto-differentiation was not as accessible as it is now, so the weights were optimized with an Evolution Strategy. Of course, using gradient descent is likely to be way better.
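Not the paper's actual model, just a minimal sketch of the idea with made-up shapes and constants: each cell carries a small state vector, an isotropic diffusion step mixes neighboring cells, and a single shared perceptron updates every cell in place. In the paper the weights were optimized with an Evolution Strategy (e.g. CMA-ES) rather than drawn at random as here:

```python
import numpy as np

rng = np.random.default_rng(0)

N, C = 32, 4                      # grid size, channels per cell (arbitrary)
state = rng.random((N, N, C))     # random initial cell states

# Shared perceptron weights; in practice these would be optimized
# by an Evolution Strategy so the grid converges to a target image.
W = rng.normal(0.0, 0.1, (C, C))
b = np.zeros(C)

def diffuse(s, rate=0.1):
    # Isotropic diffusion via a discrete Laplacian (wrap-around borders)
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
         + np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4.0 * s)
    return s + rate * lap

def step(s):
    # Diffusion followed by the same perceptron applied at every cell
    return np.tanh(diffuse(s) @ W + b)

for _ in range(10):
    state = step(state)
```

Self-healing comes from running the same local rule everywhere: perturb a patch of cells and the dynamics pull the grid back toward the attractor the weights encode.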


Regular Sunday 10 miles here, then I had the pleasure of experiencing plantar fasciitis. I love running, but the injuries can be really annoying.


I fixed my recurrent back pain with a 6 min daily morning routine: plank, side plank, reverse plank, 1 min 30 sec each.

Posture muscles are not well known to the general public. Loss of strength due to aging and a sedentary lifestyle makes standing, sitting, etc. uncomfortable.


The best posture is your next posture.

I'd say as long as you use all your muscles meaningfully every day, and don't spend hours in a single position, you're good.


Significantly faster compilation means less friction to iterate on ideas and try things, which in the end leads to more polished results.

A nice interface is agreeable, but maybe there are diminishing returns when you pay for it with long compile times. I remember pondering that when working with the Eigen math library, which is very nice but such a resource hog when compiling a project that uses it.


A nodal real-time video processing tool: put together pre-made "processing boxes" to generate interactive video. It runs on pretty much anything and uses a plugin architecture.

Say, plug in a camera, and it will blend two video streams using a silhouette detected from the camera, with various effects. It's very, very early, pre-alpha stuff, but it has already been used for a demo by a customer.

It's on GitHub as pestacle; be warned, it's undocumented and at a larval stage.


Will it be something like TouchDesigner[1]? I never used TD myself, but I follow a lot of creative types who do, for making music visualizations, art installations, etc.

I can't find it on GitHub though, maybe the repo is private?

[1] https://derivative.ca/


https://github.com/marmakoide/pestacle

Be warned, zero documentation, because things are at a larval stage and change often. I will include a couple of demos this week.

In the spirit, yes, but targeting different hardware, public, and environments.

* It runs on Linux, Mac, Windows. Bare metal on rp2040 and rp2350 is planned.

* It is written in C and built with Make.

* It is meant to run on something like a Raspberry Pi, LattePanda, etc.

* A setup is a text file, no fancy UI.

* The plan for live parameter fiddling is a web server. The web UI will be tailored to each setup, no one-size-fits-all UI. Typically I pay someone to do the UI.

* For now, it's video only, no sound output.

It will be used for several large interactive LED displays and object tracking systems. It's a way for me to factor out all those projects I was contracted for.


It's not a way to boost a signal.

It's using the Sun as a gravitational lens, with probes at the focal point to gather the image. Because it's a very large lens, that allows a massive zoom on whatever object we are interested in.
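For scale, a back-of-the-envelope sketch (standard general-relativity deflection formula, not from the article): light grazing the Sun at impact parameter b is bent by θ ≈ 4GM/(c²b), so grazing rays converge at a focal distance F ≈ b²c²/(4GM). Taking b as the solar radius:

```python
# Rough focal distance of the Sun's gravitational lens.
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M  = 1.989e30    # solar mass, kg
c  = 2.998e8     # speed of light, m/s
R  = 6.957e8     # solar radius, m (smallest usable impact parameter)
AU = 1.496e11    # astronomical unit, m

F = R**2 * c**2 / (4 * G * M)
print(F / AU)    # about 550 AU
```

That is why the probes have to fly far beyond Pluto (about 40 AU out) before the lens becomes usable, and rays with a larger impact parameter focus even further away.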


But you should be able to use it as a "parabolic" mirror, to send a very directed ray to the planet. (Assuming diffraction is not a problem.) (Assuming no time-delay issues: to see the planet you look at where it was many years ago, but to send a message you must aim at where it will be many years in the future.) (Assuming I'm not missing a few more technical problems that are not impossible to solve, but extremely difficult.)


Wouldn't you have to have very accurate information about where the planet is going to be when the light arrives?

