Can't speak for kefir; slimcc has been `make unittest`ing Neovim v0.10.4 with no source modification in a Debian VM (so it's pretty portable, thanks!)
On tcc, the most common dealbreaker is the lack of thread-local support. Having an independent assembler/linker is a wonderful feat, but not having feature parity with binutils can lead to build failures, and contributors are generally less willing to support the correct behavior for the ABI, C standards, and GCC extensions.
I hope these don't sound like a diss; if I had been better at reading its coding style, I would definitely try to contribute to the mob branch.
This reminds me of a great adventure. A long time ago I was travelling through Brazil, in the Amazonas state. I was in Porto Velho and needed to get to Manaus quickly to catch a flight. The boat that would take us there (as I had found in the Lonely Planet) was present on the quay. But the captain didn't feel like going. If I remember correctly, he was waiting for more people. We needed an alternative route quickly.
The captain told us that if we took the bus to Humaitá (a smaller provincial city), we could take a smaller boat that would take us to Manaus. But he warned us that the boat only goes once every three days, and that it would leave soon. The last bus to Humaitá would also leave from Porto Velho promptly.
Despite this flimsy instruction, we didn't see any alternative. So we went. With great luck, we caught the bus and made it to Humaitá (I still have a picture of the boat river transfer the bus took: https://bashify.io/i/CdNcLf).
Our time in Humaitá was surreal. When we asked about a boat to Manaus, everyone told us a different story. There was no harbour (hidroviária) personnel. One person told us the boat to Manaus was named "Caçote". Another person said the boat was named something else. Then someone said it had stopped ferrying years ago. No, we heard from someone else, it would come in 5 hours! Yet another one said it was tomorrow. Someone else felt sorry for us because it had just left. I felt like I was in a (difficult) point and click adventure. There weren't a lot of people in town close to the river, so we ran into the same people from time to time. They would often give answers different from the previous time.
No one was willing to tell us they didn't know. Not a single person out of the 20+ we must have talked to.
In the end, a boat arrived. It went in the direction of Manaus. The captain said that he would only go up to Auxiliadora, and that was still a long way from Manaus. Once again without alternatives (going back to Porto Velho would surely mean we'd miss the flight), we chanced it. In the hopes that getting closer was worth it.
When we arrived at Auxiliadora, it was the smallest inhabited place I had ever been to. Perhaps about 13 houses. Some fishermen. No passenger boat would come for days, they told us. Not to take us to Manaus, nor to take us back. The fishermen had boats, and I tried to offer them money so they would take us further. But their day was at an end and they wanted to relax, regardless of what I offered them (we were on a tight budget, but I was desperate enough to offer a significant chunk of a monthly wage; no dice).
Then we found out that a woman had come on the boat with us who was in a similar predicament to ours. She was Brazilian, living in MT and wanting to visit family in Manicoré (which was bigger, and closer to Manaus). Exasperated, she ended up convincing her family to come and pick her up with a speedboat. We hitched a ride. We were very thankful.
When we arrived in Manicoré, I felt like exploring the place. It looked so different from anywhere else I had been, like something out of a movie. But I couldn't. The docks were little more than a collection of wooden jetties (trapiche) that ran everywhere in criss-cross fashion. In order to even get to the quayside we would have to pass through many other boats. In the first one we went through, the captain walked past and I asked him whether he knew of a boat going to Manaus. He signalled where to put my bags. We were leaving.
We reached Manaus in the nick of time.
I love this story, and that time. These anecdotes definitely triggered my memory.
I tried btrfs on three different occasions. Three times it managed to corrupt itself. I'll admit I was too enthusiastic the first time, trying it less than a year after it appeared in major distros. But the latter two are unforgivable (I had to reinstall my mom's laptop).
I've been using ZFS for my NAS-like thing since then. It's been rock solid (*).

(*): I know about the block cloning bug, and the encryption bug. Luckily I avoided those (I don't tend to enable new features like block cloning, and I didn't have an encrypted dataset at the time). Still, all in all it's been really good in comparison to btrfs.
I've been using btrfs as the primary FS for my laptop for nearly twenty years, and for my desktop and multipurpose box for as long as they've existed (~eight and ~three years, respectively). I haven't had troubles with the laptop FS in like fifteen years, and have never had troubles with the desktop or multipurpose box.
I also used btrfs as the production FS for the volume management in our CI at $DAYJOB, as it was way faster than overlayfs. No problems there, either.
What were your (main) problems with Kodi? AFAIK it is written in C++ with Python plugins. Electron would be (on the face of it) a downgrade, yes. But how is a Lua app much smoother?
(My personal pet peeve is that Kodi still doesn't know how to minimize CPU consumption when one is doing nothing on the UI. It should just stop rendering. This means I have to turn Kodi off on my HTPC+server setup to stop it from pushing my CPU in a higher power consumption mode.)
Kodi is super complex. The last straw was me wanting to launch Dolphin games from the UI and not being able to figure it out.
My custom media center is basically just a glorified 10ft-UI file browser. It opens media files in mpv (with some extra GUI to download subtitles and select audio tracks), opens Wii games in Dolphin, and runs shell scripts (I have ones that launch Steam Link, etc.).
I realize that this might be a case of "simplify by limiting use cases" but I made it for me so it's fine.
You also have the problem that if both the ultimate answer to life, the universe and everything, and the ultimate question to life, the universe and everything, are known at the same time in the same universe, the universe is spontaneously replaced with a slightly more absurd universe to ensure that both the question and answer become meaningless.
To quote the message from the universe's creators to its creation: "We apologise for the inconvenience". It does seem to sum up Douglas Adams's views on the absurdity of life.
I've heard this before. Is this just to add another hop in the chain, to make it harder for someone to track the user down? Apart from someone needing to order Amazon to pony up the details ("Which credit card was this Amazon item bought with?").
The gift card is a scratch off and has a number that is used to fund your Mullvad balance. So Amazon doesn't know which instance of the gift card you ordered, meaning there's no link to your specific Mullvad account payment.
The authorities might know you ordered a gift card, but not which Mullvad account you funded it with.
Any LLM-based code review tooling I've tried has been lackluster (most comments not too helpful). Prose review is usually better.
> So we run dozens of parallel CLI agents that can review the code in excruciating detail. This has completely replaced human code review for anything that isn't functional correctness but is near the same order of magnitude of price. Much better than humans and beats every commercial tool.
Sure, you could make multiple LLM invocations (different temperature, different prompts, ...). But how does one separate the good comments from the bad comments? Another meta-LLM? [1] Do you know of anyone who summarizes the approach?
[1]: I suppose you could shard that out for as much compute you want to spend, with one LLM invocation judging/collating the results of (say) 10 child reviewers.
I have attempted to replicate the "workflow" LLM process, where several LLMs come up with different variations of a way to solve a problem and a "judge" LLM reviews them, then they go through different verification processes, to see if this workflow increased the accuracy of the LLM's ability to solve the problem. For me, in my experiments, it didn't really make much difference, but at the time I was using LLMs significantly dumber than current frontier models. HOWEVER... when I enable "Thinking Mode" on frontier LLMs like ChatGPT, it DOES tend to solve problems that the non-thinking mode isn't able to solve, so perhaps it's just a matter of throwing enough iterations at it for the LLM to be able to solve a particular complex problem.
You need human alignment on what constitutes a "good" comment. That means consistent rules.
Otherwise, some people feel review is too harsh, other people feel it is not harsh enough. AI does not fix inconsistent expectations.
> But how does one separate the good comments from the bad comments?
If the AI took a valid interpretation of the coding guidelines, it is a legitimate comment. If the AI is being overly pedantic, it is a documentation bug and we change the rules.
I once tried learning how to RE with radare2 but got very frustrated by frequent project file corruption (meaning radare2 could no longer open it). The way these project files work(ed?) in radare2 at the time was that it just saved all the commands you executed, instead of the state. This was brittle, in my experience.
I don't have a lot of free time, so I have to leave projects for long periods of time; not being able to restart from a previous checkpoint meant I never actually got further.
IIUC, one of the first things Rizin did was focus on saving the actual state, and backwards/forwards-compatibility. This fact alone made me switch to Rizin. To its credit, my 3-year old project file still works!
Now for the downside: there is apparently a gap in Windows (32-bit) PE support, causing stack variables to be poorly discovered: https://github.com/rizinorg/rizin/issues/4608. I tested this on radare2, which does not have this bug. I'm hoping this gets fixed in Rizin at some point, at which point I'll continue my RE adventure. Or maybe I should give an AI reverse engineer a try... (https://news.ycombinator.com/item?id=46846101).
I tried radare2 with the official GUI Iaito. Iaito saves the project in a git repo, so whenever I got corruption (and I got it a lot, like every 4-5 saves) I was just a `git reset --hard` away from restoring a good state. Not the most efficient way of operating, but for me it was better than tolerating Ghidra's tiny Courier New font.
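A sketch of that git-backed save/restore loop (the file name and commit messages are made up; each Iaito save is effectively a commit, and a corrupted save gets rolled back with `git reset --hard`):

```shell
# Set up a throwaway project repo, as Iaito would.
mkdir -p project && cd project
git init -q .
git config user.email you@example.com
git config user.name you

# A good save becomes a commit.
echo "good state 1" > analysis.db
git add analysis.db && git commit -qm "save 1"

# A corrupted save lands on disk and gets committed too.
echo "corrupted state" > analysis.db
git add analysis.db && git commit -qm "save 2 (corrupt)"

# Roll back to the last good save.
git reset --hard -q HEAD~1
cat analysis.db
```

Crude, but it turns "the project file is corrupt" from a total loss into a one-command recovery.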
Your corruption frequency anecdote matches mine. I don't have the mental wherewithal to deal with that. I won't go back to radare2 until they change their project file stability somehow.
How are slimcc/kefir different/easier to drop in?