
Here is my experience.

Claude generated a rather good template for what I needed. It did not compile at first, but I copy-pasted the errors and it fixed them.

Not all was good, though. It used literal bullet characters instead of the `-` required for lists, but on the whole the experience was positive.
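
For reference, a Typst list wants `-` markers at the start of the line (a literal `•` is just rendered as text); a minimal example:

  My list:
  - first item
  - second item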

It took me less time to fix the template than it would have taken to write it from scratch.

Something else Claude was good at: I throw him a crude ASCII "art" representation of what I want and get the right Typst code back.


Sorry for harping on it, but I think this clearly reflects the difference between two approaches to storing knowledge: lossy but humongous, and lossless but limited.

LLMs - lossy, highly compressed knowledge which, when prompted, "hallucinates" facts. LLM hallucinations are simply how the stored information is retrieved.

Memory (human in this case) - Extremely limited, but almost always correct.

Just an observation. No morals.


Honestly, humans are nowhere near as lossless as you think. Look up any study on eyewitness accounts of crimes and you will see how prone to hallucination the human mind is as well... at least when it comes to one-shot learning.

I feel, from my own experience teaching, that it's repetition and pruning of information that really makes human memory and learning so much more effective, and not the act of storing the information the first time.


There is podman-compose, which works almost as a drop-in replacement.

Almost, because the most common commands work, but I have not checked all of them.

And almost, because for some docker-compose.yaml files which you downloaded or an LLM generated, you may need to prepend `docker.io/` to the image name.
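
A minimal sketch of the kind of change (the image name here is just an example); depending on how Podman's registries are configured, short image names may not resolve:

  services:
    db:
      # image: postgres:16                    # short name may not resolve under podman-compose
      image: docker.io/library/postgres:16    # fully qualified reference works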


To some degree, *all* LLM answers are made-up facts. For stuff that is abundantly present in the training data, those are almost always correct. For topics which are not common knowledge (allowing for great variability) you should always check.

I've started to think of LLMs as a form of lossy compression of available knowledge which, when prompted, produces "facts".


> I've started to think of LLMs as a form of lossy compression of available knowledge which, when prompted, produces "facts".

That is almost exactly what they are and what you should treat them as.

A lossy compressed corpus of publicly available information with a dose of randomness. The most fervent skeptics like to call LLMs "autocorrect on steroids", and they are not really wrong.


An LLM is autocorrect inasmuch as humans are replicators. Something gets seriously lost in this "explanation".


Humans do much more than replicate; that is just one function of many.

What does an LLM do, other than output a weighted prediction of tokens based on its training database? Everything you can use an LLM for is a manipulation of that functionality.


> An LLM is an autocorrect in as much as humans are replicators.

an autocorrect... on steroids.


What are humans, fundamentally, then?


That is a good question, and I guess we have made some progress since Plato, whose definition was: a man is a featherless biped.

But I think we still do not know.


In old sci-fi, AI used to be an entity which had a database of hard facts and was able to search it instantly.

I think that's the right direction for modern AI to move in. ChatGPT often falls back to Google searches. So replace Google with a curated knowledge database, train the LLM to consult this database for every fact, and hallucinations will be gone.


You should always check. I've seen LLMs be wrong (and obstinate) on topics which are one step removed from common knowledge.

I had to post the source code to win the dispute, so to speak.


Now think of all the times you didn't already know enough to go and find the real answer.

Ever read mainstream news reporting on something you actually know about? Notice how it's always wrong? I'm sure there's a name for this phenomenon. It sounds like exactly the same thing.


Why would you try to convince an LLM of anything?


Often you want to proceed further based on a common understanding, so it’s an attempt to establish that common understanding.


Well, not exactly convince. I was curious what would happen.

If you are curious, it was a question about the behavior of Kafka producer interceptors when an exception is thrown.
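
For context, a minimal sketch of such an interceptor (class and message details are made up); as far as I can tell from the Apache Kafka client source, an exception thrown from `onSend` is caught and logged by the producer rather than propagated to the caller, which is exactly the kind of detail that is easy to get wrong from memory:

  import java.util.Map;

  import org.apache.kafka.clients.producer.ProducerInterceptor;
  import org.apache.kafka.clients.producer.ProducerRecord;
  import org.apache.kafka.clients.producer.RecordMetadata;

  // Illustrative interceptor; the point of interest is what the producer
  // does when onSend throws.
  public class AuditInterceptor implements ProducerInterceptor<String, String> {
      @Override
      public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
          if (record.value() == null) {
              // The producer catches this, logs a warning, and carries on sending.
              throw new IllegalStateException("value must not be null");
          }
          return record;
      }

      @Override
      public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
          // no-op
      }

      @Override
      public void close() {
      }

      @Override
      public void configure(Map<String, ?> configs) {
      }
  }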

But I agree that it is hard to resist the temptation to treat LLMs as a peer.


So what is your pick?

* AI is the next electric screwdriver
* AI is THE steam engine.

My pick is that AI is not THE steam engine.


> Oberon also doesn't seem to be actively developed anymore

That's pretty much it, and has been for maybe 10+ years now. There was a successor project, BlueBottle, which showed some promise, but it did not deliver. Later it was renamed to A2. Surprisingly, that did not help.

https://en.wikipedia.org/wiki/A2_(operating_system)

IMO the authors of BB/A2 bet heavily on XML/Java hype, and were trying to make Oberon more like Java. The result was something without much internal consistency and not very usable.

Not being able to use a major browser and not having the resources to write one from scratch did not help either.

Then some of the major figures of this project left. And that was it.

There are some hobbyists and some small businesses which use it for niche projects, and that is all.


It happens. It shows you the right suggestion, and if you keep typing it assumes that you meant something else and displays another suggestion.

Good for slow typists, not so much for quick ones.


Maybe web.skype.com will be better? Just a guess.

Anyway, that's how I use Skype when I still have to use it. Which is about once a month.


I didn’t know that existed. I’ll have to check it out. Thanks for the tip!


There are LLVM Kaleidoscope (toy compiler) implementations in both Haskell and OCaml:

https://github.com/sdiehl/kaleidoscope

https://github.com/arbipher/llvm-ocaml-tutorial

The Haskell one is nice. I can say nothing about the OCaml one, since I found it using a Google search.

I had a try at implementing a Kaleidoscope compiler in OCaml but did not finish it. It was fun to write, though.

