thisismyswamp's comments | Hacker News

To be fair, so did a lobotomy. I believe close attention should be paid to any unintended outcomes of a therapy that the patient themselves would no longer be able to identify due to the nature of the treatment itself.


Psilocybin is about the 180-degree opposite of a lobotomy, just from a purely mechanical perspective. And it certainly feels that way qualitatively as well.


organic systems seek points of equilibrium, with veering too much off in any axis being detrimental


Yes, although—within a specific range—mild "hormetic" stress or departure from baseline can lead to adaptive and beneficial effects in organic systems.

Hormesis is characterized by a biphasic dose-response: low-level exposures to stressors (toxins, temperature extremes, exercise, dietary restriction, etc.) stimulate adaptive, beneficial responses, e.g. exercise, ischemic preconditioning (short bouts of reduced blood flow improving tissue resilience), and dietary energy restriction.

Rather than negating homeostasis, we can say that hormesis "refines" it: mild, intermittent stress can make us resilient to larger future perturbations.
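
As a toy illustration only (an inverted-U curve with made-up coefficients, not a physiological model), the biphasic shape looks like:

    // Toy biphasic dose-response: the linear benefit term dominates at
    // low doses, the quadratic harm term dominates at high doses.
    func hormeticResponse(dose float64) float64 {
        const benefit, harm = 2.0, 0.5 // arbitrary made-up coefficients
        return benefit*dose - harm*dose*dose
    }

Here hormeticResponse(1) is positive (net adaptive benefit), the response peaks at dose 2, and it crosses into net harm beyond dose 4.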


A patient doesn’t metabolize a lobotomy.


They don't have to, as there's no ingestion of the therapeutic agent.


this is another interesting comment on the issue: https://news.ycombinator.com/item?id=41977430

but crucially, it assumes that variations in submissions and voting activity are strongly correlated, which I imagine is not necessarily the case


fluid pressure pushes particles outwards


AI safety people worrying that basketball players don't have a perfectly balanced ethnic representation while mega corporations are trying to establish a monopoly on intelligence


Please do not mistake "AI ethics" for "AI safety", where the latter is about "Studying and reducing the existential risks posed by advanced artificial intelligence" [1, 2].

[1]: https://forum.effectivealtruism.org/topics/ai-safety

[2]: https://www.safe.ai/work/statement-on-ai-risk


>AI safety people worrying [...]

"AI safety people" are hardly a monolith. This is probably the first time I've seen "racism" being cited as a "AI safety" issue in months.



Error handling in Go makes it unusable IMO.


> Learn golang

Having to write "if err != nil" after every single function call is a big put-off - imagine having to "try catch" everything in a language like C#!
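
For reference, the pattern being complained about looks something like this (loadConfig and Config are made up for illustration; os.ReadFile and json.Unmarshal come from "os" and "encoding/json"):

    // Config is a made-up stand-in type.
    type Config struct {
        Addr string `json:"addr"`
    }

    // loadConfig: every fallible call is followed by the same
    // three-line "if err != nil" check.
    func loadConfig(path string) (*Config, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        var cfg Config
        if err := json.Unmarshal(data, &cfg); err != nil {
            return nil, err
        }
        return &cfg, nil
    }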


Two functions solve this (in Go):

    // Must panics if err is non-nil; for calls that return only an error.
    func Must(err error) {
        if err != nil {
            panic(err)
        }
    }

    // Panic unwraps a (value, error) pair, returning the value and
    // panicking if the error is non-nil. (Requires Go 1.18+ generics.)
    func Panic[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }
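
A minimal usage sketch, assuming both helpers are defined in the same package (strconv.Atoi and os.Chdir are just stand-ins for any fallible calls):

    package main

    import (
        "fmt"
        "os"
        "strconv"
    )

    func main() {
        // Panic unwraps a (value, error) pair or panics.
        n := Panic(strconv.Atoi("42"))
        fmt.Println(n) // 42

        // Must asserts that an error-only call succeeds.
        Must(os.Chdir("."))
    }

Of course, panicking trades Go's explicit error propagation for exception-like control flow, so this fits scripts and program start-up better than library code.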


I love it.

It forces me to think about how my programs can fail and what I should do when they fail.


Being forced to think about every possible failure state is a sure way to spend a lot of time on not creating value for the end user


Not to mention the odds of dreaming up an entire existence out of the blue, with no base reality for training data


I wonder how that compares to the odds of all the things that had to happen in order for you and I to exist and be here, today, thinking about this?


A Boltzmann brain requires a similar underlying reality, except with no evolutionary history tracing the brain's origin.

Not requiring a world around the brain makes it less likely, not more: the probability of brains given that worlds exist, multiplied by the probability of worlds (i.e. P(brain | world) × P(world)), is still much higher than the probability of brains arising regardless of the existence of worlds.


You can’t even be certain that you aren’t a Boltzmann brain. If you could prove it (either way), it would be a publishable paper and could probably be the basis of a PhD for yourself.


A Boltzmann brain is just one of the theoretically infinite combinations of matter that can be randomly conjured out of quantum chaos (with a very low probability). Our entire visible universe could be a "Boltzmann universe", and we'd have no way of knowing it.


If the Wikipedia page on Boltzmann brain is accurate, then the probability is surprisingly (but relatively) high.

> The Boltzmann brain gained new relevance around 2002, when some cosmologists started to become concerned that, in many theories about the universe, human brains are vastly more likely to arise from random fluctuations; this leads to the conclusion that, statistically, humans are likely to be wrong about their memories of the past and in fact be Boltzmann brains. When applied to more recent theories about the multiverse, Boltzmann brain arguments are part of the unsolved measure problem of cosmology.


Playing chess & Go is also search in a large tree of moves leading to particular game states.
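
A minimal sketch of that kind of tree search (negamax over a hypothetical abstract Game interface; real engines add pruning, move ordering, and learned evaluation on top):

    import "math"

    // An abstract game state: Moves returns the successor states for the
    // side to move, Eval scores the state from that side's perspective.
    type Game interface {
        Moves() []Game
        Eval() float64
    }

    // negamax searches the move tree to a fixed depth; each side picks
    // the move that minimizes the opponent's best achievable score.
    func negamax(g Game, depth int) float64 {
        moves := g.Moves()
        if depth == 0 || len(moves) == 0 {
            return g.Eval()
        }
        best := math.Inf(-1)
        for _, next := range moves {
            if score := -negamax(next, depth-1); score > best {
                best = score
            }
        }
        return best
    }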


But AlphaGo etc. don't use any kind of language-based AI, so LLMs (which this thread was about) are no good for this.


The next step seems to be applying past advances in reinforcement learning to modern transformer-based models.


Which multiple teams are working on: OpenAI (Q*), and Meta just released a reinforcement learning framework.


Could you point me towards Meta's reinforcement learning framework? I'd like to see how it stacks up against OpenAI Gym.



Thank you!


The final state in chess is a single* state, which, yes, then branches out to N checkmate configurations, then N*M one-move-from-checkmate positions, and so on. (*Technically it's won/lost/drawn.)

The equivalent final state in theorem proving is unique to each theorem, so such a system would need to handle an additional layer of generalization.


Is this how some of the more advanced chess engines work (or even the not-so-advanced ones)? That is, is there a point at which the engine stops searching the forward move tree at full depth and instead searches backwards from a handful of plausible checkmate states (bounded by some gross move limit), looking for an intersection with a shallow forward search?


Kind of, but it's calculated offline and then just accessed during the game: https://www.chessprogramming.org/Endgame_Tablebases
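
For the curious, a toy sketch of the retrograde analysis such tablebases are built with (Position and predecessors are hypothetical stand-ins; real generators must also handle the min/max alternation between the winning and losing sides, which this plain BFS ignores):

    // Position is a placeholder for a real board encoding.
    type Position string

    // buildTablebase walks the move graph backwards from all checkmate
    // positions, labelling each reached position with its distance-to-mate
    // in plies. Computed once, offline; looked up during play.
    func buildTablebase(mates []Position, predecessors func(Position) []Position) map[Position]int {
        dist := make(map[Position]int)
        for _, p := range mates {
            dist[p] = 0
        }
        frontier := mates
        for d := 1; len(frontier) > 0; d++ {
            var next []Position
            for _, p := range frontier {
                for _, q := range predecessors(p) {
                    if _, seen := dist[q]; !seen {
                        dist[q] = d
                        next = append(next, q)
                    }
                }
            }
            frontier = next
        }
        return dist
    }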


SEEKING WORK | Porto, Portugal | Remote

https://marcospereira.me/resume

[email protected]

Background in web, machine learning, and games. Deep learning research at MLC under Rosanne Liu (Google DeepMind). Taken multiple products from zero to market, working with teams ranging from early-stage startups to established corporations. Working remotely for 10+ years.

