
This could further reinforce the idea that the unobservable universe was always there, and that the big bang is only a pattern of movement in a cluster of galaxies, just the information that has managed to reach us from our position. I'd like to see the big bang theory turn out to be an optical illusion.


I don't know which idea is harder for me to conceptualize.

That something has always existed or that it came to exist out of nowhere-ish.


I suspect this is why simulation theory is popular.


Much like God, that just pushes the question up one level.


While I agree that simulation theory pushes the question up a level, that is not the case for the God of classical theism.


Ok, I'm curious. How does the explanation for everything's existence being "God did it" not simply push the question up a level to wondering how to explain God's existence?


The general idea is that the set of all contingent things can only be explained by something non-contingent (ie necessary).

A necessary thing, by definition, is its own explanation. "I am what I am" etc.

From there, classical theists attempt to connect that necessary thing to what you'd commonly understand as God.


I just noticed that if you replace "God" with "Big Bang", it's pretty much the same question.

The answer to Big Bang is that before it, there was no time itself, so there's no notion of "before" (or if you wish, it's an error in the question itself that assumes there was "before").


Once you factor time out of your pre-universal reality, these are compatible.


Regardless, any message fails independently of its content and forces you to restart the page; at least now those messages can be recovered and the conversation continued.


Sometimes restarting the page isn't enough. I forget exactly what I did, whether I logged out or something else, but I know that for the third conversation where I got the error, simply restarting the page didn't revive it.

I interrogated ChatGPT itself on the topic, and it claimed that such errors are probably a problem with the language model or the training data.

I also asked it how to tell the difference between canned answers generated by a specific filter and answers from the model. It gave me a decent, but generic and non-actionable answer.

[Edit: I realized there is a distinction between an error condition that was achieved through user prompts, and an error condition that can be exited through user prompts - but it appears to allow editing and resubmitting the last prompt, so why an error should be "sticky" isn't altogether clear. If it's in the front end (by some definition), why would it be so rare? Anyway, the dull explanation...is dull. I'd prefer to believe that at least I found a strange corner in its model - could it be that it underflows in some sense?]

