In my experience, curiosity and intelligence are very strongly correlated. There is a real gap between people with the curiosity and ability to explore and learn, and people without. This is often handwaved as "motivation" but it's more than just that.
In fact, the gap is so large that it can be really hard for a person on one side of it to understand how people on the other side think.
I think part of it is that geniuses get (or at least feel) rewarded whenever they try to learn, while other people might not. For the same amount of effort, other people gain less new knowledge than geniuses do. Over time, learning no longer feels worth it. Thus normal people stop feeling curious while geniuses still do.
I thought of it from the other end. Curious folks end up engaging in cycles of formative learning that the less curious do not. The perceived intelligence follows. It's the process that makes the difference.
Why is what? The post is fully self-explanatory. If the OP chooses to use Imgur, then a large chunk of the readers will not know what they are talking about.
As a counterpoint, I found GPT 4.5 by far the most interesting model from OpenAI in terms of depth and breadth of knowledge, and its ability to make connections and inferences and apply them in novel ways.
It didn't bench well against the other benchmaxxed models, and it was too expensive to run, but it was a glimpse of the future where more capable hardware will lead to appreciably smarter models.
I wonder why, too. Now, you can give me a reply for why you think that is, which you could have done without this comment, or you can just keep adding noise to this thread. Up to you.
Cargo only recompiles the crates that you edit; after the first build of the dependencies, it's quick to iterate.
Compilation is not the bottleneck, it's the human (me) in the loop that's doing the thinking and typing.
The productivity boost comes from Rust's strong type system and ownership model (much better than MyPy), which practically ensure that if it compiles, it will work. There's a lot less troubleshooting/debugging of my Rust "scripts" than when I wrote Python.
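To make that concrete, here's a minimal sketch (a toy of my own, not from any real project) of the kind of bug rustc rejects at compile time that MyPy happily lets through to runtime:

    // Toy example: a use-after-move that Rust refuses to compile.
    // The equivalent Python aliasing bug only surfaces at runtime, if ever.
    fn consume(v: Vec<i32>) -> i32 {
        v.into_iter().sum()
    }

    fn main() {
        let data = vec![1, 2, 3];
        let total = consume(data); // ownership of `data` moves here
        println!("total = {}", total);
        // println!("{:?}", data); // error[E0382]: borrow of moved value: `data`
    }

If it compiles, the whole class of "who still owns this?" bugs is simply gone, which is a big part of why the scripts just work.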
> I very much think that AIs with minds are possible
The real question here is how _we_ would be able to recognize that. And would we even have the intellectual honesty to recognize it, when by and large we seem inclined to dismiss everything non-human as self-evidently non-intelligent and incapable of feeling emotion?
Let's take emotions as a thought experiment. We know that plants are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other plants. Can we therefore say that plants feel emotions, just in a way that is unique to them and not necessarily identical to a human embodiment?
The answer to that question depends on one's worldview, rather than any objective definition of the concept of emotion. One could say plants cannot feel emotions because emotions are a human (or at least animal) construct; or one could say that plants can feel emotions, just not exactly identical to human emotions.
Now substitute plants with LLMs and try the thought experiment again.
In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence. Not too long ago, Descartes was arguing that animals do not possess a mind and cannot feel emotions, they are merely mimicry machines.[1] More recently, doctors were saying similar things about babies and adults, leading to horrifying medical malpractice.[2][3]
Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output? And how can we tell what does and does not possess a mind if we are so undeniably bad at recognizing those attributes even within our own species?
No True Scotsman fallacy. Just because that interests you doesn't mean that it's "the real question".
> would we even have the intellectual honesty
Who is "we"? Some would and some wouldn't. And you're saying this in an environment where many people are attributing consciousness to LLMs. Blake Lemoine insisted that LaMDA was sentient and deserved legal protection, from his dialogs with it in which it talked about its friends and family -- neither of which it had. So don't talk to me about intellectual honesty.
> Can we therefore say that plants feel emotions
Only if you redefine emotions so broadly--contrary to normal usage--as to be able to make that claim. In the case of Strong AI there is no need to redefine terms.
> Now substitute plants with LLMs and try the thought experiment again.
Ok:
"We know that [LLMs] are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other [LLMs]."
Nope.
"In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence."
That's clearly your choice. I make a more scientific one.
"Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output?"
It's something much more specific than that, obviously. By that definition, all sorts of things that any rational person would want to distinguish from emotions qualify as emotions.
Bowing out of this discussion on grounds of intellectual honesty.
Or when you enable optimizations and, because accessing an object in an invalid state is UB, the compiler helpfully decides it is now permitted to format your hard drive.
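The parent is presumably talking about C or C++, but a minimal sketch of the same idea fits in unsafe Rust (the language discussed upthread): a `bool` may only ever hold 0 or 1, so conjuring one with the value 3 produces an object in an invalid state, and the optimizer is allowed to assume such a value cannot exist.

    use std::mem;

    fn main() {
        // UB: transmuting 3 into a `bool` creates a value outside the
        // type's valid states. With optimizations on, the compiler may
        // assume this never happens and "optimize" accordingly.
        let b: bool = unsafe { mem::transmute::<u8, bool>(3_u8) };
        if b {
            println!("true branch");
        } else {
            println!("false branch");
        }
    }

In practice you might see either branch, both, or neither; "format your hard drive" is the traditional shorthand for "the standard no longer constrains the program".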
Almost all open-source software comes with a version of the following license terms:
"THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
To use the software you have to accept the license, which means you explicitly confirm that they are not your supplier. Pretty clear cut, no?
Edit: EULA-loving companies don't want to accept the license terms for the _free_ software they themselves are using - the hypocrisy is nothing short of staggering.