Hacker News | xigency's comments

This happened to me. I tried to recover the last licensed version I had used, but I mixed up my shortcuts or something, and after the 100th time I saw the nagware screen I gave up, uninstalled it, and went with something simple and free: Notepad++.


Troll / flamebait


Lol / no


I used to be at Amazon on the B2B retail side, and at the tail end we got automated A.I. spamming our TTs with completely wrong summaries. Some great "progress." Internal search had similar "improvements" that tipped the balance from 'good enough' to 'non-functional.'


That's a nice thought, but I've only had one physical in the last 15 years. I'm sure others are in the same boat.


These are obviously aggregate statistics and individual experiences may vary. It shouldn't be necessary to point that out.


So that's why they dropped "Don't Be Evil."


The problem I see with A.I. research is that it's spearheaded by individuals who think that intelligence is a total order. In all my experience, intelligence and creativity are partial orders at best; there is no uniquely "smartest" person, there are a variety of people who are better at different things in different ways.
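A minimal sketch of the distinction (the names, axes, and scores below are made up for illustration): if ability is multi-dimensional and one agent only "beats" another by dominating it on every axis, the resulting order is partial, and two agents can simply be incomparable, so no unique maximum need exist.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Hypothetical skill profile: one score per ability axis."""
    name: str
    skills: tuple  # e.g. (math, writing, spatial)

def dominates(a: Agent, b: Agent) -> bool:
    """True iff a scores >= b on every axis and strictly higher on at least one.

    This componentwise comparison is a partial order: some pairs
    satisfy neither dominates(a, b) nor dominates(b, a).
    """
    return all(x >= y for x, y in zip(a.skills, b.skills)) and a.skills != b.skills

alice = Agent("alice", (9, 4, 6))
bob = Agent("bob", (5, 8, 7))

# Neither dominates the other: the two are incomparable, so under
# this order there is no single "smartest" agent to point at.
print(dominates(alice, bob), dominates(bob, alice))  # False False
```

Collapsing those axes into one scalar benchmark score forces a total order, but the ranking then depends entirely on how you chose to weight the axes.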


This came up in a discussion between Stephen Wolfram and Eliezer Yudkowsky I saw recently. I generally think Wolfram is a bit of a hack, but one of his first points was that there is no single "smartness" metric by which LLMs are "just getting smarter" all the time. They perform better at some tasks, sure, but we have no definition of abstract "smartness" that would allow for such a ranking.


You're good at some things because there is only one copy of you and limited time and bounded storage.

What could you be intelligent at if you could copy yourself myriad times? What could you be good at if you were a world-spanning set of sensors instead of a single body of them?

Body doesn't need to mean something like a human body nor one that exists in a single place.


Humans all have similar brains. Different hardware and algorithms have way more variance in strengths and weaknesses. At some points you bump up against the theoretical trade-offs of different approaches. It is possible that systems will be better than humans in every way but they will still have different scaling behavior.


Why would we think that intelligence would increase in response to universality, rather than in response to resource constraints?


At a certain point intelligence is a loop that improves itself.

"Hmm, oral traditions are a pain in the ass, let's write stuff down"

"Hmm, if I specialize in doing particular things and not having to worry about hunting my own food I get much better at it"

"Hmm, if I modify my own genes to increase intelligence..."

Also note that intelligence applies resource constraints against itself. Humans are a huge risk to other humans, hence being less intelligent than a smarter human can constrain one's resources.

Lastly, AI is in competition with itself. The best 'most intelligent' AI will get the most resources.


Thanks for the comment, it triggered a few thought experiments for me.

For example, if you focus on oral traditions you experiment and create more poems, songs, etc. If you focus on preserving food you discover jams, dried meat, etc.

Is it useful to focus on everything, to be globally optimal? Is it even possible?

Also, regarding competition and evolution, what stopped humans from evolving more capable brains? Is it just resource constraints, like not having enough calories (not having a mini nuclear reactor with us)? Or are there other, more interesting causes?


I don't agree with your premise at all so I don't think that the rest of it follows from it either. What evidence or reason do you have to bring me to accept that premise?


Huh? Can you cite _one_ major AI researcher who believes intelligence is a total ordering?

They'll definitely be aligned on partial ordering. There's no "smartest" person, but there are a lot of people who are consistently worse at most things. But "smartest" is really not a concept that I see bandied about.


Sure but there’s nothing that says you can’t have all of those in one “body”


Have you tried creating your own programming language? How about solving unsolved frontier problems in mathematics? Ever written a book that won a Pulitzer Prize? How many languages do you know?

As someone who was born ambitious I find this technology tepid at best.


I agree. As someone with libertarian ideals I dislike both of these parties (almost) equally.


As a former libertarian, I have a hard time squaring that philosophically with not saving most of your dislike for the people attacking habeas corpus and the rule of law, ignoring judicial rulings, etc. There is plenty to dislike about the Democrats, but this is a completely new level of assault on core libertarian principles.


Oh for sure, I've gone as far as trying to get people to change their votes to avoid what we're currently putting up with. But you can't argue that each party paves the way for the other's abuses. Consider the average age of congress and the amount of power the executive branch now holds.


We do need more reforms on unchecked power, but in general this boils down to a clear hierarchy of failure where the primary criticism of the Democratic Party is that they didn’t do more to stop the outright crimes committed by the Republican Party. Neither is laudable but they’re not equivalent.


Yeah, that's a very good question. I kind of doubt it since we've moved beyond a world of common sense.

If this ruling holds, the best thing would have been never to have paid the tariffs.


> You're trading correctness for speed.

That's AI in a nutshell.

