Hacker News | smrtinsert's comments

Seems like something that will add to their billable hours

From the third-party perspective, it feels like gambling to me. I can't imagine being OK with knowing every keystroke you make is being tracked to train a model, most likely to replace you. I can't imagine ever wanting to work there; I've thought that for a long time. "It's full of brilliant people". Well, so is the profession, quite honestly; there are plenty of places to work.

That's changing quickly, and Meta pays extremely well.

Not so easy for kids 3-4 years out of school to make $500K-$600K.

The supergenius quanty ones go to Jane Street and the smart product-y ones jump ship to OpenAI or Anthropic (e.g. Boris) but there just aren't 20,000 high paying roles out there.

Anyone saying otherwise is kidding themselves.


It did this during one of the recent outage periods: it was unjarring deps left and right instead of googling for them. What an easy way for me to own the tokenmaxxing leaderboard, I remember thinking.

I'm probably very out of date here, but I thought domains weren't allowed to be purchased programmatically due to misuse, crime, fraud etc. Why is it allowed now just because of agents? This is bonkers to me.

Yeah, absolutely embarrassing take. If I had a nickel for every time someone sent me some AI garbage that was supposedly "thoroughly vetted and cross checked agent output", I'd be at least a thousandaire (gotta keep it real).

There are strengths, but if you think it's writing streams of code that you can just use as-is, I would LOVE to compete against you.


Can I push to production anytime I want? Then I can run 10,000 agents, no problem. I'll just move fast and break things, and I'll get massive cheers because it's AI.

You joke, but in a way this is the natural trajectory technology has been heading. AI has just increased the magnitude of it

Did the author say which model and harness handled the first attempts? Codex worked at the end, OK, but what did he try the rest with?

They specifically refused to do it, which is very meh and in retrospect reads like Codex astroturfing because of that ("all AI bad except for Codex"?).

On what grounds is there a lawsuit? Hasn't scraping been classified as legal?

Calling someone’s apartment an opium den is potentially libel, and if it results in a material financial impact, you’ve got a lawsuit.

Is it someone's apartment or Airbnb's apartment?

classifying people's businesses as an "opium den" using a shitty LLM prompt seems like a pretty good way to piss some people off.

I don't necessarily agree with labeling them drug dens. But certainly the hosts showed zero or negative effort in keeping the room clean and suitable to rent. They do deserve some shaming.

At least Google pretended to not be evil for a few years

> The problem is millions of years of evolutionary wiring makes us see it as alive

Maybe for laymen, but I would think most technologists should understand that we're working with the output of what is effectively a massive spreadsheet which is creating a prediction.
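To make the "massive spreadsheet creating a prediction" intuition concrete, here's a toy sketch (all names and numbers are made up for illustration, not a real model): next-token prediction reduced to table lookups, a matrix multiply, and a softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "mat"]
d = 8                                    # embedding width (arbitrary)
E = rng.normal(size=(len(vocab), d))     # "spreadsheet" of token embeddings
W = rng.normal(size=(d, len(vocab)))     # "spreadsheet" mapping state -> logits

def predict_next(token_id: int) -> np.ndarray:
    """One lookup, one matmul, one softmax: a probability over the vocab."""
    logits = E[token_id] @ W
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = predict_next(vocab.index("cat"))
assert abs(probs.sum() - 1.0) < 1e-9     # a distribution over tokens, not a fact
```

The output is just a probability distribution over the vocabulary; everything "alive"-seeming is layered on top of arithmetic like this at enormous scale.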


The thing with evolutionary wiring is that it doesn't matter if you're a layman or a "technologist". The technologist part is just a small layer on top of very thick caveman/animal instincts and programming.

That's why a technologist can, just as easily as any layman, get addicted to gambling, or do crazy behaviors when attracted by the opposite sex.


>small layer on top of very thick caveman/animal instincts and programming.

Which is also why marketing and advertising works on EVERYONE. When AI puts out the phrase "Prompt engineering", everyone instinctively treats it as something deterministic, despite having some idea of how an LLM works...
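A minimal sketch of why "prompt in, answer out" isn't deterministic in practice (a hypothetical toy, not any real model's API): the model emits a probability distribution, and the visible text is a *sample* from it, re-weighted by temperature.

```python
import random

def sample_token(dist: dict, temperature: float = 1.0) -> str:
    """Draw one token from a probability distribution, re-weighted by temperature."""
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist.keys()), weights=weights, k=1)[0]

# Same "prompt", same distribution, different outputs across calls.
dist = {"yes": 0.6, "maybe": 0.3, "no": 0.1}
samples = {sample_token(dist, temperature=1.0) for _ in range(200)}
```

With enough draws, more than one distinct token almost surely appears, which is exactly why the same prompt can yield different answers run to run.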


The same could be said for your brain.

LLMs are highly intelligent. Comparing them to spreadsheets is reductionist and highly misleading.


>LLMs are highly intelligent

I will tell you why it is not.

Intelligence is understanding low level stuff and using it to reason about and understand high level stuff.

When LLMs demonstrate "highly intelligent" behavior, like solving a complex math problem (high-level stuff), but simultaneously demonstrate that they do not know how to count (low-level stuff that the high-level stuff depends on), it proves that they are not actually "intelligent" and are not "reasoning".


You just invented your own definition of intelligence. I'm pretty sure that strategy could also support the opposite conclusion.


So your problem with the definition is that "I invented it"?

Do you have any rational objection to the definition? If you don't, then I'm afraid you don't have a point.

