ralusek's comments | Hacker News

The only thing I use Grok for is when there's a current event/meme that I keep seeing referenced and don't understand; it's good at pulling from tweets.

Going to war with AI is pointless. It’s not going anywhere.

Acknowledging that the world has already been turned upside down, however, rather than burying our heads in the sand (present company excluded), is necessary.


OK, but what does that mean in terms of concrete action?

I think this tech is going to continue to make the world, on the whole, much worse both globally and personally. I'm not willing to just bend over and take it. So what's the real alternative?


Invest in your local community.

One sliver of optimism I have is that this era of social media rot can finally come to an end if enough people lose trust in online content.

Media has been compromised. Your neighbors are not AI and are probably not part of a billionaire's influence campaign.


That would be the first thing to do: stop buying stuff online. Entirely.

It genuinely makes me see the value in private companies. Public companies must grow. They're accountable to so many different interests. Private companies can be happy sitting at whatever profit level they want. They can take time to tinker on something that they care about. If it doesn't pay off, that's fine.

I think I would say it this way: private companies can be good or bad, but public companies must ultimately become bad.


It’s kind of embarrassing how many people in the comments seem to derive a sense of identity from not using AI. Before LLMs, I didn’t use them to code. Then there were LLMs, and I used them a little to code. Then they got better at code, and now I use them a little more.

Probably 20% of the code I produce is generated by LLMs, but all of the code I produce at this point is sanity checked by them. They’re insanely useful.

Zero of my identity is tied to how much of the code I write involves AI.


The irony is that by asserting how little of your identity is tied to AI, you, in turn, identify yourself in a certain way.

I’m reminded of that South Park episode with the goths. “I’m so much of a non-conformist I’m going to non-conform with the non-conformists.”

In the end it all doesn’t matter.


When not having Claude feels like you left your phone at home, I'd say no, using AI is very much a part of our identities.


The thief who stole the car is always a little bit more chatty about the stolen car.

Who are you trying to convince here?


I think you’ve put your finger on it. This isn’t about AI, it’s about the threat to people’s identity presented by AI. For a while now, “writing code” has been a high-status profession, with a certain amount of impenetrable mystique that “normies” can’t get past. AI has the potential to quite quickly shift “writing code” from a high-status profession that people respect to a commodity that those same normies can access.

For people whose identities and sense of self have been bolstered by being a member of that high-status group, AI is a big threat - not because of the impact on their work, but because of the potential to remove their status, and if their status slips away then they may realise they have nothing much else left.

When people feel threatened by new technology they shout loud and proud about how they don’t use it and everything is just fine. Quite often that becomes a new identity. Let them rail and rage against the storm.

“Blow winds, and crack your cheeks! Rage! Blow!”

The image of Lear “a poor, infatuated, despised old man” seems curiously apt here.


It's a bit odd to say, but another big clue that something is AI-generated is that it simply looks "too good" for what it's being used for. If I see a little infographic demonstrating something relatively mundane, and it has nice 3D-rendered characters or graphical elements, at this point it's basically guaranteed to be AI, because you just sort of intuitively know when something would have justified the human labor necessary to produce it.


Funnily enough, that had crossed my mind with the woodchuck example, because at a glance I couldn't see any weird artifacts, but I felt confident I could tell it was AI-generated immediately if I saw it in the wild, and I couldn't really explain why. My immediate guess was "well, who the hell would actually bother to make something like this?"


It's not odd to say. It was one of the first telling signs to identify AI artists[0] on Twitter: overly detailed backgrounds.

Of course now a lot of them have learned the lesson and it's much harder to tell.

[0]: I know, I know...


It appears to be a law that simply adds restrictions to what the state can do (like the First Amendment, the best sort of law IMO). It's not granting people limited rights. Any existing rights people had under the Fourth or First Amendment, for example, are still in place; this just sounds like further restrictions on the state.


What are rights besides restrictions on the state?


> toward LLM-based AI, moving away from more traditional PyTorch use cases

Wait, are LLMs not built with PyTorch?


GP is likely saying that “building with AI” these days is mostly prompting pretrained models rather than training your own (using PyTorch).
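
For what it's worth, the contrast is concrete: "building with AI" is often just an HTTP call to someone else's pretrained model, with no PyTorch in sight. A minimal sketch, assuming an OpenAI-compatible chat-completions endpoint (the host, model name, and env var below are placeholders, not anyone's real deployment):

    # "Building with AI": one HTTP call to a hosted, pretrained model.
    # The endpoint, model name, and API key are placeholder assumptions;
    # any OpenAI-compatible chat-completions server has this shape.
    import os
    import requests

    resp = requests.post(
        "https://api.example.com/v1/chat/completions",  # hypothetical host
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json={
            "model": "some-pretrained-model",
            "messages": [{"role": "user", "content": "Summarize this bug report."}],
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])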


Everyone is fine-tuning constantly, though. Training an entire model in excess of a few billion parameters is pretty much on nobody's personal radar; you have a handful of well-funded groups using PyTorch to do that. The masses are still using PyTorch, just on small training jobs.

Building AI, and building with AI.
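
To make the "small training jobs" side concrete, here's a toy sketch of the kind of fine-tune most people actually run with PyTorch. The model and batch are invented for illustration; a real job would load a pretrained backbone and real data:

    # Toy fine-tune of a small classification head in PyTorch.
    # All sizes and data here are made up for illustration.
    import torch
    import torch.nn as nn

    head = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 2))
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-in batch: 32 precomputed embeddings with binary labels.
    features = torch.randn(32, 768)
    labels = torch.randint(0, 2, (32,))

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(head(features), labels)
        loss.backward()   # backprop through the head
        optimizer.step()  # one gradient update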


Fine-tuning is great for known, concrete use cases where you have the data in hand already, but how much of the industry does that actually cover? Managers have hated those use cases since the beginning of the deep learning era — huge upfront cost for data collection, high latency cycles for training and validation, slow reaction speed to new requirements and conditions.


llama.cpp and Candle are a lot more modern for these things than PyTorch/libtorch, though libtorch is still the de facto standard.


That's wrong. llama.cpp and Candle don't offer anything, design-wise, that PyTorch cannot do. What they offer is a smaller deployment footprint.

What's modern about LLMs is the training infrastructure and the single-coordinator pattern, which PyTorch has only just started on and which is inferior to many internal implementations: https://pytorch.org/blog/integration-idea-monarch/


PyTorch is still pretty dominant in cloud hosting; I'm not aware of anyone not using it (usually by way of vLLM or similar). It's also completely dominant for training; I'm not aware of anyone using anything else there either.

It’s not dominant for self-hosting, where llama.cpp wins, but there's also not really that much self-hosting going on (at least compared with the volume of requests that hosted models are serving).
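
For anyone wondering what "by way of vLLM" looks like, the offline API is about this small; vLLM runs PyTorch underneath. A minimal sketch (the model name is just an example):

    # Hosted-style inference via vLLM, which uses PyTorch under the hood.
    # The model name is only an example.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    outputs = llm.generate(["The capital of France is"], params)
    print(outputs[0].outputs[0].text)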


Using HTML tables doesn't just make your data sortable.


And it’ll be so good and cheap that you’ll figure, “hell, I could sell our excess compute resources for a fraction of what AWS charges.” And then I’ll buy them, and you’ll be the new cloud. And then more people will, and eventually this server-infrastructure business will dwarf your actual business. And then some person in 10 years will complain about your IOPS pricing and start their own server room.


YouTube Shorts now eat up like 2/3 of my UI. I hate them.

