Hacker News | CM30's comments

The issue is that even if you're not writing your own code, you're reliant on your CMS or framework, its plugins and dependencies, any advertising networks you use, etc. not breaking your site. Those things already cause server errors or turn out to be incompatible with other additions when upgraded; giving them yet another way to take down your entire site just makes such things even more of a hassle.

My theory is that the pandemic and lockdowns had a big effect on this. When those were in play, social media was basically the only way to communicate. People were stuck on these platforms almost 24/7, since there was no legitimate way to meet up with anyone in person.

But this burnt a lot of people out on these services. They grew tired of using social media as their only form of communication, and took the chance to get back into real world activities the minute it became available.

Social media was fun when the amount of time you could spend there was limited by other factors and other alternatives were available, but got tiresome when you were basically stuck on it all the time.

In addition to that, I think a few other factors to note here might be:

1. More and more people seem to have realised how unhealthy these sites are, and how using them too much destroys your mental health. I suspect at least a few people realised how bad these services were for them, and decided to mostly quit cold turkey.

2. LLMs meant that spammy and rage-bait inducing content flooded them to a ridiculous degree, and drowned out legitimate discussions in favour of automated slop.

3. The ever more extreme political situation in many countries meant that those not looking for a fight were put off from posting on sites like Twitter or Reddit.


A lot of companies will probably shut down or drastically limit their AI usage due to rising costs. A small or medium-sized business dependent on ever-growing AI expenses is in a really bad position, and could well go under.

I've heard a few companies ended up going back to hiring actual employees for work that was previously done by LLMs, so there's a chance we could see more of that too. We might also see a few try to make it work with older or locally run models.


There's a lot of focus on tech projects here, but it's not just vibe written projects that are ruining communities now.

No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.

YouTube is arguably even worse, since AI-led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw this video about medical misinformation by these 'creators' earlier, and it genuinely made me want to see YouTube crack down on this junk:

https://www.youtube.com/watch?v=UEfCTCBDKIU

And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?

The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...

If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.


I think people like the blog author need to realise that this problem can't be dealt with by content moderation or by users trying their best to be honest. You just get a firehose with an on/off switch; you don't get free filtering or moderation with it.

That's an interesting point. A lot of the tech being used for AI right now could definitely be repurposed in future.

That said, I feel like the comparison isn't exactly perfect here. Both AI and crypto do have some valid use cases, even if the majority of the interest is completely pointless and makes no sense. While something like NFTs is beyond worthless, the tech behind a blockchain or an LLM can be used for a bunch of other purposes.

So, I don't think it's accurate to compare them to Enron. Enron was a scam first and foremost, and delivered nothing. Both crypto and AI are potentially useful technologies pumped up to an absurd degree by a broken market, sorta similar to the dotcom boom in the 90s.


Have to admit, my feelings are mixed here.

On the one hand, yeah it's risky that people are relying on a chatbot as if it's an actual doctor, and people might indeed take bad advice from it if they don't realise it's only a fictional character.

At the same time though, this feels kinda like criminalising roleplaying to some extent, and that's not really a direction I'd support. People on an RP forum or Discord server could also pretend to be a doctor in-universe/for the purpose of a story, and people could also ask them about medical issues and get (likely inaccurate) information in return. Should that be illegal? Should it really be illegal for someone to pretend to be part of a licensed profession for the purpose of entertainment?

I guess you could say it should be illegal to make up a license number in a fictional work or RP setting, but even then I feel like people should be able to separate fiction from reality. Entertainment shouldn't be limited because some people might be delusional/might rely on it in place of actual professionals.


I'm no fan of caning or physical punishment for crimes, but isn't that how a lot of bullying ends? The victim snaps, the bully gets beaten up or injured in some way and the latter finds an easier target to go after?

At the end of the day, a bully picks on those they perceive to not be a threat, whether that's a school bully using physical violence or a copyright/patent troll harassing individual creators and small companies. Being forced to go against someone with more resources or who can inflict serious damage against the aggressor is how a lot of bullies get shut down.


I would suspect that the vast majority of bullying ends when the victim is able to escape from the bully -- by changing schools, etc.

We hear about victims snapping and beating up their bullies because that makes a good story. How about victims who snap but then are beaten up (because the bullies are often bigger and more used to violence) even more? Probably much more common.


That's a fair point. The challenge is that a lot of the time, it's hard to escape in that way. The ideal would be that a bully is expelled or forced to change schools to get their victims away from them, but the system seems very reluctant to do that. Same with letting the victims find a new school to replace the old one.

It works really well for bullying in workplaces and communities though.

And true, the bully might win. But it takes the victim from being an easy target to a slightly harder one, and a bully may decide it's not worth the hassle/risk when others aren't going to fight back at all. It's like that old joke about outrunning a bear: you're not trying to outrun the bear, you're trying to outrun the people next to you. Or the old adage about home burglaries: a lock won't stop a determined thief, but they'd usually rather find an easy-to-break-into house than go through the effort of defeating a security system.


> The victim snaps, the bully gets beaten up

The unspoken rule is that the victim must only fight hand-to-hand. They cannot use a weapon in any way; if the victim uses a weapon to defend themselves, they will be the one in the wrong.

Life is hard for victims. They are often bullied because they are weaker, and the only way out is hand-to-hand combat.


I mean he's right, the old internet and the technology that underlies it still exists, and there's nothing stopping you from building and using sites that work independently of the big social media platforms/centralised services.

That said, I do wish this essay had a bit better contrast. I had to highlight some of the tables to read them at all, which isn't exactly ideal.


The components heavily give off Claude Code vibes. I use CC to build internal tools and, given free rein over the design, this is exactly what it will produce.

Won't comment on the writing other than that the punchlines do feel a bit pretentious in an AI kinda way. I've seen the author's blog posts and I much prefer their natural writing to this essay-style output, but to each their own.


The writing is definitely AI.

I see this often in HN posts and I’m not sure whether to comment. It seems most people don’t care, and are only discussing the title, of which the LLM post is a predictable extrapolation, so human effort on the article would have been wasted anyway.

I wish people would discuss more interesting topics and less repeats. But probably most of the unique posts just aren’t interesting to me, and I spend too long here so I see repeats more than the average user.


LLMs seem hellbent on generating Tailwind interfaces. So much of the internet was already like this, so I’m not sure it’s a Claude thing (Google Stitch doesn’t seem to know how to make anything else).

Somewhat. If you open port 22 on an IP, you're going to get hit by bots scanning the Internet, trying to find an open server to SSH into. If you open port 80 or 443, you're going to get bots looking for /wp-admin.php just as soon as the domain name for it hits the certificate transparency logs. The Internet's not a friendly place to be. It once was, but the default now is that someone will try to abuse anything you put up. That makes it hard to want to set up a new platform outside of the big centralized ones.

In ham radio - we have a 'Q code' (abbreviation) for man-made noise: QRM (QRN is naturally occurring: thunderstorms and such). This is used mainly to refer to electrically noisy transformers, vehicles, misconfigured transmitters etc. Always been there, gets worse and/or better over time - but gotta figure out how to deal with it as part of the hobby.

When doing stuff on the internet, I've just decided to stop worrying and treat these scans like the QRM mentioned above. You can filter it a bit if you like [1], but really, a sensibly configured and maintained SSH server is about as secure as it gets as far as I can see.

[1] https://alastairbarber.com/Building-Anycast-Network/#securit...
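For the "filter it a bit" option, a rough Python sketch of the per-IP counting that tools like fail2ban automate might look like this. The log lines, the regex, and the threshold of 3 are all made up for illustration; real auth-log formats vary by distro.

```python
import re
from collections import Counter

# Hypothetical sshd log lines of the kind found in /var/log/auth.log,
# embedded here so the sketch is self-contained.
SAMPLE_LOG = """\
Jan 10 03:12:01 host sshd[411]: Failed password for root from 203.0.113.7 port 51122 ssh2
Jan 10 03:12:04 host sshd[411]: Failed password for root from 203.0.113.7 port 51130 ssh2
Jan 10 03:12:09 host sshd[413]: Failed password for invalid user admin from 203.0.113.7 port 51144 ssh2
Jan 10 03:15:44 host sshd[420]: Accepted publickey for alice from 198.51.100.4 port 40022 ssh2
Jan 10 03:18:20 host sshd[431]: Failed password for invalid user test from 192.0.2.99 port 60001 ssh2
"""

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def scan_candidates(log_text, threshold=3):
    """Return IPs with at least `threshold` failed logins -- likely scanners."""
    counts = Counter(m.group(1) for line in log_text.splitlines()
                     if (m := FAILED.search(line)))
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    print(scan_candidates(SAMPLE_LOG))  # {'203.0.113.7': 3}
```

In practice you'd feed the resulting IPs into a firewall deny list rather than just printing them.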


> If you open port 22 up on an ip, you're going to get hit by bots scanning the Internet, trying to find an open server to ssh into

This has been the case for years. I remember seeing it in logs for port 22 more than 20 years ago.


Eh, as someone who runs a bunch of smaller sites and forums, I've not had any issues with scammers or hackers gaining access to them. Most of them are looking for obvious vulnerabilities via some sort of script, and usually assume the file names and database structure are the same for every site they target.

It's plenty possible to run an independent site with no issues if you keep things up to date and change a few things to thwart the most common attack attempts.


Those scanners are low effort. Don't run vulnerable software and you're fine (this mostly means not running any website you didn't write, but wasn't that the point anyway?) Run it in a container and you're double-fine.

If you don't have a wp-admin.php who cares if someone is trying to access it? If you have one but it correctly validates your admin credentials, again who cares?

You can turn it into a fun project of making a honeypot.


Oh hey, it's the game I remember from the cameos in Link's Awakening and the Wario Land series. Honestly, I don't think anyone associates Mad Scienstein with this game anymore, given his appearances in Wario Land 3, 4 and Dr Mario 64.

Yeah, security through obscurity as part of securing a system is good. Security through obscurity as the only way of securing a system is not.

Like, a lot of it comes down to 'high friction' vs 'low friction'. Obscurity means high friction. It means that the attacker needs to craft a specific solution for your site or system in particular rather than relying on an off-the-shelf solution to handle it all for them.

For example, the article's point about changing the WordPress database prefix fits into this category perfectly. Does it really make things that much more 'secure'? No, of course not. But it does mean that automated scripts that just assume tables like wp_posts exist will fail. It means that an attacker can't just run any old WordPress hacking toolkit and watch it do its thing, they have to figure out what database prefix you're using first.
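To make the prefix point concrete, here's a toy Python sketch using an in-memory sqlite3 database as a stand-in for WordPress's MySQL tables (a real install sets `$table_prefix` in wp-config.php; the `mysite7_` prefix here is invented). The "toolkit" hard-codes the default `wp_posts` name and falls over the moment the prefix changes:

```python
import sqlite3

# Stand-in for a WordPress database. Real installs use MySQL, but the
# idea is identical: the prefix is just part of every table name.
def make_wp_db(prefix="wp_"):
    db = sqlite3.connect(":memory:")
    db.execute(f"CREATE TABLE {prefix}posts (id INTEGER, title TEXT)")
    db.execute(f"INSERT INTO {prefix}posts VALUES (1, 'Hello world')")
    return db

def naive_toolkit_query(db):
    """What an off-the-shelf script does: assume the default wp_ prefix."""
    try:
        return db.execute("SELECT title FROM wp_posts").fetchall()
    except sqlite3.OperationalError:
        return None  # the table the script assumed doesn't exist

default_db = make_wp_db()            # default prefix: the query works
renamed_db = make_wp_db("mysite7_")  # changed prefix: the script fails
print(naive_toolkit_query(default_db))  # [('Hello world',)]
print(naive_toolkit_query(renamed_db))  # None
```

A determined attacker can still discover the real prefix, of course; the point is only that the generic script no longer works unmodified.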

Same with antispam solutions. The best solution to stop spam is to make your site unique in some way. To add some sort of challenge that a new user has to overcome to use the site, like a question related to the topic, a honeypot field they can't fill in, a script that detects how quickly they register, etc.

This won't stop a determined spammer, but it will stop or delay bots and automated scripts that rely on the target system having the same behaviour across the board. The spammer has to specifically target your site in particular, not just every forum script running the same software.
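A minimal sketch of the honeypot-field and timing checks described above, applied to a form submission represented as a dict. The field names and the "Wario" topic question are made-up examples, not any real forum software's API:

```python
import time

MIN_FILL_SECONDS = 3  # humans take at least a few seconds to fill a form

def looks_like_a_bot(form, rendered_at, submitted_at):
    # Honeypot: a field hidden by CSS that humans never see.
    # A bot that fills in every input will put something here.
    if form.get("website_url_confirm"):
        return True
    # Timing: bots tend to submit near-instantly after fetching the page.
    if submitted_at - rendered_at < MIN_FILL_SECONDS:
        return True
    # Topic question: unique per site, so generic scripts can't answer it.
    if form.get("forum_question", "").strip().lower() != "wario":
        return True
    return False

now = time.time()
human = {"username": "alice", "website_url_confirm": "", "forum_question": "Wario"}
bot = {"username": "buyPills", "website_url_confirm": "http://spam.example"}
print(looks_like_a_bot(human, now - 20, now))  # False
print(looks_like_a_bot(bot, now - 0.4, now))   # True
```

Each check is trivial on its own; the value is that the combination is unique to your site, so a script written for "every forum running this software" gets none of them right.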

And much of society works this way to a degree. A federated or decentralised system (whether a social network or a political movement) isn't necessarily technically harder to attack than a centralised one.

But it is more work to attack it. If a government or company wants to censor Reddit or Discord or YouTube, they have one target they can force to censor information across the board. If they want to target the Fediverse or some sort of torrent based system, then they have to track down dozens of people and deal with at least some of those people refusing or taking it to court or being in countries that aren't under their control or whatever else.

That's kinda what a good security through obscurity setup can be. You can't mass-target everyone at once; you have to target different systems individually and spend more time and resources in the process.

However, you still need real security measures underneath. Security through obscurity is like hiding a safe behind a painting: it'll stop casual attackers from finding it, but it won't stop a targeted attack on its own. You still need a strong lock, materials that are difficult to drill through, and a safe that's difficult to remove from the wall.

