I'd like to think that modern data centers are water-cooled, so they'd be quieter these days, unless you're implying that this application of theirs is running on legacy hardware? :P
The whole "browser game" industry is built on this phenomenon. It's about getting kids on school laptops mindlessly looping on something while shoving ads in their face.
Honestly, get the tech out of classrooms. A few 8-bit machines that can run LOGO are far more genuinely educational than all the gunk they have today.
I grew up in the 2000s and I remember almost everyone in the computer lab would be playing Flash games, until someone came in and yelled at us because it wasn't "educational" enough.
They almost let us play RuneScape (something something medieval history?) until they saw me firebolt a rat and declared it unacceptably violent.
I visited my old school once, a few years after graduating, and was startled to see many people on their laptops in the hallways. I guess they had become required. I had graduated right around the time smartphones came out, and we didn't have laptops either. (You'd see a laptop at school occasionally but it was a rare sight.)
I'm glad the fanciest thing I had was a TI-84, because it got me to spend most of my time socializing, which I think was pretty good for my development.
Gotta get schools back to using paper homework. There are so many of those awful online classroom portals for homework. Absolutely trash software, technically speaking.
TurnItIn.com was starting to be a thing when I was in high school. I found out it didn’t sanitize the papers you upload and had no CSRF protection, so I could upload a doc with inline JavaScript to hit the change-password and logout APIs.
Was pretty impactful for my education, just not in the intended way
The irony of the TBL quotes there is that the entire problem with the semantic web is the ontological tarpit that results from the excessive expressive power of a general triple store.
Well, I’d argue that many things in the semweb are not expressive enough and lead to the misunderstandings we have.
People think, for instance, that RDFS and OWL are meant to SHACL people into bad, over-engineered ontologies. The problem is that these standards add facts and don't subtract facts. At risk of sounding like ChatGPT: it's a data transformation system, not a validation system.
That is, you’re supposed to use RDFS to say something like “every :Dog is an :Animal”, so a reasoner can add the :Animal facts; it never rejects anything.
The point of the namespace system is not to harass you, it is to be able to suck in data from unlimited sources and transform it. Trouble is it can’t do the simple math required to do that for real, like
?s :lengthInFeet ?o -> ?s :lengthInInches 12*?o .
Because if you were trying OWL-style reasoning over arithmetic you would run into Kurt Gödel kinds of problems. Meanwhile you can’t subtract facts that fail validation, and you can’t subtract facts that you just don’t need in the next round of processing. It would have made sense to promote SHACL first instead of OWL, because garbage in, garbage out: you are not going to reason successfully unless you have clean data… but what the hell do I know, I’m just an applications programmer who models business processes enough to automate them.
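To make that rule concrete, here’s roughly what it means as a transformation over a toy triple set, sketched in Python (the predicates and values are made up for illustration). The point is that the rule adds derived facts, and the arithmetic step in the middle is exactly what RDFS/OWL entailment can’t express:

```python
# Toy sketch (hypothetical predicates/values) of the rule
#   ?s :lengthInFeet ?o  ->  ?s :lengthInInches 12*?o
# treated as a data transformation over a set of triples.
def apply_length_rule(triples):
    """Derive a :lengthInInches triple from every :lengthInFeet triple."""
    derived = set(triples)
    for s, p, o in triples:
        if p == ":lengthInFeet":
            # The multiplication here is the part OWL-style
            # entailment can't do: compute, not just assert.
            derived.add((s, ":lengthInInches", 12 * o))
    return derived

data = {(":bridge", ":lengthInFeet", 30)}
print(apply_length_rule(data))  # adds (":bridge", ":lengthInInches", 360)
```

A SPARQL 1.1 CONSTRUCT with `BIND (?feet * 12 AS ?inches)` can do the same thing, but that’s a query you run by hand, not part of any entailment regime.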
Similarly, the problem of ordered collections has never been dealt with properly in that world. PostgreSQL, N1QL and other post-relational and document DB languages can write queries involving ordered collections easily. I can write rather unobvious queries by hand to handle a lot of cases (wrote a paper about it) but I can’t cover all the cases, and I know back in the day I could write SPARQL queries much better than the average RDF postdoc or professor.
As for under-engineering, Dublin Core came out when I worked at a research library and it just doesn’t come close in capability to MARC from 1970. Larry Masinter over at Adobe had to hack the standard to handle ordered collections because… the authors of a paper sure as hell care what order you write their names in. And it is all like that: RDF standards neglect the basic requirements they’d need to be useful, and then all the complex/complicated stuff really stands out. If they got the basics done maybe people would use them, but they don’t.
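To illustrate what “not dealt with properly” looks like: an RDF ordered collection (rdf:List) is a linked list of rdf:first/rdf:rest cons cells, so even “give me the authors in order” means chasing links rather than sorting on a position column. A toy Python sketch over made-up triples:

```python
# Toy illustration (hypothetical data) of recovering order from an
# rdf:List: each cons cell holds one value (rdf:first) and a pointer
# to the rest of the list (rdf:rest), terminated by rdf:nil.
RDF_FIRST, RDF_REST, RDF_NIL = "rdf:first", "rdf:rest", "rdf:nil"

triples = {
    ("_:l1", RDF_FIRST, "Masinter"), ("_:l1", RDF_REST, "_:l2"),
    ("_:l2", RDF_FIRST, "Berners-Lee"), ("_:l2", RDF_REST, RDF_NIL),
}

def list_members(triples, head):
    """Walk rdf:rest links from `head`, collecting rdf:first values in order."""
    index = {}
    for s, p, o in triples:
        index.setdefault(s, {})[p] = o
    out = []
    while head != RDF_NIL:
        node = index[head]
        out.append(node[RDF_FIRST])
        head = node[RDF_REST]
    return out

print(list_members(triples, "_:l1"))  # ['Masinter', 'Berners-Lee']
```

In SPARQL you can grab all the members with the property path `rdf:rest*/rdf:first`, but the result is an unordered bag, which is exactly the problem: the positions are gone.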
We can probably add to that the fact that the nominated titles almost certainly started out with spaghetti code that had to be refactored, reworked, and gave maintainers nightmares.
Roblox didn't have server functions until 2014 and they weren't mandatory until 2018. Anything the client did automatically replicated to every other client.
That meant I could attach Cheat Engine to Roblox, edit memory addresses to give myself in-game cash, and inject whatever code I wanted onto everyone else's clients.
Thankfully it was sandboxed so I couldn't give people viruses.
This was probably a good decision because most of Roblox's best games from that period weren't updated to use server functions. It's too difficult.
I agree. The animation on the site lost me when it placed a button. IMO, buttons are not part of TUIs; those are just low-resolution GUIs, and that’s sort of the worst of all worlds. The first good TUIs were things like top and elm.
Enshittification comes for everyone. This is sexual selection over natural selection. Claude Code also gets this wrong: they got way too fancy and ruined what a good TUI is by being an uncanny combo of a scrolling log and a completely rewritten canvas.
The only question is whether the entity is interesting and/or correct. Those properties are in the eye of the beholder. Whether they're human or not is beside the point.
No, those properties are tied to the state of mind and experiences of the human, dog, or LLM behind any given comment.
When someone posts:
> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.
then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author. You can't know whether that's good advice or not without being the author, and if that's posted by someone you trust it has value.
An LLM could be prompted to pretend they're an experienced DBA and to comment on a thread, and might produce that sentence, or if the temperature is a little different it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore.
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.
This implies they know the author and can trust them. If they don't know the author then there is no trust to break and they are only relying on the collective intelligence which could be reflected by the AI.
That is to say, trusting a known human author is very different from trusting any human author, and trusting any human author is not that much different from trusting an AI.
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.
This is my point.
There is no sane endgame here that doesn't end up with each user effectively declaring whom they do and don't care to hear from, and possibly transitively extending that relationship n steps into the graph. For example you might trust all humans vetted by the German government but distrust HN commenters.
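For what it's worth, the "n steps into the graph" part is just bounded reachability over trust edges. A toy sketch in Python, with all names and the graph shape hypothetical:

```python
# Toy sketch: who do I hear from if I extend my declared trust
# edges up to n hops into the graph? (BFS with a depth cap.)
from collections import deque

def trusted_within(graph, user, n):
    """Return everyone reachable from `user` via trust edges in at most n hops."""
    seen = {user}
    frontier = deque([(user, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == n:
            continue  # don't extend past the hop limit
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen - {user}

trust = {
    "me": ["alice"],
    "alice": ["bob"],
    "bob": ["carol"],
}
print(trusted_within(trust, "me", 2))  # alice and bob, but not carol
```

Distrust rules would then be filters applied on top of this reachable set, which is where the real policy arguments start.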
For now HN and others are free to do as they will (and the current AI situation has been intolerable). However, I suspect that in the near future governments will attempt to impose their own version of it on ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.
> The only question is whether the entity is interesting and/or correct.
This already falls apart, though. There are whole categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.
Indeed it is, and there are often times I choose not to engage with my fellow humans. But the exceptions are valuable to me and to others. With an LLM I don't feel there would be any exception, that's the difference.
Instead of wanting to change the mind of the other entity, how about focusing on coming to a mutual understanding of what is "correct"? That way it shouldn't matter much if said entity is human, LLM or dog. Unless you're just arguing to push your "correct" on other humans, with little care about their "correct".
It feels like you've loaded the question in a way that feels unfair: "pushing" and "little care", etc. Maybe I should have used a term like "discuss" rather than the more loaded "argue."
Look, I'll give you a loose example: it's not uncommon to see a post making an "error" I know from experience. I might take the time to help someone learn, more quickly than I did, what got me out of that mistaken line of thought. If it's an LLM, why would I care? There are thousands of other people, even other LLMs, that I could be talking to instead.
You've set up a framework here where "mutual understanding" is the end goal but that's just not always what's on the line.
Arguing for the sake of convincing the other person is doomed to inevitable failure, even without the possibility of that person being an LLM.
Arguing for the sake of convincing onlookers reading the conversation is more likely to be effective, and in that case it doesn't matter if the other person is an LLM.
Not necessarily. Using AI you can trivially run astroturfing campaigns to influence public perception. That doesn't really fall on the interestingness or correctness spectrums. For example, if 90% of the comments online claim birds aren't real with a serious tone, you might convince people to fall into that delusion. It becomes "common knowledge" rather than a fringe theory. But if comments reflect reality, then only a tiny portion of people have learned the truth about birds, so people will read those claims with more skepticism.
(naturally "birds aren't real" is a correct vs not correct thing, but the same can be applied to many less-objective things like the best mechanical keyboard or the morality of a war)
The purpose of hiring them is to make them come to the conclusion you already have, so when it goes well you get the credit for doing it, or if it goes sideways you can pin the blame on them.
Most companies are not _just_ tech companies and don't have business analysts, consulting analysts, solutions consultants, software engineers and DBAs on staff.
Many, many, many companies are very happy with the consulting firms they hire.
Of course, those are the consulting firms that aren't publicly traded and in the news all the time (for all the wrong reasons).
Having done some work with these F500 companies, this is part of it. These legacy companies have long seen tech as a cost center, haven't invested in it, and are unable to attract talent. And, for whatever reason, these companies insist on working with large consulting firms, when a dedicated software or tech consulting firm that is smaller would be way better.
Ultimately, why would a large company hire a consultancy company that is bad at tech and has a lot of bad processes to do their tech for them? Because the company itself is even worse and doesn't know what good looks like. If you are hiring McKinsey or Deloitte to do your tech, it's because you are completely lost and don't have the slightest clue how to become unlost. And you have no concept of what good looks like.
If you think the actual tech talent and systems are bad, just wait: when you work with these consulting firms, they are going to run the heaviest SAFe process you have ever seen. For me the worst part is not the tech talent but the most by-the-book, heavy-handed agile process possible. Everything moves way slower because of this "agile" rot, and there is almost no concept of doing proper ideation and prototyping work.
These legacy F500 companies try to do everything cheaply with consultants and offshoring, and yet it always ends up costing way more than it would if they just had proper in-house tech talent.
There is no meaningful distinction, which is why a civil case in a foreign country is picked up by the BBC.
Furthermore, the Gov are trying to deflect on to OpenAI (and the internet) because a huge part of the failings in that case involve them seizing the weapons from the perpetrator, and then giving them back.
Western governments have been looking enviously at China's authoritarianism (notoriously Trudeau blurted out he admired their "basic dictatorship" back in 2013) while completely ignoring any elements that might actually improve the lives of the citizens.
Our politicians are determined to implement the worst of our respective systems.