Sparkyte's comments | Hacker News

I am not in the same context. As we shift job roles, lots of people will get uprooted, and it will have a negative impact on life in a general sense.

Similar to any industrial advancement in human history.


AI, *cough* LLMs, don't discover things; they simply surface information that already existed.

You're assuming there aren't "new things" latent inside currently existing information. That's definitely false, particularly for math/physics.

But it's worth thinking more about this. What gives humans the ability to discover "new things"? I would say it's due to our interaction with the universe via our senses, and not due to some special powers intrinsic to our brains that LLMs lack. And the thing is, we can feed novel measurements to LLMs (or, eventually, hook them up to camera feeds to "give them senses").


No, it isn't false. If it is new, it is merely novel: novel because it is already known to some degree, and two other abstracted known things prove the third. Just pattern matching, connecting dots.

The vast majority of work by mathematicians uses n abstracted known things to prove something that is unproven. In fact, there is a view in philosophy that all math consists only of this.
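
As a toy illustration of that view (my own sketch in Lean, not anything from the thread), two already-known facts compose into a "new" one:

    -- two known lemmas (h1, h2) combine, via transitivity, into a "new" fact
    theorem a_lt_c {a b c : Nat} (h1 : a < b) (h2 : b < c) : a < c :=
      Nat.lt_trans h1 h2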

Between you and me, telnet is not dead. Sometimes I use it to probe a port to verify it is working.

You might wanna use netcat for that instead [1]. Or, for example, socat [2]. Netcat has been around for a long, long time now.

[1] nc (1) - arbitrary TCP and UDP connections and listens

[2] socat (1) - Multipurpose relay (SOcket CAT)


That's not really telnet. Yeah, it's using the same client, but the server and underlying protocol are what's relevant here.

The modern replacement for telnet used in the "probe a port" fashion is nc/netcat.
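
For the "probe a port" use case, a minimal sketch in Python (the host and port here are placeholders, swap in your own):

    import socket

    # Try to open a TCP connection: success means something is listening;
    # a refused connection or a timeout means the port isn't reachable.
    try:
        with socket.create_connection(("example.com", 443), timeout=5):
            print("port is open")
    except OSError as exc:
        print(f"port check failed: {exc}")

Same idea as "nc -vz example.com 443", just without depending on which netcat variant happens to be installed.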


Yep, that is why getting the work over the threshold takes just as long as it did without AI.

Someone mentioned it is a force multiplier, and I don't disagree: it is a force multiplier in the mundane and ordinary execution of tasks. Complex tasks get harder and harder for it, because humans can visualize the final result where AI can't. It is predicting from input, but it can't know the destination output if the destination isn't part of the input.


My opinion: use the database that is most compatible with the software you are currently using. Don't shoehorn in a database that is less compatible.

I use it for scaffolding and often correct it for the layout I prefer. Then I use it to check my code, and then scaffold in some more modules. I then connect them together.

As long as you review the code and correct it, it is no different than using Stack Overflow. A Stack Overflow that reads your code and helps stitch the context together.


"Stack Overflow that reads your codebase" — perfect. But Stack Overflow is stateless. Agent sessions aren't.

One session's scaffold assumes one pattern. A second session's scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.

Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.


Wasn't there something about Moltbook being fake?


I/O has been the bottleneck for many things, especially databases.

So, as someone who has seen a long spread of technological advancements over the years, I can confidently tell you that chips have far surpassed any peripheral components.

It's kind of that scenario where compute has to be fast enough to support I/O anyway. So really it always has to be faster; I am just saying that it has exceeded those expectations.


As an SRE I can tell you AI can't do everything. I have done a little software development, and even there AI can't do everything. What we are likely to see is operational engineering becoming the consolidated role between the two. Knows enough about software development and knows enough about site reliability... blamo, operational engineer.


"As an SRE I can tell you AI can't do everything."

That's what they used to say about software engineering, and yet this is becoming less and less obvious as capabilities increase.

There are no hiding places for any of us.


Not the person you are replying to, but even if the technical skills of AI increase (and stuff like Codex and Claude Code is indeed insanely good), you still need someone to make risky decisions that could take down prod.

Not sure management is eager to give software owned by other companies (inference providers) permission to delete prod DBs.

Also, these roles usually involve talking to other teams and stakeholders more often than a traditional SWE role does.

Though

> There are no hiding places for any of us.

I agree with this statement. While the timeline is unclear (LLM use is heavily subsidized), I think this will translate into less demand for engineers, overall.


I think it is important to know that AI needs to be maintained. You can't reasonably expect it to have a 99.9% reliability rate. As long as this remains true, work will exist for the foreseeable future.


Indeed; however, the number of "someones" is going to be way smaller.


It's still perfectly obvious, as AI can't remotely write software if you want it to actually, you know, work.


Paraphrase: "As an SRE I can tell you that the undetermined and unknowable potential of AI definitely won't involve my job being replaced."


Actually, it is more that my role will transform, and I have no say in it.


It is important to write the code yourself so you understand how it functions. I tried vibe coding a little bit. I totally felt like I was reading someone else's code base.

The sanitization practices of AI-generated code are bad, too.
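
A minimal sketch of the kind of thing I mean (my own example, assuming SQL string-building as the failure mode):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "Robert'); DROP TABLE users;--"

    # The pattern generated code often reaches for: interpolating untrusted
    # input straight into the query string. Don't do this.
    # conn.execute(f"INSERT INTO users (name) VALUES ('{user_input}')")

    # Parameterized query: the driver handles the escaping.
    conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))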

Let me be clear: nothing is wrong with having AI in your workflow; just be an active participant in your code. Code is not meant to be one-and-done.

You will go through iteration after iteration, security fix after security fix. This is how development is.

