You're assuming there aren't "new things" latent inside currently existing information. That's definitely false, particularly for math/physics.
But it's worth thinking more about this. What gives humans the ability to discover "new things"? I would say it's our interaction with the universe via our senses, and not some special power intrinsic to our brains that LLMs lack. And the thing is, we can feed novel measurements to LLMs (or, eventually, hook them up to camera feeds to "give them senses").
No, it isn't false. If something is "new" it is really just novel: already known to some degree, with two abstracted known things proving the third. Just pattern matching, connecting dots.
The vast majority of work by mathematicians uses n abstracted known things to prove something that is unproven. In fact, there is a view in philosophy that all math consists only of this.
Yep, that's why the work of getting over the threshold takes just as long as it did without AI.
Someone mentioned it is a force multiplier, and I don't disagree: it is a force multiplier for the mundane and ordinary execution of tasks. Complex ones get harder and harder for it, because humans can visualize the final result where AI can't. It is predicting from its input, and it can't know the destination output if the destination isn't part of the input.
My opinion: use the database that is most compatible with the software you are currently using. Don't shoehorn in a database that is less compatible.
I use it for scaffolding and often correct it toward the layout I prefer. Then I use it to check my code, and then scaffold in some more modules. I then connect them together.
As long as you review the code and correct it, it's no different from using Stack Overflow. A Stack Overflow that reads your code and helps stitch the context.
"Stack Overflow that reads your codebase" — perfect. But Stack Overflow is stateless. Agent sessions aren't.
One session's scaffold assumes one pattern. The second session's scaffold contradicts it. You reviewed both in isolation. Both looked fine. Neither knows about the other.
Reviewing AI code per-session is like proofreading individual chapters of a novel nobody's reading front to back. Each chapter is fine. The plot makes no sense.
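A minimal sketch of what those contradictory scaffolds look like in practice. Everything here is hypothetical (the functions, the `db` shape, the conventions are made up for illustration): two sessions each pick an internally consistent error-handling pattern, and a caller written against one pattern silently mishandles the other.

```python
# Session 1's scaffold: failures are signaled by returning None.
def fetch_user_v1(user_id, db):
    row = db.get(user_id)
    if row is None:
        return None          # caller is expected to check for None
    return {"id": user_id, **row}

# Session 2's scaffold: failures are signaled by raising an exception.
class UserNotFound(Exception):
    pass

def fetch_user_v2(user_id, db):
    row = db.get(user_id)
    if row is None:
        raise UserNotFound(user_id)   # caller is expected to catch
    return {"id": user_id, **row}

# A caller reviewing either function alone sees nothing wrong. A caller
# written against session 1's convention never catches session 2's exception.
db = {"alice": {"name": "Alice"}}
assert fetch_user_v1("bob", db) is None   # pattern 1: None-check works
try:
    fetch_user_v2("bob", db)              # pattern 2: raises instead
except UserNotFound:
    pass
```

Each "chapter" passes review; the inconsistency only shows up when the two are stitched together.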
I/O has been the bottleneck for many things, especially databases.
So as someone who has seen a long spread of technological advancements over the years I can confidently tell you that chips have far surpassed any peripheral components.
It's kind of the scenario where compute has to be fast enough anyway to keep up with I/O. So really it always has to be faster; what I'm saying is that it has exceeded even those expectations.
As an SRE, I can tell you AI can't do everything. I have done a little software development, and even there AI can't do everything. What we are likely to see is operational engineering becoming the consolidated role between the two: knows enough about software development, knows enough about site reliability... blamo, operational engineer.
Not the person you are replying to, but even if the technical skills of AI increase (and stuff like Codex and Claude Code is indeed insanely good), you still need someone to make risky decisions that could take down prod.
Not sure management is eager to give software owned by other companies (inference providers) permission to delete prod DBs.
Also, these roles usually involve talking to other teams and stakeholders more often than in a traditional SWE role.
Though
> There are no hiding places for any of us.
I agree with this statement. While the timeline is unclear (LLM use is heavily subsidized), I think this will translate into less demand for engineers, overall.
I think it is important to know that AI needs to be maintained. You can't reasonably expect it to have a 99.9% reliability rate. As long as this remains true, work will exist in the foreseeable future.
It is important to write the code yourself so you understand how it functions. I tried vibe coding a little bit. I totally felt like I was reading someone else's code base.
AI's sanitization practices are bad too.
Let me be clear: there's nothing wrong with AI in your workflow, just be an active participant in your code. Code is not meant to be one-and-done.
You will go through iteration after iteration, security fix after fix. This is how development is.
Similar to any industrial advancement in human history.
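To make the sanitization point concrete, here is a minimal sketch of the kind of fix that comes up in those iterations. The schema and function names are hypothetical; the unsafe pattern (interpolating input into SQL text) is a common one in generated code, and the fix is a standard parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Pattern often seen in generated code: user input interpolated into SQL.
def find_user_unsafe(conn, name):
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

# The iteration-two fix: parameterized query, input is treated as data.
def find_user_safe(conn, name):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input becomes SQL in the unsafe version, plain data in the safe one.
payload = "x' OR '1'='1"
assert find_user_unsafe(conn, payload) == [("alice",)]  # injection: all rows
assert find_user_safe(conn, payload) == []              # no such user
```

Same program, two iterations: exactly the "security fix after fix" loop described above.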