While I can get behind the sentiment, I hope bad writing doesn't become the standard anti-AI signal. A simple grammar check would have greatly improved this post.
AI has plenty of training data on poor writing. If people start looking for bad grammar and typos to identify human articles, generative AI is certainly capable of spitting out prose that looks poorly edited.
I kind of hope the anti-AI-writing stuff passes and we can focus on what makes writing good or bad again instead of “this is clearly AI” posted in response to every blog. I actually don’t care if it’s AI but I do care if it’s worth reading and pleasant to read.
When most AI-written blog posts come from a simple "write a blog post about X" prompt, I don't see any value in reading them. If I wanted to know what ChatGPT thinks of a given subject, I'd just ask it directly.
Good code has always been written with a reader in mind. The compiler understanding it was assumed. The real audience was other engineers. We optimized for readability because it made change easier and delivered business value faster.
That audience is changing. Increasingly, the primary reader is an agent, not a human. Good code now means code that lets agents make changes quickly and safely to create value.
Humans and agents have very different constraints. Humans have limited working memory and rely on abstraction to compress complexity. Agents are comfortable with hundreds of thousands of tokens and can brute-force pattern recognition and generation where humans cannot.
We are still at the start of this shift. Our languages and tools were designed for humans. The next phase is optimizing them for agents, and it likely will not be humans doing that optimization. LLMs themselves will design tools, representations, and workflows that suit agent cognition rather than human intuition.
Just as high-level languages bent machine code toward human needs, LLMs let us specify intent at a much higher level. From there, agents can shape the underlying systems to better serve their own strengths.
For now, engineers are still needed to provide rigor and clearly specify intent. As feedback loops shorten, we will see more imperfect systems refined through use rather than upfront design. The iteration looks less like careful planning and more like saying “I expected you to do ABC, not XYZ,” then correcting from there.
Given how precious the main context is, wouldn't it make sense to have the skill index and skill runner live in a subagent? E.g. for "run this query against the dev db", the skills-index subagent finds the db skill, runs the query, then returns only the result to the main context.
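The delegation idea above can be sketched in a few lines. Everything here is hypothetical (the skill table, the `run_in_subagent` helper); real agent frameworks wire this up differently, but the point is that only the final result crosses back into the main context:

```python
# Hypothetical sketch: a subagent owns the skill index and execution,
# so the main context never pays tokens for either.

SKILL_INDEX = {
    # Stand-ins for real skills; each takes the task arguments.
    "db-query": lambda args: f"rows for: {args}",
    "deploy":   lambda args: f"deployed: {args}",
}

def run_in_subagent(request: str, args: str) -> str:
    """Resolve the matching skill and run it inside the subagent.

    The index lookup and any intermediate output stay here; only the
    return value is handed back to the main context.
    """
    for name, skill in SKILL_INDEX.items():
        if name in request:
            return skill(args)
    return "no matching skill"

# The main context sees one line, not the whole index or transcript.
result = run_in_subagent("run a db-query against dev", "SELECT 1")
```

The trade-off is an extra hop per skill invocation, which only pays off when the index plus skill output would otherwise dominate the main context.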
The big limitation is that you have to approve or reject at every step. With Cursor you can iterate on changes, and it updates the diffs until you approve the whole batch.
Yes, the user I'm replying to is suggesting that taking on a dependency of a shared software repository makes the service no longer a microservice.
That is fundamentally incorrect. As noted in my other post, you can safely use the shared repository as a dependency by pinning a stable version; it's depending on a dynamic version where the problem arises.
The problem with having a shared library which multiple microservices depend on isn’t on the microservice side.
As long as the microservice owners are free to choose what dependencies to take and when to bump dependency versions, it’s fine - and microservice owners who take dependencies like that know that they are obliged to take security patch releases and need to plan for that. External library dependencies work like that and are absolutely fine for microservices to take.
The problem comes when a team in the company owns a shared library and, to get its code into production, must prevail upon the various microservices that consume that code to bump versions and redeploy.
That is the path to a distributed monolith situation and one you want to avoid.
Yes we are in agreement. A dependency on an external software repository does not make a microservice no longer a microservice. It's the deployment configuration around said dependency that matters.
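The pinned-versus-dynamic distinction in the thread above can be illustrated with pip-style requirement strings. The `is_pinned` helper is a made-up illustration, not a real tool; it just classifies whether a spec names one exact version:

```python
import re

def is_pinned(spec: str) -> bool:
    """Return True when a requirement names one exact version.

    A pinned dependency ("shared-lib==1.4.2") only moves when the
    consuming service chooses to bump it. A floating one
    ("shared-lib>=1.0") can change underneath any rebuild, which is
    how deployments end up coupled across services.
    """
    return bool(re.search(r"==\s*[\w.]+$", spec.strip()))

# The microservice owner controls when this changes:
assert is_pinned("shared-lib==1.4.2")
# This can shift on any rebuild, outside the owner's control:
assert not is_pinned("shared-lib>=1.0")
```

With pinning, the library team ships releases on its own schedule and each service owner plans their own bumps, which is exactly the arrangement described above.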
Sure, but that hides most of the facts about how it works. There are a lot of parties involved in this, including people paying for it and being paid for it, and those paying probably outnumber those getting it for free at the point of use. Sweeping that under the rug is just a sales ploy, which shows what the outlet wants you to believe about this program.
I wouldn't call using the most commonly accepted (and concise) terminology a "sales ploy". If you want every service to be accompanied by a wordy explanation of how it works, then every article would need to mention that the current status quo involves complicated taxpayer subsidy in the form of dependent care FSA accounts and a host of state-level programs.
This is a good example, because a "freeway" is free at point of use, but obviously understood to not be free of construction and maintenance cost. It is called "freeway" because "free-to-drive-on highway" would be too wordy.
Poll a random subset of people with the question "are you in favor of free childcare?". X% will say yes.
Poll another set with the question "are you in favor of taxpayer funded childcare?". Y% will say yes.
I would bet any amount of money that X>Y, and (X-Y)% of people did not think about the fact that a free government service is not actually free.
Exactly how big X and Y are, I couldn't say. But identifying propaganda and deceptive language is never something that should be discouraged, even when it's advocating for a cause you agree with.
Me? I'm not opposed to it. I think taxpayer-funded attempts to increase birthrates in our own population are a good idea. We should be doing more to encourage people to have and raise children.
I'm opposed to weasel words and intentionally misleading people about how economies and governments work. I'm also not particularly confident encouraging people to have their children raised by strangers is a good idea.
While I do find the new iOS a little more awkward to use than the previous version, I haven't given up hope on the concept yet. It's a big change, and I can see v2 making some big improvements. Whether it'll be worth it in the long run I'm not sure, but I can't be too upset about them trying something new.
The vscode integration does feel far tighter now. The one killer feature that Cursor has over it is the ability to track changes across multiple edits. With Claude you have to either accept or reject the changes after every prompt. With Cursor you can accumulate changes until you're ready to accept. You can use git of course but it isn't anywhere near as ergonomic.
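For what it's worth, plain git can approximate the accumulate-then-accept flow, just with more ceremony. A throwaway-repo sketch (file names and messages are invented for the demo; in a real session you'd run the staging commands in your working tree between prompts):

```shell
# Sketch: accumulating agent edits across prompts, reviewing and
# accepting them as a single batch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
git commit -q --allow-empty -m "baseline"

echo "edit from prompt 1" > a.txt   # agent's first round of edits
git add -A                          # stage it, but don't commit yet
echo "edit from prompt 2" > b.txt   # agent's second round
git add -A                          # keep accumulating in the index

git diff --cached --stat            # review everything staged so far
git commit -q -m "accept the whole batch"
```

The ergonomics gap is real, though: Cursor shows the accumulated diff inline and lets you reject hunks per file, whereas here you're juggling the index by hand.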