I don't see why AI would be able to help you solve all your legacy code problems.
It still struggles to make changes to large code bases, but it has no problem explaining those code bases to you, helping you research or troubleshoot functionality 10x faster, especially if you're knowledgeable enough not to take its responses as gospel but willing to have the conversation. A simple layman prompt of "are you sure X does Y for Z reason? Then what about Q?" will quickly get to the bottom of any functionality. A 1-million-token context window is very capable if you manage that context window properly with high-level information and not just your raw code base.
And once you understand the problem and the required solution, AI won't have any problems producing high-quality working code for you, be it in Rust or COBOL.
In my experience with Legacy Code projects the problem is very rarely "what is this code doing?" Some languages like VB6 (or even COBOL) are just full of very simple "what" answers. Obfuscation is rare and the language itself is easy to read. Reading the code with my own eyes gives me plenty of easy enough answers for the "what". LLMs can help with that, sure, but that's almost never the real skill in working with "legacy code".
The problem with working with legacy code, and where most of the hardest-won skills are, is investigating the "how" and the "why" over the "what". I haven't seen LLMs be very successful at that. I haven't seen very many people, including myself, always be very successful at that. A lot of the "how" and the "why" becomes a mystery buried in the catacombs of ancient commit messages and a mind-reading séance with developers no longer around to question directly. "Why is this code doing what it is doing?" and "How did this code come to use this particular algorithm or data structure?" are frighteningly, deeply existential questions in almost any codebase, but especially as code falls into "legacy" modes of existence.
Some of that becomes actual physical archeology that LLMs can't even think to automate: the document you need is trapped in a binder in a closet in a hallway that the company sealed up and forgot about for 30 years.
Usually the answers, especially these days, were never written down on anything truly permanent. There was a Trello board that no one bothered to archive when the project switched to Jira. Some of the # references seem to point to Bitbucket issue and pull request numbers; was the project ever hosted on Bitbucket? No one archived that either. (This is an old CVS ID. I didn't even realize this project pre-dated git.) The original specs at the time of the MVP were a whiteboard and a pizza party. One of the former PMs preferred "hands on" micro-management and only ever communicated requirements changes in person to the lead dev in a one-hour "coffee" meeting every Wednesday and sometimes the third Thursday of a month. The team believed in a physical Kanban board at the time, and it was all Post-It Notes on the glass window in the conference room named "Cactus Joe". I heard from Paul, who was on a different project at the time, that Cathy's cube was right next to that window, and though she was only an Executive Assistant at the time she moved a lot of those Post-It Notes around and might be able to tell you stories about what some of them said if you treat her to a nice lunch.
Software code is poetry written by people. The "what" is sometimes just the boring stuff like does every other line rhyme and are the right syllables stressed. The "how" and "why" are the stories that poetry was meant to tell, the reasons for it to exist, and the lessons it was meant to impart. Sometimes you can still even read some of that story in the names of variables and the allegories in its abstractions, when a person or two last shaped it, as you start to pick up their cultural references and build up an empathy for their thought processes ("mind reading", frighteningly literally).
That's also why I fear LLMs will only accelerate that process: a hallway with closets getting bricked up takes time and creates certain kinds of civic paperwork. (You'll discover it eventually, if only because the company will renovate again, eventually.) Whereas a prompt file for a requirements change never getting saved anywhere is easy to do (and generally the default). That prompt file probably wasn't kicked up and down a change management process nor debated by an entire team in a conference room for days, so human memory of it will be just as nonexistent as the file no one saved. LLMs aren't even always given the "how" or "why", as they are from top to bottom "what machines"; that stuff likely isn't even in the lost prompts. If a team is smaller or using a "Dark Software Factory", is there even a reason to document the "how" or "why" of a spec or a requirement?
Generalizing further: with no human writing the poetry, the allegories and cultural references disappear, and the abstractions become just abstractions and not illuminating metaphors. LLMs are a blender of the poetry of many other people; there's no single mind to try to "read" meaning from. There's no clear thought process. There's no hope that a ranty monologue in a commit message unlocks the debate that explains why a thing was chosen despite the developer thinking it a bad idea. LLMs don't write ranty monologues about how the PM is an idiot and the users are fools and the regulatory agency is going to miss the obvious loophole until the inevitable class action suit. Most of those are concepts outside the scope of an LLM "thought process" altogether.
The "what is this code doing" is the "easy" part, it is everything else that is hard, and it is everything else that matters more. But I know I'm cynical and you don't have to take my word for it that LLMs with "legacy code" mostly just speed up the already easy parts.
More so, I meant to think of oil, copper, and now silver. Their prices all follow demand. All have had varying prices at different times. Compute shouldn't really be any different.
But yes. Cisco's value dropped when there was no longer the same amount of spend on networking gear. Nvidia's value will drop when there is no longer the same amount of spend on their gear.
Other players impacted in an actual economic downturn could be Amazon with AWS and MS with Azure. And even more so those now betting on AI computing. At least general-purpose computing can run web servers.
You can sell the old, less efficient GPUs to folks who will be running them with markedly lower duty cycles (so, less emphasis on direct operational costs), e.g. for on-prem inference or even just typical workstation/consumer use. It ends up being a win-win trade.
Building a new data center and getting power takes years to double your capacity. Swapping out a rack for one that is twice as fast takes very little time in comparison.
Depends on the rate of growth of the hardware. If your data center is full and fully booked, and hardware is doubling in speed every year, it's cheaper to switch it out every couple of years.
Both companies bought a set of taxis in the past. Presumably at the same time if we want this comparison to be easy to understand.
If company A still has debt from that, company B has that much debt plus more debt from buying a new set of taxis.
Refreshing your equipment more often means that you're spending more per year on equipment. If you do it too often, then even if the new equipment is better you lose money overall.
If company B wants to undercut company A, their advantage from better equipment has to overcome the cost of switching.
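The refresh-cadence trade-off above can be sketched with a toy calculation. Assume (hypothetically) a fixed price per rack, performance doubling every year, and a fixed planning horizon; then compare the annualized spend per unit of average performance at different refresh intervals. The numbers here are made up purely for illustration, not real hardware economics:

```python
RACK_PRICE = 100_000   # hypothetical, constant price per rack
HORIZON = 6            # planning horizon in years

def avg_speed(refresh_years, horizon=HORIZON):
    """Average relative performance over the horizon, assuming
    performance available for purchase doubles every year and you
    replace your gear every `refresh_years` years."""
    speeds = []
    for t in range(horizon):
        purchase_year = t - (t % refresh_years)  # year the current gear was bought
        speeds.append(2 ** purchase_year)        # perf doubles yearly
    return sum(speeds) / horizon

def cost_per_perf(refresh_years):
    """Annualized equipment spend divided by average performance."""
    spend_per_year = RACK_PRICE / refresh_years
    return spend_per_year / avg_speed(refresh_years)

for k in (1, 2, 3, 6):
    print(f"refresh every {k}y: cost per unit perf = {cost_per_perf(k):,.0f}")
```

In this toy model a 2-year cadence beats both a 1-year cadence (you churn cash faster than the performance gains pay back) and a 6-year cadence (you're stuck on slow gear), which matches both halves of the argument: refreshing too often loses money, but with yearly doubling, refreshing "every couple of years" comes out ahead.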
> They both refresh their equipment at the same rate.
I wish you'd said that upfront. Especially because the comment you replied to was talking about replacing at different rates.
So in your version, if company A and B are refreshing at the same rate, that means six months before B's refresh, company A had the newer taxis. You implied they were charging similar amounts at that point, so company A was making bigger profits, and had been making bigger profits for a significant time. So when company B is able to cut prices 5%, company A can survive just fine. They don't need to rush into a premature upgrade that costs a ton of money; they can upgrade on their normal schedule.
TL;DR: six months ago company B was "no longer competitive" and they survived. The companies are taking turns having the best tech. It's fine.
Property taxes do not directly translate into rent, the % of the tax that is on the land value of the property can't be passed on, because the supply of land is inelastic.
Yes & no. Higher costs can obviously be passed on to consumers, but higher taxes make things a less attractive investment, too. The higher your costs regardless of whether a unit is occupied or not, the less interesting it is as an investment.