tsvetkov's comments | Hacker News

Have you lived in one of those rent controlled “paradises”? In Europe, yes, there are sizeable populations living in subsidized housing, and often there are restrictions on rent increases, but new tenants pay way higher prices and have to compete for every available unit with dozens of other potential tenants. New tenants frantically overbidding each other, while old tenants pay pennies compared to today’s market prices, mmm, what a life.

“It can work” in some sense, of course. People are surprisingly adaptable to living in semi-dysfunctional environments. But in reality the only thing that truly works is building a lot of housing.


> new tenants pay way higher prices and have to compete for every available unit with dozens of other potential tenants.

Rent control isn't the cause of that, though, it's lack of housing supply to meet demand. If there was no rent control, competition would be just as fierce, and prices still high.


They are not independent. Rent control discourages housing development.


Not something I've seen in Montreal.


> Claude's API is still running on zero-margin, if not even subsidized, AWS prices for GPUs; combined with Anthropic still lighting money on fire and presumably losing money on the API pricing.

Source? Dario claims API inference is already “fairly profitable”. They have been optimizing models and inference, while keeping prices fairly high.

> dario recently told alex kantrowitz the quiet part out loud: "we make improvements all the time that make the models, like, 50% more efficient than they are before. we are just the beginning of optimizing inference... for every dollar the model makes, it costs a certain amount. that is actually already fairly profitable."

https://ethanding.substack.com/p/openai-burns-the-boats


Most of these “we’re profitable on inference” comments gloss over the depreciation cost of developing the model, which is essentially a capital expense. Given the short lifespan of models, it seems unlikely that the fully loaded cost looks pretty. If you could sweat a model for 5 years, the financials would likely look decent. With new models every few months, it’s likely really ugly.


Interesting. But it would depend on how much of model X is salvaged in creating model X+1.

I suspect that the answer is almost all of the training data, and none of the weights (because the new model has a different architecture, rather than some new pieces bolted on to the existing architecture).

So then the question becomes, what is the relative cost of the training data vs. actually training to derive the weights? I don't know the answer to that; can anyone give a definitive answer?


There are some transferable assets, but the challenge is the commoditization of everything, which means others have easy access to “good enough” assets to build upon. There’s very little moat to build in this business, and that’s making all the money dumped into it look a bit frothy and ready to implode.

GPT-5 is a bellwether there. OpenAI had a huge head start and basically access to whatever money and resources they needed, and after a ton of hype they released a pile of underwhelming meh. With the pace of advances slowing rapidly, the pressure will be on to make money from what’s there now (which is well short of what the hype had promised).

In the language of Gartner’s hype curve, we’re about to rapidly fall into the “trough of disillusionment.”


How do you jump from “not attributable to AI” to “must be a recession”? I think that would be true for jobs that are not separable from a company’s economic activity, but it isn’t true for a good portion of tech jobs. A car manufacturer can’t sell the same number of cars while cutting half of its assembly workers, but most tech giants can maintain the profitable parts of their business with a fraction of their workforce (if not indefinitely, then at least for some time). Some work can be eliminated altogether, some might be outsourced to other countries, some split among existing workers. I think that’s what’s happening. Why is it happening, though? Hard to say for sure, but I don’t see why it couldn’t be a combination of tighter availability of capital, a shrinking addressable market (due to deglobalization & demographics) and AI competition requiring huge capex.


Sure, but monitoring, reviewing and steering do not really require modern IDEs in their current form. Also, I'm sure agents can benefit from parts of IDE functionality (navigation, static analysis, integration with build tools, codebase indexing, ...), but they sure don't need the UI. And without the UI those parts can become simpler, more composable and more portable (compatible with multiple agent tools). IMO another way to think about CLI agentic coding tools is as a new form of IDE.


As was already mentioned elsewhere, Emacs + Magit to monitor incoming changes is a great combo.


Yes, I am rejigging my whole vim setup.

The following are now the stars of my workflow:

* Git plugins - Diffview, gitsigns, fugitive

* Claude Code plugin / Terminals with claude code

* Neovim sessions

* Git worktrees

Editing-focused workflows have taken a backseat:

* LSP

* Vim motion and editing efficiency

* File navigation

* Layouts


Why even use vim at this point? The LLM ecosystem there is decent, but definitely less polished than in a modern IDE.


Vim motions are nice.


Fascinating to see how agents are redefining what IDEs are. This was not really the case in the chat AI era. But as autonomy increases, the traditional IDE UI becomes a less important form of interaction. I think those CLI tools have a pretty good chance of creating a new dev tools ecosystem. Creating a full-featured language plugin (let alone a full IDE) for VSCode or IntelliJ is not for the faint-hearted, and cross-IDE portability is limited. CLI tools + MCP can be a lot simpler, more composable and more portable.


IDE UI should shift to focusing on catching agentic problems early and obviously, and on providing dead-simple rollback strategies, parallel survival-of-the-fittest solution generation, etc.


My fundamental worry with this technology is that you all are going to seriously fuck up the development experience for those of us who feel the technology at the core of this stuff is not sufficient. Development efforts will focus on this workflow at the expense of good software.


Yep, that's why my priority would be pushing the weaknesses to the forefront and giving us maximum control over those weaknesses.

Old IDEs were built for the same purpose generally, but prioritized different weaknesses.


How does maximizing AI use prevent developers from reading their code? Especially if bonuses are not tied to productivity, as you say. Just treat AI as a higher-level IDE/editor.


There's more code to read as unskilled or sleepy developers push tons of sloppy changes. The code works, mostly, so either one loses more time chasing subtle issues or one yolos the approvals to have time for one's own coding workload.


I don't understand how your comment relates to what I've been responding to.

>> I know many who have it on from high that they must use AI. One place even has bonuses tied not to productivity, but how much they use AI.

> How does maximizing AI use prevent developers from reading their code?

In my mind developers are responsible for the code they push, no matter whether it was copy-pasted or generated by AI. The comment I responded to specifically said "bonuses tied not to productivity, but how much they use AI". I don't see that using AI for everything automatically implies having no standards or not taking responsibility for the code you push.

If managers force developers to purposefully lower standards just to increase PRs per unit of time, that's another story. And in my opinion that's a problem of engineering & organisational culture, not necessarily a problem with maximizing AI usage. If an org is OK with pushing AI slop no one understands, it will be OK with pushing handwritten slop as well.


> If managers force developers to purposefully lower standards just to increase PRs per unit of time

That's basically what I'm referring to.


The market price is supposed to account for future growth, not just for current revenue. Predicting the future is speculative by definition, but it's not completely detached from reality to bet that Nvidia has the potential to grow significantly for some time (at some point either the market cap or the multiple will correct, of course).

I also see where the reasoning here contradicts reality. If we assume Nvidia only sells $1,000 GPUs and moves a few million units a year, then how did it receive $137B in FY2025? In reality they don't just sell GPUs; they sell systems for AI training and inference at insane margins (I've seen 90% estimates) and also some GPUs at decent margins (30-40%). These margins may be enough to stimulate competition at some point, but so far those risks have not materialized.
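
A rough back-of-envelope on that premise (the unit count is my assumption for illustration; the $137B figure is the one cited above):

    // Assumed: "a few million" consumer GPUs at ~$1,000 each, vs. the revenue figure cited above.
    val assumedUnitsPerYear = 3_000_000L
    val assumedPriceUsd = 1_000L
    val consumerGpuRevenue = assumedUnitsPerYear * assumedPriceUsd  // ≈ $3B
    val citedFy2025Revenue = 137_000_000_000L                       // the $137B cited above
    println(citedFy2025Revenue / consumerGpuRevenue)                // ≈ 45x gap, filled by data-center systems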


It’s not unreasonable to bet that their 60% margin on data center products disappears, either. It only takes one competitor getting their act together and those margins will be cut in half.


The fun thing is that their R&D cost is dwarfed by their Sales & Marketing and General & Administrative expenses. So, if I understand their FY2022 financial statements correctly, a 7% cut to R&D could lower their total expenses by 2% at best: https://ir.gitlab.com/news-releases/news-release-details/git... Even cutting their R&D by 100% would not make GitLab profitable if other expenses are kept the same, so economics is clearly not the reason for the layoffs.
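
To make that arithmetic explicit (a sketch with assumed round numbers, not GitLab's exact figures; the real split is in the linked statement):

    // Assumed: R&D is roughly 28% of total expenses, and the layoff trims R&D spend by 7%.
    val assumedRndShareOfTotal = 0.28
    val assumedRndCut = 0.07
    val totalExpenseReduction = assumedRndShareOfTotal * assumedRndCut  // ≈ 0.0196
    println("%.1f%%".format(totalExpenseReduction * 100))               // ~2.0% of total expenses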


How do we know they're laying-off people that belong to "R&D" category? Maybe they're cutting people from Sales&Marketing or General&Administrative.

There's not too much detail in that press release.

Thanks for sharing the link to the report!


Yeah, I can't be sure. However, the "tech" part of the layoff most likely falls under the R&D expenses, which are relatively small compared to their overall costs. So I don't see how cutting any number of the core development workforce would make a significant difference, at least in the financial sense.

Also, I looked at the wrong year. Currently they are in Q4 of FY2023; the statements for the most recent quarter are here: https://ir.gitlab.com/news-releases/news-release-details/git...


Fleet does not use Compose, but it does use Skiko[1], which also provides bindings for Skia[2] (the native graphics library also used by Chrome & Flutter).

The main difference between the libraries is that Skija provides Java/JVM bindings for Skia, whereas Skiko provides Kotlin bindings for the Kotlin/JVM, Kotlin/JS, and Kotlin/Native targets. Of course Skiko's Kotlin/JVM bindings can be used from other JVM languages, not just from Kotlin (a tiny usage sketch follows the links below).

[1] https://github.com/JetBrains/skiko

[2] https://skia.org/
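
Roughly what drawing with Skiko's Kotlin/JVM bindings looks like (a minimal sketch; the package and method names are from memory of the Skiko/Skija API, so treat them as assumptions rather than exact signatures):

    import org.jetbrains.skia.Paint
    import org.jetbrains.skia.Surface

    fun main() {
        // Draw into an in-memory raster surface via the Skia bindings.
        val surface = Surface.makeRasterN32Premul(256, 256)
        val paint = Paint().apply { color = 0xFF3366CC.toInt() }
        surface.canvas.drawCircle(128f, 128f, 64f, paint)
        // surface.makeImageSnapshot() then yields an Image you can encode or blit to a window.
    }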


The existing options for monitors suitable for use as a TV are extremely limited. For TVs, 55" and 65" are common sizes; for monitors there were just a few options, which were basically dumb TVs. And then there are bigger sizes and different panel types (there are no 4K QD-OLED monitors, for example) that are simply not available as monitors. I think that, if telemetry and ads are the concern, buying a TV and not connecting it to the internet (using an Apple TV or a homelab media server instead) is a better choice for many TV buyers.

