Hacker News | suddenlybananas's comments

I do wonder how many cyclists in Paris are really replacing cars versus replacing metro usage. Obviously, it's still good for people to cycle as well since the metro can be insanely crowded at times, but living in Paris, my impression is that the people who cycle are the kinds who would have been unlikely to own a car in any case.

That's a really good point. I hope at the very least it enables a "car -> public transport -> bikes" flow. So even if these people were taking the metro, all that extra metro space can accommodate car owners who wish to switch.

It depends though. At least in London a lot of cycleways were made by removing bus lanes and replacing them with high quality segregated cycle lanes.

This has led to a big percentage increase in cyclists in London, but a fairly significant decline in bus passengers.

I think roughly 300m/yr cycle journeys were added, but bus has lost 500m pax/yr (mainly because of increased congestion making them less and less attractive). Note this isn't all down to bus lane removal, but it's a significant part of it.


> I do wonder how many cyclists in Paris are really replacing cars versus replacing metro usage.

That’s not necessarily a problem, particularly for saturated lines like the 13.


Exactly.

But humans are capable of very many original ideas. Look around you, humans were able to remake the entire world because of these original thoughts.

What are they buying?

> Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B

They are buying out investors, it's like musical chairs.

The liquidity is going to be better at OpenAI, so it pleases everyone (less pressure from investors, more liquidity for investors).

The acquisition is just a collateral effect.


Are you implying that the revenue multiple on this acquisition is lower than OpenAI's, and that they'd be making money by acquiring Astral and folding it into their own valuation multiple? I don't think that's the case, and I'd wager the revenue is non-existent.

This was an acquihire (the author of ripgrep (rg), which Codex uses almost exclusively for file operations, is part of the team at Astral).

So, 99% acquihire, 1% other financial trickery. I don't even know if Astral has any revenue or sells anything, candidly.


They raised $4M and have 26 full-time employees (paying $120-200K/yr; cf. https://pitchbook.com/profiles/company/523411-93).

It means the company had nearly exhausted its runway, so all these employees would soon have had to find new jobs.

It's a very very good product, but it is open-source and Apache / MIT, so difficult to defend from anyone just clicking on fork. Especially a large company like OpenAI who has massive distribution.

Now that OpenAI has hired the employees, it has no more guarantees of retaining them than if it had made direct offers to each of them.


So I don't see how the acquisition is collateral - it's an acquihire plain and simple; if anything, it is also supply chain insurance, as they clearly use a lot of these tools downstream. As you noted, the licensing on the tools is extremely permissive, so there appears to be very little EV for an acquirer outside of the human capital building the tools or building out monetized features.

I'm not too plugged into venture cap on opensource/free tooling space but raising 3 rounds and growing your burn rate to $3M/yr in 24 months without revenue feels like a decently risky bag for those investors and staff without a revenue path or exit. I'd be curious to see if OpenAI went hunting for this or if it was placed in their lap by one of the investors.

OpenAI has infamously been offering huge compensation packages to acquire talent, so this would be a relative deal even at a modest valuation. As noted, codex uses a lot of the tooling this team built here and previously. OpenAI has realized that competitors who do one thing better than them (like Claude with coding, before codex) can open the door to getting disrupted if they lapse - lots of people I know are moving to Claude for non-coding workflows because of its reputation and relatively mature/advanced client tools.


A brief note, your numbers are way off here — Astral subsequently raised a Series A and B (as mentioned in the blog post) but did not announce them. We were doing great financially.

(I work at Astral)


It seems you are one of the most active contributors there.

I would sincerely have understood better (and even preferred) if OpenAI had made you, personally, a very generous offer as an individual contributor, rather than choosing a strategy where the main winners are the VCs of the purchased company.

From the outside, we perceive zero to almost no revenue (no pricing? no "contact us"? maybe some consulting?) and millions burned.

Whether it's $4M, $8M, or $15M burned, no idea.

Who's going to fill that hole, and when? (Especially since PE funds have roughly five-year timelines, and the company dates from 2021.)

The end product is nice, but as an investor, being nice is not enough, so they must have deeper motives.


I mean you pirouetted onto the AI hype train before running out of working capital - I guess that's doing great financially by some definitions.

> They raised 4M USD

What was their pitch?


To raise a $4M seed from AAA partners usually requires connections plus the track record/credibility of the founders - looks like they have that here, since they raised 3 rounds with zero revenue.

I can see why the former investors and Astral founders would like that, what I don't see is what OpenAI get out of the deal.

Maybe OpenAI literally considers themselves as the ultimate non-profit company. Hmm…

Mindshare and a central piece of the Python package management ecosystem.

Most popular product on the planet acquires a random python packaging org for mindshare? What am I not seeing here?

I feel like it's pretty easy to predict what OpenAI is trying to do. They want their codex agent integrated directly into the most popular, foundational tooling for one of the world's most used and most influential programming languages. And, vice versa, they probably want to be able to ensure that tooling remains well-maintained so it stays on top and continues to integrate well with their agent. They want codex to become the "default" coding agent by making it the one integrated into popular open source software.

This makes much more sense as a Zoom-buys-Keybase-style acquihire. I bet within a month the Astral devs will be on new projects.

Bundling codex with uv isn't going to meaningfully affect the number of people using it. It doesn't increase the switching costs or anything.


"uv" is a very widely used tool in the Python ecosystem, and Python is important to AI. Calling it "a random Python packaging org" seems a bit unfair.

I think this is more about `ruff` than `uv`. Linting is all about parsing the code into something machines can analyze, which to me feels like something that could potentially be useful for AI in a similar way to JetBrains writing their own language parsers to make "find and replace" work sanely and what not.

I'm sort of wondering if they're going to try to make a coding LLM that operates on an AST rather than text, and need software/expertise to manage the text->AST->text pipeline in a way that preserves the structure of your files/text.
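A minimal sketch of that round trip using Python's stdlib `ast` module (illustrative only, not anything OpenAI has announced). It also shows why "preserving the structure of your files" is the hard part: `ast.unparse` discards comments and the original layout on the way back to text:

```python
import ast

source = "x = 1  # the answer\ndef greet(name):\n    return 'hello ' + name\n"

# Text -> AST: parse the source into a tree a machine can analyze.
tree = ast.parse(source)

# AST -> text: render the tree back to source (Python 3.9+).
# The comment and the original spacing are gone, which is exactly
# the lossiness a text->AST->text pipeline would have to work around.
print(ast.unparse(tree))
```

Tools like ruff sidestep this by using a lossless syntax tree that keeps trivia (comments, whitespace) attached to nodes, rather than the stdlib AST.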


Writing a parser is not so much work that you'd buy a company in order to do it. Piggybacking on LSP servers and tree-sitter would be more efficient.

The parser is not the hard part. The hard part is doing something useful with the parse trees. They even chose "oh is that all?" and a picture of a piece of cake as the teaser image for my Strange Loop talk on this subject!

https://www.youtube.com/watch?v=l2R1PTGcwrE


Writing a literal parser isn’t too hard (and there’s presumably an existing one in the source code for the language).

Writing something that understands all the methods that come with a Django model goes way beyond parsing the code, and is a genuine struggle in a language like Python, where you can't execute the code without worrying about side effects.

ty should give them a base for that, where the model is able to see things that aren't literally in the code and aren't in the training data (e.g. an internal version of something like SQLAlchemy).


If you’re talking about magic methods/properties enabled by reflection and macros, then you’re no longer statically analyzing the code.

Static analysis just requires that you don't actually execute the code. It's possible (sometimes) to infer what methods/properties would be created without actually executing the code.

E.g. mypy has a plugin to read the methods and return types of SQLAlchemy records, I believe without actually executing them.

Obviously not globally true, but in limited domains/scenarios you can see what would exist without actually executing the code.


What you're not seeing, edited inline, is:

Not-most popular LLM software development product on the planet acquires most popular/rapidly rising python packaging org for mindshare.


This just seems like panic M&A. They know they aren’t on track to ever meet their obligations to investors but they can’t actually find a way to move towards profitability. Hence going back to the VC well of gambling obscene amounts of money hoping for a 10x return… somehow

The dev market? Anthropic's services are arguably more popular among a certain developer demographic.

I guess this move might end up in a situation where the uv team comes up with some new agent-first tooling, which works best or only with OAI services.


One of the popular products on the planet acquires the most popular python packaging org

I didn't know Claude bought Astral! /S

Why can't they just vibe code a uv replacement?

They can, everyone can.

Good luck vibe coding marketshare for your new tool.


OpenAI could vibe-code marketshare by introducing bias into ChatGPT's responses and recommendations. "– how to do x in Python? – Start by installing OpenAI-UV first..."

This. It's valuable because if you have many thousands of Python devs using Astral tooling all day, and it tightly integrates with subscription-based OpenAI products, the likelihood of OpenAI product usage increases. Same idea with the Anthropic Bun deal. It remains to be seen what those integrations are and whether they translate to more subs, but that's the current thesis: buy a user base -> cram our AI tool into the workflow of that user base.

Why would that marketshare be valuable?

But new tools (like uv) start with no market share.

IMO, they are buying businesses just to shut them down later to avoid potential competition. The recipe is not new; it has been practiced by Google/Microsoft for many years.

What competition was OpenAI likely to face from a team working on fast Python tooling?

I have no idea, but for sure they did their homework before making this step. I suppose they're grabbing these businesses just to stay ahead, to prevent competitors from buying them instead.

Sitting on cash as a company also looks bad to investors

   $ uv install claude-agent-sdk 
   I'm sorry Dave, I can't do that


If they just give Astral money to keep going, great, but I have difficulty believing they would be so altruistic. This is quite an upsetting acquisition.

>Don't prevent people from being brought in to build stuff.

If housing is about supply and demand, surely the demand part matters too.


LLMs are not AGI, something else may be in the future. Acknowledging this has nothing to do with evolution.

>There is a reason why the title of Dr.Strangelove is "How I Learned to Stop Worrying and Love the Bomb".

Indeed, and somewhat surprisingly, some linguists have embraced even this analogy [1] without appreciating the subtext of the title.

[1] https://arxiv.org/abs/2501.17047


I think your target is the wrong target myself. Now what?

If more people thought like you, we wouldn't have jobs because companies wouldn't make a profit

If people thought like you, we wouldn't have jobs because everyone would fucking die when cars, MRI machines, nuclear power plants and ICBMs, airplanes, infra, and payments start misbehaving. Now what?

this is a category error that i specifically called out in my comment.

What is the category of code that does not need quality? You need it to not interact with the real world, with people's finances, with people's personal data. Basically, it's code that only exists for PMs to show to investors (in startups) and VPs (in enterprise), but not for real users to rely on.

> What is the category of code that does not need quality?

For example there exist "applications"/"demos" that exist "to show the customer what could be possible if they hire 'us'". These demos just have to survive a, say, intense two-hour marketing pitch and some inconvenient questions/tests that someone in the audience might come up with during these two hours.

In other words: applications for "pitching possibilities" to a potential customer, where everything is allowed to be smoke and mirrors if necessary (once the customer has been convinced with all tricks to hire the respective company for the project, the requirements will completely change anyway ...).


Yeah, that's what I mean - prototypes. The caveat is that before agentic coding, the skills to build a prototype and the skills to build a production system were generally the same, so a prototype demonstrated not only what was possible in general, but what your team of engineers could do specifically. Now these skills will diverge, so prototypes will no longer prove anything like that. They are still going to be useful for demonstrations and market research, though.

Where?

> That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.

Here? Most of what I've listed IS boring enterprise code. Unless we're talking medical/military grade.


fair, you have presented specific niche where the ~quality~ correctness is important in enterprise - not just libraries.

but most people aren't writing code in those places. it's usually CRUD, advertisement, startups, ecommerce.

also there are two things going on here:

- quality of code

- correctness of code

in serious reusable libraries and opensource tools, quality of code matters. the interfaces, redundancy etc.

but that's not exactly equal to correctness. one can prioritise correctness without dogmatism in craft like clean code etc.

in most of these commercial contexts like ecommerce, ads - you don't need the dogmatism that the craft camp brings. that's the category error.


Maybe you’re too entrenched in the web section of software development. Be aware that there’s a lot of desktop and system software out there.

Even in web software, you can write good code without compromising on delivery speed. That just requires you to be good at what you're doing. But the web is more forgiving of mistakes, and a lot of frameworks have no taste at all.


Do you think more SDEs work in mission-critical software or in the areas I mentioned?

3.7 to 4.5 looks pretty flat here.

>well, yeah. because that's been the experience for many people.

Yes but this blogpost argues that at least over the course of 2024 to the end of 2025, those people were mistaken.

