gravypod's comments | Hacker News

I think the argument for this would be: if AI continues to be useful (generation demand skyrockets), Meta would see a positive ROI on these investments, which would lead others to copy the strategy and build more nuclear. If that happens, a large portion of AI demand would become green(-ish) energy.

If AI demand falls (generation demand plummets), Meta would have subsidized a bunch of nuclear reactors that would likely continue to produce power for 10-50 years.

A big reason I have heard for the lack of nuclear build-out is the lack of starting capital, but after plants are built they are generally stable and maintenance is predictable.

An example of this: https://en.wikipedia.org/wiki/Beznau_Nuclear_Power_Plant. It may be decommissioned eventually, but a 60-year runtime is pretty impressive for 1960s engineering!


> A big reason I have heard for lack of nuclear build out is the lack of starting capital but after they are built they are generally stable and maintenance is predictable.

I have also heard this, but given that Meta's announcement is mostly about funding an extension of the useful lifespan, doesn't that indicate that, without an infusion of capital, ongoing operations are not cost-effective?


Does anyone know whether any of the original Sears home assembly manuals still exist? I'd love to take a look at those plans and can't find any records.


If these AI companies had 100x dev output, why would you acquire a company? Why not just show screenshots to your agent and get it to implement everything?

Is it market share? Because I don't know who has a bigger user base than Cursor.


The claims are clearly exaggerated, or, as you say, we'd have AI companies pumping out new AI-focused IDEs left and right with crazy features; yet they're all VS Code forks that roughly do the same shit.

A VS Code fork with AI, with 10 other competitors doing the same (including Microsoft and Copilot), plus MCPs, VS Code's limitations, and other IDEs catching up. What do these AI VS Code forks have going for them? Why would I use one?


I am validating and testing these for the company and myself. Each has a personality with quirks and deficiencies. Sometimes the magic sauce is the prompting; at other times it is the agentic undercurrent that changes the wave of code.

More specialized models with faster tools are the better shovel. We are not there yet.


Heyo, disclosure that I work for Graphite, and opinions expressed are my own, etc.

Graphite is a really complicated suite of software with many moving pieces and a couple more levels of abstraction than your typical B2B SaaS.

It would be incredibly challenging for any group of people to build a peer-level Graphite replacement any faster than it took Graphite to build Graphite, no matter what AI assistance you have.


It’s always faster and easier to copy than create (AI or not). There is a lot of thought and effort in doing it first, which the second team (to an extent) can skip.

Much respect for what you have achieved in a short time with Graphite.

A lot of B2B SaaS is about tons of integrations with poorly designed and documented enterprise apps, or security theatre, compliance, fine-grained permissions, a11y, i18n, air-gapped deployments, or useless features to keep the largest customers happy, and so on and on.

Graphite (as yet) does not have any of these problems: GitHub, Slack, and Linear are easy as integrations go, and there are limited enterprise features in Graphite.

Enterprise SaaS is hard to do, just with a different type of complexity.


I think trivial GH integrations are easy.

If you've used Graphite as a customer for any reasonable period of time or as part of a bigger enterprise/org and still think our app's particular integration with GH is easy... I think that's more a testament to the work we've done to hide how hard it is :)

Most of the "hard" problems we're solving (which I'm referencing in my original comment) are not visually present in the CLI or web application. They're actually subtle failure states or periods of unavailability that you would only see if I were doing my job poorly.

I'm not talking about just our CLI tool or stacking, to clarify. I'm talking about our whole suite, especially the review page and merge queue.

What kind of enterprise SaaS features do you wish you had in Graphite? (We have multiple orgs with 100s-1,000s of engineers using us today!)


The Graphite review UI/UX is at least 3x better than GitHub, and also somehow loads faster. Same with the customizable PR inbox. Love it! Appreciate your work on the platform!


My guess is the purchase captures the 'lessons learned' based upon production use and user feedback.

What I do not understand is: if high-level staff with capacity can produce an 80% replacement, why not assign the required staff to complete the next 10% and bring it to production readiness? The final 10% is unnecessary features and excess outside of the requirements.


> If these ai companies had 100x dev output,

I hate the unrealistic AI claims about 100X output as much as anyone, but to be fair Cursor hasn't been pushing these claims. It's mostly me-too players and LinkedIn superstars pushing the crazy claims because they know triggering people is an easy ticket to more engagement.

The claims I've seen out of the Cursor team have been more subtle and backed by actual research, like their analysis of PR count and acceptance rate: https://cursor.com/blog/productivity

So I don't think Cursor would have ever claimed they could duplicate a SaaS company like Graphite with their tools. I can think of a few other companies who would make that claim while their CEO was on their latest podcast tour, though.


Existing users, distribution, and brand are a big part of acquisition. Graphite is used mainly by larger orgs.

Also, Graphite isn't just "screenshots"; it's a pretty complicated product.


Who has claimed to have 100x productivity?


Why build if you can buy? Money is not a scarce resource in the AI economy. Time is.


Perhaps the company you are acquiring has the product of 100x dev output?


I've heard this mentioned a few times. Here is a summarized version of the abstract:

    > ... We conduct a randomized controlled trial (RCT)
    > ... AI tools ... affect the productivity of experienced
    > open-source developers. 16 developers with moderate AI
    > experience complete 246 tasks in mature projects on which they
    > have an average of 5 years of prior experience. Each task is
    > randomly assigned to allow or disallow usage of early-2025 AI
    > tools. ... developers primarily use Cursor Pro ... and
    > Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing
    > AI will reduce completion time by 24%. After completing the
    > study, developers estimate that allowing AI reduced completion time by 20%.
    > Surprisingly, we find that allowing AI actually increases
    > completion time by 19%—AI tooling slowed developers down. This
    > slowdown also contradicts predictions from experts in economics
    > (39% shorter) and ML (38% shorter). To understand this result,
    > we collect and evaluate evidence for 21 properties of our setting
    > that a priori could contribute to the observed slowdown effect—for
    > example, the size and quality standards of projects, or prior
    > developer experience with AI tooling. Although the influence of
    > experimental artifacts cannot be entirely ruled out, the robustness
    > of the slowdown effect across our analyses suggests it is unlikely
    > to primarily be a function of our experimental design.
So what we can gather:

1. 16 people were randomly given tasks to do

2. They knew the codebase they worked on pretty well

3. They said AI would help them work 24% faster (before starting tasks)

4. They said AI made them ~20% faster (after completion of tasks)

5. ML experts predicted programmers would be ~38% faster

6. Economists predicted ~39% faster

7. The study measured that people were actually 19% slower

This seems to have been done with Cursor, with big models, on codebases people know. There are definitely problems with industry-wide statements like this, but I feel like the biggest area AI tools help me is when I'm working on something I know nothing about. For example: I am really bad at web development, so CSS/HTML is easier to edit through prompts. I don't have trouble believing that I would be slower trying to make an edit to code that I already know how to make.

Maybe they would see the speedups by allowing the engineer to select when to use the AI assistance and when not to.


It doesn't control for skill/experience using models. This looks VERY different at hour 1,000 and hour 5,000 than at hour 100.


Lazy of me not to check whether I remember correctly, but the one dev who saw productivity gains was a regular Cursor user.


I don't necessarily believe this is the best explanation, but: we know the economy is doing pretty poorly and tech companies are consolidating. Amazon is losing its two main drivers of revenue: irresponsible startups with huge AWS spend and no pressure to optimize their stack, and consumers buying treats online.

Regardless of whether people are spending on AI, the only thing businesses are investing in is AI, and analysts at AWS are likely signaling that many AI companies are not seeing a large ROI and that model developers will likely build their own versions of successful products (Claude Code). AWS doesn't want to scale up its GPU fleet and be left holding the hardware bag.

Amazon can't juice numbers for consumer purchases since the rest of the economy is contracting, most people are losing jobs, etc. So the easiest way for Amazon to juice its metrics is to offshore office work that can be done anywhere. They can claim they are using AI (though from conversations with friends working at Amazon this does not sound very realistic) and ride the AI bubble with no liabilities.


I would love to have some first-class support for monorepos at bigger organizations (e.g. silos, VFS, etc.).


No promises but noted for sure…


Looking at the site, there are comparisons of features between WordPress and other non-AI site builders. How does this compare to things like Lovable?


To build off of what artf said, the biggest thing against WP is really pricing. From speaking to folks, they get nickel-and-dimed for plugins. They also can't migrate to less expensive options.

I think we've taken the best parts of what folks like Lovable have created (one-click deployment and chat to do anything), but built drag-and-drop functionality into it, which is something people have come to depend on. From what I've seen, the uptake of AI in the non-AI site builders has been very slow because they all have proprietary JSON formats.


Tools like Lovable are great for spinning up apps, but our focus is different: we’re mainly aiming at websites. Instead of generating a full React app, the editor outputs HTML/CSS and gives you both visual editing and AI assistance, so you’re not stuck relying only on prompts for small changes.


> The PhiCode runtime for example - a complete programming language with code conversion, performance optimization, and security validation. It was built in 14 days. The commit history provides trackable evidence; manual development of comparable functionality would require months of work as a solo developer.

I've been looking at the docs, and something I don't fully understand is what the PhiCode runtime does. It seems like:

1. Mapping of ligatures -> keywords (ex: ƒ -> def).

2. Caching of four types (source content, Python parsing, module imports, and Python bytecode).

3. Calls into a phirust-transpiler, which seems to try to convert things into Rust code?

4. An http api for requesting these operations.

A lot of this seems to be done with regexes. Was there a motivation for doing string replacement instead of Python -> AST -> conversion -> new AST -> source? What is this code being used for?


Claude Code (and Claude in general, which was 99% used here) likes regexes for this sort of thing. You have to tell it to use tree-sitter, or it'll make a brittle solution by default.


Your four points are correct:

1. Symbol mapping: Yes - ƒ → def, ∀ → for, λ → lambda, π → print, etc. Custom mappings are configurable.

2. Multi-layer caching: Confirmed - source content cache, transpiled Python cache, module import specs, and optimized bytecode with batch writes.

3. PhiRust acceleration: Clarification - it's a Rust-based transpiler that handles the symbol-to-Python conversion for performance, not converting Python to Rust. When files exceed 300KB, the system delegates transpilation to the Rust binary instead of using Python regex processing.

4. HTTP API: Yes - provides endpoints for transpilation, symbol mapping queries, and engine info to enable IDE integration.

The technical decision to use string replacement over AST manipulation came down to measured performance differences.

The benchmarks show 3,000,000+ chars/sec throughput on extreme stress tests and 1,200,000+ chars/sec on typical workloads, whereas AST parsing, transformation, and regeneration introduce overhead that makes real-time symbol conversion impractical for large codebases.

The string replacement preserves exact formatting, comments, and whitespace while maintaining compatibility with any Python syntax, including future language features that AST parsers might not support yet. Each symbol maps directly to its Python equivalent without intermediate representations that can introduce transformation errors.
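A minimal sketch of this kind of regex-based symbol replacement (illustrative only, not PhiCode's actual implementation; the symbol table is a subset of the mappings listed above):

```python
import re

# Example symbol table (from the mappings above); the real table is configurable.
SYMBOLS = {"ƒ": "def", "∀": "for", "λ": "lambda", "π": "print"}

# One alternation compiled once; longest symbols first avoids partial matches.
PATTERN = re.compile(
    "|".join(re.escape(s) for s in sorted(SYMBOLS, key=len, reverse=True))
)

def transpile(source: str) -> str:
    """Replace each symbol with its Python keyword; all other text (formatting,
    comments, whitespace) passes through untouched."""
    return PATTERN.sub(lambda m: SYMBOLS[m.group(0)], source)

print(transpile("ƒ area(r):\n    π(3.14159 * r ** 2)"))
# → def area(r):
# →     print(3.14159 * r ** 2)
```

Note that this naive version would also rewrite symbols inside string literals and comments, which is one place a production transpiler needs more care than a bare regex pass.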

The cache system includes integrity validation to detect corrupted cache entries and automatic cleanup of temporary files. Cache invalidation occurs when source files change, preventing stale transpilation results. Batch write operations with atomic file replacement ensure cache consistency under concurrent access.
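The atomic batch-write and integrity-check pattern described here can be sketched with the stdlib alone (hypothetical function names, not PhiCode's actual API):

```python
import hashlib
import os
import tempfile

def write_cache_entry(path: str, data: bytes) -> None:
    """Write a cache entry with a digest header via temp-file-then-rename,
    so concurrent readers never observe a partially written file."""
    digest = hashlib.sha256(data).hexdigest().encode()
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(digest + b"\n" + data)
    os.replace(tmp, path)  # atomic replacement on both POSIX and Windows

def read_cache_entry(path: str):
    """Return cached bytes, or None if the entry is missing or corrupted."""
    try:
        with open(path, "rb") as f:
            digest, data = f.read().split(b"\n", 1)
    except (FileNotFoundError, ValueError):
        return None
    return data if hashlib.sha256(data).hexdigest().encode() == digest else None
```

Invalidation on source change would then be a matter of keying the cache path on a hash of the source file.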

The runtime serves cognitive improvements for domain-specific development. Mathematical algorithms become more readable when written with actual mathematical notation rather than verbose keywords. It can also help in game development, where certain functions can benefit from different naming (e.g. def → skill, def → special, def → equipment).

The gradual adoption path matters for production environments. Teams can introduce custom syntax incrementally without rewriting existing codebases since the transpiled output remains standard Python. The multi-layer caching system ensures that symbol conversion overhead doesn't impact execution performance.

This enables domain-specific languages for mathematics, finance, education, or any field where visual clarity improves comprehension. The system maintains full Python compatibility while enabling cognitive improvements through customizable syntax.


> Where AST parsing, transformation, and regeneration introduces overhead that makes real-time symbol conversion impractical for large codebases.

I don't really understand why you need to do anything different with a parser than with the regex method; there's no real reason to parse to an AST (with all the Python goodness involved in that) at all when the parser can just do the string replacement the same as whatever PhiRust is doing.

I have this PEG VM (based on the LPeg papers) I've been poking at for a little while now that, while admittedly I haven't actually tested its speed, I'd be amazed if it couldn't do 3 MB/s... in fact, the main limiting factor seems to be getting bytes off the disk, and the parser runtime is just noise compared to that with all the 'musttail' shenanigans going on.

And even that is overkill for simple keyword replacement, given all the work done over the years on macro systems needing to be blazing fast -- which is not something I've looked into at all to see how they do their magic, except a brief peek at C's macro rules, which are, let's just say, complicated.


I was just doing research and landed on this exact page last night! I was wondering if anyone knows how someone could mic a room and record audio from only a specific area. For my use case I want to record a couch so I can watch TV with my friends online and remove their speech + show noise from the audio. Setting up some array of mics and using them for beam steering would probably work but there's not a lot of examples I could find on GitHub with code that works in real time.


You might look into OBS and/or VoiceMeeter to see how streamers selectively route audio while livestreaming/recording video/audio streams.

https://obsproject.com/

https://voicemeeter.com/


Loud show noise and your online friends' nearby audio are going to be reflected around the room as well as off of your bodies.

What you want isn't microphone or beamforming tech, it's echo cancellation the same as every videoconferencing software uses.

You just need to feed the show audio and friend audio in, and apply echo cancellation to each.
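The adaptive-filter idea behind acoustic echo cancellation can be sketched with a toy NLMS canceller (a simplification; real AEC stacks such as the one in WebRTC add delay estimation, double-talk detection, and nonlinear processing):

```python
import numpy as np

def nlms_echo_cancel(mic, reference, taps=64, mu=0.5, eps=1e-8):
    """Adaptively estimate how `reference` (the known show/friend audio) appears
    at the mic, subtract it, and return the residual (the local speech)."""
    w = np.zeros(taps)                      # adaptive FIR estimate of echo path
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = reference[n - taps:n][::-1]     # most recent reference samples
        echo_est = w @ x                    # predicted echo at the mic
        e = mic[n] - echo_est               # residual after cancellation
        w += mu * e * x / (x @ x + eps)     # normalized LMS weight update
        out[n] = e
    return out
```

Feeding the show audio as `reference` and a mic channel as `mic` leaves (approximately) only the sound that did not originate from the reference.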


From the article: "The simplest method of beamforming is delay-and-sum (DAS)". Measure the distance from a point (the couch) to each microphone, delay each signal in the time domain by the time the sound takes to travel from the point to that microphone, and add up the signals. Pretty trivial. Basically you want the microphones to receive the couch signal at the same time, even though they are different distances away.

Make sure there is enough variation in microphone distances for this method to be effective.
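The delay-and-sum recipe above, as a NumPy sketch (integer-sample delays for brevity; real beamformers use fractional-delay interpolation):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def delay_and_sum(signals, mic_positions, source_pos, sample_rate):
    """Advance each mic's signal by its extra propagation delay from the source
    so the couch arrivals line up, then average to reinforce them."""
    dists = np.linalg.norm(mic_positions - source_pos, axis=1)
    delays = (dists - dists.min()) / SPEED_OF_SOUND  # relative to closest mic
    shifts = np.round(delays * sample_rate).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, shift in zip(signals, shifts):
        out += np.roll(sig, -shift)  # undo the later arrival at farther mics
    return out / len(signals)
```

Sound from the target point adds coherently while off-axis sound (your friends' speech elsewhere in the room) adds with mismatched delays and partially cancels.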


I really want to try MathAcademy.com. How quickly do you think someone doing light study could move from Calc 1 to the advanced stuff using that site? In my case I could put in at least 30 minutes to an hour a day.


I can't speak to the advanced stuff but here's my stats on Fundamentals I:

Total time on site (gathered from a web extension): 40h 30m

Total days since start: 32

Total XP earned: 1881

Since "1 XP is roughly equivalent to 1 minute of focused work", I "should have" only spent 31 hours. I did the placement test and started at ~30%, and now I'm at 76%. I'd say 75% is stuff I learned in HS but never had a great handle on, 25% I never knew before.

Overall, I'm quite happy with the course. I'm learning a lot every day and feel like I have stronger fundamentals than I did when I was in school. The spaced review is good, but I do worry I'll lose it again, so I'm thinking of ways I can integrate this sort of math into my development projects. It's no Duolingo: you really do have to put in effort and aim for a certain number of XP per day (I aim for 60 XP rather than a time target).


Hard to say but this should give you an idea [0].

At that rate, less than a year is reasonable.

[0] https://www.justinmath.com/what-is-the-highest-sustainable-d...

