Hacker News | seer's comments

I don’t think that is the only solution.

I like what Singapore is doing - having a government-built “base level” of housing that is both abundant and readily available - it can anchor prices so that deep excesses are harder to end up with.

It’s like a market where a very significant player keeps the price low, for its own reasons.

In such a scenario prices would not go up as sharply, so there would be less incentive for people to buy real estate purely as a financial vehicle.

And the government can also prioritise who it sells the units it builds to - e.g. not investors.

I’m honestly surprised that Western governments are not trying this.


Yes - in the UK we had a strong social housing sector, with award-winning architecture (and some terrible mistakes too).

Then along came 'right-to-buy', allowing tenants to buy their social housing at knock-down prices (and so become natural Tory [right-of-centre party] voters).

If councils had been allowed to use the money to build more social housing, then maybe this would have been fine. But they were not. So now we have affordability issues in the UK too.


Singapore is very special in that 80% of the population lives in "public" housing.

Because the government builds that much housing, and subsidizes the cost, to ensure their citizens (and PRs) have affordable housing.

It's special because they made it that way.


I still remember the joy of using the flagship Rails application - Basecamp. Minimal JS, at least compared to now, mostly backend rendering; everything felt really fast and magical to use.

Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.

Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.

React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvements to most pieces of the stack.

It’s just wild to me that we had faster web apps, with better organization, better dev experience, faster to build and easier to maintain.

The only “wins” I can see for a Next.js project are flexibility, animation (though this is also debatable), and maybe deployment cost, but again I’m comparing to deploying Rails 15 years ago - things have improved there as well, I’m sure.

I know react can accomplish _a ton_ more on the front end but few projects actually need that power.


Isn’t that what Phoenix (Elixir) is? All server side, a small JS lib for partial loads, each individual website user gets their own lightweight process on the backend with its own state, and everything is tied together with websockets.

Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the front-end code.

Honestly it is kinda nice.


Also what https://anycable.io/ does in Rails (with a server written in Go)

WebSockets plus thin JS are best for real-time stuff rather than standard CRUD forms. They cover a ton of high-interactivity use cases where people often reach for React/Vue (and then end up pushing absolutely everything needlessly into JS), while keeping the most important logic on the server with far less duplication.

For simple forms I personally find the server-by-default approach of https://turbo.hotwired.dev/ to be far better: the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM instead of doing full page reloads (i.e. clicking edit changes a small form in place, instead of redirecting to one big form).


Idk about Phoenix, but having tried Blazor, the DX is really nice. It's just a terrible technical solution, and network latency / spotty wifi makes the page feel laggy. Not to mention it eats up server resources to do what could be done on the client instead with way fewer moving parts. Really the only advantage is you don't have to write JS.

It's basically what Phoenix LiveView specifically is. That's only one way to do it, and Phoenix is completely capable of traditional server rendering and SPA style development as well.

LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...


> Honestly it is kinda nice.

It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.

Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.


The idea came from linking salt to heart failure, but last I checked the link was due to a confounding variable - e.g. a bad diet leads to problems that themselves lead to high cholesterol. It was not the salt in the food but the quality of the nutrition itself.

However blaming salt was quick and easy so that’s what the people with money did.

Historically speaking, salt has been such a scarce and valuable resource. I have read accounts of how, in the Balkans, people would resort to selling kids into slavery just so the family could have enough salt to survive (sacrificing one kid to save the rest).

When I started reading about how salt was bad for you it never made any sense.


No, excessive salt causes high blood pressure. It is definitely a problem. Limit your intake to 6g a day or less. That's plenty for flavour.

Source: https://www.nhs.uk/live-well/eat-well/food-types/salt-in-you...


Yikes. It's so disappointing to see public health agencies pushing medical misinformation but that's nothing new for the NHS, I guess. In reality if you look at this from an evidence-based medicine perspective what really matters is not the quantity but rather the osmolality. And the optimal level depends on multiple factors including genetics and activity level.

https://doi.org/10.1111/jch.13374


Ha! My parents also thought this was the case. I was a child before the internet and phones, and my favourite hobby was playing around in the trees next to our Soviet commie block.

I had an incredible childhood: building hidden dwellings in the woods, unsupervised fires and bicycle journeys, football, building ice castles etc., swimming and martial arts lessons. My parents even limited my TV time to 2h a day.

But I still had -1 myopia for every grade until 7th.

My analysis is that by that time I got into reading books - both science and fantasy, and then boom my eyesight was fucked.

Thank god for LASIK.


> My analysis is that by that time I got into reading books - both science and fantasy, and then boom my eyesight was fucked.

When I became a heavy reader I speedran from long-sighted to short-sighted. I think in 4th grade I got my long-sighted diagnosis. In 5th grade I started lifting heavier books. By the end of 7th I had more or less the myopia prescription I have now.


Isn't this the compiled languages vs. writing pure machine code argument all over again?

The compiler produces a metric shit ton of code that I don't see when I'm writing C++ code. And don't get me started on TypeScript/Clojure - the amount of code that gets generated underneath is mind-boggling, yet I don't see it; for me the code is "clean".

And I'm old enough to remember the tail end of the MachineCode -> CompiledCode transition, and have certainly lived through CompiledCode -> InterpretedCode -> TranspiledCode ones.

There were certainly people who knew the ins and outs of the underlying technology who produced some stunningly fast and beautiful code, but the march of progress was inevitable and they were gradually driven to obscurity.

This recent LLM step just feels like more of the same. *I* know how to write an optimized routine that the LLM will stumble to do cleanly, but back in the day lots of assembler wizards were doing some crazy stuff, stuff that I admired but didn't have the time to replicate.

I imagine in the next 10-20 years we will have devs that _only_ know English, are trained in classical logic and have flame wars about exactly what code their tools would generate given various sentence invocations. And people would benchmark and investigate the way we currently do with JIT compilation and CPU caching - very few know how it actually works, but the rest don't have to, as long as the machine produces the results we want.

Just one more step on the abstraction ladder.

The "Mars" trilogy by Kim Stanley Robinson had very cool extrapolations where this all could lead, technologically, politically, social and morally. LLMs didn't exists when he was writing it, but he predicted it anyway.


You don't have to review the compiler output because it's deterministic, thoroughly tested, predictable, consistent and reliable.

You have to review all the LLM output carefully because it could decide to bullshit anything at any given time so you must always be on high alert.


Ha! That’s what is actually happening under the hood, but it is definitely not the experience of using it. If you are not into CS, or you haven’t coded in the abstraction below, it can be very tough to figure out what exactly is going on, and the reactions to your high-level code can feel random.

A lot of people (me included) have a model of what is going on when they write some particular code, but sometimes the compiler just doesn’t do what you think it would - the JIT will not kick in, some data will not be mapped in the correct format, and your code will magically not do what you wanted it to.

Things do “stabilise” - before TypeScript there was a slew of transpiled languages, and with some of them you got really nasty bugs where you had no idea how they were being triggered.

With Ruby, there were so many memory leaks that you just gave up and periodically restarted the whole thing, because there was no chance of figuring it out.

Yes, things were “deterministic”, but sometimes less so, and we built patterns and processes around that uncertainty. We still do for a lot of things.

While things are very, very different, the emotion of “reining in” an agent gone off the rails feels kinda familiar, on a superficial level.


A stronger, plausible interpretation of their comment is that "understanding" meant evaluating correctness, not performance.

Higher level languages did not hinder evaluating correctness.

Formal languages exist because natural languages are inevitably ambiguous.


Exactly - understanding the correctness of the code, but also understanding a codebase: what its purpose is and what it should be doing. Add to that how the codebase is laid out. As more cruft is added, the details fade into the background, making it harder to understand the crux of the application.

Measuring performance is relatively easy regardless of whether the code was generated by AI or not.


If you somehow manage to get magnetic fields involved, so you are not afraid of friction with the cable itself, at 1.3 g max apparent acceleration/deceleration (after a turnover) and including Earth’s gravity you get 116 min to geostationary.

If you account for various inefficiencies, like taking it slow in the lower atmosphere and whatnot, it should still be a matter of hours. So totally feasible and even comfortable.
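
Back-of-the-envelope sketch of where those numbers come from - a minimal Python check, assuming a constant 0.3 g net acceleration relative to the cable the whole way (i.e. ignoring that gravity falls off with altitude) and a turnover at the halfway point:

    import math

    # Climber accelerates at a constant 0.3 g net (1.3 g apparent minus 1 g
    # of gravity, treated as constant the whole way up) for half the distance
    # to geostationary altitude, then decelerates for the other half.
    G_NET = 0.3 * 9.81            # net acceleration relative to the cable, m/s^2
    GEO_ALTITUDE = 35_786e3       # metres above the surface

    t_half = math.sqrt(GEO_ALTITUDE / G_NET)   # d/2 = a*t^2/2 -> t = sqrt(d/a)
    v_peak = G_NET * t_half                    # speed at the turnover point

    print(f"trip time  ~ {2 * t_half / 60:.0f} min")   # ~116 min
    print(f"peak speed ~ {v_peak / 1000:.1f} km/s")    # ~10.3 km/s at turnover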


> If you somehow manage to get magnetic fields involved, so you are not afraid of friction with the cable itself, at 1.3 g max apparent acceleration […]

This means that halfway, after 58 minutes, the climber is traveling at 0.3 * 9.81 m/s² * 60 * 58 ~= 10.2 km/s ~= 36,720 km/h (!!!) relative to the cable. A tiny imperfection or wobble is going to make the climber crash into the cable, destroying both.

A climber with a mass of 10 tonnes requires 10^4 kg * 1.3 * 9.81 m/s² ~= 127.5 kN of force to accelerate at 1.3 g. At the ~56 minute mark, the climber reaches a speed of ~9,888 m/s. This means it requires a power output of 127.5 kN * 9888 m/s = 1.26 GW (!!!) to achieve this acceleration, plus overhead for the power electronics and transmission. Even at a voltage of 1 kV, that's around 1,500,000 A (!!!) of current that you have to transmit and invert.
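
(For reference, a small Python sketch reproducing those figures with the same assumptions: a 10 t climber held at 1.3 g apparent acceleration, at the ~9,888 m/s speed quoted for the ~56 minute mark; the ~1.5 MA figure above adds converter overhead on top of the raw number.)

    # Force, mechanical power and implied current for a 10 t climber at
    # 1.3 g apparent acceleration, near the end of the acceleration phase.
    MASS = 10_000                 # kg
    APPARENT_ACCEL = 1.3 * 9.81   # m/s^2, what the payload "feels"
    SPEED = 9_888                 # m/s, roughly the ~56 minute mark

    force = MASS * APPARENT_ACCEL       # ~127.5 kN
    power = force * SPEED               # ~1.26 GW before electrical losses
    current_at_1kv = power / 1_000      # ~1.26 MA if delivered at 1 kV

    print(f"force   ~ {force / 1e3:.1f} kN")
    print(f"power   ~ {power / 1e9:.2f} GW")
    print(f"current ~ {current_at_1kv / 1e6:.2f} MA at 1 kV")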

If you have a way to reliably transfer that amount of power without touching the cable which is moving at 10 km/s relative speed, or with touching but without immediately melting the cable or the collector, let me know :-)

> So totally feasible

lol no


> A tiny imperfection or wobble is going to make the climber crash into the cable, destroying both.

A maglev train is several centimeters from the rail; if someone made the carbon nanostructures this badly wrong (the only known materials strong enough are atomically precise carbon nanotubes or graphene, and the entire length has to be atomically precise - you can't splice together the shorter tubes we can build today), the cable wouldn't have survived construction.

> Even at a voltage of 1 kV, that's around 1,500,000 A

Why on earth would you do one kilovolt? We already have megavolt powerlines. That reduces the current needed to 1500 A. 1500 A on a powerline is… by necessity, standard for a power station.

We even already have superconductor cables and tapes that do 1500 A, they're a few square millimeters cross section.


> A maglev train is several centimeters from the rail […]

No maglev train I ever heard of travels at 36,000 km/h. This is about two orders of magnitude faster.

> We already have megavolt powerlines.

That's transmission over long distances, but you need to handle and transform all that power in a relatively small enclosure. Have you seen the length of insulators on high-voltage powerlines? What do you think is going to happen to your circuit if you have an electrical potential difference of 1 MV over a few centimeters?

Yes, you can handle large voltages with the right power electronics, but you need the space to do so. For comparison, light rail typically uses around 1 kV, while mainline trains use something like 15 kV. But a train is also 10 to 100 times as heavy as the 10t climber in my calculation, so you need to multiply the power (and therefore the electric current) by 10 to 100 as well.


> No maglev train I ever heard of travels at 36,000 km/h. This is about two orders of magnitude faster.

You think the problem is the speed itself, and not the fact that trains are close to sea level and at that speed would immediately explode from compressing the air in front of them so hard it can't get out of the way before superheating to plasma, i.e. what we see on rocket re-entry only much much worse because the air at the altitude of peak re-entry heating is 0.00004% the density at sea level?

> What do you think is going to happen to your circuit if you have an electrical potential difference of 1 MV over a few centimeters?

1) In space? Very little. The pylons that you see around the countryside aren't running in a vacuum; their insulators are irrelevant.

2) Why "a few centimetres"? You've pulled the 10 tons mass out of thin air, likewise that it's supposed to use "one kilovolt" potential differences, and now also that the electromagnets have to be "a few centimetres" in size? Were you taking that number from what I said about the gap between the train and the rails? Obviously you scale the size of your EM source to whatever works for your other constraints. And, for that matter, the peak velocity of the cargo container, peak acceleration, mass, dimensions, everything.

> For comparison, light rail typically uses around 1 kV, while mainline trains use something like 15 kV.

Hang on a minute. I was already wondering this on your previous comment, but now it matters: do you think the climber itself needs to internally route any of this power at all?

What you need for this is switches and coils on one side, a Halbach array on the other. Coils aren't that heavy, especially if they're superconducting. Halbach array on the cargo pod, all the rest on the tether.

Right now, the hardest part is — by a huge margin — making the tether. Like, "nobody could do it today for any money" hard. But if we could make the tether, then actually making things go up it is really not a big deal; it's of a complexity that overlaps with a science fair project.

(Also, I grew up with 25kV, but British train engineering is hardly worth taking inspiration from for other rail systems, let alone a space elevator).


Dielectric strength of vacuum is 20 kV/inch. Thus your megavolt needs 50 inches of separation at an absolute minimum. And you're operating this in space where you have ionizing radiation. Free electrons with a big voltage differential? You're describing a vacuum tube.


> 20 kV/inch

Breakdown voltage is pressure dependent, not a constant.

Your figure is for (eyeballing a graph) approximately 2e-2 torr and 150 torr, lower in between, and rapidly increasing with harder vacuum. The extreme limit, even in a perfect vacuum, is ~1.32e18 volts per meter due to pair production.

For a sense of "perfect" vacuum: if I used Wolfram Alpha right just now, the mean free path of particles at the Kármán line is about 15 cm, and becomes hundreds of meters at 200 km.

Though this assumes a free floating measurement, the practical results from https://en.wikipedia.org/wiki/Wake_Shield_Facility would also matter here.
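
(Rough sanity check on that mean-free-path figure - a sketch assuming an order-of-magnitude number density of ~1e19 molecules/m^3 at ~100 km and an effective molecular diameter of ~0.37 nm; the exact values depend on the atmosphere model used.)

    import math

    # Kinetic-theory mean free path: lambda = 1 / (sqrt(2) * n * pi * d^2).
    # Both inputs are assumed order-of-magnitude values for ~100 km altitude.
    N_DENSITY = 1e19       # molecules per m^3 (assumed, roughly Karman line)
    DIAMETER = 0.37e-9     # effective molecular diameter in metres (N2-ish)

    sigma = math.pi * DIAMETER ** 2
    mfp = 1 / (math.sqrt(2) * N_DENSITY * sigma)

    print(f"mean free path ~ {mfp * 100:.0f} cm")   # on the order of 15 cm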

> And you're operating this in space where you have ionizing radiation. Free electrons with a big voltage differential?

Mm.

Possibly. But see previous about mean free paths, not much actual stuff up there. From an (admittedly quick) perusal of the literature, the particle density of the Van Allen belts is order-of 1e4-1e5 per cubic meter, so the entire mass of the structure is only order-of a kilogram: https://www.wolframalpha.com/input?i=%284%2F3%29π%282%5E3-1%...

If this is an important constraint, this would actually be a good use for a current of some mega-amps, regardless of the voltage drop between supply and return paths due to load. Or, same effect, coil the wires. And they'd already necessarily be coiled to do anything useful: use the current itself to magnetically shield everything from the Van Allen belts.

Superconductors would only need a few square centimetres cross section to carry mega-amps, given their critical current limit at liquid nitrogen temperatures can be kilo-amps per mm^2.

But once you're talking about a 36,000 km long superconducting wire with a mega-amp current, you could also do a whole bunch of other fun stuff; laying them in concentric circular rings in the Sahara would give you a very silly, but effective, magnetic catapult. (This will upset a lot of people, and likely a lot of animals, so don't do that on Earth).


No, I wasn't eyeballing, but perhaps someone else was. I went looking for the dielectric strength of vacuum and I found a chart with values for a bunch of different things including vacuum.

And I don't understand the connection to the Van Allen belts - I'm talking about sunlight knocking electrons off your conductors.


> No, I wasn't eyeballing, but perhaps someone else was.

I didn't say you did with that parenthesis; that was to indicate I was being very approximate with the pressures that correspond to your stated breakdown voltage: https://www.accuglassproducts.com/air-dielectric-strength-vs...

> I went looking for the dielectric strength of vacuum and I found a chart with values for a bunch of different things including vacuum.

That's even more wrong than looking up the value of acceleration due to gravity and applying "9.8m/s/s" to the full length of a structure several times Earth's radius (which was also being done in these comments).

Think critically: when you're reducing pressure, at what point does it become "a vacuum"? Answer: there is no hard cut-off point.

(Extra fun: https://en.wikipedia.org/wiki/Paschen%27s_law)

> And I don't understand the connection to the Van Allen belts

You mentioned free electrons. That's what the Van Allen belts are: fast-moving charged particles captured by Earth's magnetic field.

> I'm talking about sunlight knocking electrons off your conductors.

Very easy to defend against photoelectric emission.

Just to re-iterate, if you're lifting something up with a magnetic field, it's non-contact. You can hide the conductors behind any thin non-magnetic barrier you want and it still works.

Say, Selenium, with a work function of 5.9 eV. Tiny percentage of the solar flux is above that.

Even just shading them from the sunlight would work. Like, a sun-shade held off to one side.

Also, you could just have the return line inside the tether: If the supply is on the outside, return on the inside, you can even use the structure of the tether itself as shielding — coaxial voltage differential, so the voltage difference between supply and return lines due to load creates negligible external electrical field.

Honestly, this feels like you've just decided it won't work and are deliberately choosing the worst possible design to fit that conclusion. Extra weird as "but we can't actually build carbon nanotubes longer than 55 cm yet" is a great deal more important than all the stuff I've listed that we can do.


Have you ever seen a megavolt power line? Note how far apart the wires are. They are actually a bit farther apart than they really need to be because it is designed to tolerate a large bird with spread wings, but they still need quite a bit of distance. I believe you can tolerate a closer spacing once you're out in space and have no possibility of a plasma arc.


Indeed, I have not. However, I have looked up the breakdown voltage of vacuum at those altitudes, and as long as the graph wasn't completely fictional, in that part of space even just 2 cm (barely, but it does) supports a megavolt.


A more relevant criticism of that peak velocity is that it significantly exceeds Earth escape velocity (and is 6/7ths of solar EV from here) and therefore wastes energy: https://en.wikipedia.org/wiki/Escape_velocity#List_of_escape...
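
(To put numbers on that - a minimal sketch comparing the ~10 km/s turnover speed from upthread with the local escape velocity at that point, roughly halfway up a ~35,786 km tether, using the standard v_esc = sqrt(2*GM/r).)

    import math

    # Local escape velocity at the climber's turnover point, roughly halfway
    # up the tether, versus the ~10 km/s peak speed quoted upthread.
    GM_EARTH = 3.986e14                 # m^3/s^2
    R_EARTH = 6.371e6                   # m
    r = R_EARTH + 35_786e3 / 2          # geocentric radius at the turnover

    v_esc = math.sqrt(2 * GM_EARTH / r)
    print(f"local escape velocity ~ {v_esc / 1000:.1f} km/s")   # ~5.7 km/s
    print("peak climber speed    ~ 10.2 km/s (from the thread)")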


Oh, and 10 tonnes is bus-sized. For infrastructure at that scale, you want trains at the very least, and those are on the order of 1,000 tonnes. Multiply force, power, and current by 100 accordingly.


The current method to get 10 tons to low orbit is to burn chemical fuel at a thermal power output of around a gigawatt. This consumes something like 20 times the mass of the payload as propellant, and only barely avoids catastrophic failure 95% of the time. GEO is harder.

From what I've seen, nobody currently launches more than 4.9 tons direct to GEO (Vulcan Centaur VC4). Starship is supposed to do 27 to GTO (not GEO) when finished, but it's not finished.

If a space elevator lasts long enough to amortise the construction costs (nobody knows, what with them not being buildable yet), it would represent an improvement on launch costs relative to current methods, even if you were limited to 10 tons at a time and each trip to GEO took 2 hours.


While n8n is amazing for the odd office automation, and for giving semi-technical people an enormously powerful tool to get things done, it seems it can’t really replace a real backend - and mostly because of the n8n team’s choices rather than the technology.

We were trialling it and wanted to essentially switch our entire backend to it - and technically it seemed to be able to do the job, but their licensing turned out not to be a fit.

For a moderately used app we very quickly burned through the “executions” allotted by our license - and that’s while hosting it ourselves, configuring and paying for the servers, load balancers, key-value store and database, with their failovers and backups.

So the license was just to use it on top of all that; even their highest enterprise license was cutting it close, and if you “run out” of these executions, the service just stops working …

And all of that would have been fair if it were hosted by them, but it sounds ludicrous to me for something we self-host.

I think it is an incredible piece of tech, but just not suited for a dynamic startup, and once we spent the time to code up the alternative paths for our use cases, it no longer made sense to use n8n at all, as we mostly solved all the problems it was helping us with.


It seems crazy to me that they impose imaginary limits on the self-hosted version: you can have a server that can handle more executions, but the license won't allow it.


Vendors continue to use these kinds of limitations. A long time ago, when multicore CPUs - or even systems with multiple CPUs - were new, vendors charged higher fees to allow their software to run on more than one core/CPU. My first run-in with this was transcoding software with a license running ~$5k per core. To them, it was a whole second computer, since everything was single-threaded, so they felt it was worth twice as much. All it took was for someone else to not charge per core and take away business for that sales model to go the way of the dodo.


Huh? Isn't that Oracle's business strategy in 2025, still?

I mean, my current role doesn't put me into these kinds of considerations anymore, but I haven't heard that they've changed their ways - so I just assumed that's still the way you pay for Oracle DB.


Oracle was the first company I thought of who licensed per-CPU.

Many moons ago I was a sysadmin at a company and first heard about their licensing strategy when sizing up some new multi-socket Opteron servers. First time I learned about per-CPU licensing.


It’s per-core licensing with core factors. The core factor ranges from 0.25 to 1. I have not checked how it is now, but that is how it used to be 5-10 years ago.


I can see an argument of "well it typically bogs down and stops being performant at X, and we don't want to get a bad rep because people abuse it, so we don't allow anything over X", but that should still be negotiable if you can show you have plenty of horsepower.


Look, I love giving people the benefit of the doubt, but that's not why this pricing model exists. It's because they want to capture a percentage of the value delivered, and the easiest way to do that is to charge by executions.


I understood it as a total amount, not a limit on how many executions run at the same time.


To add to your concerns, their form nodes and file-blob node are limiting, to say the least.

I wrote a post on it here: https://news.ycombinator.com/item?id=45558489


It's an orchestration tool. Backends are a lot more than just orchestration.

There are alternatives to n8n depending on your stack and on what is being orchestrated. Node-RED and others have quietly existed for a very long time, similar to how n8n existed for a good while before being discovered by the AI world.


I guess the bigger question is: why use two tools when you can use one? Full disclosure: I work at Node-RED/FlowFuse, so take that into consideration, but the point remains - if you need to use both an orchestration tool and a flow management/backend management tool, why not use one solution that does both excellently?

I think that's a critical argument that n8n and others will have to overcome - why should users decide to over-specialise one aspect of their stack when they can do so much more and have so much more control elsewhere?


I use both. And yes, both handle calls between different APIs.

One for the many, and maybe one for a different audience.

Node-RED didn't have a login at the time, and was well geared towards direct IoT-type flows. I loved how I could just jump into the logic and flows.

n8n has a lot of pre-configured connectors.

If Node-RED had pre-configured connectors to different systems as easy to use as n8n's, I think the point here would be even stronger.

That's why I use both. I start with the low level, and if orchestration beyond it is needed, I can tie in another tool.


Facing a similar issue with the monitoring part of executions. What is your solution, if I may ask - have you taken something off the shelf and extended it to your needs, or did you build everything from the ground up?


Honestly we were looking at hatchet / pickaxe - a similar vein of project but more dev-focused - but in the end realised our use cases were not all that complex, so we just built everything in a bespoke manner.

We used n8n for two things mostly - AI agents and process automation.

For AI we just built our own MCP servers, and then the agents are quite easy to use, as the major frameworks kinda help you with it. n8n’s AI is kinda just a UI layer over LangChain - though we just used Google’s ADK.

For process automation - well, there are so many options it’s not even funny.


Surely you can negotiate an unlimited number of executions?


You could, but then you start getting into additional costs, controlling how much of your metrics are surfaced vs. kept internal, etc. At that point, why not just go for another solution?

Like you could make a car like a truck, but why not just buy a truck in the first place?


It's more like they sell a truck, but disable it remotely if I drive more than 50 miles a day. And I want to drive unlimited miles.

> then you start getting into additional costs, controlling how much of your metrics are surfaced vs. kept internal, etc

I don't know what this means. In my experience with anything sold B2B, all the terms are negotiable. If you want unlimited everything, you can ask for that.


Only by installing Node-RED.


Well there's always YC's n8n competitor

https://www.activepieces.com/

They are open core, I think (MIT + enterprise features model).


In some European countries all of this is commonplace - check out the Not Just Bikes video on the subject: https://youtu.be/knbVWXzL4-4?si=NLTMgHiVcgyPv6dc

The system detects whether you are approaching the intersection and at what speed, and if there is no traffic blocking you, it automatically cycles the red lights so you don’t have to stop at all.


Yes, all of these services. Plus a ton more - hotels, car hire, various government digital services.

For example, say I get married abroad and need to change my name: if such a system were in place, I could just go to a website, enter my request, identify myself and then wait for my new docs to arrive, all while staying abroad.

But it’s even better - banks / employers don’t need all of my information all the time, they just need to verify that I am who I say I am at that moment, so the credentials I give them through a digital system can reflect that. Call it requesting a scope from a government OpenID system.

And I have the power to revoke that.

And all of the various little government agencies don’t need to request all the documents to bootstrap trust every single time; they can just be given a convenient (timed) access token by me.
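
To make the "scope" idea concrete, here is a hypothetical sketch of what such a request could look like if the government identity provider spoke plain OAuth2 / OpenID Connect - the issuer URL, client name and scope string are made up for illustration; only the flow itself is standard:

    from urllib.parse import urlencode

    # Hypothetical: a bank asks the government identity provider only for a
    # short-lived "verify this person is who they claim to be" credential,
    # via the standard OAuth2 / OpenID Connect authorization-code flow.
    ISSUER = "https://id.example.gov"            # made-up government IdP
    params = {
        "response_type": "code",
        "client_id": "example-bank",             # made-up relying party
        "redirect_uri": "https://bank.example.com/callback",
        "scope": "openid identity:verify",       # only "who am I", no documents
        "max_age": "300",                        # force a fresh login
        "state": "opaque-csrf-token",
    }

    print(f"{ISSUER}/authorize?{urlencode(params)}")
    # The user consents to exactly this scope and can later revoke the grant
    # at the identity provider; the bank never sees the underlying documents.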

Implemented right, it gives out much less of my data to people, in a much more convenient and secure way. I guess the “implemented right” part is the problem.

But maybe that’s an orthogonal thing that needs to be solved by itself? Like how we have independent central banks that don’t (shouldn’t) succumb to the whims of governments - they have a clear, narrow mission and they are supposed to follow it regardless of what an administration would want.

If we had an “auth provider” government body whose mission was more closely aligned with the population, it could give the government _just enough_ data to be efficient, but not enough for it to abuse.

Built-in adversity and distrust is how we finally got a government to “work”, with the separation of powers and all of that. Maybe we need to think about improving the political system with some know-how from web tech, because working efficiently, effectively and reliably in an environment of mistrust is what web tech is known for.

