chuckadams's comments

Some parts have improved: it's nice that alarms are now slide-to-cancel. Safari's UI, however, is now 98% mystery meat.

Shorter:

* VACUUM does not compact your indexes (much).

* VACUUM FULL does. It's slow though.


You also missed the main recommendation of REINDEX INDEX CONCURRENTLY along with optionally pg_squeeze.
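
A minimal sketch of the difference (table and index names here are hypothetical):

    -- reclaims dead tuples, but leaves most index bloat in place
    VACUUM ANALYZE orders;

    -- rebuilds a bloated index without holding an exclusive lock for the whole rewrite
    REINDEX INDEX CONCURRENTLY orders_customer_id_idx;

    -- rewrites the table and its indexes compactly, but takes an ACCESS EXCLUSIVE lock
    VACUUM FULL orders;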

That's too reductive. VACUUM FULL isn't just slow; it takes an exclusive lock on the table for the duration and is basically a no-go when the database is in use.

Got any pointers on how to configure this for yarn? I'm not turning anything up in the yarn documentation or in my random google searches.

npm still seems to be debating whether they even want to do it. One of many reasons I ditched npm for yarn years ago (though the initial impetus was npm's confused and constantly changing behaviors around peer dependencies).


Yarn is unfortunately a dead-end security-wise under current maintainership.

If you are still on yarn v1, I suggest being consistent with '--ignore-scripts --frozen-lockfile' and running any necessary lifecycle scripts for dependencies yourself. There is @lavamoat/allow-scripts to manage this if your project warrants it.
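
For example, a CI-style install under yarn v1 might look like this (just a sketch):

    yarn install --frozen-lockfile --ignore-scripts
    # then run only the postinstall/build steps you actually trust, by hand or via @lavamoat/allow-scripts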

If you are on a newer yarn version, I strongly encourage migrating off to either pnpm or npm.


newer yarn versions are _less_ secure than the ancient/abandoned yarn 1? :(

Any links for further reading on security problems "under current maintainership"?


enableScripts: false in .yarnrc.yml https://yarnpkg.com/configuration/yarnrc#enableScripts

And then opt certain packages back in with dependenciesMeta in package.json https://yarnpkg.com/configuration/manifest#dependenciesMeta....
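
Something along these lines, if I'm reading the docs right (the package name is just an example):

    # .yarnrc.yml
    enableScripts: false

and then in package.json:

    "dependenciesMeta": {
      "esbuild": {
        "built": true
      }
    }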


It's static from the perspective of the server. But agreed, it needs a different term.

Malware sometimes suffers from feature creep too.

I remember when the point of an SPA was to not have all these elaborate conversations with the server. Just "here's the whole app, now only ask me for raw data."

It's funny (in a "wtf" sort of way) how in C# right now, the new hotness Microsoft is pushing is Blazor Server, which is basically old-school .aspx Web Forms but with websockets instead of full page reloads.

Every action, every button click, basically every input is sent to the server, and the changed DOM is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.
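
For the unfamiliar, this is roughly what a component looks like (the stock counter sample, trimmed down). In Blazor Server the @onclick handler doesn't run in the browser; the click travels over the SignalR circuit, the server re-renders the component, and a DOM diff comes back:

    @page "/counter"

    <p>Count: @currentCount</p>
    <button @onclick="IncrementCount">Click me</button>

    @code {
        private int currentCount = 0;

        // In Blazor Server this runs on the server for every single click.
        private void IncrementCount() => currentCount++;
    }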


Yes, I say this every time this topic comes up: it took many years to finally have mainstream adoption of client-side interactivity so that things are finally mostly usable on high latency/lossy connections, but now people who’re always on 10ms connections are trying to snatch that away so that entirely local interactions like expanding/collapsing some panels are fucked up the moment a WebSocket is disconnected. Plus nice and simple stateless servers now need to hold all those long-lived connections. WTF. (Before you tell me about Alpine.js, have you actually tried mutating state on both client and server? I have with Phoenix and it sucks.)

Isn’t that what Phoenix (Elixir) is? All server side, a small JS lib for partial loads, each individual website user gets their own process on the backend with its own state, and everything is tied together with websockets.

Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the frontend code.

Honestly it is kinda nice.


Also what https://anycable.io/ does in Rails (with a server written in Go)

WebSockets + thin JS are better suited to real-time stuff than to standard CRUD forms. They fill in for a ton of high-interactivity use cases where people often reach for React/Vue (and then end up pushing absolutely everything needlessly into JS), while keeping the most important logic on the server with far less duplication.

For simple forms, personally I find the server-by-default approach of https://turbo.hotwired.dev/ far better: the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM instead of doing full page reloads (i.e., clicking edit swaps in a small in-place form instead of redirecting to one big form).
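
Roughly, with Turbo Frames (the frame id and paths here are made up):

    <turbo-frame id="profile">
      <p>Jane Doe</p>
      <a href="/profile/edit">Edit</a>
    </turbo-frame>

Clicking the link fetches /profile/edit, and Turbo swaps in only the matching <turbo-frame id="profile"> from the response, with no full page reload.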


Idk about Phoenix, but having tried Blazor, the DX is really nice. It's just a terrible technical solution, and network latency / spotty wifi makes the page feel laggy. Not to mention it eats up server resources to do what could be done on the client instead with way fewer moving parts. Really the only advantage is you don't have to write JS.

It's basically what Phoenix LiveView specifically is. That's only one way to do it, and Phoenix is completely capable of traditional server rendering and SPA style development as well.

LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...
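
For example, LiveView's JS client ships a latency simulator you can toggle from the browser console (a sketch from memory, check the docs for the exact API):

    // simulate 1000ms of round-trip latency on every LiveView event
    liveSocket.enableLatencySim(1000)
    // back to normal
    liveSocket.disableLatencySim()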


> Honestly it is kinda nice.

It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.

Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.


This is how client-server applications have been done for decades, it's basically only the browser that does the whole "big ole requests" thing.

The problem with API + frontend is:

1. You have two applications you have to ensure are always in sync and consistent.

2. Code is duplicated.

3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).

The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.

I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even if you think about vulnerabilities like unauthorized access, it's usually just this. If you can eliminate those 80% or mitigate them, then that's huge.

Oh, and that's not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high-risk activity that people just don't do it - so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.


No, it's not. I've built native Windows client-server applications, and many old-school web applications. I never once sent data to the server on every click, keydown, keyup, etc. That's the sort of thing that happens with a naive "livewire-like" approach. Most of the new tools do ship a little JavaScript, and make it slightly less chatty, but it's still not a great way to do it.

A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.

Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.


Yes, as I've stated, the big stuff is new Web stuff.

When I say traditional client-server applications, I mean the type of stuff like X or IPC - the stuff before the Web.

> A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.

There's really no reason it "should" be either one or the other because BOTH have huge drawbacks.

The problem with the first approach (SSR with JS sprinkled) is that certain interactions become very, very hard. Think, for example, a node editor. Why would we have a node editor? We're actually doing this at work right now, building out a node editor for report writing. We're 95% SSR.

Turns out it's super duper hard to do with this approach, because it's so heavily client-side interactive that you need lots and lots of sync points, and ultimately the SERVER will be the one generating the report.

But actually, the client-side approach isn't very good either. Okay, maybe we just serialize the entire node graph and send it over the pipe once, and then save it now and again. But what if we want to preview what the output is going to look like in real-time? Now this is really, really hard - because we need to incrementally serialize the node graph and send it to the server, generate a bit of report, and get it back, OR we just redo the report generation on the front-end with some front-loaded data - in which case our "preview" isn't a preview at all, it's a recreation.

The solution here is, actually, a chatty protocol. This is the type of thing that's super common and trivial in desktop applications - it's what gives them superpowers. But it's so rare to see on the Web.


> You have two applications you have to ensure are always in sync and consistent.

No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.

> Code is duplicated.

Not if the frontend isn't trying to model the internals of the backend.

> Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).

Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.


> No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.

This is the idea, an idea which can never be fully realized.

The backend MUST understand what the frontend sees to some degree, because of efficiency, performance, and user-experience.

If we build the perfect RESTful API, where each object is an endpoint and their relationships are modeled by URLs, we have almost realized this vision. But it came at the cost of our server catching on fire. It thrashed our user experience. Our application sucks ass; it's almost unusable. Things show up on the front-end but they're ghosts, everything takes forever to load, every button is a liar, and the quality of our application has reached new depths of hell.

And we haven't even realized the vision. What about authentication? User access? Routing?

> Not if the frontend isn't trying to model the internals of the backend.

The frontend does not get a choice, because the model is the model. When you go against the grain of the model and you say "everything is abstract", then you open yourself up to the worst bugs imaginable.

No - things are linked, things are coupled. When we just pretend they are not, we haven't done anything but obscure the points where failure can happen.

> Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.

No, this is a stark decrease in velocity.

When I need to display a new form that, say, coordinates 10 database tables in a complex way, I can just do that if the application is SSR or Livewire-type. I can just do that. I don't need the backend team to implement it in 3 months and then I make the form. I also don't need to wrangle together 15+ APIs and then recreate a database engine in JS to do it.

Realistically, those are your two options. Either you have a performant backend API interface full of one-off implementations, what we might consider spaghetti, or you have a "clean" RESTful API that falls apart as soon as you even try to go against the grain of the data model.

There are, of course, in-betweens. RPC is a great example. We don't model data, we model operations. Maybe we have a "generateForm" method on the backend and the frontend just uses this. You might notice this looks a lot like SSR with extra steps...

But this all assumes the form is generated and then done. What if the data is changing? Maybe it's not a form, maybe it's a node editor? SSR will fall apart here, and so will the clean-code frontend-backend. It will be so hellish, so evil, so convoluted.

Bearing in mind, this is something truly trivial for desktop applications to do. The models of modern web apps just cannot do this in a scalable, or reliable, way. But decades old technology like COM, dbus, and X can. We need to look at what the difference is and decide how we can utilize that.


The problem with all-backend is that to change the order of a couple buttons, you now need buy-in from the backend team. There's definitely a happy medium or several between these extremes: one of them is that you have full-stack devs and don't rigidly separate teams by the implementation technology. Some devs will of course specialize in one area more than others, but that's the point of having a diverse team. There's no good reason that communicating over http has to come with an automatic political boundary.

Communicating over HTTP comes with pretty much as many physical boundaries as possible. The main problem, and power, of APIs is their inflexibility. By their design, and even the design of HTTP itself, they are difficult to change over time. They're interfaces, with defined inputs and outputs.

Say I want to draw a box which has many checkboxes - like a multi-select. A very, very simple, but powerful, widget. In most Web applications, this widget is incredibly hard to develop.

Why is that? Well first we need to get the data for the box, and ideally just this particular page of the box, if it's paginated. So we have to use an API. But the API is going to come with so much baggage - we only need identifiers really, since we're just checking a checkbox. But what API endpoint is going to return a list of just identifiers? Maybe some RESTful APIs, but not most.

Okay okay, so we get a bunch of data and then throw away most of it. Whatever. But oh no - we don't want this multi-select to be split by logical objects, no, we have a different categorization criteria. So then we rope in another API, or maybe a few more, and we then group all the stuff together and try to splice it up ourselves. This is a lot of code, yes, and horribly frail. The realization strikes that we're essentially doing SQL JOIN and GROUP BY in JS.
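
Concretely, that client-side "join" ends up looking something like this (endpoints and field names are hypothetical):

    // Fetch two unrelated endpoints and regroup on the client:
    // effectively a hand-rolled JOIN and GROUP BY in the browser.
    const [items, categories] = await Promise.all([
      fetch('/api/items?page=1').then(r => r.json()),
      fetch('/api/categories').then(r => r.json()),
    ]);

    const byCategory = new Map<string, string[]>();
    for (const item of items) {
      const name = categories.find((c: any) => c.id === item.categoryId)?.name ?? 'Other';
      byCategory.set(name, [...(byCategory.get(name) ?? []), item.id]);
    }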

Okay, so we'll build an API. Oh no you won't. You can't just build an API, it's an interface. What, you're going to write an API for your one-off multi-select? But what if someone else needs it? What about documentation? Versioning? I mean, is this even RESTful? Sure doesn't look like it. This is spaghetti code.

Sigh. Okay, just use the 5 API endpoints and recreate a small database engine on the frontend, who cares.

Or, alternative: you just draw the multi-select. When you need to lazily update it, you just update it. Like you were writing a Qt application and not a web application. Layers and layers of complexity and friction just disappear.


There's a lot of different decisions to make with every individual widget, sure, but I was talking about political boundaries, not physical ones. My point is that it's possible for a single team to make decisions across the stack like whether it's primarily server-side, client-side, or some mashup, and that stuff like l10n and a11y should be the things that get coordinated and worked out across teams. A lot of that starts with keeping hardcore True Believers off the team.

Stop having backend and frontend teams. Start having crossfunctional teams. Problem solved.

Hotwire et al are also doing part of this. It isn't a new concept, but it seems to come and go in terms of popularity.

Well, maybe it isn't so insane?

Server side rendering has been with us since the beginning, and it still works great.

Client side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate it.


Sure. The problem with some frameworks is that they attached server events to things that should be handled on the front-end without a roundtrip.

For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without resorting to sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
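
That kind of purely local toggle needs nothing from the server; even plain HTML handles it (a minimal sketch):

    <details>
      <summary>Order details</summary>
      <p>Rendered once with the page, expanded and collapsed entirely in the browser.</p>
    </details>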


> And we're all just supposed to act like this isn't absolutely insane.

This is insane to you only if you didn't experience the emergence of this technique 20-25 years ago. Server-side templates were already partials of some sort in almost every server-side environment, so why not just send the filled-in partial?

Business logic belongs on the server, not the client. Never the client. The instant you start having to make the client smart enough to think about business logic, you are doomed.


> The instant you start having to make the client smart enough to think about business logic, you are doomed.

Could you explain more here? What do you consider "business logic"? Context: I have a client app to fly a drone using a gamepad, mouse, and keyboard, with video feedback, maps, drone tasking, etc.


It's kinda nice.

Main downside is that hot reload is not nearly as nice as with TS.

But the coding experience with a C# BE/stack is really nice for admin/internal tools.


Yeah, I kind of hate it... Blazor has a massive payload and/or you're waiting seconds to see a response to a click event. I'm not fond of RSC either... and I say this as someone who has been more than happy with React, Redux, and MUI for a long while at this point.

I've been loosely following the Rust equivalents (Leptos, Yew, Dioxus) for a while in the hopes that one of them would see a component library near the level of Mantine or MUI (Leptos + Thaw is pretty close). It feels a little safer in the longer term than Blazor IMO, and again, RSC for React feels icky at best.


I saw this kind of interactivity in the Apache Wicket Java framework. It's a very interesting approach.

Until they discovered why so many of us have kept with server side rendering, and only as much JS as needed.

Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.


> Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.

I can understand the dislike for Next but this is such a poor comparison. If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.


> If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.

This is interesting because every Next/React project I see has a slower velocity than the median Rails/Django product 15 years ago. They’re just as busy, but pushing so much complexity around means any productivity savings is cancelled out by maintenance and how much harder state management and security are. Theoretically performance is the justification for this but the multi-second page load times are unconvincing.

From my perspective, it really supports the criticism about culture in our field: none of this is magic, we can measure things like page-weight, response times, or time to complete common tasks (either for developers or our users), but so much of it is driven by what’s in vogue now rather than data.


+1 to this. I seriously believe frontend was more productive in the 2010-2015 era than now, despite the flaws in legacy tech. Projects today have longer timelines, are more complex, slower, harder to deploy, and a maintenance nightmare.

I remember maintaining webpack-based projects, and those were not exactly a model of simplicity. Nor was managing a fleet of pet dev instances with Puppet.

Puppet isn’t a front end problem, but I do agree on Webpack - which is one reason it wasn’t super common. A lot of sites either didn’t try to bundle things at all or had simple Make-level workflows, and at the time I noted that these often performed similarly. People did, and still do, want to believe there’s a magic go-faster switch for their front end that obviates the need to reconsider their architectural choices, but anyone who actually measured it knew that bundlers just didn’t deliver savings on that scale.

I do kind of miss gulp and wish there was a modern TS version. Vite is mighty powerful, but pretty opaque.

Webpack came out in late 2012 and took a few years to take over, thankfully. I was lucky to avoid it at dayjob™ until around 2019.

I'm not so sure those woes are unique to frontend development.

I still remember the joy of using the flagship Rails application - Basecamp. Minimal JS, at least compared to now, mostly backend rendering, and everything felt really fast and magical to use.

Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.

Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.

React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvements to most pieces of the stack.

It’s just wild to me that we had faster web apps, with better organization, better dev ex, faster to build and easier to maintain.

The only “wins” I can see for a Next.js project are flexibility, animation (though this is also debatable), and maybe deployment cost, but again I’m comparing to deploying Rails 15 years ago; things have improved there as well, I’m sure.

I know react can accomplish _a ton_ more on the front end but few projects actually need that power.


How does Next accomplish more than a PHP/Ruby/whatever backend with a React frontend?

If anything the latter is much easier to maintain and to develop for.


Blazor? Razor pages?

We are having this discussion because at some point, the people behind React decided it should be profitable and made it the gateway drug for Next.js/Vercel.

Worse, Vercel then started its marketing wave, so many SaaS products only support React/Next.js as extension points.

Using anything else requires yak shaving instead of coding the application code.

That is the only reason I get to use them.


They weren't the new shiny to pump up the CV and fill the GitHub repo for job applications.

I sometimes feel like I go on and on about this... but there is a difference between applications and pages (even if blurry at times), and Next is the result of people building pages adopting React, which was designed for applications, when they shouldn't have.

That was indeed one of the main points of SPAs, but React Server Components are generally not used for pure SPAs.

Correct, their main purpose is ecosystem lock-in. Because why return JSON when you can return HTML? Why even build an SPA when the old-school model of server-side includes and PHP worked just fine? TS with Koa and htmx if you must, but server-side React components are kind of a waste of time. Give me one example where server-side React components are the answer over a fetch and JSON, or just fetching an HTML page.

The only example that has any traction, in my view, is web shops, which claim that time-to-render and time-to-interactivity are critical for customer retention.

Surely there are not so many people building e-commerce sites that server components should have ever become so popular.


The thing is, time to render and interactivity are much more reliant on the database queries and the user's internet connection than on anything else. Instead of a spinner or a progress bar in the browser's toolbar, now I get skeleton loaders and half a GB of memory used for one tab.

Not to defend the practice (I’ve never partaken), but I think there are some legit timing arguments that a server renderer can integrate more requests faster thanks to being colocated with services and DBs.

Which brings me back to my main point about web 1.0 architecture: serving pages from the server side, where the data lives. We've come full circle.

I like RSCs and mostly dislike SPAs, but I also understand your sentiment.

Sure they are. Next sites are SPAs.

It also decoupled frontend and backend. You could use the same APIs for, say, mobile, desktop, and web. Teams didn't have to cross streams, allowing for deeper expertise on each side.

Now they are shoving server rendering into React Native…


Yeah, but then people started building bloated static websites with those libraries instead of using a saner template engine + JavaScript approach, which is fast, easy to cache and debug, and has stellar performance and SEO.

It helped little that even React developers were saying it was the wrong tool for plenty of use cases.

Worst of all?

The entire nuance of choosing the right tool for the job has been long lost on most developers. Even the comments I read on HN make me question where the engineering part of the job starts.


It also doesn't help that non-technical stakeholders sometimes want a say in a tech stack conversation as well. I've been at more than one company where either the product team or the acquiring firm wanted us to migrate away from a tried and true Rails setup to a fullstack JS platform simply because they either wanted the UI development flexibility or to not have to hire Ruby devs.

Non-technical MBAs seem to have a hard time grasping that a JS-only platform is not a panacea and comes with serious tradeoffs.


I'd be interested in adopting a sole-purpose framework like that.

I think people just never understood SPAs.

Like with almost everything, people then shit on things they don't understand.


Kubernetes has a few things, including cdk8s. Yoke looks promising too.

> And the trope of guns being impersonal compared to swords and knives turns up everywhere.

"This is the weapon of a Jedi Knight. Not as clumsy or random as a blaster. An elegant weapon for a more civilized age."


I read MULE and got nostalgic for the game, then remembered the same-named thing in Emacs, which made me happy that nowadays we have Unicode instead.

Dawn of War 3 made DoW 2 look like Game of the Decade by comparison. I hear they're making a DoW 4, and they're not even mentioning 3 when talking about the history.
