
> So keep the technical talk to your peers, unless the client explicitly asks for it.

This! Adapt the message to the target audience. The code we write is an abstraction: it takes some input and produces an output. For many people it's a black box, or can be thought of as one. Start high and go lower and lower until you reach the right level of detail for the person you're communicating with.


Sometimes the client wants to talk about code. He wants to be involved in the technical decisions. But he doesn't know diddly about code. So you have these frustrating conversations and try not to call the client an idiot.


This is great to hear and I hope it will mean more people give Firefox a decent try! We need more people on alternative browsers in order to keep healthy competition between rendering engines.

Firefox also handles lots of tabs really well and its Tab Containers plugin enables perfect separation of work tabs and personal tabs (I've got 1.4k tabs split across personal/work containers. I've got a problem, I know)


I was curious if the various projects that used data from the HN 'personal blogs' thread had a noticeable effect on my website. I looked through my nginx logs and saw that it's now getting a tonne of requests to the RSS feed (atm about 1.5k req/day compared to... 0/day prior)


Sure, here's mine: https://www.alexghr.me/.

I've been running this site since ~2015 (same CSS for at least 8 years now) but there's not a lot of content on it. I've been trying to get more into it recently though and I'm posting TIL-style content :)

It started out as a site built out of Mustache templates with plain CSS for styling. A few months ago I migrated it to Astro so that I don't have to maintain a build script written in bash but the CSS and site layout stayed the same.


Just raising this since people might not know about it, but this is a Web standard now: https://www.w3.org/annotation/. There are a couple of companies that have created browser plugins for it.


That is super interesting, but looking at the diagram[1] my first gripe is with the nature of their decentralization.

It's not a bad idea, it's a good first step, but they're proposing that any group or private person can host an annotation service.

First of all, this will cause competition, because which service has the most annotations? Using my annotation service hosted on my Minecraft server might be useless because it only has my own annotations.

So obviously users will want to use a larger service, and here come all the standard issues of a user-driven internet: trust, donations, groups forming and such. You might also end up in a bubble, seeing annotations only from one particular side of the political spectrum.

Of course this model does not have the same issues as the fediverse, where annotation services can de-federate from each other; the user is free to pick and choose whichever they want. There might even be a helpful counter in the UI that shows how many annotations a particular service has for this website.

The only real solution to all this is some sort of global database, like IPFS perhaps, where we can store the annotations. And then we can all individually host gateway servers to this database that the end user connects through.

But that has its own problems of course, you can't just magically make a distributed database without heavy bandwidth consumption and its own hosting requirements.

1. https://www.w3.org/annotation/diagrams/annotation-architectu...


The Web Annotations spec is a great starting place & I hope we can someday see some real breakout wins from it. I'd love to see some cross-integration with ActivityPub as a syndication/transport!

What I really want is no site to win; true success doesn't come from centralized solutions. Each user should have their own annotation feed!

What I'd love to see is something like the return of blogrolls: an annotator's list of people they follow. Users promoting users. A good extension could let us do an N-degrees exploration, letting us see comments of people we follow, people they follow, people those people follow... expanding the network & implicitly suggesting other people we might want to follow.
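As a rough sketch (everything here is hypothetical, including the idea that each user publishes their follow list as JSON at some well-known URL), the exploration itself is just a breadth-first walk:

    // Hypothetical: breadth-first walk of follow lists up to N degrees.
    // Assumes each user exposes a JSON array of followed-user URLs at /follows.json.
    async function exploreFollows(startUrl, maxDepth) {
      const seen = new Set([startUrl]);
      let frontier = [startUrl];
      for (let depth = 0; depth < maxDepth; depth++) {
        const next = [];
        for (const user of frontier) {
          const follows = await fetch(new URL("/follows.json", user)).then(r => r.json());
          for (const followed of follows) {
            if (!seen.has(followed)) {
              seen.add(followed);
              next.push(followed);
            }
          }
        }
        frontier = next;
      }
      return seen; // everyone within N degrees of the starting user
    }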

I personally really, really loved the social aspect of del.icio.us. Finding other people who were searching deep for interesting content was something I spent time on & it rewarded me handsomely, back in the day. I hope for a similar thing here: that we're not just using this to have annotations, but also using it as a content discovery tool, seeing what content there is from the people we follow.

I'd suggest sites should have something like pingbacks, so the site can keep track of annotations. But that would let them filter annotations, which I don't like, and more problematically, it's an opt-in mechanism. Having centralized search systems seems obvious. Ideally some kind of Kademlia-style DHT might offer a P2P alternative. It's quite possible BitTorrent PEX's P2P layer could be used/abused for this.


There's also a competing W3C standard named Webmention [1], based on WordPress' pingback/linkback protocols [2], which in turn were based on XML-RPC and exploited for DDoS attacks, like most things WordPress (it's one of the reasons your HTTP access logs are chock full of 404s for xmlrpc.php). AFAICS, pingback remains the most used method though, or the only one that ever went mainstream before web commenting consolidated onto a couple of news aggregators, HN and Reddit among them.

The Web Annotations protocol was already published as a W3C spec in 2016 (so not "now"), and, as a child of its time, it uses god-awful JSON-LD, just like previous W3C specs chased XML whether or not it was a good fit, when exchange of text data is one of the actual use cases for markup languages.

Is there an English word for always getting it wrong and blindly promoting formats? In German, there's the term Schlaglochsuchmaschine (pothole search engine) as a metaphor borrowed from the automotive domain.

[1]: https://www.w3.org/TR/webmention/

[2]: https://en.wikipedia.org/wiki/Pingback


Hi, off-topic from the above, but I am trying to use your sgml npm package to parse some OFX files. I wonder if you could give some guidance on a problem I am having? I am trying to move away from a WASM-compiled OpenSP.

I'm using an example from your website to try and convert from OFX to XML.

"content must start with document element when document type isn't specified" is the error I get.

Many thanks!!


Ask on Stack Overflow and include the term "sgml". If you're not on SO, temporarily add a personal mail address to your profile (in the about field) so I can get in touch.



Wow, that page is mobile-unfriendly... from the W3C, no less.


This is super cool and I'm definitely bookmarking it for later use :)

I'd suggest moving the "copy css/figma/link" buttons to the right panel or at least making them more obvious (maybe by increasing contrast?) as I had trouble figuring out how to export the shadow I made.


I still couldn't figure out where those buttons are. Kept looking everywhere.


I think what the article tries to say is that OpenAI have already scraped Reddit for training data and with the recent API changes and subreddits going dark, new competitors in the AI space won't have it as easy to get the same training set.


Honestly this sounds like a shower-thought post. Even basic research would show that the Internet Archive and The Eye have Reddit historical data freely available. My desktop PC has all comments and posts from 2007 to early 2023 in a convenient jsonl.zst format. It's only 3TB.


I disagree.

What we gained is a native module system that works the same everywhere (Nodejs, browsers, deno, Bun) vs CommonJS which was really only built for Nodejs (with compatibility layers for others).

We have a specification for this system and any changes done to it have to follow the same process as any other change done to the language (for better or worse).

We have top-level async support that works across import boundaries. This is less useful on the server, but in the browser? With just an `import thing from "https://example.com/foo.js"` I get a fully initialised library, even if it does async requests as part of its initialisation.
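As a sketch of what that looks like from the library's side (the URL, config endpoint and exported shape are all made up here), the module can await its own setup and importers still receive a ready value:

    // foo.js (hypothetical library): top-level await finishes before any importer
    // sees the exports, so `import thing from ".../foo.js"` gets a ready object.
    const response = await fetch("https://example.com/config.json"); // assumed endpoint
    const config = await response.json();

    export default {
      endpoint: config.endpoint,
      greet: () => `hello from ${config.endpoint}`,
    };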

What we lost is just this:

    const someInitializedModule = require("module-name")(someOptions);

The `app.use()` example can be replicated with an async `import()`. Maybe it's not as elegant as before.
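For example, a rough ESM equivalent (the module name, options and the Express-style `app` are placeholders, not real code):

    // Replaces `app.use(require("module-name")(someOptions))`.
    const { default: createMiddleware } = await import("module-name");
    app.use(createMiddleware(someOptions));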

There's going to be a lot of pain in transitioning a big ecosystem like JavaScript's to ESM, but is that really a reason not to evolve the language?


> With just an `import thing from "https://example.com/foo.js"` I get a fully initialised library, even if it does async requests as part of its initialisation.

This is indeed convenient if you want to quickly try a lib. However, what happens in a larger project with dozens of dependencies? Those HTTP requests will quickly become inefficient, and you'll be back to compiling everything with Babel or similar.


I agree, bundling isn't going anywhere for sure, but the web isn't all single-page-apps built with perfect engineering and top-notch frameworks.

Plenty of websites are just server-side rendered and only need to sprinkle in some client-side JS. For them it's perfect to be able to drop in a script and not have to worry about bundling or polluting the `window` object while loading external scripts.
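Something like this is all it takes (the library name and CDN URL are made up): load the file with `<script type="module">` and the import stays scoped to that module.

    // Loaded via <script type="module" src="/sprinkle.js"> on a server-rendered page.
    // Nothing here is attached to window; the import is scoped to this module.
    import confetti from "https://cdn.example.com/confetti.mjs"; // hypothetical URL
    document.querySelector("#celebrate")?.addEventListener("click", () => confetti());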


Even if we take only the part of the web which is SPAs, 0.1% are well engineered.


If I’m serious about shipping some production app/page, I want complete control over all of this anyways. I never want to require/import an entire .min.js file. I want to tree shake the 15% of it I need.


Why though? The min.js will be cached if it’s a popular library. Your bundling might as well reduce overall performance.


This is a myth in production environments, I feel. It's a security risk to import the library on the fly from a source you don't control, and the caching is per user, so you're banking on each specific user having already visited a site that uses the same version of the library, served from the same domain you included it from, so that it's cached.

You're also now fighting for response time and bandwidth from a public resource you don't control. You are beholden to their traffic spikes, their downtime and their security incidents.

Just send it from your servers, or your edge nodes. They already have the DNS hit cached, they are already getting everything else from there. Chances are high you're sending image data that far exceeds the JS library anyway. This is especially prudent if you serve users in your own country, and that country isn't the US. Chances are very high your site's largest response delays are US CDN resources if you use them.


Privacy concerns led to browsers caching per-user and per-site, so there is even less advantage to "shared CDNs" in 2023's browsers.

That said, tree-shaking can sometimes be a premature optimization if your site isn't a SPA with a comprehensive view of its tree to shake. Some MPA designs may still benefit from caching the whole ESM .min.js of a site-wide dependency and letting the browser tree-shake at runtime.


> The min.js will be cached if it’s a popular library.

No, it won't any more: https://www.stefanjudis.com/notes/say-goodbye-to-resource-ca...


And it was always a pretty minimal benefit. It depended on the exact same version of the library being cached from the same CDN... in the days of jQuery hegemony, maybe jQuery cache hits could be a thing, but even that was probably minimal. These days JS usage is much more diverse.

It was an idea people had that this would be a cache benefit, but it was just theory, not real-world observation. I recall several people trying to investigate how often cache hits would actually happen in these cases, and finding it wasn't that often in real-world use. But I can't find them now; Google is crowded out by people talking about what you link to above!


Due to security/privacy concerns, browser caching is now scoped to the origin of the website loading the content, so linking to popular libraries from CDNs provides no caching benefits when loading your site for the first time.


Not every project has dozens of dependencies.

Standardised functionality, in this case, is better than needing tooling.


I also disagree with the article. I happen to find ESM modules very useful, and although I've had to work around some of the ways I would have previously done things, those mostly weren't good practices in the first place.


> What we lost is just this:

>     const someInitializedModule = require("module-name")(someOptions);

That can partially be done, for example:

    import { jason } from 'https://example.com/foo.js?name=jason';

On the script side, use:

    const url = new URL(import.meta.url);
    console.log(`${url.searchParams.get("name")}`);


He isn't talking about named exports; he is talking about immediate invocation of the imported module.

   require("module-name")(someOptions)
is instead represented as

    import { someModule as someModuleFactory } from 'module-name';
    const someInitializedModule = someModuleFactory(someOptions);


The most direct translation is something like:

    (await import("module-name")).default(someOptions)

The biggest issue is that import() is async, and module proxies aren't allowed to be directly callable, so you need to pick an export from the other module to call. But `default` is an appropriate export to call, even if there's no nice shorthand for default imports in the import() case (as there is syntax sugar for default in the import keyword case). Neither of those things seems like a deal breaker to me, and they have very good reasons for being the way they are.


Really? I remember using the CommonJS module pattern even before Node.js. Back then it basically just felt like a way to write JavaScript to a standard of quality more than anything else: lots of closure enforcement, and everything wrapped in a top-level function. The module imports were just a side effect; "require" is just a top-level first-class function call. I believe I was using it to build cross-platform mobile applications at one point, and there was absolutely no Node.js involved in that.

All of that said I'm super happy I moved away from mobile/frontend into backend. Way less annoyances.


My bad, my past experience/bias got the better of me here. I only got exposed to CommonJS as a part of Nodejs but it does look like CommonJS was started independently of Node.


Yes, RingoJS also uses CommonJS, but yeah, CommonJS was mainly for Node. However, now we have the standard `import` that is part of the ES standard, and I'd rather use that now that most runtimes and browsers have implemented it.


I've been using PurelyMail[1] (it's been on hacker news a couple of times already) for about a year now and haven't had any issues with it. It uses consumption-based pricing so for my use (with two domains set up) it comes out to about $0.5/month. I think it's a small team (maybe just one person?) that maintains it.

[1]: https://purelymail.com/


I've been a satisfied Fastmail customer for a couple of years but this looks mighty enticing to switch over. $10 a year OR pay as you go.


This is the kind of pricing I have been looking for. I have a few accounts that are extremely low use because they serve as a backup to other channels, so with the advanced payment model I'd actually even save money over Gandi's old pricing.


Another vote for Purelymail. Pricing is great. Easy to setup. It just works. Plus they have a mail migration tool.


It looks like there is a bus factor of 1. Is this not a concern for you?


Well, if nothing else, at least Vercel's new integrations have put new things on my map of free things to use in side-projects (I had never heard of Upstash before now) :))

