geocar's comments

⌥- produces a – as well. That's sometimes easier than typing `--` and hoping for the best.

That's an en-dash. You want to also hold shift to make it an em-dash.

oh cool —–—– ——— ——— –—––

cheers for that, never even noticed


I think the problem is: what is an image?

I made an attempt to enumerate them[1], and whilst I caught this issue with feImage over a decade ago simply by observing that xlink:href attributes can appear anywhere, Roundcube also misses srcset="" and probably other ways, so if the server "prefetched every image" it knew about using the Roundcube algorithm, the one in srcset would still act as a beacon.
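As an illustration of why that enumeration can never be finished, here is a rough sketch in plain browser JavaScript of the kind of walk firewall.js has to do. The attribute list and the names FETCHING_ATTRIBUTES/collectRemoteReferences are mine, and the list is deliberately incomplete, which is exactly the problem:

  // Rough sketch (names and attribute list are illustrative): walk a parsed
  // copy of the message and collect every attribute that can carry a remote
  // reference. DOMParser yields an inert document, so nothing is fetched
  // during the walk itself.
  const FETCHING_ATTRIBUTES = ["src", "srcset", "href", "xlink:href", "poster", "background"];

  function collectRemoteReferences(html) {
    const doc = new DOMParser().parseFromString(html, "text/html");
    const found = [];
    for (const el of doc.querySelectorAll("*")) {
      for (const name of FETCHING_ATTRIBUTES) {
        const value = el.getAttribute(name);
        if (value) found.push({ tag: el.localName, attribute: name, value });
      }
    }
    return found;
  }

Anything a walk like this misses (a new fetching attribute, a URL inside CSS, a srcset descriptor) is a beacon.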

I feel like the bigger issue is the W3 (née Google). The new HTML Sanitizer[2] interface does nothing, but some VP somewhere is patting themselves on the back for this. We don't need an object-oriented way to edit HTML; we need the database of changes we want to make.

What I would like to see is the ability to put a <pre-cache href="url"><![CDATA[...]]></pre-cache> in the document, which would allow it to replace requests for url with the embedded data: support what we can, then just turn off networking for things we can't. If networking is enabled, just ignore the pre-cache tags. No mixing means no XSS. Networking disabled means a "failure" in the sanitizer is just that the page doesn't "look" right, instead of a leak.
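Since <pre-cache> doesn't exist, the closest a client can get today is to do the substitution itself before display. This is purely a hypothetical sketch (the element, the preCached map, and inlineOrDrop are made up for illustration), not an existing API:

  // Hypothetical sketch: substitute embedded data for URLs we have, and strip
  // anything we can't satisfy, so a sanitizer "failure" renders wrong instead
  // of leaking. A native <pre-cache> would let the browser cover everything
  // (CSS url(...), srcset, ...), which this sketch does not.
  function inlineOrDrop(html, preCached /* Map of url -> data: URI */) {
    const doc = new DOMParser().parseFromString(html, "text/html");
    for (const el of doc.querySelectorAll("[src], [srcset]")) {
      const url = el.getAttribute("src");
      const data = url ? preCached.get(url) : undefined;
      if (data) el.setAttribute("src", data);   // serve from the embedded copy
      else el.removeAttribute("src");           // no embedded copy: no request
      el.removeAttribute("srcset");             // nothing embedded satisfies a srcset
    }
    return doc.body.innerHTML;
  }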

Until then, the HTML4-era solution of a whitelist (instead of trying to blacklist/block things) is best. That's also easier in a lot of ways, but harder to maintain since gmail, outlook, etc. are a moving target in _their_ whitelists...

[1]: https://github.com/geocar/firewall.js

[2]: https://developer.mozilla.org/en-US/docs/Web/API/HTML_Saniti...


Why on earth does the HTML sanitiser allow blacklisting?! That can't ever be safe to use, the set of HTML elements can always change.

Note that the API is split into XSS-safe and XSS-unsafe calls. The XSS-safe calls [0] have this noted for each of them (emphasis mine):

> Then drop any elements and attributes that are not allowed by the sanitizer configuration, and any that are considered XSS-unsafe (even if allowed by the configuration)

The XSS-unsafe functions are all named "unsafe". Although considering web programmers, maybe they should have been named "UnsafeDoNotUseOrYouWillBeFired".
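For example, a strictly allowlist-based configuration might look something like this. This is a rough sketch assuming the elements/attributes config keys and the setHTML() entry point from the current spec draft; target and untrustedHtml are just placeholders:

  // Rough sketch, assuming the spec-draft config shape; support still varies.
  const sanitizer = new Sanitizer({
    elements: ["p", "b", "i", "a", "img", "blockquote"],  // everything else is dropped
    attributes: ["href", "src", "alt", "title"],          // so srcset never gets in
  });

  // XSS-safe entry point: elements/attributes outside the allowlist, and
  // anything the browser considers XSS-unsafe even if listed, are removed.
  target.setHTML(untrustedHtml, { sanitizer });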

[0] https://developer.mozilla.org/en-US/docs/Web/API/HTML_Saniti...


I mean, at least they eventually came to their senses, but it does not inspire confidence!

https://developer.chrome.com/blog/sanitizer-api-deprecation/


That's the old sanitizer API, which was already removed; what you linked earlier is the new sanitizer API.

> What I would like to see is the ability to put a <pre-cache href="url"><![CDATA[...]]></pre-cache> that would allow the document to replace requests for url with the embedded data

multipart/related already exists.


> multipart/related already exists.

Which web browsers render multipart/related correctly served over https?


What is stopping them from doing so instead of going with a NIH solution?

Never mind the context is e-mail, which is not served to a browser over HTTPS.


Got it: So none.

As to why I prefer one thing that doesn’t exist over another thing that doesn’t exist: that depends on my priors. You might as well be asking my opinion and making fun of it before you know the answer.

What do you think the impact would be if Content-Location: suddenly gained the interpretation I suggest?

What do you think a script in the package can do to reference a part whose URL is constructed by code?


Who are you thinking of?

Netflix might be spending as much as $120m (but probably a little less), and I thought they were probably Amazon's biggest customer. Does someone (single-buyer) spend more than that with AWS?

Hetzner's revenue is somewhere around $400m, so it's probably a little scary taking on an additional 30% of revenue from a single customer, and Netflix's shareholders would probably be worried about the risk of relying on a vendor that is much smaller than them.

Sometimes, if the companies are friendly to the idea, they could form a joint venture, or maybe Netflix could just acquire Hetzner (and compete with Amazon?), but I think it unlikely Hetzner could take on a Netflix-sized customer, for nontechnical reasons.

However, increasing PoP capacity by 30% within 6 months is pretty realistic, so I think they'd probably be able to physically service Netflix without changing too much, if management could get comfortable with the idea.


A $120M spend on AWS is equivalent to around a $12M spend on Hetzner Dedicated (likely even less, the factor is 10-20x in my experience), so that would be 3% of their revenue from a single customer.
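Spelled out, using the factor of 10 above and the ~$400m revenue estimate from the parent:

  $120M / 10 = $12M
  $12M / $400M = 3%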

> A $120M spend on AWS is equivalent to around a $12M spend on Hetzner Dedicated (likely even less, the factor is 10-20x in my experience), so that would be 3% of their revenue from a single customer.

I'm not convinced.

I assume someone at Netflix has thought about this, because if that were true and as simple as you say, Netflix would simply just buy Hetzner.

I think there are lots of reasons you could have this experience, and it still wouldn't be Netflix's experience.

For one, big applications tend to get discounts. A decade ago, I (the company I was working for) was paying Amazon a mere $0.2M a month and getting much better prices from my account manager than were posted on the website.

There are other reasons (mostly from my own experiences pricing/costing big applications, but also due to some exotic/unusual Amazon features I'm sure Netflix depends on) but this is probably big enough: Volume gets discounts, and at Netflix-size I would expect spectacular discounts.

I do not think we can estimate the factor better than 1.5-2x without a really good example/case-study of a company someplace in-between: How big are the companies you're thinking about? If they're not spending at least $5m a month I doubt the figures would be indicative of the kind of savings Netflix could expect.


We run our own infrastructure, sometimes with our own financing (4), sometimes external (3). The cost is in the tens of millions per year.

When I used to compare to AWS, egress alone at list price cost as much as my whole infra hosting. All of it.

I would be very interested to understand why Netflix does not go the 3/4 route. I would speculate that they get more return from putting money into optimising costs for creating original content, rather than the cloud bill.


> I would be very interested to understand why Netflix does not go the 3/4 route. I would speculate that they get more return from putting money into optimising costs for creating original content, rather than the cloud bill.

I invest in Netflix, which means I'm giving them some fast cash to grow that business.

I'm not giving them cash so that they can have cash.

If they share a business plan that involves them having cash to do X, I wonder why they aren't just taking my cash to do X.

They know this. That's why on investor calls they don't talk about "optimising costs" unless they're in trouble.

I understand self-hosting and self-building saves money in the long-long term, and so I do this in my own business, but I'm also not a public company constantly raising money.

> When I used to compare to AWS, egress alone at list price cost as much as my whole infra hosting. All of it.

I'm a mere 0.1% of your spend, and I get discounts.

You would not be paying "list price".

Netflix definitely would not be.


Of course Netflix is optimising costs, otherwise it would not be a business; I just think they put much more effort elsewhere. They could be using other words, like "financial discipline" :)

My point is that even if I get a 20x discount on egress it's still nowhere close, since I have to buy everything else: compute and storage are more expensive, and even with 5-10x discounts from list price it's not worth it.

(Our cloud bills are in the millions as well, I am familiar with what discounts we can get)


Figma apparently spends around 300-400k/day on AWS. I think this puts them up there.

How is this reasonable? At what point do they pull a Dropbox and de-AWS? I can’t think of what they would gain with AWS over in-house hosting at that point.

I’m not surprised, but you’d think there would be some point where they would decide to build a data center of their own. It’s a mature enough company.


That $120m will become $12m when they're not using AWS.

> Hetzner's revenue is somewhere around $400m, so it's probably a little scary taking on an additional 30% of revenue from a single customer

A little scary for both sides.

Unless we're misunderstanding something I think the $100Ms figure is hard to consider in a vacuum.


I'm largely just thinking $HUGE when throwing out that number, but there are plenty of companies that have cloud costs in that range. A quick search brings up Walmart, Meta, Netflix, Spotify, Snap, JP Morgan.

> But you can't take .so files and make one "static" binary out of them.

Yes you can!

This is more-or-less what unexec does

- https://news.ycombinator.com/item?id=21394916

For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.

But there's almost[1] nothing special about what the dynamic linker is doing to get those .so files into memory that you couldn't arrange in one big file ahead of time!

[1]: ASLR would be one of those things...


What if the library you use calls dlopen later? That’ll fail.

There is no universal, working way to do it. Only some hacks which work in some special cases.


> What if the library you use calls dlopen later? That’ll fail.

Nonsense. xemacs could absolutely call dlopen.

> There is no universal, working way to do it. Only some hacks which work in some special cases.

So you say, but I remember not too long ago you weren't even aware it was possible, and you clearly didn't check one of the most prominent users of this technique, so maybe you should also explain why I or anyone else should give a fuck about what you think is a "hack"?


> How would a modern OS implement this?

fwrite only buffers because write is slow.

make it so write isn't slow and you don't need userspace buffering!


Hah no.

Nobody is running TCP on that link, let alone SSH.


Once upon a time I worked on a project where we SSH'd into a satellite for debugging and updates via your standard electronics hobbyist-tier 915MHz radio. Performance was not great but it worked and was cheap.


This is still done today in the Arducopter community over similar radio links.


I haven't heard much about the ArduCopter (and ArduPilot) projects for a decade; are those projects still at it? I used to fly a quadrotor I made myself a while back, until I crashed it into a tree and decided to find cheaper hobbies...


Well at least crashing drones into trees has never been cheaper hahaha. So it's super easy to get into nowadays, especially if it's just to play around with flight systems instead of going for pure performance.


They're alive and well and producing some pretty impressive software.

Crashing your drone is a learning experience ;)

Remote NSH over MAVLink is interesting: your drone is flying and you are talking to the controller in real time. Just don't type 'reboot'!


ELRS?


Nope, this predated ELRS by a bit. I wasn't super involved with the RF stuff, so I'm not sure if we rolled our own or used an existing framework.


You can run ELRS on 900 MHz but the bitrate is atrocious.


https://github.com/markqvist/Reticulum

and RNode would be a better match.


In aerial robotics, 900MHz telemetry links (like Microhard) are standard. And running SSH over them is common practice I guess.


Why do you guess? I wouldn't expect SSH to be used on a telemetry link. Nor TCP, and probably not IP either.


what's wrong with tcp, on a crappy link, when guaranteed delivery is required? wasn't it invented when slow crappy links were the norm?


Because TCP interprets packet loss as congestion and slows down. If you're already on a slow, lossy wireless link, bandwidth can rapidly fall below the usability threshold.

After decades of DARPA attending IETF meetings to find solutions for this exact problem [turns out there were a lot of V4 connections over microwave links in Iraq], there are somewhat standard ways of setting options on sockets to tell the OS to treat packet loss as packet loss and to avoid slowing down as quickly. But you have to know what these options are, and I'm pretty sure the OP's requirement of having `ssh foo.com` just work would be complicated by TCP implementations defaulting to the "packet loss means congestion" behavior. Hmm... now that I think about it, I'm not even sure if the control plane options were integrated into the Linux kernel (or Mac or Wintel).

Life is difficult sometimes.


It will time out before your packet gets through, or it will retransmit faster than the link can send packets.


> But it is also ebay's right to decide whether or not its computer will allow requests from your computer.

That is dangerous thinking right there: Ebay does not have rights.

Of course ebay may do it anyway, and it may take time for justice to correct things, but it is not Right, nor their right, to violate law even to protect themselves.


> Ebay does not have rights

No, that is the actual dangerous thinking. Ebay enjoys the same freedom of association that you do. Their right to not do business with you is exactly the same as your right to not do business with them. It's the very same right you exercise every time you use an ad blocker.


> Their right to not do business with you is exactly the same as your right to not do business with them.

You are incorrect about that. They are subject to the ADA. I am not.

As a publicly listed company they have a tremendous number of other laws that apply to them and not to me.

> It's the very same right you exercise every time you use an add blocker.

Exactly: when it is an accessibility tool, it is illegal for a company in the US (and Ebay is a US corporation, despite their Canadian roots) to deny service for the use of that tool.


Is ebay denying you service because you have a disability? If not, the ADA is completely irrelevant.

> it is illegal for a company to deny service in the US

No it isn't. If you want to claim this, cite statute.


> No it isn't. If you want to claim this, cite statute.

Robles v. Domino’s Pizza


Yes they do, as they should. Ebay is in an extremely competitive market and you have lots of other options; if you're abusing their service, they need to be allowed to ban you. Imagine if Amazon wasn't allowed to ban scammers, or if they couldn't refuse a user access to a login portal and had to allow infinite attempts. It's important they get to decide whether to deliver a page to you, let alone keep you as a user.

If we were talking about some government-run water utility then sure, it would be different, but a private online store can ban users without ruining their life, and if you're opposed to this new rule you should stop using them in protest.


> > Ebay does not have the right ... to violate law even to protect themselves.

> Yes they do, as they should.

No they should not, and I cannot believe you could say any such thing in good faith.

> if you're abusing their service they need to be allowed to ban you

Who said anything about "abusing their service"?

> Imagine if Amazon wasn't allowed to ban scammers

Nobody is talking about banning scammers.

Don't do this: Don't argue in bad faith. You can still disagree and think companies have the right to commit crimes, but you don't have to act like I'm saying something that I'm clearly not!

> but a private online store can ban users

Actually they can't, because we're now talking about users instead of scammers and abusers: there's something called the Americans with Disabilities Act, it protects access to storefronts, and no, a private online store CANNOT ban users who need an accessibility tool.


The "monograph game" as you put it, is not for mere funsies: We say x+y instead of plus(x,y) because the former is obviously better.


Anything can be counted as better by some metric and evaluation scale, and what is obvious to one can be surprising to someone else.

x+y is several steps away from plus(x,y); one possible path to render this would be:

  x+y
  x + y
  + x y
  + x , y
  + ( x , y )
  +(x,y)
  plus(x,y)
And there are plenty of other options. For example, consider a method call noted through middot notation:

  x·+(y)
  x·plus(y)
  x·plus y
  augend·plus(addend)
  augend·plus addend
And of course the tersest would be to allow the user to define which operation is done on letter agglutination, so `xy` can be x×y or x+y depending on context. The closest thing I’m aware of being used in an actual programming language is terms like `2x` in Julia, interpreted as "two times x". But I don’t think it allows configuring the tacit operation through agglutination, while it does allow switching the first index to be 0 or 1, which is really in the same spirit of configurability over convention.


> and what is obvious to one can be surprising to someone else.

That is how obvious things work. If you were not surprised that a[i:j] and :[a;i;j] are the same (: a i j), then it is because it was obvious to you; and now that you have had it pointed out to you, you were able to show all of the different other variants of this thing without even being prompted to, so I think you understand the answer to your question now.


I say (+ x y). :P


I was distracted by this too; I programmed largely in CL and emacs from 1999-2014.

I highly recommend reading: https://dl.acm.org/doi/10.1145/358896.358899

One thing that helped me tremendously with k (and then APL) was when I noticed the morphism xfy<=>f[x;y]<=>(f x y).

This wasn't a new idea; it's right there in:

https://web.archive.org/web/20060211020233/http://community....

starting almost on the first page (section 1.2). I simply had not considered the fullness of this because a lot of lispers prefer S-expressions to M-expressions (i.e. there's more study of the former), largely (I conjecture) because S-expressions preserve the morphism between code and data better, and that turns out to be really useful as well.

But the APL community has explored other morphisms beyond this one, and Whitney's projections and views solve a tremendous amount of the problems that macros solve, so I promise I'm not bothered that macros are slightly more awkward to write. I write fewer macros because they're just less useful when you have a better language.


> The thing is, these may very well be good for environmental reasons, but it doesn't work if we just start importing from countries that do the opposite.

Everything I have read suggests the EU has controls to "temporarily suspend tariff preferences on agricultural imports from Mercosur if these imports harm EU producers"

https://www.consilium.europa.eu/en/policies/eu-mercosur-agre...

and they intend to "uphold EU animal welfare rules" specifically so consumers aren't harmed either.

https://www.foodnavigator.com/Article/2025/09/04/eu-mercosur...

> The main issue as I see it...

Who are you? If you're an expert, can you share a couple of links with some analysis of which part of this agreement will harm the environment, so I know exactly what you're talking about? And not in a vague hand-wavey way with all these weasel words about "may very well", but an actual thing. I live here and can vote, but I think this is a good deal, and am genuinely confused why anyone would think it isn't, so if I can get educated here I don't want to pass up the chance!


Well, who are you? As a voter, I’m already disappointed in Belgium’s lack of a gold standard for the welfare of egg-laying chickens, and in male chick grinding. So we’re stuck buying the more expensive organic eggs. Okay, so the partners promise to uphold animal welfare standards. How are we checking? How often?

Are their emissions lower than ours? Do they pollute their waterways? What do they feed their livestock? Was it grown using pesticides we’ve banned, but conveniently laundered through a 3rd-party importer?

I think it’s good to strike deals with new partners, but Mercosur was consistently criticised for not addressing corruption, not helping the already suicidal EU farmers, etc. It went full steam ahead, without any regard for the voters and their opinions.

Source? Dunno, I went to the protests and maybe I’m very biased.


The problem is the subjunctive mood of the word "art".

"Art thou" should be translated into modern English as "are you to be", and so works better with things (what are you going to be), or people who are alive, and have a future (who are you going to be?).

Those are probably the contexts you are thinking of.


Wherefore are you going to be Romeo?


Wherefore is "For what reason or why". Juliet is not asking where Romeo is physically, but wondering why does he need to be a Montague.

So yes you can interpret it as "for what reason or why are you going to be" (this thing she will now explain).


Wherefore is closer to why, or (as taught in literature classes) for what reason. Wherefore is a question, therefore is an answer.


Yes, I am confused about the meaning of "art" in this context.

