Cancelled today after responses became code soup, skills were ignored completely, and, in response to a question, it told me "it's A, no that's wrong, it's B, no actually I don't know, please look for the answer".
Something materially changed in the last 4 weeks.
Also, see the made-up boosterism about finding security holes everywhere. It's just fanning the flames of the industry's worries about all the stupid account takeovers.
haproxy supports both the offload (client) and onload (backend) use case. This is the main reason why I personally prefer it. I cannot comment on how well hitch compares, because I have not used it for years.
fwiw; Varnish Software still maintains and supports hitch, but we can't say we see a bright future for it. Both the ergonomics and the performance of not being integrated into Varnish are pretty bad. It was the crutch we leaned on, as it was the best thing we could make available.
I would recommend migrating off within a year or two.
To claim "the ergonomics and the performance of not being integrated into Varnish are pretty bad" you would need to show some numbers.
In my view, https://vinyl-cache.org/tutorials/tls_haproxy.html debunks the "ergonomics are bad" argument, because using TLS backends is literally no different than using non-TLS.
On performance, the fundamentals have already been laid out in https://vinyl-cache.org/docs/trunk/phk/ssl.html - crypto is so expensive that the additional I/O of copying in and out of another process makes no difference.
Thanks for the info, but I'm a bit confused, sorry.
The reason for hitch was that TLS and caching are different concerns, and the current recommendation is to use haproxy, which also isn't integrated into varnish/vinyl.
But you say that the reason to migrate off hitch is that it's not integrated?
But what happened to separation of concerns, then? Is the plan to integrate TLS termination into vinyl? Is this a change of policy/outlook?
So, because perbu was clearly talking with his Varnish Software hat on, here's the perspective from someone working on Vinyl Cache FOSS only:
I already commented on the separation of concerns in the tutorial, and the unpublished project which one person from uplex is working on full time will have the key store in a separate process. You might want to read the intro of the tutorial if you have not done so.
But the main reason why the new project will integrate TLS more deeply has not been mentioned: it is HTTP/3, or rather QUIC. More on that later this year.
Varnish Software released hitch to facilitate TLS for varnish-cache.
Now that Varnish has been renamed, Varnish Software will keep what has been referred to as a downstream version or a fork, which has TLS built in, basically taking the TLS support from Varnish Enterprise.
This makes Hitch moot. So, I assume it'll receive security updates, but not much more.
Wrt. separation of concerns. Varnish with in-core TLS can push terabits per second (synthetic load, but still). Sure, for my blog, that isn't gonna matter, but having a single component to run/update is still valuable.
In particular using hitch/haproxy/nginx for backend is cumbersome.
Totally agree. But, if I may, the docs on varnish and TLS are hella confusing. I just re-read the Varnish v9 docs, and it's not clear at all whether it supports TLS termination.
Literally every doc, from the install guide to the "beef in the sandwich", talks about it NOT supporting TLS termination... then one teeny para in "extra features in v9.0" mentions 'use the -A flag'...
This is cool! But also worth mentioning prominently. Sure, I know it's an open source project so you don't owe anyone anything, but it's also one with a huge company behind it - and this is a huge change of stance, and it sounds cool.
It's quite hard to take an article seriously with this line in it:
int hour = order.timestamp().atZone(ZoneId.systemDefault()).getHour();
(Because... does the hour you did the thing change according to where you run the code? No - it should use either the location of the trader or the exchange, neither of which is related to where the code runs.)
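To illustrate (the exchange zone and names here are my assumptions, not from the article): deriving the hour from the exchange's fixed zone gives the same answer no matter what the host's default timezone happens to be.

```java
import java.time.Instant;
import java.time.ZoneId;

public class TradeHour {
    // Zone of the exchange the order was placed on, NOT the server's default zone.
    static final ZoneId EXCHANGE_ZONE = ZoneId.of("America/New_York");

    static int hourAtExchange(Instant timestamp) {
        return timestamp.atZone(EXCHANGE_ZONE).getHour();
    }

    public static void main(String[] args) {
        Instant ts = Instant.parse("2024-01-02T21:30:00Z");
        // 21:30 UTC is 16:30 in New York (EST, UTC-5), wherever this runs.
        System.out.println(hourAtExchange(ts)); // prints 16
    }
}
```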
Using strings as different kinds of IDs is an anti-pattern too. They are IDs in different domains, and can be strongly typed longs (or another type: UUID, snowflake, whatever), so no string concatenation is required. This then carries over to the regular expressions: if you use strong types, you don't need to validate that the stringly-typed stuff you passed around earlier (hopefully without permuting some function arguments somewhere) is actually valid - it just is.
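A minimal sketch of the idea (these wrapper types are hypothetical; the article's actual ID scheme is unknown): with distinct record types, swapping the arguments is a compile error rather than a runtime surprise, and no downstream regex validation is needed.

```java
public class TypedIds {
    // Hypothetical strongly-typed ID wrappers; validation happens once, at construction.
    record OrderId(long value) {
        OrderId { if (value <= 0) throw new IllegalArgumentException("bad OrderId: " + value); }
    }
    record CustomerId(long value) {
        CustomerId { if (value <= 0) throw new IllegalArgumentException("bad CustomerId: " + value); }
    }

    // Passing (customer, order) here would not compile.
    static String shippingLabel(OrderId order, CustomerId customer) {
        return "order-" + order.value() + "/customer-" + customer.value();
    }

    public static void main(String[] args) {
        System.out.println(shippingLabel(new OrderId(42), new CustomerId(7)));
        // prints order-42/customer-7
    }
}
```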
The example shows ReentrantLock guarding a single entire method... there's no huge advantage over synchronized in that case... maybe there's other code that's not shown.
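For contrast, a sketch (names illustrative) of a case where ReentrantLock does buy something synchronized can't express: a bounded wait via tryLock, so a caller can give up instead of blocking forever.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private long balance = 0;

    // Unlike synchronized, the caller can bound how long it waits for the lock.
    boolean tryDeposit(long amount, long timeoutMs) {
        try {
            if (!lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
                return false; // lock busy: give up rather than block indefinitely
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        try {
            balance += amount;
            return true;
        } finally {
            lock.unlock();
        }
    }

    long balance() { return balance; }

    public static void main(String[] args) {
        LockSketch s = new LockSketch();
        System.out.println(s.tryDeposit(100, 50)); // true: lock was free
        System.out.println(s.balance());           // 100
    }
}
```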
Using double for prices and costs? You really need to be much more sure about what amount of money you actually have. I can't pay you $2.19999999999999.
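The usual fix is BigDecimal constructed from strings (the helper names below are mine): decimal amounts stay exact, and division forces you to state scale and rounding explicitly.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Money {
    // Exact decimal addition; with double, 0.1 + 0.2 gives 0.30000000000000004.
    static BigDecimal add(String a, String b) {
        return new BigDecimal(a).add(new BigDecimal(b));
    }

    // Division must state scale and rounding explicitly, e.g. splitting a bill.
    static BigDecimal split(String amount, int ways) {
        return new BigDecimal(amount).divide(BigDecimal.valueOf(ways), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);           // 0.30000000000000004
        System.out.println(add("0.10", "0.20")); // 0.30
        System.out.println(split("2.20", 3));    // 0.73
    }
}
```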
If you have a CPU-bound algorithm, running vastly more threads than CPUs is never going to help, and if you really have 200 cores, then you'll want to modify your algorithm to remove synchronization... thanks, Amdahl!
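For reference, Amdahl's bound with parallel fraction p on N cores:

```latex
S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

So even with 95% of the work parallelisable, 200 cores cap the speedup at about 1/(0.05 + 0.95/200) ≈ 18x - the serial (synchronized) part dominates.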
There may be some suggestions in the article, but it feels forced.
edit: added details on timezones, validation & threads.
It's because, although a delivery cyclist might sometimes be annoying, the reality is that there are almost zero KSI (killed or seriously injured) due to cyclists in any country worldwide.
The rules designed for SUVs don't actually make sense for human-scale transport.
I'd definitely agree for normal bikes and e-bikes capped to speeds under 30 km/h, but at least in NYC, these delivery bikes often go ridiculously fast. I don't think they should be considered the same regulatory category.
Their speed makes them extremely unpredictable, even if the overall kinetic energy is still relatively low, and being overtaken on a relatively narrow bike lane by a vehicle going almost twice my own speed seems dangerous as well even if there is no collision.
(I only glanced at it, so I could be wrong.) They're talking about a public key that can be used to validate the JWT's authenticity. AFAIK there is no need to keep these secret, and it's not possible to use them to forge a signature (without breaking public-key crypto), so it should be safe to store them wherever.
- 90 days is a very long time to keep keys; I'd expect rotation somewhere between 10 minutes and a day. I don't see any justification for this in the article.
- There's no need to keep any private keys except the current signing key and maybe an upcoming key. Old keys should be deleted on rotation, not just left to eventually expire.
- https://github.com/aaroncpina/Aaron.Pina.Blog.Article.08/blob/776e3b365d177ed3b779242181f0045cd6387b3f/Aaron.Pina.Blog.Article.08.Server/Program.cs#L70-L77 - You're not allowed to get a new token if you have a token already? That's unworkable - what if you want to log in on a new device? Or what if the client fails to receive the token after the server sends it, the classic snag with use-only-once tokens?
- A fun thing about setting an expiry on the keys is that it makes them eligible for eviction with Redis' standard volatile-lru policy. You can configure this, but it would make me nervous.
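To make the public/private asymmetry concrete, a minimal sketch using only java.security (not a real JWT library - a production setup would use a proper JWS implementation; all names here are mine): the verifier only ever holds the public key, so storing or even publishing it cannot help anyone forge tokens.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class JwtKeySketch {
    static KeyPair newKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            return gen.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] sign(PrivateKey priv, String payload) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(priv); // only the token issuer holds this key
            s.update(payload.getBytes(StandardCharsets.UTF_8));
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static boolean verify(PublicKey pub, String payload, byte[] sig) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(pub); // public key: safe to store anywhere
            s.update(payload.getBytes(StandardCharsets.UTF_8));
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair pair = newKeyPair();
        byte[] sig = sign(pair.getPrivate(), "{\"sub\":\"alice\"}");
        System.out.println(verify(pair.getPublic(), "{\"sub\":\"alice\"}", sig));   // true
        System.out.println(verify(pair.getPublic(), "{\"sub\":\"mallory\"}", sig)); // false
    }
}
```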
How can the key be stolen easily? That really depends on the security of the Redis setup. Redis is typically not internet accessible so you'd need some sort of server exploit.
It would have been good if the article's example showed a Redis server with TLS and password auth.
Private key material should not be kept in the clear anywhere, ideally.
This includes on your dev machine, serialised in a store, in the heap of your process, anywhere.
Of course, it depends on your threat environment, but the article did mention pci-dss.
If you put it in Redis, then anyone with access (internal baddies exist too!) can steal the key and sign something. It's hard to repudiate that.
The most typical end-game is using an HSM-backed cloud product, generating the private key in the HSM (it never leaves), and making calls across the network to the key-vault service for signing requests.
This is a hard tradeoff between availability and compliance. If the cloud service goes down or you have an internet issue, you would lose the ability to sign any new tokens. This is a fairly fundamental aspect of infrastructure so it's worth considering if you absolutely must put it across the wire.
The spectrum runs from everyone-has-the-keys, as in this example, to centralising signing in a software-only service, to using something like KMS, CloudHSM, or a YubiHSM, or going big and getting an HA Luna (or similar) HSM setup.
Copying production data to dev is widely regarded as a bad idea if the data contains any information relating to a person or real-life entity.
Uncontrolled access, inability to comply with "right to be forgotten" legislation, visibility of personal information, including purchases, physical locations, etc etc.
Of course sales, trading, inventory, etc data, even with no customer info is still valuable.
Attempts to anonymise are often incomplete, and various techniques to de-anonymise are available.
Database separation, designed to make sure that certain things stay in different domains and can't be combined, also falls apart if you have both databases on your laptop.
Of course, any threat actor will be happy that prod data is available in dev environments, as security is often much lower in dev environments.
The point is that the order in which that is processed is not left to right.
First the | pipe is established as fd 1, and then 2>&1 duplicates that pipe into fd 2. I.e. effectively right to left: the opposite of the left-to-right processing of redirections.
When you need to capture both standard error and standard output to a file, you must have them in this order:
bob > file 2>&1
It cannot be:
bob 2>&1 > file
Because then the 2>&1 redirection is performed first (and usually does nothing, because stderr and stdout already point to the same place, your terminal). Then > file redirects only stdout.
But if you change > file to | process, then it's fine: process gets the combined error and regular output.