Aren't all DLLs on the Windows platform compiled with an unusual instruction at the start of each function? This makes it possible to somehow hot patch the DLL after it is already in memory.
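If I remember right, the trick looks roughly like this. A minimal sketch, assuming MSVC's /hotpatch on x86 plus the linker's /FUNCTIONPADMIN padding; the helper name is made up and this is illustrative, not a hardened implementation:

    #include <windows.h>
    #include <string.h>

    /* Functions built with /hotpatch start with "mov edi, edi" (8B FF),
       a 2-byte no-op, and /FUNCTIONPADMIN leaves 5 bytes of padding
       immediately before the function entry. */
    BOOL HotPatch(void *target, void *detour)
    {
        unsigned char *fn  = (unsigned char *)target;
        unsigned char *pad = fn - 5;      /* linker-provided padding */
        DWORD old;

        if (!VirtualProtect(pad, 7, PAGE_EXECUTE_READWRITE, &old))
            return FALSE;

        /* 1. write a long relative jmp (E9 rel32) into the padding */
        pad[0] = 0xE9;
        INT32 rel = (INT32)((unsigned char *)detour - (pad + 5));
        memcpy(pad + 1, &rel, sizeof rel);

        /* 2. atomically turn "mov edi, edi" into "jmp -5" (EB F9),
           which lands on the long jmp above; it's a single 2-byte
           store, so running threads never see a torn instruction */
        InterlockedExchange16((SHORT volatile *)fn, (SHORT)0xF9EB);

        VirtualProtect(pad, 7, old, &old);
        return TRUE;
    }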
Sorry to say, but the way you are framing things is simply not true anymore.
You are not required to buy their "Glasfaser Modem 2"; you can buy any ONT/modem.
You are not required to use any of their equipment; they give you the credentials to connect via PPPoE directly, roughly as sketched below.
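For illustration, a pppd peers-file sketch; the interface name, VLAN tag, and credential format here are assumptions based on how Telekom FTTH is commonly described, and the angle-bracketed values are placeholders from your contract letter:

    # /etc/ppp/peers/telekom -- sketch, not an official config
    plugin rp-pppoe.so eth0.7      # PPPoE is commonly tagged on VLAN 7
    user "<anschlusskennung><t-online-nummer>#0001@t-online.de"
    noauth
    defaultroute
    persist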
I bought a house with FTTH in 2023 and never used any Telekom hardware. Nobody forces you to use the peer DNS. The Telekom DNS isn't complying with https://cuii.info/anordnungen/ because they want to, but to avoid being sued every time some company wants to block an illegal streaming site.
In practice there was a problem (at least a few years ago?): Akamai in particular uses DNS to steer you to the correct portion of its CDN, and the IPs returned by independent DNS resolvers tended to sit behind peering links to the Telekom network that were relatively abysmal and completely overloaded at peak times.
Unfortunately "use <insert favourite DNS provider here> everywhere except for Akamai CDN, for which use the Telekom DNS" isn't something that consumer routers support, so you'd have to start running your own custom DNS resolver to work around that problem…
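For what it's worth, a resolver like dnsmasq can express exactly that split. A sketch, where the Akamai zone list is a guess and both IPs are placeholders you'd replace with the Telekom resolver and your preferred upstream:

    # forward Akamai lookups to the ISP resolver, everything else upstream
    server=/akamai.net/akamaiedge.net/akamaihd.net/192.0.2.53
    server=1.1.1.1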
Comparing Redis to SQL is kinda off topic. Sure, you can replace one with the other, but then we are talking about completely different concepts, aren't we?
When all we are talking about is "good enough", the bar is set at a whole different level.
I wrote this article about migrating from Redis to SQLite for a particular scenario and the tradeoffs involved.
To be clear, I think the most important thing is understanding the performance characteristics of each technology enough that you can make good choices for your particular scenario.
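Not from the article itself, but a minimal sketch of the kind of key-value access SQLite takes over in such a migration; the schema, file name, and keys are made up for illustration (build with -lsqlite3):

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *st;

        if (sqlite3_open("cache.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db,
            "PRAGMA journal_mode=WAL;"  /* readers don't block the writer */
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT);",
            NULL, NULL, NULL);

        /* the moral equivalent of Redis SET */
        sqlite3_prepare_v2(db,
            "INSERT INTO kv (key, value) VALUES (?1, ?2) "
            "ON CONFLICT(key) DO UPDATE SET value = excluded.value;",
            -1, &st, NULL);
        sqlite3_bind_text(st, 1, "session:42", -1, SQLITE_STATIC);
        sqlite3_bind_text(st, 2, "alice", -1, SQLITE_STATIC);
        sqlite3_step(st);
        sqlite3_finalize(st);

        /* the moral equivalent of Redis GET */
        sqlite3_prepare_v2(db, "SELECT value FROM kv WHERE key = ?1;",
                           -1, &st, NULL);
        sqlite3_bind_text(st, 1, "session:42", -1, SQLITE_STATIC);
        if (sqlite3_step(st) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(st, 0));
        sqlite3_finalize(st);
        sqlite3_close(db);
        return 0;
    }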
We're talking about business challenges/features which can be solved with either solution after analyzing the pros/cons. It's not like Redis is bad, but sometimes it's an over-engineered solution and too costly.
I wish you had expanded on that. I almost always learn about some interesting lower-level tech through people trying to avoid a full-featured, heavy-for-their-use-case tool or system.
The cheesy noir persona is for the AI assisted install and that's it. Inside the app, the prompts are strictly business. (They still have roles, but not "characters" or "personas").
As another person who spends the whole day in the terminal, it's sad to see there is no Windows version. I do not understand why I would need GPU acceleration for a terminal, but I would still try it.
I use a company-managed/provided machine that runs Windows, so I do not have to bother maintaining it. All I use is basically Firefox and MinGW to have a bash.
I am using it for Ansible, PHP, Java, C, Linux configuration issues, or general questions. Preparing Excel sheets, etc.
It has cut the time I need to produce projects from a usual span of 4-20 days down to 1-2 days, with another 2-3 for testing. Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.
While my country will be slow to adopt (we haven't even fully adopted smartphones yet; hooray, Germany), it will have to adopt eventually, in 10 years or so.
> Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.
This may be a flippant comment, but it actually represents one of the reasons it is difficult to track GenAI usage and impact!
Multiple researchers have hypothesized (often based on discrepancies in data) that the gains from workers using GenAI are not necessarily propagated to their employers. E.g. any time savings may be dedicated to other professional or leisure pursuits.
About 30% of traffic to Cloudflare uses HTTP/3 [0], so it seems pretty popular already. For comparison, that is 3× as much traffic as HTTP/1.1.
I'd even go as far as claiming that on reliable wired connections (like between Cloudflare and your backend) HTTP/2 is superior to HTTP/3. Choosing HTTP/3 for that part of the journey would be a downgrade.
At the very least, the benefits of QUIC are very, very dubious for low-RTT connections like those inside a datacenter, especially when you're losing a bunch of hardware support and moving a fair bit of actual work to userspace, where threads need to be scheduled, etc. On the other hand, Cloudflare to backend is not necessarily low-RTT and likely has nonzero congestion.
With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.
Is CF typically serving from the edge, or from the location nearest to the server? I imagine it would be from the edge, so that it can CDN what it can. So... most of the time it won't be a low-latency connection from CF to the backend, unless your backend is globally distributed too.
Also, within a single server, you should not use HTTP between your frontend nginx and your application server; use FastCGI or SCGI instead, as they preserve metadata (like the client IP) much better. You can also use them over the network within a datacenter, in theory.
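For illustration, the nginx side looks roughly like this; the socket path is a placeholder, and the standard fastcgi_params file is what carries REMOTE_ADDR and friends to the app:

    location / {
        include fastcgi_params;   # passes REMOTE_ADDR, HTTPS, etc. as params
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/app.sock;
    }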
Is the protocol inherently inferior in situations like that, or is this because we've spent decades optimizing for TCP and building it into kernels and hardware? If we imagine a future where QUIC gets that kind of support, will it still be a downgrade?
There is no performance disadvantage at the speeds of most normal implementations. With a good QUIC implementation and a good network stack you can drive ~100 Gb/s per core on a regular processor from userspace with a 1500-byte MTU and no segmentation offload, if you use an unencrypted QUIC configuration. If you use encryption, then you will bottleneck on the encryption/decryption bandwidth of ~20-50 Gb/s, depending on your processor.
In the Linux kernel comparison [1], for some benchmark they average ~24 Gb/s for unencrypted TCP from kernel space with a 1500-byte MTU using segmentation offload. For encrypted transport, they average ~11 Gb/s. Even using a 9000-byte MTU for unencrypted TCP they only average ~39 Gb/s. So there is no inherent disadvantage when considering implementations at this performance level.
And yes, that is a link to a Linux kernel QUIC vs. Linux kernel TCP comparison. And yes, the Linux kernel QUIC implementation only drives ~5 Gb/s, which is 20x slower than what I stated above is possible for a QUIC implementation. Every QUIC implementation in the wild is dreadfully slow compared to what you could actually achieve with a proper implementation.
Theoretically, there is a small fundamental advantage to TCP due to not having multiple streams, which could give it maybe a ~2x performance advantage when comparing perfectly optimal implementations. But then you are comparing a per-core control-plane throughput, using a 1500-byte MTU, of (by my estimation) ~300 Gb/s for QUIC vs ~600 Gb/s for TCP, at which point both are probably bottlenecked on your per-core memory bandwidth anyway.
Go's standard http webserver doesn't support HTTP/3 without external libraries. Nginx doesn't support HTTP/3. Apache doesn't support HTTP/3. Node.js doesn't support HTTP/3. Kubernetes ingress doesn't support HTTP/3.
Should I go on?
edit: even curl itself, which created the original document linked above, has HTTP/3 only in an experimental build.
> edit: even curl itself, which created the original document linked above, has HTTP/3 only in an experimental build.
It's not experimental when built with ngtcp2, which is what you will get on distros like Debian 13-backports (plain Debian 13 uses OpenSSL-QUIC), Debian 14 and onward, Arch Linux and Gentoo.
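For reference, you can check what your own build supports at runtime (assuming a reasonably recent curl):

    $ curl --version | grep -i http3     # "HTTP3" shows in the feature list if compiled in
    $ curl --http3 https://example.com/  # request h3, falling back if unavailable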
My workday is 95% terminal. I work on a company-managed Windows 11 machine using git-scm as an easily updateable MinGW environment; Git Bash has been my configurable Linux terminal on Windows.
Why would one need a GPU-accelerated terminal? What's the use case here?
I mean, I've worked on connections offering an mbit of throughput. That was enough for the kind of work I'm doing.
I really do not understand what this is for; can someone enlighten me, please?
Well, in the age of 3440x1440 displays and Retina, you are forced to render high-quality fonts, not just bitmaps. To do that well, and fast enough, you need the GPU.
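Back-of-the-envelope, not a benchmark: 3440x1440 is roughly 5 million pixels, so repainting the full screen at 120 Hz means blending on the order of 600 million pixels per second. That's trivial for a GPU and painful for a software rasterizer on a single CPU core.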
I see. I'm still holding on to my 3:4 aspect ratio EIZOs as a second screen, just because I prefer having a smaller second screen instead of two huge widescreens.
Opting into anything higher than 1920x1080 seems uncomfortable to me.
Maybe I'm just getting old. I find it hard to read fonts at higher resolutions, and it seems like other people do as well.
16pt Lucida Console on 1920x1080 just works way too well for me to even consider switching to anything else.
As I got older, I've noticed I was slouching more and more. So I got one of those LG UltraGear 45” curved monitors. It's gigantic, but helped me so much with posture and eye strain. I view every website at 200% or 250% zoom, my text editor has Inconsolata at 28px.
Of course, that came with some pains. Managing windows was time-consuming, so i3 came into the picture. rxvt and st got slow, so zutty came to the rescue.
But I would be a liar if I didn't say that I miss 1024x768 with pixel-perfect fonts and UI widgets.
This made hooking into game code much easier than before.