venamresm__'s comments | Hacker News


> How is how inputs gets routed to the right window

It is covered for both X11 and Wayland. I just don't get into the particular decisions and details of how each WM/DE picks what they deem the current focused window, since it varies widely and it's more part of window management than of input management (I've written a WM and it's a bit messy). An article on WM/Compositor development would be more appropriate for that, I have a few already on my blog.


> Someone else can explain how me pressing "a" gets to light some LEDs on my screen, and what happens in the output part of the data flow.


It's not covered.

> Obviously, each have their own internal representation and ways of managing the information they get from libinput, XKB, and others, but this is outside the scope of this article

From what I can tell, this is the explanation. It says that they manage input. If this is good enough, why not simplify what the kernel does by saying Linux has its own way of handling input from the keyboard and that it is out of scope? It just seems arbitrary which handling of input is and isn't in scope.


We also set up our own off-grid solar system, almost 4 years ago now. The cost was much lower due to lower import fees and lack of regulations where I live; it was altogether somewhere between $3-4k total for 16 panels, mounting rack, inverter (with backup inverter), MPPT charge controller, 12 lead-acid batteries, etc. We saved so much money and had fewer headaches, since state electricity only came around 5-6 hours a day and people relied on scammy mafia-run generator providers that ask for insane prices.


In the ASN.1 space everyone hopes that someone can dethrone OSS Nokalva's proprietary solutions


I think it's context-dependent: I don't have insight into OSS Nokalva's use inside big companies, but in the Open Source world it certainly isn't dominant.

In Open Source, I think the biggest ASN.1 implementations I come across are OpenSSL's, libtasn1, asn1c, and then per-language implementations like pyasn1.


Basically any commercial ASN.1 compiler prevents usage of the output in any open-source project. There is that.


The licence also prevents you from modifying the generated code.



Most of the open source tools need patching to properly support certain scenarios (been there, done that). They also lack support for parsing the ASN.1 Value Notation format (textual), which is used everywhere in specifications; OSS Nokalva offers a full set of tools to handle this, even with a playground and an ASN.1 editor, which is non-existent in open source right now. For now the open source tools focus only on the crypto aspect and don't really dive into telco, banking, biometrics, and other domains.


> In the ASN.1 space everyone hopes that someone can dethrone OSS Nokalva's proprietary solutions

You're buying more than a compiler and runtime, though: you're also getting an SLA and stricter guarantees about interoperability, bugs, and so forth. I have no idea how good their support is (maybe it's atrocious?), but these things matter. I once had a client who relied on the open-source asn1c and complained about some of the bugs they found in it; they got pushed into buying commercial tooling when the cost-benefit analysis outweighed the software licensing issues.


Meh. After all, if you're not using ASN.1 you're using something like ProtocolBuffers or FlatBuffers or whatever and all open source tooling.


> Meh. After all, if you're not using ASN.1 you're using something like ProtocolBuffers or FlatBuffers or whatever and all open source tooling.

Oh sure--there are plenty of alternatives to ASN.1. My guess is that most people who have the choice don't use ASN.1 precisely because open-source alternatives exist and can feasibly work for most use cases.

But if you happen to have one of the use cases that require ASN.1, open sourced tooling can be problematic precisely because of the need for a robust SLA.


> But if you happen to have one of the use cases that require ASN.1, open sourced tooling can be problematic precisely because of the need for a robust SLA.

Why would you need a support SLA for ASN.1 and not for PB/FB? That makes no sense. And there's plenty of open source ASN.1 tooling now -- just look around this thread!


The difference is the quality of the OSS implementations: most OSS ASN.1 tools choke on the enormous 3GPP specs and others used in the telco industry, and thus cannot generate 100% valid code.

For some use cases, you can get by with manually adjusting the generated code. That works until the hardware vendors release a new device that uses a more modern 3GPP spec and your code starts breaking again.

With commercial ASN.1 tooling, the vendors often update their compilers to support the latest 3GPP specs even before the hardware vendors do, so supporting a new device is much simpler.


If I got paid to write a 3GPP implementation, one of the things I might do is make one open source ASN.1 stack really good. I've worked on open source projects as part of proprietary work before.


> Why would you need a support SLA for ASN.1 and not for PB/FB? That makes no sense. And there's plenty of open source ASN.1 tooling now -- just look around this thread!

If your business depends on five nines plus of reliability in your 5G communications stack, you might be willing to fork over the price for it. Or if you need a bug fix made in a timely fashion to the compiler or runtime, likewise. As I noted above, a client of mine moved to a commercial suite of tools for this reason.

Protobuf and FlatBuffers have different use cases in my experience, although that experience is somewhat limited. Protobuf, at least, also introduced breaking changes between versions 2 and 3. ASN.1 isn't perfect in this regard, but these days incompatibilities have to go through ISO or ITU, etc.

Your experience may be different of course. I'm just pointing out that there are reasons people will opt for a commercial product.


> Protobuf and flatbuffers have different use cases in my experience, although that's somewhat limited.

This is true for the ASN.1 encoding rules as well.

> Protobuf at least also introduced breaking changes between versions 2 and 3. ASN.1 isn't perfect in this regard,

When has ASN.1 ever broken backwards compatibility? I've never heard of an ASN.1 backwards incompatibility. Maybe, if you stretch an interpretation of ASN.1 in 1984 to allow new fields to be added to `SEQUENCE { }` then the later addition of extensibility markers could count as a very weak backwards-incompatible change -- weak in that existing specs that use ASN.1 had to add those markers to `SEQUENCE { }`s that were actually intended to be extensible, but no running code was actually broken. I would be shocked if the ITU-T broke backwards compat for running code.
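To make the extensibility point concrete, here's a toy sketch (pure Python, not a real DER library; the `decode_point` shape and byte strings are invented for illustration) of why a decoder written against the old definition keeps working when a newer spec revision appends a field to an extensible SEQUENCE: it simply skips trailing TLV elements it doesn't recognize.

```python
# Minimal DER TLV walk: an "extensible" decoder reads the fields it knows
# and ignores any trailing elements added by a newer version of the spec.
# Illustrative sketch only, not a complete DER implementation.

def read_tlv(data, offset):
    """Return (tag, value_bytes, next_offset) for one DER element."""
    tag = data[offset]
    length = data[offset + 1]
    if length & 0x80:  # long form: low 7 bits give the number of length octets
        n = length & 0x7F
        length = int.from_bytes(data[offset + 2:offset + 2 + n], "big")
        header = 2 + n
    else:
        header = 2
    start = offset + header
    return tag, data[start:start + length], start + length

def decode_point(der):
    """Decode SEQUENCE { x INTEGER, y INTEGER, ... } -- extra fields ignored."""
    tag, body, _ = read_tlv(der, 0)
    assert tag == 0x30, "expected SEQUENCE"
    fields, off = [], 0
    while off < len(body):
        t, v, off = read_tlv(body, off)
        fields.append((t, v))
    # Known fields first; anything after them is a newer-version extension.
    x = int.from_bytes(fields[0][1], "big", signed=True)
    y = int.from_bytes(fields[1][1], "big", signed=True)
    return x, y

# SEQUENCE { INTEGER 1, INTEGER 2 } -- the old format
old = bytes([0x30, 0x06, 0x02, 0x01, 0x01, 0x02, 0x01, 0x02])
# Same SEQUENCE with an extra INTEGER 3 appended by a "newer" revision
new = bytes([0x30, 0x09, 0x02, 0x01, 0x01, 0x02, 0x01, 0x02, 0x02, 0x01, 0x03])
print(decode_point(old))  # (1, 2)
print(decode_point(new))  # (1, 2) -- unknown third field skipped
```

That skip-what-you-don't-know behavior is what the extensibility markers formalize at the schema level.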


> When has ASN.1 ever broken backwards compatibility? I've never heard of an ASN.1 backwards incompatibility. Maybe, if you stretch an interpretation of ASN.1 in 1984 to allow new fields to be added to `SEQUENCE { }` then the later addition of extensibility markers could count as a very weak backwards-incompatible change -- weak in that existing specs that use ASN.1 had to add those markers to `SEQUENCE { }`s that were actually intended to be extensible, but no running code was actually broken. I would be shocked if the ITU-T broke backwards compat for running code.

Good question. I was thinking of the transitions in the '80s, although my experience with standards written during that time is very limited.

But yes, one of the reasons people use ASN.1 is because of its hard and fast commitments to backwards compatibility.


> But yes, one of the reasons people use ASN.1 is because of its hard and fast commitments to backwards compatibility.

To be fair I think that's generally expected of things like it. XDR? Stable. DCE/RPC? Obsolete, yes, but stable. MSRPC? A derivative of DCE/RPC, and yes, stable. XML? Stable. JSON? Stable. And so on. All of them stable. If PB broke backwards-compat once then to me that's a very serious problem -- details?


> If PB broke backwards-compat once then to me that's a very serious problem -- details?

Proto2 and Proto3 differ in how they handle default and required elements. Regarding these differences, I found a few references online:

    https://softwareengineering.stackexchange.com/questions/350443/why-protobuf-3-made-all-fields-on-the-messages-optional

    https://groups.google.com/g/protobuf/c/Pezwn5UYZss

    https://www.hackingnote.com/en/versus/proto2-vs-proto3/

I don't use protobuf regularly, and they claim that the wire formats are bidirectionally compatible. When I last evaluated them with another developer years ago, I don't recall this being the case (it was not merely a difference between their syntaxes). I'm not sure the semantics are preserved between the two versions, either (e.g., did I provide a default value? Was this element optional and missing? etc.).
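To illustrate the presence problem in question, here is a hand-rolled sketch of the varint wire encoding (not the protobuf library; field numbers and values are made up): proto3 omits scalar fields set to their default value from the wire, so a receiver cannot distinguish "unset" from "explicitly set to 0".

```python
# Sketch of the proto3 scalar-presence issue: default-valued (0) fields
# are simply not serialized, so "unset" and "explicitly 0" look identical.

def encode_varint(n):
    """Base-128 varint, least-significant group first."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_field(field_no, value):
    # key = (field_number << 3) | wire_type; wire type 0 = varint
    return encode_varint((field_no << 3) | 0) + encode_varint(value)

def encode_proto3(fields):
    # proto3 behavior: scalars equal to their default (0) are skipped
    return b"".join(encode_field(no, v) for no, v in fields.items() if v != 0)

explicit_zero = encode_proto3({1: 0})  # b"" -- indistinguishable from...
unset = encode_proto3({})              # b"" -- ...a message with no field 1
print(explicit_zero == unset)  # True
```

Proto2 `optional` fields carried explicit presence (and `required` fields had to appear), which is exactly the semantic gap the editions mechanism now lets you configure.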

They have lately (this is news to me) moved to protobuf editions: https://protobuf.dev/editions/overview/. This provides some flexibility in the code generation and may require some maintenance on the part of the user to ensure that codec behavior remains consistent. Google, for their part, are trying to minimize these disruptions:

> When the subsequent editions are released, default behaviors for features may change. You can have Prototiller do a no-op transformation of your .proto file or you can choose to accept some or all of the new behaviors. Editions are planned to be released roughly once a year.


> five nines

Does one pay for an SLA for every piece of hardware, firmware, and software? The codecs are the least likely cause of downtime.


> Does one pay for an SLA for every piece of hardware, firmware, and software? The codecs are the least likely cause of downtime.

I don't recall saying that—just that I have had clients for whom the support was sufficiently important (because of their own reliability concerns) that they went commercial instead of open source. (They required, among other things, 24x7 support and dedicated resources to fix bugs when found; they also sought guarantees on turn-around time.)


Fair enough.


I don't have the time, though I do have the inclination, to finish Heimdal's ASN.1 compiler, which is already quite awesome. u/lukeh used Heimdal's ASN.1 compiler's JSON transformation of modules output to build an ASN.1 compiler and runtime for Swift.



Yes, though mostly non-technical.

The submitted article is a little too concerned with irrelevancies -- BSD vs Linux, graphic equalizers -- and not concerned enough with the biggest improvement in audio quality over the last twenty years: measurement and realtime correction of audio playback systems.

The simplest example is with headphones, because the measurement can generally happen once per model, rather than each time you move things around in a room. Take moderately capable headphones -- this does not correlate with price, there are good choices for $25 and bad choices for $2500 -- and measure their frequency response from 20-20000 Hz. Construct the minimum set of parametric equalizations to bring them to a target curve -- that's the bit which has gone from deep expensive magic to ten seconds on a modern CPU -- and apply that from now on. Your $25 headphones are now 97% as good as the best available in the world.
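For the curious, one band of that "minimum set of parametric equalizations" is typically a peaking biquad filter; here's a minimal sketch using the well-known RBJ Audio EQ Cookbook formulas (the sample rate, center frequency, Q, and gain below are illustrative placeholders, not a measured headphone correction):

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Normalized (b, a) coefficients for an RBJ-cookbook peaking EQ band."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_at(b, a, fs, f):
    """Magnitude response in dB at frequency f, by evaluating H(z) on the unit circle."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Pull down a hypothetical +6 dB peak at 3 kHz with a -6 dB, Q=2 band.
b, a = peaking_biquad(fs=48000, f0=3000, gain_db=-6.0, q=2.0)
print(round(gain_at(b, a, 48000, 3000), 2))  # -6.0 at the center frequency
print(round(gain_at(b, a, 48000, 100), 2))   # close to 0 well outside the band
```

A full correction is just a handful of these bands in series, with (f0, gain, Q) chosen to pull the measured response onto the target curve.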

The standard cautions, because HN is full of pedantic floccinaucinihilipilificators: the headphones must not distort too much; ear canals differ and must be accounted for individually; preference in target curves differ; Olive and Toole's curves are representative of what a reasonably large sample size of humans "like"; uncalibrated microphones introduce their own distortions. Speakers in rooms must be measured in those rooms, as currently configured in terms of furniture and anything absorptive or reflective at audio frequencies.

The second biggest issue, the one not related to quality as such, is the availability of sufficient and sufficiently cheap bandwidth and storage to allow people to run their own music servers and personal/household music streaming services.


> The standard cautions, because HN is full of pedantic floccinaucinihilipilificators

I think (I am not sure, I’m more familiar with the in-room speakers style of audio reproduction) the “biggest” issue is that different speakers have different time-based nonlinearities. This should be most clear in impulse responses. At an extreme example, a headphone that has terrible resonance at 400hz can never be fixed purely by EQ using a standard amp.


Now, this could be solved at least partly using current-drive amplifiers. Apple has apparently done this on their AirPods. But it’s not a common thing at all.


It's true, which is why that's in the standard cautions.

But it's also the case that you can get reasonably priced headphones and speakers (reasonably priced by the standards of nonaudiophiles!) that do not have terrible resonances. So: you can't fix everything, but if you're paying attention before you buy, you can avoid making mistakes.

E.g.: Kali LP8v2 are frequently on sale for $400/pair; that includes amplification. Moondrop Chu II IEMs are under $25.


https://venam.nixers.net and nixers.net. I have two main feeds: one is about Unix, with really deep articles; the other has the usual life stuff and philosophy-related articles. I always try to take new perspectives when I write. The second link is to the nixers community.


Got to mention https://16colo.rs/


Thanks, I didn't know about that. Into my RSS it goes!


This website aggregates most of the other wargames and their scores: https://www.wechall.net/


People missed the first device that was eSIM only back in 2019: https://www.androidauthority.com/motorola-razr-esim-only-105...


You're misinterpreting what eSIMs are if you think they provide a new way to connect to the network; they don't. They are simply a new SIM form factor, so step two in your analysis is unrealistic, as the mobile equipment is still owned by mobile operators. There's literally no difference from a current SIM other than the SIM being embedded in the device and users being allowed to install multiple profiles on it.


I don’t think they were assuming eSIMs provide a new type of connectivity. I think they were referring to the fact that these embedded SIMs are more integrated into the device and therefore physically agnostic to the mobile operator of choice. This would make switching from e.g. Verizon to Apple like switching from Netflix to Disney+ – much easier, since it’s fully software-based. For Apple that would be easy to bundle with the rest of their services. Yet another thing that would retain customers in their ecosystem.


Disclaimer: I've been working in the eSIM ecosystem for the past 3 years.

The integration in the device doesn't really mean the device manufacturer has more control over the SIM, far from it. The integration is limited to the interface needed to download/enable/disable profiles, what's referred to as an LPA (local profile assistant). Still, OP is misinterpreting this as somehow giving device manufacturers, which are not network providers, the ability to step into the telco space. The reality is far from it, and Apple cannot provide such a service. What they do in the field, though, is market devices as easier to use and follow up with mobile operators so that the integration is seamless.

