They should be fine, since I made up the setting name; and even though I am not familiar with the Tor client's configuration, I don't believe this is possible without altering its source code.
Also, using this kind of software without understanding even a little of how it works doesn't do much to protect your privacy.
No, I am not saying we should keep the status quo. I am simply challenging the idea that the kernel will enjoy the benefits Rust is supposed to provide.
The distribution of bugs across the whole codebase doesn't follow a normal distribution; it is multimodal. Now imagine where the highest concentration of bugs will be, and how many there will be elsewhere. Easy to guess.
What am I exactly doing again? I am providing my reasoning; sorry if that rubs you the wrong way. I guess you don't have to agree, but let me express my view, ok? My view is not extremist or polarized, as you can see. I see the benefit of Rust, but I say the benefit is not what Internet cargo-cult programming suggests. There's always a price to be paid, and in the case of kernel development I think it outweighs the positive sides.
If I spend 90% of my time debugging freakishly-difficult-to-debug issues, and Rust solves the other 10% for me, then I don't see it as a good bargain. I need to learn a completely new language, surround myself with a team that is also not hesitant to learn it, and all that under the assumption that it won't make some other aspects of development worse. And it surely will.
Back in the '90s and early '00s, the internet made us mesh together, because each one of us there was a specific person. We had forum signatures, and every single post was clearly made by a person, for a person.
Then social media took over and reduced every single person to a tiny unidentifiable avatar next to a non-prominent name, not unlike NPCs in CRPGs.
In turn this has been exploited by the powers that be to ensure the social glue gets even weaker: a society barely held together won't revolt. There's only one thing left to do: productivity, productivity, productivity.
The political opponent is no longer a person. Just a nameless, faceless NPC (personifying everything that's wrong) spawned there to be defeated so you can collect their social loot tokens.
But I might just be an old fart rambling about the good, old days.
It's not specific to the internet. It's just that basically everyone came onto the internet once it became cheap and easy enough to use, whereas previously it was only the subset of people willing to put in the effort, because it offered them some benefit.
It is simply the typical effect of regression to the mean in large groups. Previously there was the illusion of meshing because the group was more homogeneous.
Diversity is touted as a panacea, but it actually has many deleterious effects, the most obvious being the narrowing of what is deemed acceptable/normal behavior and speech.
It is not surprising that everyone looks like an NPC, because bringing everyone into the fold requires a redefinition of what is considered the acceptable norm. It systematically narrows that definition to artificially create an illusion of homogeneity (necessary to reduce conflict).
This is the same process happening to society at large. Psychology is just a replacement for religion, and the various diagnostics just serve as a tool to police what is acceptable behavior.
This is very much the same thing as the various religious scriptures, using moral arguments, appeals to emotions, and enforced tribalism to promote the “correct” way to live one's life.
The endless discussion among psychologists about diagnostics is hopeless; they are just gurus replacing preachers, but instead of using gospels and mythological stories, they use pseudoscientific bullshit to categorize/label behaviors and argue for what they believe should be the norm.
I am on Discord, and the balkanization + homogenization is still as prominent there as everywhere else.
Server admins are just NPCs providing @everyone announcements from time to time, to keep the player engaged (spoiler: the average Joe is just irritated by those). Sometimes you get a quest from them.
Also: 99% won't read profile bios (and you have to pay for actual customization, don't you?) while forum signatures were front-and-center.
I have to say I'm surprised to see Discord mentioned as the opposite of social media instead of... just yet another iteration of the same ploy.
Fuck Discord. Another big for-profit platform that is swallowing big chunks of the internet. Before Discord there were lots of self-organized forums with their own communities and rules. Now I need to register with some big overlord and download their shitty app just to read what used to be just a URL away?
Nah. Right in the browser works great: discord.com/app
You're going to keep running into a wall thinking of Discord as a forum replacement; it's designed to be an IRC replacement.
The invitation system intentionally creates some privacy so you can build a sense of enclosed community, and so you have some control over who sees what. Not having your conversations on full automatic blast to the public is a feature.
IRC works in the browser now, thanks to IRCv3. Matrix is another option.
The invitation system gives a false sense of privacy. There are bots that crawl publicly posted invites, public IRC channels, etc. Eventually people will understand that IRC and Discord are public in the same way we understand Usenet to have been public.
Yeah, sadly it's spambots that have killed off independent platforms more than anything else. It sounds like something that could easily give rise to conspiracy theories, from people putting the spammers in the same mental basket as the controlling companies. It isn't the expense of leaving a Raspberry Pi running a server.
TBH "definition" depends on the theory from which you're looking at the notes.
In the eyes of the Common Practice two simultaneous notes are not chords; in rock they most definitely are; in EDM you don't even care, since timbre is all that matters; in jazz you'd say "it depends" (e.g. might even be a triad with an omitted 5th... depending on context!)
My personal theory is that (notation+terms of art) is incredibly information dense even if inscrutable to outsiders.
What you wish for is more akin to coding like this:
declaring a function whose name is "max" and its arguments are "a" (of type number) and "b" (of type number) that returns a number:
statement: if a is greater than b, the function returns a
statement: the function returns b
But programmers don't bat an eye at {}[](),.!&^| (and I just realized I used the term "function" which outsiders might wish was replaced by simpler terminology!)
// This is more readable if you're "in the know"
// even if it looks like a jumbled mess to outsiders
fn max(a: num, b: num): num => a > b ? a : b
Math uses terms of art like "group", "field", "modulo", and "multiplicative inverse", and notation like "∑", because they are short and communicate very specific (and common) things, many of which are implicit and which we probably wouldn't even notice.
> Math uses terms of art like "group", "field", "modulo", and "multiplicative inverse", and notation like "∑", because they are short and communicate very specific (and common) things, many of which are implicit and which we probably wouldn't even notice.
I don't have anything against introducing new words. But if your concept can be adequately described by existing language, that seems like a good way to let people learn and talk about it. Technically, as a person who has studied philosophy, the Greek alphabet is no big hurdle to me. But it is to others. Try googling some weird sign you found in a formula. First, you don't know what it is called or how to write it; second, any sign might have been used in 100 different formulae, so even if you know how to search for it (there are applications people use to identify mathematical signs), good luck finding any meaningful answer.
I know that for mathematicians these signs are arbitrary, and they would say you could just as well use emojis. But then it turns out mathematicians do ascribe meaning to which alphabet they are using and whether it is upper- or lowercase. Except sometimes they will break that convention, for what appear to be mostly historical reasons.
I know mathematicians will get used to all this just fine, but the mathematical notation system has incredibly bad UX, and the ideals embedded within it are more about density and opacity (only the genius mathematician knows what is going on) than about rigorous precision and understanding.
When I studied philosophy there were philosophers like Hegel who had to expand the German language to express their new thoughts. And there were philosophers, who shall remain unnamed, who would use nearly unparseable, dense, and complex language to express trivial thoughts. The latter always felt like an attempt to paper over their own deficiencies with the convoluted language they had learned to express themselves in.
Mathematicians can also have a degree of the latter at times. If your notation is more complex than the problem it describes, your notation sucks, and you waste collective human potential by using it.
Whereas when played separately it would be referred to as an arpeggio. But in harmony we might still refer to it as a chord, as in saying: arpeggiate the C# minor (chord) to start the Moonlight Sonata.
This might better be described as arpeggiating C#m in second inversion, or even C#m/G# in the right hand over C# in the left...
This is getting possibly weird, but you could call it an arpeggiation of G#sus4(#5)/C#.
I made Lambda Musika[0][1] a long time ago and its elevator pitch is literally "Lambda Musika, the functional DAW" (as in functional programming).
Check the teal button at the bottom for other examples!
I don't use it that much anymore (Strudel's language is truly expressive), but I still reach for it when I want to do sound design, since Strudel is more like a sequencer (which is where Lambda Musika falls short).
Rust's async-await is executor-agnostic and runs entirely in userspace. It is just syntactic sugar for Futures as state machines, where "await points" are your states.
An executor (I think this is what you meant by runtime) is nothing special and doesn't need to be tied to OS features at all. You can poll and run futures in a single thread. It's just something that holds and runs futures to completion.
Not very different from an OS scheduler, except it is cooperative instead of preemptive. It's a drop in the ocean of kernel complexities.
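To make that concrete, here is a minimal single-threaded executor sketch, using only the standard library (ThreadWaker is a name I made up; in a kernel you'd replace thread::park with something like halting until an interrupt):

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::Arc;
    use std::task::{Context, Poll, Wake, Waker};
    use std::thread::{self, Thread};

    // Waking simply unparks the thread that polls the future.
    struct ThreadWaker(Thread);

    impl Wake for ThreadWaker {
        fn wake(self: Arc<Self>) {
            self.0.unpark();
        }
    }

    // The whole "executor": poll the future; if it isn't ready,
    // sleep until something calls the waker, then poll again.
    fn block_on<F: Future>(future: F) -> F::Output {
        let mut future: Pin<Box<F>> = Box::pin(future);
        let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
        let mut cx = Context::from_waker(&waker);
        loop {
            match future.as_mut().poll(&mut cx) {
                Poll::Ready(output) => return output,
                Poll::Pending => thread::park(),
            }
        }
    }

    fn main() {
        assert_eq!(block_on(async { 40 + 2 }), 42);
    }

That loop is the entire contract: an executor is anything that calls poll and honors wakeups.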
Yeah, for example embassy-rs is an RTOS that uses Rust async on tiny microcontrollers. You can hook task execution up to a main loop and interrupts pretty easily. (And RTIC is another, more radically simple take, which also uses async but just runs everything in interrupt handlers and uses the interrupt priority and nesting capabilities of most micros to do the scheduling.)
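For a taste, here is roughly what an embassy task looks like (a hedged sketch: heartbeat is a made-up task name, and the chip-specific peripheral setup is omitted):

    use embassy_executor::Spawner;
    use embassy_time::{Duration, Timer};

    // An async task: between awaits it is just a state machine kept in
    // static memory; Timer::after arranges a wakeup from a hardware
    // timer interrupt instead of blocking any thread.
    #[embassy_executor::task]
    async fn heartbeat() {
        loop {
            // toggle an LED, poke a watchdog, etc.
            Timer::after(Duration::from_millis(500)).await;
        }
    }

    #[embassy_executor::main]
    async fn main(spawner: Spawner) {
        spawner.spawn(heartbeat()).unwrap();
    }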
I find it interesting that this fulfills some of Dennis Ritchie's goals for what became the STREAMS framework for byte-oriented I/O:
> I decided, with regret, that each processing module could not act as an independent process with its own call record. The numbers seemed against it: on large systems it is necessary to allow for as many as 1000 queues, and I saw no good way to run this many processes without consuming inordinate amounts of storage. As a result, stream server procedures are not allowed to block awaiting data, but instead must return after saving necessary status information explicitly. The contortions required in the code are seldom serious in practice, but the beauty of the scheme would increase if servers could be written as a simple read-write loop in the true coroutine style.
The power of that framework was exactly that it didn't need independent processes; it avoided considerable overhead that way. The cost was that you had to write coroutines by hand, and at a certain point that becomes very difficult to code. With a language that facilitates stackless coroutines, you get much of the strength of an architecture like STREAMS without having to write contorted code.
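As a sketch of the "simple read-write loop in the true coroutine style" Ritchie wanted, here is a toy processing module (tokio channels stand in for STREAMS queues; all names are mine):

    use tokio::sync::mpsc::{channel, Receiver, Sender};

    // A processing module written as a plain read-write loop. The
    // compiler lowers it into a stackless state machine whose states
    // are the two await points, so a thousand of these cost a thousand
    // small structs, not a thousand stacks or processes.
    async fn uppercase_module(mut from: Receiver<Vec<u8>>, to: Sender<Vec<u8>>) {
        while let Some(mut buf) = from.recv().await {
            buf.make_ascii_uppercase();
            if to.send(buf).await.is_err() {
                break; // downstream hung up
            }
        }
    }

    #[tokio::main]
    async fn main() {
        let (tx_in, rx_in) = channel(8);
        let (tx_out, mut rx_out) = channel(8);
        tokio::spawn(uppercase_module(rx_in, tx_out));
        tx_in.send(b"hello".to_vec()).await.unwrap();
        assert_eq!(rx_out.recv().await.unwrap(), b"HELLO".to_vec());
    }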
Ok, I see. I spent a lot of time with .NET VMs, where you cannot simply separate await from the heavy machinery that runs it. I now understand that in a kernel context you don't need a complex runtime like Tokio. You still need a way to wake the executor up when hardware does something (like a disk interrupt), but that indeed is not a runtime dependency.
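Right, and that part is small: the usual pattern is that the interrupt handler just sets a flag and calls a stored Waker. A userspace sketch of the idea (all names hypothetical; a real kernel would use its own primitives rather than std, and the last line borrows block_on from the futures crate just for the demo):

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Mutex;
    use std::task::{Context, Poll, Waker};

    static DISK_DONE: AtomicBool = AtomicBool::new(false);
    static DISK_WAKER: Mutex<Option<Waker>> = Mutex::new(None);

    // What the (hypothetical) disk interrupt handler would do:
    // record the event, then wake whoever was waiting on it.
    fn on_disk_irq() {
        DISK_DONE.store(true, Ordering::Release);
        if let Some(w) = DISK_WAKER.lock().unwrap().take() {
            w.wake();
        }
    }

    // A future that completes once the "interrupt" has fired.
    struct DiskRead;

    impl Future for DiskRead {
        type Output = ();
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            if DISK_DONE.swap(false, Ordering::Acquire) {
                return Poll::Ready(());
            }
            // Stash the waker so the IRQ handler can reschedule us,
            // then re-check in case the interrupt fired in between.
            *DISK_WAKER.lock().unwrap() = Some(cx.waker().clone());
            if DISK_DONE.swap(false, Ordering::Acquire) {
                Poll::Ready(())
            } else {
                Poll::Pending
            }
        }
    }

    fn main() {
        std::thread::spawn(|| {
            std::thread::sleep(std::time::Duration::from_millis(10));
            on_disk_irq(); // simulate the interrupt firing
        });
        futures::executor::block_on(DiskRead); // any executor works
    }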
There's got to be some complexity within the executor implementation, though, I imagine, as I believe you have to suspend and resume execution of the calling thread, which can be non-trivial.
I'm aware; you're not adding new information. I think you're handwaving the difficulty of implementing work stealing in the kernel (interrupts and whatnot), plus the mechanics of suspending/resuming the calling thread, which isn't as simple within the kernel as it is in userspace. E.g., you have to save all the register state at a minimum, but it has to be integrated into the scheduler, because the suspension has to pick a next task to execute and restore the register state for it. On top of that you've got the added difficulty of doing this with work stealing (if you want good performance) and coordinating other CPUs/migrating threads between CPUs. You can use non-interruptible sections, but you really want to minimize those if you care about performance, if I recall correctly.
Anyway, as I said: implementing even a basic executor within the kernel (at least for something more involved than a single-CPU machine) is more involved, especially if you care about getting good performance. (And threading 100% comes up here as an OS concept, so claiming it doesn't betrays a certain unawareness of how kernels work internally and how they handle syscalls.)
No. I am adding new information but I think you are stuck on your initial idea.
There's no work stealing. Async-await is cooperative multitasking. There is no suspending or resuming a calling thread. There is no saving register state. There is not even a thread.
I will reiterate: async-await is just a state machine, and Futures are just async values you can poll.
I'm sure Moss has an actual preemptive scheduler for processes, but it's completely unrelated to its internal usage of async-await.
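To illustrate what "just a state machine you can poll" means, here is a hand-written Future with no threads anywhere (a toy; driving it with block_on from the futures crate just for the demo):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // A hand-written future: the struct *is* the state machine, and
    // each call to poll advances it by one state. An async fn body
    // compiles down to the same shape, one state per await point.
    struct CountDown(u32);

    impl Future for CountDown {
        type Output = &'static str;

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
            if self.0 == 0 {
                Poll::Ready("done")
            } else {
                self.0 -= 1;
                // A real future would stash cx.waker() for some event
                // source to call later; here we just ask to be polled again.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }

    fn main() {
        // No thread suspension, no saved registers: polling is an
        // ordinary function call that returns Ready or Pending.
        assert_eq!(futures::executor::block_on(CountDown(3)), "done");
    }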
Embassy is a single-threaded RTOS. Moss implements support for the Linux ABI, which presumes the existence of threads. The fact that async/await doesn't itself imply anything about threads doesn't negate that using it within a kernel implementation of the Linux ABI does require thread suspension by definition: the CPU needs to stop running the current thread (or process) and start executing the next one whenever blocking is required. Clue: there's a scheduler within the implementation, which means there's more than one thread on the system.
You’re arguing about the technical definition of async/await while completely ignoring what it means to write a kernel.
GQL is an HTTP endpoint. The question is, how are you schematizing, documenting, validating, code-generating, monitoring, etc. the request and response on your HTTP endpoints? (OpenAPI is another good choice.)
Really? Hmm... where in the HTTP spec does it allow for requesting an arbitrary subset of a resource, rather than the whole thing? And where does it ensure all the results are keyed by id, so that you can actually build and update a sensible cache around all of it, rather than the mess that totally free-form HTTP responses lead to? Oh, weird, HTTP doesn't have any of that stuff? Maybe we should make a new spec, something which does allow for these patterns and behaviors? And it might be confusing if we used the exact same name as HTTP, since the usage patterns are different and it enables new abilities. If only we could think of such a name...
> An HTTP Range request asks the server to send parts of a resource back to a client. Range requests are useful for various clients, including media players that support random access, data tools that require only part of a large file, and download managers that let users pause and resume a download.