
Pre-emptive multitasking is a tool that a solution might use, but it is not a solution on its own. If you have three users spawning one thread/process/task and one user spawning a million (literally, not figuratively), that one user can easily starve the three little users completely. Some of that user's million tasks may also be starved, but they'll still get more done overall.

This whole problem gets way more complicated than our intuition can generally handle. The pathological distribution of the sizes of various workloads and the pathological distribution of the variety of resources that tasks can consume are not modeled well by our human brains, which really want to work with tasks that are essentially uniform. But they never are. A lot of systems end up punting, either to the OS, which has to deal with this anyhow, or to letting programs do their own cooperative internal scheduling, which is what this library implements. In general, "but what if I 'just'" solutions to this problem have undesirable pathological edge cases that seem like they "ought" to work, especially at the full generality of an operating system. See also the surprisingly difficult task of OOM-killing the "correct" process; the "well obviously you 'just'" algorithms don't work in the real world, for a very similar reason.

As computers have gotten larger, the pathological distributions have gotten worse. To be honest, if you're thinking of using "fair", you're likely better off working on the ability to scale resources instead. There's a niche for this sort of library, but it is constantly shrinking relative to the totality of computing tasks we want to perform (even though it is growing in absolute terms).


> I bet that WhatsApp is one of the rare services you use which actually deployed servers to Australia. To me, 200ms is a telltale sign of intercontinental traffic.

So, I used to work at WhatsApp. And we got this kind of praise when we only had servers in Reston, Virginia (not in AWS us-east-1, but in the same neighborhood). Nowadays, Facebook is most likely terminating connections in Australia, but messaging most likely goes through another continent. Calling within Australia should stay local, though (either p2p or through a nearby relay).

There are lots of things WhatsApp does to improve the experience on low-quality networks that other services don't (even when we worked in the same buildings and told them they should consider these things!)

In no particular order:

0) offline first, phone is the source of truth, although there's multi-device now. You don't need to be online to read messages you have, or to write messages to be sent whenever you're online. Email used to work like this for everyone; and it was no big deal to grab mail once in a while, read it and reply, and then send in a batch. Online messaging is great, if you can, but for things like being on a commuter train where connectivity ebbs and flows, it's nice to pick up messages when you can.

a) hardcode fallback IPs for when DNS doesn't work (not if); see the sketch after this list

b) set up "0-RTT" fast resume, so you can start getting messages on the second round trip. This is part of Noise Pipes (or whatever they're called) and TLS 1.3

c) do reasonable-ish things to work with MTU. In the old days, FreeBSD reflected the client MSS back to it, which helps when there's a tunnel like PPPoE that only modifies outgoing SYNs and not incoming SYN+ACKs. Linux never did that, and AFAIK FreeBSD took it out. Behind Facebook infrastructure, they just hardcode the MSS for (I think) a 1480 MTU (you can/should check with tcpdump). I did some limited testing, and really the best results come from monitoring for /24s with bad behavior (it's pretty easy if you look for it --- you never got any large packets and packet gaps are a multiple of MSS minus the space for TCP timestamps) and then sending back client MSS - 20 to those; you could also just always send back client MSS - 20. I think Android finally started doing PMTUD blackhole detection a couple years back; Apple has been doing it really well for longer. Path MTU Discovery is still an issue, and anything you can do to make it happier is good.

d) connect in the background to exchange messages when possible. Don't post notifications unless the message content is on the device. Don't be one of those apps that can only load messages from the network when the app is in the foreground, because the user might not have connectivity then

e) prioritize messages over telemetry. Don't measure everything, only measure things when you know what you'll do with the numbers. Everybody hates telemetry, but it can be super useful as a developer. But if you've got giant telemetry packs to upload, that's bad by itself, and if you do them before you get messages in and out, you're failing the user.

f) pay attention to how big things are on the wire. Not everything needs to get shrunk as much as possible, but login needs to be very tight, and message sending should be too. IMHO, HTTP, JSON, and XML are too bulky for those, but they're OK for multimedia because the payload is big enough that framing doesn't matter as much, and they're OK for low-volume services because they're low volume.
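
To make (a) concrete, here's a minimal Go sketch of the idea; the names and addresses are hypothetical (the IPs below come from the 203.0.113.0/24 documentation range), not WhatsApp's actual client code:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // Hardcoded fallback list (documentation addresses, stand-ins only).
    var fallbackIPs = []string{"203.0.113.10", "203.0.113.11"}

    // resolveChatHost resolves normally, but falls back to the baked-in
    // list when DNS doesn't work (not "if" but "when").
    func resolveChatHost(host string) []string {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        addrs, err := net.DefaultResolver.LookupHost(ctx, host)
        if err != nil || len(addrs) == 0 {
            return fallbackIPs
        }
        return addrs
    }

    func main() {
        fmt.Println(resolveChatHost("chat.example.com"))
    }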


I also integrated it on my SSH server; try SSHing to funky.nondeterministic.computer

FWIW, I find the classical chess tournaments with the super GMs to be fairly interesting, if only because the focus of the games is more about the metagame than about the game itself.

The article linked at the bottom of the source is a WSJ piece about how Magnus beats the best players because of the "human element".

A lot of the games today are about opening preparation, where the goal is to out-prepare and surprise your opponent by studying opening lines and esoteric responses (an area where computer play has drastically opened up new fields). Similarly, during the middle/endgames, the best players will try to force uncomfortable decisions on their opponents, knowing what positions their opponents tend not to prefer. For example, in round 1 of the Candidates, Fabiano took Hikaru into a position that had very little in the way of aggressive counter-play, effectively taking away a big advantage that Hikaru would otherwise have had.

Watching these games feels somewhat akin to watching generals develop strategies to outmaneuver their counterparts on the other side, taking into consideration their strengths and weaknesses as much as the tactics/deployment of troops/etc.


>The Valid method takes a context (which is optional but has been useful for me in the past) and returns a map. If there is a problem with a field, its name is used as the key, and a human-readable explanation of the issue is set as the value.

I used to do this, but ever since reading Lexi Lambda's "Parse, Don't Validate," [0] I've found validators to be much more error-prone than leveraging Go's built-in type checker.

For example, imagine you wanted to defend against the user picking an illegal username. Like you want to make sure the user can't ever specify a username with angle brackets in it.

With the Validator approach, you have to remember to call the validator on 100% of code paths where the username value comes from an untrusted source.

Instead of using a validator, you can do this:

    type Username struct {
      value string
    }

    func NewUsername(username string) (Username, error) {
      // Validate that the username adheres to our schema;
      // e.g., reject angle brackets (uses "errors" and "strings").
      if strings.ContainsAny(username, "<>") {
        return Username{}, errors.New("username contains angle brackets")
      }

      return Username{username}, nil
    }
That guarantees that you can never forget to validate the username through any codepath. If you have a Username object, you know that it was validated because there was no other way to create the object.
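
To illustrate the guarantee (a hypothetical sketch, not from the original comment): downstream functions can demand the parsed type in their signatures, so handing them a raw, unvalidated string won't even compile:

    // Hypothetical downstream function: it can only ever receive a
    // Username that came out of NewUsername, so it never needs to
    // re-validate. ("store" is a stand-in for your persistence layer.)
    func RegisterUser(u Username) error {
      return store.Save(u.value)
    }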

[0] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...


I think you misconceive how open source projects need to work. Some projects (especially those in the web-dev niche) might view their relationship with upstream the way you do. But others do not.

For Ardour, we feel entirely free to just bring an upstream library into our source tree if we need to. And we also have our dependency stack builder that configures and occasionally patches upstream libraries to be just the way we need them. We do not wait for upstream adoption of our patches.

Most recently, for example, we became aware of impending moves by various Linux distros to remove GTK2, which we rely on. Even though we don't support distro builds, we want Linux maintainers to still be able to build the software, so we just merged GTK2 into our source tree.

This idea that using a 3rd party library becomes some sort of constraint is very, very far from reflecting universal truth. If we need to hack a 3rd party lib to make it do what we need, we just do it. Meanwhile, we get all the benefits of that lib. Ardour depends on about 86 libraries - we would be insane to rewrite all that functionality from scratch.


edit: Apologies for the wall. I think I finally landed on a decent mix after many edits. I'm finished now, lol.

SELinux has a bit of a well-deserved reputation... but I, a fairly silly person, have managed to work with it.

This video likely explains things far better than I can in this post:

https://www.youtube.com/watch?v=_WOKRaM-HI4

I'll probably fail with specifics, where they certainly do a better job.

So. First it's important to know SELinux runs in one of two modes:

    * A targeted mode where well-known/accounted-for things are protected. For example, nginx
    * A more draconian mode where *everything* is protected
People often present the first [default] mode as if it were the second.

The protection is based on policies that say 'things with this label/at this path are allowed to do XYZ'.

It's very focused on filesystem paths and what relevant applications try to do.

It's entirely manageable, but admittedly complicated. Without practicing the words, I can't express them well.

Most people having trouble with SELinux are defying some convention. For example: placing application scratch data in '/etc'.

Policy management is a complicated topic.

The policy can be amended in cases where the standard doesn't apply; I won't cast judgement - sometimes it's a good idea, sometimes not.

Another way to handle this is to copy the label from one path and apply it to the one your application requires/customizes. This is less durable than leaning on the policy.

The policy acts as a sort of central DB... the goal is for the policy to store all of the contexts so that files/dirs can have "labels" applied for SELinux.


Parts of this reminded me of Daniel Ellsberg's admonition to Henry Kissinger about security clearances[1]:

"[...]You will feel like a fool, and that will last for about two weeks. Then, after you’ve started reading all this daily intelligence input and become used to using what amounts to whole libraries of hidden information, which is much more closely held than mere top secret data, you will forget there ever was a time when you didn’t have it, and you’ll be aware only of the fact that you have it now and most others don’t....and that all those other people are fools."

[1] https://www.motherjones.com/kevin-drum/2010/02/daniel-ellsbe...


Most of the time they're not bugs in the JIT - they're bugs in other parts of the software. Basically, your path to exploit is:

1. Find a bug that gives you an arbitrary read

2. Find a bug that lets you write to some arbitrary location

3. Find a bug that lets you jump to some location

4. Use [1] to find the location of the RWX region

5. use [2] to copy your exploit code into [4]

6. use [3] to jump to [4]

7. Profit

Oftentimes a single use-after-free gives you 1, 2, and 3. Essentially you use the UAF to get multiple different objects pointing to the same place, but as different types. E.g. you get a JS function allocated over the top of a typed array's backing store; then from JS you have an object that the runtime thinks is a typed array, but the pointer to its backing store is actually pointing to part of the RWX heap. Then all you have to do is copy your shell code into the corrupted typed array and call the function object.

(This requires a GC-related use after free, and most of the JS runtimes have gotten progressively more aggressive about validating the heap metadata, but fundamentally if there's a GC bug it's most likely just a matter of how much work will be needed to exploit it)


There's currently a draft document for such a format [0], called IXDTF (the Internet Extended Date/Time Format). It allows you to specify a timezone (as a tz name) in brackets following an RFC 3339 string. To give a local time, you have to specify your best estimate of the UTC offset alongside the bracketed timezone. For instance, "2030-07-01 18:00:00 Europe/London" would be "2030-07-01T18:00:00+01:00[Europe/London]".

If the UK changes its rules before that time, then the timestamp becomes "inconsistent" (see section 3.4). The behavior on an inconsistent timestamp is left for the application to decide, but if a ! character is included within the brackets before the timezone name, then it's at least obligated to detect the problem instead of blindly following the UTC offset:

> In case of inconsistent time-offset and time zone suffix, if the critical flag is used on the time zone suffix, an application MUST act on the inconsistency. If the critical flag is not used, it MAY act on the inconsistency. Acting on the inconsistency may involve rejecting the timestamp, or resolving the inconsistency via additional information such as user input and/or programmed behavior.

This extended timestamp format is used in the proposed Temporal library for JavaScript [1] (though I'm not sure if it supports the ! character). The ZonedDateTime.from() parsing function [2] takes an optional "offset" parameter to allow the user to control which part of an inconsistent timestamp takes precedence. It also supports simply omitting the UTC offset and using only the timezone, but it warns that this is ambiguous for times in the repeated hour during a DST transition.
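
As a rough illustration of the inconsistency check (a Go sketch of my own, not how Temporal or any IXDTF library actually implements it), assuming the string has already been parsed into its offset and its bracketed zone name:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // "2030-07-01T18:00:00+01:00[Europe/London]" carries both a UTC
        // offset and a zone name; recompute the zone's real offset at
        // that instant and compare it with the stored one.
        loc, err := time.LoadLocation("Europe/London")
        if err != nil {
            panic(err)
        }
        claimedOffset := 1 * 3600 // seconds east of UTC, from "+01:00"
        t := time.Date(2030, 7, 1, 18, 0, 0, 0, loc)
        _, actualOffset := t.Zone()
        if actualOffset != claimedOffset {
            fmt.Println("inconsistent: the zone's rules no longer match the stored offset")
        }
    }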

[0] https://www.ietf.org/archive/id/draft-ietf-sedate-datetime-e...

[1] https://tc39.es/proposal-temporal/docs/strings.html#iana-tim...

[2] https://tc39.es/proposal-temporal/docs/ambiguity.html#ambigu...


I think a lot of modern web design suffers from the same A/B testing failures that got Pepsi into trouble back in the Coke/Pepsi days of the 80s. I may be misremembering the details, but the gist was that Pepsi was a bit sweeter, and so in the tiny amounts that people tasted during taste tests, they usually preferred the sweeter drink. In normal use, however, that flavor of Pepsi was too sweet, and sales tanked.

I see the same problem with user tests. People almost always pick the simpler of two options, because at the time they have no reason to pick anything else, and that looks easier. But then you start trying to get actual work done, and the Fisher-Price interface is too simplified.

And I know it's not just you or me that prefer information dense websites. Look at how well-known McMaster's site is, among people who actually need to use it to get work done.

Even in general public terms, things have gone too far. I set up a lot of iPads for elderly folks for use in assistive communication, and I have to go out of my way to use ones that have buttons. The modern ones without a home button are just too complicated for a whole swath of our population, and I am not exaggerating.


It’s simple. Find a reviewer who isn’t that thorough.


The claim doesn't hold. Half-open intervals appear in important places. That happens mainly because the family of finite unions of half-open intervals is closed under complement (that is, they form a semi-ring; this is unrelated to the "ring" concept in abstract algebra). In measure theory, one form of the Carathéodory extension theorem [0] says that you can uniquely extend a measure on a semi-ring to an actual measure (defined on a sigma-algebra).

The equivalent statement in probability theory says that a probability is uniquely determined by its cumulative distribution function, which is sometimes nicer to take limits of. You can also get a probability from a cumulative distribution function, provided your function is right-continuous [1] (which is directly related to half-open intervals). For more examples of half-open intervals in probability, you can look at stochastic processes. See, for example, càdlàgs [2] and Skorohod spaces; they capture the notion that processes that "decide to jump" (think of Markov chains changing states if you like) are right-continuous.

IMO, half-open intervals are just nicer whenever you have to union them or intersect them, and are no worse in other aspects when compared to closed intervals. Also, I think the author makes a big cultural blunder when he dismisses "mathematical aesthetics" as a valid reason. A significant number of mathematicians think of elegance as the ultimate goal in mathematics; as Hardy famously said, "There is no permanent place in the world for ugly mathematics".
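
To make that closure property concrete (a quick calculation of my own, written in LaTeX, assuming a <= b, c <= d, and allowing empty intervals):

    [a,b)\setminus[c,d) \;=\; [a,\,\min(b,c)) \,\cup\, [\max(a,d),\,b)

Both pieces are again half-open (possibly empty). Try the same with closed intervals and you get pieces like [a,c) that fall outside the family, which is exactly why the semi-ring machinery is built on the half-open ones.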

[0] http://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_extension...

[1] Well, we also need the obvious conditions: Correct limits at infinity and monotonicity.

[2] http://en.wikipedia.org/wiki/C%C3%A0dl%C3%A0g


https://isbgpsafeyet.com/ is a bit misleading. They mention both IRR filtering and RPKI. Both of them are needed. RPKI is currently only used to validate the origin. When a peer relies only on RPKI, it is trivial to announce a route you are not authorized to announce: just append the legit ASN at the end of the AS path. That's what IRR filtering prevents.

Both are therefore useful. RPKI protects from mistakes and mitigates somewhat the ability to spoof a route (adding the legit ASN makes the AS path longer and less preferred).


This pattern is correct (but has a flaw). It is a simple worker pool. The first available worker will grab the first piece of work from the channel and process it.

If you set MAX to 1000, you will have 1000 workers — and simultaneous connections.

The flaw is that when the last piece of work gets taken from the channel, the program will end, so the last pieces of work that are still being processed at that point will get canceled. You could mitigate this by using a second channel that the workers signal on at the end of their work, ensuring the program closes only when the last worker finishes.
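
A minimal sketch of that mitigation, using a second "done" channel as described (sync.WaitGroup would be the more idiomatic choice; MAX, the work channel, and process are stand-ins for whatever the original code used):

    package main

    import "fmt"

    const MAX = 4 // number of workers / simultaneous connections

    func process(w int) { fmt.Println("processed", w) }

    func main() {
        work := make(chan int)
        done := make(chan struct{})

        for i := 0; i < MAX; i++ {
            go func() {
                for w := range work {
                    process(w)
                }
                done <- struct{}{} // this worker is finished
            }()
        }

        for w := 0; w < 10; w++ {
            work <- w
        }
        close(work)

        // Wait for every worker, so in-flight items aren't canceled
        // when main returns.
        for i := 0; i < MAX; i++ {
            <-done
        }
    }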


There's DNS NOTIFY (RFC 1996), which is commonly used to trigger a pull. But to minimize malfeasance it's usually ignored unless authenticated (e.g. via DNS TSIG) or appears to originate from a known address. The typical scenario is a master nameserver notifying a list of [known] slaves about a zone update.

There's also DNS Push (RFC 8765), but it's a very recent mechanism. I'm not familiar with it, but I doubt it's widely supported.


It's more than just crypto-bros trying to take over the web.

It's an Italian serial conman in a dark room somewhere, pressing "print" on software that generates an ostensibly dollar-equivalent cryptocurrency out of thin air, wrapped in a network of criminal enterprises that is currently using special 'insider only' versions of those tokens (tether-trons vs. tether-eths) to facilitate the theft of the life savings of a generation of middle-class Chinese people, wrapped in a nesting doll of scams and pyramid schemes, wrapped in the greed, hubris, and legitimate desperation of a generation of 1st-world hustlers, all wrapped in crypto-bros trying to take over the world.

If it was just crypto-bros, there wouldn't be anywhere near so much smoke to confuse/conflate with fire: the market manipulation being facilitated by the dirty (and clean, albeit grifted) money entering the system makes the whole thing seem financially bigger than it really is. But despite all the smoke making, it is still just a fart in a colander: the entire crypto market cap, as fanciful and hyperbolic as it is, is still less than the market cap of Apple.

The really astounding part is that so much of this activity is being recorded on publicly accessible server ledgers, in real-time.


Because they're producing for surround-sound systems you can't be bothered to deploy or operate optimally: 5.1, 7.1, etc. The "center" channel, in particular, is where the bulk of dialog is rendered, and if your system can't synthesize that channel extremely well on the loudspeakers you have, then dialog suffers. Also, if you lack a surround-sound system -- say you're using a pair of stereo speakers -- all the audio is mixed down to these few loudspeakers, and the spatial segregation that we two-eared beings use to isolate sound is lost. There is no fix for that; the work was produced with the assumption of surround sound, and driving all of that audio through a pair of closely spaced flat-screen speakers/sound bar/whatever screws everything up spatially and dynamic-range-wise.

The audio is tailored for the minority of people that invest in "home theater," while most people, even those who can afford to, don't care enough about teevee to build such a system. In theory one should expect media to support non-optimal cases, and provision for exactly that exists, but this costs more production money to do correctly and so it gets neglected in various ways. Obviously they don't neglect the demanding consumer with his costly home theater setup, because they'll get pilloried, so they half-ass the other side.


I really hate these cheesy lines that are "excellent" according to the article, like:

- "You had enough fluff marketing content and so did we. So the absofreakinlutely BEST thing you could do right now..."

- "Writing this from my couch at home, hoping to find you safe and well at yours. It's been a busy summer! I wanted to share more about out 2.9 release..."

No, just no. Leave me alone, we aren't friends, you did not write this to me personally. When I read such text I am mentally bracing myself and putting up my defenses because we are at war. You are fighting in the arms race in the attention economy at my expense. This kind of email is like mimicry in the animal kingdom. You are faking the appearance of a mail from a friend when it's a business. It's nasty, parasitic and off-putting.

But maybe it's a cultural difference. Perhaps American culture is more receptive to this. But in most European cultures politeness requires a certain distance and being overly enthusiastically friendly makes us immediately think you are a scammer (in real life too). So pay attention to local cultural customs because these kinds of fake-cheerful-friendly-informal mails don't work everywhere.


outlook.com is even worse. If you bring up a mail server, you have to apply to them for permission to send them email. It doesn't matter that you have DKIM, SPF, or DMARC, or that the domain is 25 years old. They don't care - you must apply through a process that, ironically, sends a response email that isn't DKIM-signed.

I know border crossings can take a while, but explaining my polycule is going to make this a long process. I hope they have time for tea, paper diagrams and Google Calendar.

I can actually purr very realistically, and my cats appear to be interested when I do, although of course I haven't the slightest idea of what I'd be saying to them:)

If you want to try, you first need to imitate the letter "R" as it is pronounced in France, which is roughly done by exhaling while keeping the upper rear part of the tongue in slight contact with the rear of the palate. It comes easily to me, as I had this "R" pronunciation until about 17, when I started correcting it (with some of the family disapproving). Now do the same without using any vocal cords, that is, only breath; then repeat while inhaling. Keeping the mouth barely open helps a lot to give it its particular sound, which with practice can become really close to real cat purring.


> Absolute idiots

Agreed. You do three things in a crisis:

1) Acknowledge the problem.

You killed someone's dog. This hits like losing family. "We're opening an investigation" is a chickenshit message.

2) CEO needs to speak.

Personal call from the top. If a dog is seriously injured or killed while on your watch, the CEO must make the call. This shows seriousness to the customer. It also ensures top brass feels the pain. That pain prevents future mistakes and is a real emotional tool for correcting cultures.

3) Over-correct.

You'll pay for cremation. You'll pay for counseling. You'll refund every Wag bill they ever incurred. If the family decides to get another dog, you'll offer to pay for their shots and food for a year and pet insurance for life.

The message must be: we are sorry, this is unacceptable, and we intend to make this about as difficult for us as it is for you.

(Checklist from a video (with unfortunate and unrelated political content) by Scott Galloway [1].)

[1] https://www.youtube.com/watch?v=PB-AyvgE8Ns


One of the biggest advantages humans have is that humans can take risks. We have agency. Computers cannot take risks; not because they cannot be programmed to take risks, but because they are programmed by huge corporations with massive, centralized liability.

This is why level five autonomous cars will never exist. It's not a technical problem; it's a political one.

It's easy for someone on Hacker News to say "maybe you shouldn't be driving if the car can't"; try saying that to a single mother working as a third-shift nurse, just trying to make ends meet, whose patients will suffer if she doesn't get to work. Try saying that to a technician who needs to get to a downed electric line in the middle of a torrential downpour to restore power to 20,000 people sitting comfortably in their homes. Try saying that to a Marine deployed in Nangarhar, under fire from enemy combatants.

Level Five autonomy could exist, but what we'd end up discovering is that humans let Jesus take the wheel far more often than the engineers in sunny Silicon Valley think. Level Five might mean seeing five feet in front of you and horrible traction, operating on your knowledge of the road, and just driving forward. Until an organization is willing to program a car to still drive forward in those conditions, level five won't happen. Uber practically shut down their entire program after one person died; over 100 people die every day in traditional vehicle accidents.


I did

- USB A/USB C on one end

- Micro USB, Lightning, and USB C on the other end

- 10Gbps data

- 100W power

- supports video over USB-C (4K 60Hz)

https://a.co/d/1q5DhV5

A cheaper, not-as-universal alternative, but good enough to charge everything except my 16-inch MacBook Pro:

https://www.amazon.com/dp/B092ZT8CJ9


Forgive the second top-level comment, but I have some thoughts on Narrator and Edge. Disclosure: I worked on the Windows accessibility team at Microsoft during the transition from EdgeHTML to Chromium, and as a third-party screen reader developer before that. But I won't divulge anything confidential here.

It probably comes as no surprise that EdgeHTML and Chromium have completely different accessibility implementations. Narrator always had the best support for EdgeHTML. I was a third-party screen reader developer when EdgeHTML first came out, and for us third-party developers, EdgeHTML was a drastic change from IE. For over a decade, we had provided access to IE by injecting code into the IE process (yes, Windows lets you do that) and accessing the IE DOM in-process using COM. We did something similar for Firefox and Chromium, but using the IAccessible2 API (also COM-based). To improve security, old Edge disallowed this kind of injection; it could only be accessed through the UI Automation API. Narrator was built for this; the rest of us had to adapt after the fact. And since we could only access UIA through inter-process communication, not in-process like we did with the IE DOM and IAccessible2, there were performance problems, even with Narrator. (Luckily, I got to help solve those problems during my time on the Windows accessibility team.)

With Chromium (in both Google Chrome and the new Edge), screen readers can still inject code in-process and use the legacy IAccessible2 API. And NVDA, JAWS, and System Access (which I developed before joining Microsoft) do that. These third-party screen readers access Chrome and new Edge in the same way, at least inside the web content area, so if you're testing with one of these screen readers, it probably doesn't matter which browser you use. The situation with Narrator and Chromium-based browsers is more interesting. Narrator uses the UI Automation API to access all applications. Chromium has a native UIA implementation, largely contributed by the Edge team, but while that implementation is enabled by default in the new Edge, it isn't yet in Chrome. So Narrator accesses Edge using UIA. But for Chrome, and other Chromium-based apps (e.g. Electron apps), Narrator uses a bridge from IAccessible2 to UIA that's built into the UIA core module. So in corner cases, there may be differences in how Narrator behaves in Chrome and Edge.

So, should developers test with Narrator and/or Edge? Well, I may be too biased to answer that. But I think it's likely that Narrator usage is on the rise. While I was on the Narrator team at Microsoft, we heard from time to time about praise that Narrator was getting in the blind community. (Naturally I can't take full credit for that; it was a team effort.) Moreover, since Narrator is the option built into Windows, there will come a point (if it hasn't come already) when it's good enough for many users and they have no reason to seek a third-party alternative. Also, there are some PCs where Narrator is the only fully functional screen reader, specifically those running Windows 10 S (the variant that doesn't allow traditional side-loaded Win32 apps). I'd guess that an increasing number of students and users of corporate PCs are saddled with that variant of Windows. And while I can't say anything about future versions of Windows, one can make an educated guess based on the broader trajectory of the industry.

As for whether it's worth testing with Edge as opposed to Chrome, I don't know. Fortunately, browser usage data is readily available.


This is called recuperation, and it's fairly common. Rather than censoring ideas, capitalism just turns them into another commodity and dilutes their potency.

The Black Mirror episode Fifteen Million Merits shows exactly how it works.

https://en.wikipedia.org/wiki/Recuperation_(politics)


> I’m genuinely asking here out of lack of experience — Is it separate but equal, or is it customizing the user experience?

It's separation, because it introduces the chances of an inequitable experience being created as the "customised" version for screen reader users drifts out of sync. This may not even be down to anything malicious; plenty of companies create mobile-friendly versions of websites and apps, only to find that as years go by, they no longer have the budget to facilitate their upkeep. Far from assigning them to the trash heap of history, these often continue to be provided, offering a substandard experience, and this is despite companies having so much data available about high mobile usage. The chances of it happening to an accessibility-specific version are higher because the likely user take-up is lower, and as this post and thread indicate, speciality skills are often required to make a good job of it. Not like responsive web design, where experts are ten a penny.

There are other reasons this approach is flawed, of course:

1. Screen reader users aren't the only disabled people out there. Heck, even within that group, there are those who use the software as an absolute necessity, and others who use it in addition with other assistive tech like speech recognition or a magnifier. Nobody in their right mind is going to create a separate experience for every subset of disabled users. Even if they did, how would they be surfaced?

2. Some people are only temporarily disabled, e.g. because of an injury. They don't have the lifelong feel for how to look for accessibility settings or separate modes, so they aren't likely to benefit from a shiny accessible version. Making your main product inclusive prepares for this.

3. The aim should be to allow customisations to be included which aren't intrusive to those who don't need them, e.g. via techniques like WAI-ARIA[1] which allow additional context to be added for screen reader users while remaining invisible to everyone else.

[1] https://www.w3.org/TR/wai-aria-1.2/


I suspect there is a growing web developer conspiracy to fight overconsumption and climate change.

I see slow websites with carousels of large images with big texts. They link to value statements, mission statements, experiences, goals, community bullshit. If you want to know products, prices, or even what the site is all about, you must squint your eyes and really concentrate, maybe scroll down to the bottom of the page. I save €1000 per year in impulse buys because of these sites.


It's one of my smorgasbord search engines run from a script (which collectively do so much better than Google or any of the legacy "big-tech" engines)

Why it matters

One of the great American philosophical orators, Rick Roderick (a Texan cowboy philosopher and academic twin of Bill Hicks), spoke about the importance of "the margins":

Rick notices that in history certain people do not get a voice:

Women. The Poor. Old and young people. The "mad" and disabled. Race minorities within any group. Thinkers more than a decade ahead of their time.

There's even a thing called "marginal analysis" in hermeneutics - not the economic concept - the analysis of things that have been left out of history, and why, and by whom (victors writing and re-writing history etc). Erasure leaves a trace that tells its own tale.

These things have become the grist of social justice today (with all its positive and negative sides). But "injustice" is not really Roderick's point. It hardly needs mentioning that most people get a bad deal, don't get heard, and their lives fade into obscure irrelevance.

What matters is the enormous value and importance of marginal contributions. These are lost. Almost every great discovery, it turns out on deeper analysis, was made by an (or often, simultaneously several) obscure nobody years earlier, who didn't provide a recognised proof in the right journal, or didn't patent the thing. In Plato and Shakespeare, Roderick says, "It is always the fool who delivers the important news". Things don't start at the center, they begin on the margins and diffuse in until they find legitimacy in an acceptable voice.

A marginal search engine is, almost by definition, acting counter to the status quo, just as the Internet Archive stands in opposition to the fickle entropy of bitrot and ephemeral culture.

Well done on this project and good luck.

