Hacker News: inkyoto's comments

> Since when were payment networks latency sensitive?

Since the advent of e-commerce, POS networking and fraud-detection systems in the 1990s-2000s.

The user-facing and authorisation paths are highly latency sensitive. They include tap-to-pay, online checkout, issuer authorisation, fraud decisioning, and instant payment confirmation – even more so for EFT payments.

> […] 2-5 seconds more from card presentation to getting approval back.

That is mid-1990s-level QoS, from when smaller merchants connected to the acquiring bank via a modem connection and larger ones via ISDN.

Today, payments are nearly instant in most cases; card payment flows longer than one second fall into exception territory or point to an inadequate payment infrastructure.


As well as to eggplant and belladonna.


And tomatoes.


Years ago I made harissa out of peppers as a sauce for baked chicken wings. To my surprise it tasted tomato-ey.

After doing some Google searches I realized the plants were related and eventually it sort of made sense. Peppers are almost like a very dry, very firm tomato.

In hindsight it's obvious but at the time it was very surprising.


you are both right of course

but for some reason my fealty to potato does not extend to tomatoes and eggplant quite the same way. i feel toward potatoes sort of how Gary Larson feels about cows


The PDP-11 and the m68k – to name a couple – did not allow misaligned access to anything larger than a byte.

Neither of them is RISC, nor modern.


As regards the 68000, I don't remember; I only used it during demoscene coding parties, when my friends let me touch their Amigas.

I have only seen PDP-11 assembly snippets in UNIX-related books; I wasn't aware of its alignment requirements.


The PDP-11 was a major source of inspiration for the m68k's architecture designers. The influence can be seen in multiple places, from the orthogonal ISA design down to the instruction mnemonics.

It is quite likely that disallowing misaligned access was also influenced by the PDP-11.


If I'm not mistaken, microcode is a thing at least on Intel CPUs, and that is how they patched Spectre, Meltdown and other vulnerabilities – Intel released a microcode update that the BIOS applies at cold start to hot-patch the CPU.

Maybe other CPUs have it as well, though I do not have enough information on that.


> […] In Linux the default swap behaviour is to also swap out the memory mapped to the executable file, not just memory allocated by the process […] I believe both Windows and macOS don't swap out code pages, so the applications remain responsive, at the cost of (potentially) lower swap efficiency

Linux does not page out code pages to swap. You might be conflating page reclamation with swapping.

In Linux, executable «.text» pages are mapped[0] as file-backed pages, not anonymous memory, so when the kernel needs to reclaim RAM it normally drops those pages and reloads them from the executable file on the next page fault once they are accessed again (i.e. on demand) rather than writing them to swap.

In this particular regard, Linux is no different from any other modern UNIX[1] kernel (*BSD, Solaris, AIX and many others).

[0] Via mmap(2) of the executable named by argv[0], essentially.

[1] Modern UNIX is mid-1990's and onwards.


Yes, you are correct, I wasn't precise enough. It doesn't make sense to swap out the existing code pages; they are just unmapped. (And that's the reason why you get "text file busy" when scp'ing over the file: since the OS relies on the fact that the .text pages can be safely unmapped, it needs to guarantee that they stay read-only.)


> In 1970 it might have been the only way to provide a flexible API, but nowadays we have a great variety of extensible serialization formats better than "struct".

Actually, fork(2) was very inefficient in the 1970s and for another decade afterwards, but that changed when 4.3BSD-Reno shipped an entirely new VM subsystem in 1990, which subsequently allowed a CoW fork(2) to come into existence in 4.4BSD in 1993.

The two changes sped fork(2) up dramatically; before then, a fork entailed copying not just the process's structs but the entire memory space.


AFAIR it was quite efficient (basically free) on pre-VM PDP-11 where the kernel swapped the whole address space on a context switch. It only involved swapping to a new disk area.


I used MINIX on 8086 which was similar and it definitely was not efficient. It had to make a copy of the whole address space on fork. It was the introduction of paging and copy-on-write that made fork efficient.


Oh, is that how MINIX did that? AIUI, the original UNIX could only hold one process in memory at a time, so its fork() would dump the process's current working space to disk, then rename it with a new PID, and return to the user space — essentially, the parent process literally turned into the child process. That's also where the misconception "after fork(), the child gets to run before the parent" comes from.


What are we comparing Ada to… PHP?


> The "German cognate is closer" is not helpful!

It is not helpful because comparing English from 1000 AD with Modern High German is the wrong premise to start off with.

The correct and more interesting comparison would be with Old High German from around the same time, although OHG spelling did not yet indicate the umlaut (that would come 400-500 years later), even though the i-umlaut had already developed.

So «schön» was «scōni» (or «sconi») in OHG. Also, ö and ü developed from /o/ and /u/, so juxtaposing them with English ē is likely incorrect.


> It is not helpful because comparing English from 1000 AD with Modern High German is the wrong premise to start off with.

I hear this premise repeated time and time again. Search the internet. I believed this premise, and actually started studying German again while waiting for my Old English textbook to arrive. It did not help.


I do not need to search the internet, as I am fluent in German as well.

The knowledge of Modern High German helps little to none as far as the comprehension of Old English is concerned. From a modern German speaker's perspective, Old English – with a relatively small number of exceptions – is gibberish.


Before the wide adoption of Unicode in mainstream operating systems, quite a few people used -- (two ASCII hyphen-minus characters) to differentiate between a hyphen and a dash (of either pedigree) in emails and elsewhere online where a dash was required.

Most think that the habit came from TeX, which had -- (for an en dash) and --- (for an em dash, although I don't think I have ever observed the latter in the wild outside TeX), but in fact it well predates TeX and goes all the way back to typewriters, where typists habitually hit two hyphens in a row to approximate an em dash. The approximated em dash was described in hard-copy manuscript preparation rules such as The Chicago Manual of Style.

So, if you have ever used a typewriter or TeX, you can claim a heritage of em dash use even richer than 20 years.


> […] C optimization tricks are hacks, the fact godbolt exists is proof that C is not meant to be optimizable at all, it is brute force witchcraft.

> At a certain point though, something's gotta give, the compiler can do guesswork, but it should do no more, if you have to add more metadata then so be it it's certainly less tedious than putting pragmas and _____ everywhere, some C code just looks like the writings of an insane person.

There is not a single correct or factual statement in the cited strings of words.

C optimisation is not «hacks» or «witchcraft»; it is built on decades of academic work and formal program analysis: optimisers use data-flow analysis over lattices and fixed points (abstract interpretation) and disciplined intermediate representations such as SSA, and there is academic work on proving that these transformations preserve semantics.

Modern C is also deliberately designed to permit optimisation under the as-if rule, with UB (undefined behaviour) and aliasing rules providing semantic latitude for aggressive transformations. The flip side is non-negotiable: compilers can't «guess» facts they can't prove, and many of the most valuable optimisations require guarantees about aliasing, alignment, loop independence, value ranges, and absence of UB that are often not derivable from arbitrary pointer-heavy C, especially under separate compilation.

That is why constructs such as «restrict», attributes and pragmas exist: they are not insanity, they are explicit semantic promises or cost-model steering that supply information the compiler otherwise must conservatively assume away.

«Metadata instead» is the same trade-off in a different wrapper: you either trust it (changing the contract) or verify it (reintroducing the hard analysis problem).

Godbolt exists because these optimisations are systematic and comparable, not because optimisation is impossible.

Also, directives are not a new, C-specific embarrassment: ALGOL 68 had «pragmats» (the direct ancestor of today's «pragma» terminology), and PL/I had longstanding in-source compiler control directives, so the mechanism predates modern C tooling by decades.

