
I would say that I understand all the levels down to (but not including) what it means for an electron to repel another particle of negative charge.

But what is not possible is to understand all these levels at the same time. And that has many implications.

We humans have limits on working memory, and if I need to swap in L1 cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring, not because it's necessarily smarter, but because we can increase the working memory available across abstraction layers.


I think they are referring to the 1997 hack/leak: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...

> The first batches of Quake executables, quake.exe and vquake.exe were programmed on HP 712-60 running NeXT and cross-compiled with DJGPP running on a DEC Alpha server 2100A.

Is that accurate? I thought DJGPP only ran on, and targeted, PC-compatible x86. id had the Alpha for things like running qbsp and light and vis (these took forever to run, so the Alpha SMP was really useful), but for building the actual DOS binaries, surely this was DJGPP on an x86 PC?

Was DJGPP able to run on Alpha for cross compilation? I'm skeptical, but I could be wrong.

Edit: Actually it looks like you could. But did they? https://www.delorie.com/djgpp/v2faq/faq22_9.html


I asked John Carmack and he told me they did.

There is also an interview with Dave Taylor explicitly mentioning compiling Quake on the Alpha in 20 seconds (source: https://www.gamers.org/dhs/usavisit/dallas.html#:~:text=comp...). I don't think he meant running qbsp or vis or light.


> he told me they did.

This is when they (or at least Carmack) were doing development on NeXT? So were those the DOS builds?


I thought the same thing. There wouldn't be a huge advantage to cross-compiling in this instance since the target platform can happily run the compiler?

Running your builds on a much larger, higher performance server — using a real, decent, stable multi-user OS with proper networking — is a huge advantage.

Yes, but the gains may be lost in the logistics of shipping the built binary back to the PC for actual execution.

An incremental build of C (not C++) code is pretty fast, and was pretty fast back then too.

The q1source.zip this article links to is only 198k lines spread across 384 files. The largest file is 3391 lines. Though the linked q1source.zip is QW and WinQuake, so not exactly the DJGPP build. (Quoting the README: "The original dos version of Quake should also be buildable from these sources, but we didn't bother trying".)

It's just not that big a codebase, even by 1990s standards. It was written by just a small team of amazing coders.

I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem. C is just really fast to build. Even back in 1997 (was it?), when the source code was found lying around on an FTP server or something: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...


"Shipping" wouldn't be a problem, they could just run it from a network drive. Their PCs were networked, they needed to test deathmatches after all ;)

And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.


> "Shipping" wouldn't be a problem, they could just run it from a network drive.

This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow, even on the PC) and the network overhead adds up. Especially back then.

> just run it from a network drive.

It still needs to be transferred to run.

> I know which system I would choose for compiles.

All else equal, perhaps. But were you actually a developer in the 90s?


What's the problem? In 1997 they were probably using a 10Base-T network; that's 10 Mbit/s. Novell NetWare would let you transfer data at about 1 MB/s, and quake.exe is < 0.5 MB, so the transfer would take around 1 second.

Not sure what you mean by "problem". I said minuscule cancels out minuscule.

Networking in that era was not a problem. I also don’t know why you’re so steadfast in claiming that builds on local PCs were anything but painfully slow.

It’s also not just a question of local builds for development — people wanted centralized build servers to produce canonical regular builds. Given the choice between a PC and large Sun, DEC, or SGI hardware, the only rational choice was the big iron.

To think that local builds were fast, and that networking was a problem, leads me to question either your memory, whether you were there, or if you simply had an extremely non-representative developer experience in the 90s.


Again, I have no idea what you mean by networking being a "problem".

You keep claiming it somehow incurred substantial overhead relative to the potential gains from building on a large server.

Networking was a solved problem by the mid 90s, and moving the game executable and assets across the wire would have taken ~45 seconds on 10BaseT, and ~4 seconds on 100BaseT. Between Samba, NFS, and Netware, supporting DOS clients was trivial.

Large, multi-CPU systems — with PCI, gigabytes of RAM, and fast SCSI disks (often in striped RAID-0 configurations) — were not marginally faster than a desktop PC. The difference was night and day.

Did you actively work with big iron servers and ethernet deployments in the 90s? I ask because your recollection just does not remotely match my experience of that decade. My first job was deploying a campus-wide 10Base-T network and dual ISDN uplink in ~1993; by 1995 I was working as a software engineer at companies shipping for Solaris/IRIX/HP-UX/OpenServer/UnixWare/Digital UNIX/Windows NT/et al (and by the late 90s, Linux and FreeBSD).


Ok that's not what I said. So we'll just leave it there.

That's exactly what you said, and it was incorrect:

> This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow, even on the PC) and the network overhead adds up. Especially back then.

The network overhead was negligible. The gains were enormous.


>> I said minuscule cancels out minuscule.

> You keep claiming it somehow incurred substantial overhead

This is going nowhere. You keep putting words in my mouth. Final message.


Jesus Christ. Networking was cheap. Local builds on a PC were expensive. You are pedantic, foolish, and wrong.

Were you even a developer in the 90s? Are you trying to annoy people?


> I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem.

I never had cause to build Quake, but my Linux kernel builds took something like 3-4 hours on an i486. It was a bit better on the dual-socket Pentium I had at work, but it was still painfully slow.

I specifically remember setting up gcc cross toolchains to build Linux binaries on our big-iron UltraSPARC machines because the performance difference was so huge — more CPUs, much faster disks, and lots more RAM.

That gap disappeared pretty quickly as we headed into the 2000s, but in 1997 it was still very large.


I remember two huge speedups back in the day: `gcc -pipe` and `make -j`.

`gcc -pipe` worked best when you had gobs of RAM. Disk I/O was so slow, especially compared to DRAM, that the ability to bypass all those temp file steps was a god-send. So you'd always opt for the pipeline if you could fill memory.

`make -j` was the easiest parallel processing hack ever. As long as you had multiple CPUs or cores, `make -j` would fill them up and keep them all busy as much as possible. Now, you could place artificial limits such as `-j4` or `-j8` if you wanted to hold back some resources or keep interactivity. But the parallelism was another god-send when you had a big compile job.
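
For anyone who never used these, a minimal sketch of what the two tricks looked like on the command line (the file name is just illustrative, not from any particular project):

    # let gcc stream data between its compiler passes instead of
    # writing temp files to a slow disk:
    gcc -O2 -pipe -c r_main.c

    # run as many compile jobs in parallel as make can keep busy:
    make -j
    # or cap it so the box stays interactive:
    make -j4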

It was often a standard but informal benchmark to see how fast your system could rebuild a Linux kernel, or a distro of XFree86.


> Linux kernel builds took something like 3-4 hours on an i486

From cold, or from modified config.h, sure. But also keep in mind that the Pentium came out in 1993.


I assumed that obviously this is very clever performance art to show how even the healthiest food can be turned into brainrotting sludge.

But then I look at the comments, and it really looks like some people want this.

Now I'm depressed.


Yeah, that was the intention.

The fact that you have to be more specific than "Scott" says a lot.


That’s more likely just you.

Anyone who knows Apple knows who “Scott” is referring to. Scott Forstall.


Heh, I assumed he was referring to "Scott the Woz", Scott Wozniak, a vintage-gaming YouTuber. I assumed that the GP took a more literal angle on "only one 'Woz'", while you took a more symbolic "only one engineer of such quality". In the context of Apple, sure, "Scott" is Scott Forstall, but that's not necessarily the context.


I could be wrong then, if that was their reference. I was in the mindset of foundational Apple leaders, not other Wozes outside the Apple sphere.

EDIT: reading this again, now thinking you are right and they are just being snarky about the “one Woz in the world” existing.


Woz is not just "some guy at Apple". He's a force in his own right, to the point of being bigger than Apple in some ways.

"Woz" is googlable. His name doesn't need context. "Larry" could be Ellison or Page. "Scott" could be Forstall or Adams.

Who played Scott Forstall in the movie?

Anyway, other comments prove it's not just me, too.


That's crazy because I assumed they were obviously talking about Apple's first CEO.

For "Scott Apple" search string, Google agrees with me and the forstall guy is just a secondary mention.


For me he will always be “Scotty”. “Scott” at Apple will almost always imply Scott Forstall.


My first computer was an Apple IIGS and everything since then has been a Mac. "Scott" doesn't bring anyone specific to mind for me. Maybe that connection is automatic for newcomers who immediately think "iPhone" when they hear "Apple."


I had a very long career at Apple. I have also met and spent time with Woz on multiple occasions. I have some bias here.

Possibly my assumption was incorrectly based more on people who actually worked at Apple vs. what the general public thinks of when they hear "Scott" and "Woz" in the context of Apple.


It would make sense that people on the inside would be a lot more aware of him. Forstall was obviously a pretty big name in the community but not to the point of getting a shorthand name like that. And he was mostly forgotten pretty quickly after he left.


That auto flip back and forth between before and after is the most annoying thing I've seen since the blink tag was removed.


yeah I would like to read the code before it switches but nope


This is not exactly right. True, $8B is not earth-shattering given the US's enormous debt. But removing a potential $8B owner is a reduction in demand, and thus a tiny reduction in price. This is literally the first rule of pricing: "supply and demand".

Sure, someone else is on the other side of the deal. But their demand is also satiated at a certain price point. Hell, if they wanted to buy from other sellers, it's not like T-bills weren't liquid.

Would you say the same if Norway's wealth fund offloaded their $181B? At those scales it would be more likely that it'd be visibly price affecting, and therefore affect the US's ability to borrow at existing cost.

So yes, when you sell your one NVDA, you are reducing demand and thus price. Epsilon, but nonzero.


Oh, no vector extension. Probably a dealbreaker for me.


why?


Well, Linux distros are consolidating around the RVA23 target, for one thing (I'm not OP).


The performance of the kind of compute workloads I'm interested in is so improved by SIMD/vector that there isn't even any point in evaluating non-RVV hardware.


RISC-V Vector is roughly equivalent to MMX, SSE, and AVX. A lot of tasks are flat out slower without those instructions.


Ahh, I read that as "Oh no, vector extension". My bad.


It makes me sick to see London bragging about this. This is last-century technology, and other cities managed to retrofit it just fine, including, of course, using leaky cable in the tunnels.

That it took 20-30 years longer than everyone else is down to absolute incompetence and mismanagement. It would have been in place at least 10 years ago if they hadn't screwed up the RFP that Huawei won.

And it's not even shared infra! Vodafone is WAY behind the other networks.

I have worked with these things. There's no valid excuse for being 20-30 years behind on this.

And it's still not landed! By the time it finally gets to all stations I wouldn't be surprised if it's 40-50 years behind everyone else.


The actual reason it took so long was that TfL wanted to rent this out to a mobile network to create revenue for themselves.

It wasn't enough for it to be cost-neutral; it had to make them money.


Well, the scandal around Huawei winning appeared to be real. But also, I wouldn't be surprised.

In any case, as I said, I have actual expertise in this area, and being 20-50 years late has no technical justification whatsoever. I wouldn't be surprised if the layers of incompetence go deeper and include what you said.


What are the legalities of holding a withdrawable cash balance? (So not "credits" or something.)

Do you need a banking license, or to partner with someone who has one?

