120k LoC of probably largely vibecoded nonsense for a window with a text box and a button that lets you send and receive some data over an HTTP API.

Their Thunderbird for iOS repo is 34k lines.

I'm so very tired.


Hi, I'm on the team that worked on this. No, it's not vibe coded. We do pretty intense code review of every PR. It looks like the number you're seeing includes lock files and artifacts that are not part of the core codebase.

Fair enough; if it's not vibe coded, I'll take your word for it. Code review seems like it's mostly bots (Claude, Cursor, Greptile) from the PRs I looked at?

Nevertheless, AI use is not what really stood out to me. It’s that it’s SO MUCH CODE. I have no idea how you guys maintain or reason about the quality or security of something like this. Good luck, I guess.


Ah yeah, so after accusing others of vibe-coding without proof, no apology, just steering the accusations toward other things?

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

"Please don't fulminate."

"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

https://news.ycombinator.com/newsguidelines.html


[flagged]


[flagged]


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html

Edit: you've unfortunately been breaking the site guidelines repeatedly. For example https://news.ycombinator.com/item?id=47798555 and https://news.ycombinator.com/item?id=47672260. That last one was particularly bad! Please don't do that!


>120k LoC of probably largely vibecoded nonsense for a window with a text box and a button that lets you send and receive some data over an HTTP API.

"I will make loads of assumptions without checking so that I can invent reasons to get mad"

Note that about 30,000 of those lines are JSON files for localization and testing, as one example.


22,056 is not about 30,000. Per scc:

  Language      Files     Lines   Blanks  Comments     Code
  ─────────────────────────────────────────────────────────
  TypeScript      760    109110    14500      7397    87213
  JSON             41     22056        6         0    22050
  Markdown         56      7150     2086         0     5064
  YAML             33      3965      406       208     3351
  ... and many more with fewer than 1k lines

Regarding "loads of assumptions": it's hard to tell how much of this is vibecoded slop (definitely non-zero looking at the commit log), but I don't think it's that outrageous to claim 87k SLOC is too much for a textbox and an API wrapper.

How much UI text does this thing have that it needs thousands of lines of localization? Where are these files?

Especially curious because I see a whole lot of hardcoded English text in there…


Are you arguing that 90k LoC for a window with a text box and an overengineered textarea tag is somehow more acceptable than 120k?

That's still an immense amount of code for a chat interface essentially consisting of a text box and a button, which any OS (mobile or desktop) can usually throw up in a few lines of code.

Maybe you wouldn't be so tired if you didn't make assumptions about things to be mad about.

On the bright side - it doesn't load without JavaScript ...in Firefox...

I had to check the comments here to even see what this product was for that reason.

Wait, what? Did you include libraries imported via NPM in this count?

I imagine that would bump that number to millions.

I just checked an old take-home task in Angular I did last year, and the total number of lines is over five million across 35k+ files.


I don't think so. I just used a public GitHub LoC counting tool directly on the repo; there are a few.

https://ghloc.vercel.app/thunderbird/thunderbolt?branch=main claims 141k and most of it is Typescript.
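
(If you want to sanity-check the numbers yourself, here's a rough Python sketch of what these counters do, minus scc's blank/comment detection. The SKIP set is just an illustrative guess at what a fairer count might exclude:)

  import os
  from collections import Counter

  # Hypothetical exclusions -- lock files are generated, not written.
  SKIP = {"package-lock.json", "yarn.lock", "pnpm-lock.yaml"}

  def count_lines(root: str) -> Counter:
      """Raw line counts per file extension, as a crude LoC estimate."""
      totals = Counter()
      for dirpath, dirnames, files in os.walk(root):
          dirnames[:] = [d for d in dirnames if d != ".git"]
          for name in files:
              if name in SKIP:
                  continue
              ext = os.path.splitext(name)[1] or name
              try:
                  with open(os.path.join(dirpath, name), "rb") as f:
                      totals[ext] += sum(1 for _ in f)
              except OSError:
                  pass
      return totals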


What fatigues you about this observation?

Would recommend exercise


This is a pain, to be sure, but surely there is some sort of logic you could implement to detect whether a file is a Real File that actually exists on the device (if so, back it up) or a pointer to the cloud (ignore it by default, probably, but maybe provide a user setting to force backing up even these).

It used to be the case that placeholder files were very obvious, but now OneDrive and iCloud (possibly others) work more like attached network storage with a local cache. That was a good move for most programs, because previously a file being evicted from storage looked like a file deletion.
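
(A minimal sketch of that check on Windows, in Python, assuming the backup tool can read file attributes. The RECALL_* constants aren't exposed by Python's stat module, so their values here are copied from the Windows SDK:)

  import os
  import stat

  # From the Windows SDK; not in Python's stat module.
  FILE_ATTRIBUTE_RECALL_ON_OPEN = 0x00040000
  FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS = 0x00400000

  def is_cloud_placeholder(path: str) -> bool:
      """True if reading this file would trigger a download from the
      cloud (OneDrive Files On-Demand and similar)."""
      attrs = os.stat(path).st_file_attributes  # Windows-only field
      return bool(attrs & (stat.FILE_ATTRIBUTE_OFFLINE
                           | FILE_ATTRIBUTE_RECALL_ON_OPEN
                           | FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS))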

It's a little interesting that they would pick Office 2000 as an example, since Office 97 onwards does not use standard OS widgets -- it reimplements and draws them itself*.

The menu bar in Office 2000 does not look like the standard OS menu bar, for instance. The colors, icons and spacing are non-standard. This is only slightly jarring, because it's pretty well done, but it's still inconsistent with every other app.

This was kind of the beginning of the end for Windows consistency -- when even Microsoft thought that their own toolkit and UX standards were insufficient for their flagship application. Things have only become worse since then.

* This becomes very obvious when you run Office 97 on NT 3.51, which generally looks like Windows 3.1, but since Office 97 renders itself and does not care about OS widgets, it looks like this: http://toastytech.com/guis/nt351word.png


This is funny because Teams on Windows uses its own custom notifications instead of the system ones. But running it in a browser under Linux, I get my native notifications!


How could I possibly forget the lock!

The DX/2 66 is a true legend of a chip. It was so good. The final nail in the coffin for the Amiga and for 68k. I love the Amiga, but it just didn’t Doom.

Before it, you could claim that a 68040 was kinda-sorta keeping up with the 486 and that the nicer design and better operating systems of other computers made up for the delta in raw performance, but the DX/2 66 running Doom was the final piece of proof that the worse-is-better approach of using raw CPU grunt to blast pixels at screen memory instead of relying on clever custom circuitry was winning.

Faced with overwhelming evidence, everyone sold their Amiga 1200s and jumped ship to that hated Wintel platform.


I remember arguments (and benchmarks) around all the variations of the 486, since bus speed and clock speed were decoupled (the /2 means clock doubling). For some applications, a 50 MHz 486 with a 50 MHz bus would beat a DX/2 66 MHz with a 33 MHz bus.

And sometimes the DX/4 100 MHz would be the slowest of all of those, at a 25 MHz bus.


Nearly correct. The DX/4 100 MHz had a 33 MHz bus. The DX/4 75 MHz had the 25 MHz bus. I remember well because I had both.

Now I remember being annoyed that it wasn't the DX/3 as it should have been!

Especially since when actual clock-quadrupled chips eventually came out, they had to call themselves ridiculous things like "5x86" instead of DX/4. (The Am5x86 133 runs at 4x33 MHz.)

I think 5x86 had more to do with marketing than anything else, because the Pentium had already been on the market for a while when the Am5x86 came out.

I think it's a bit of both. It absolutely tried very hard to pretend that it was a "586" (Pentium-class), but also "5x" is right there and implies that if the DX4 is 4, this is 5.

The full name on the chip on some of them is "Am5x86-P75 DX5-133", which implies a lot of things, some of which are flat-out misleading (it does not get very close to "P75" performance).


I had one of these back in the day. A very fine 486.

I remember being so excited when I figured out how to jumper my DX/4 100 and operate it with clock doubling and a 50 MHz front side bus speed. Same core speed, faster memory and I/O.

My peripherals seemed to take it. My graphics output showed some slight glitches, which I was OK with for the speed.

However, I think it was a bit unstable and would fail a correctness challenge like compiling XFree86 or the Linux kernel, which were overnight-long runs. Must have been some bit flips in there occasionally. I seem to recall that once that reality settled into my brain, I went back to the clock-tripled config.


I still remember scribbling on Athlons with a pencil to max them out - we probably spent as much on heatsinks as we saved on CPUs.

As I noted in my other comment (1), in 1985 the Amiga OCS's bitplane graphics (separating each bit of a pixel index into a separate memory area) were a huge boon for 2D capability, since six bitplanes needed only 6/8ths the bandwidth of byte-per-pixel, but they made 3D rendering a major pain in the ass.

The AGA chipset of the 1200/4000 stupidly only added 2 more bitplanes. The CD32 chip actually had byte-per-pixel (chunky) graphics modes, but the omission from the 1200 was fatal.

Reading about it in hindsight, there were probably too many structural issues for Commodore to remain competitive anyhow, but an alt-history where they had seen the need for 3D rendering is tantalizing.

1: https://news.ycombinator.com/item?id=47717334


> The AGA chipset of the 1200/4000 stupidly only added 2 more bitplanes. The CD32 chip actually had byte-per-pixel (chunky) graphics modes, but the omission from the 1200 was fatal.

The intention was good, but the Akiko chip was functionally almost useless. It was soon surpassed by CPU chunky-to-planar (C2P) algorithms. I don't think it was ever even used in any serious way by any released games (though it might have been used to help with FMV).
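
(For anyone unfamiliar with C2P, a minimal Python sketch of the transform those CPU routines perform, assuming one byte per chunky pixel and a scanline width divisible by 8:)

  def chunky_to_planar(pixels: list[int], depth: int) -> list[bytearray]:
      """Split each pixel's index bits into `depth` separate bitplanes,
      the layout the Amiga display hardware expects. `pixels` is one
      scanline, one byte per pixel; leftmost pixel = MSB of each byte."""
      planes = [bytearray(len(pixels) // 8) for _ in range(depth)]
      for x, pix in enumerate(pixels):
          for p in range(depth):
              if pix & (1 << p):
                  planes[p][x // 8] |= 0x80 >> (x % 8)
      return planes

  # e.g. a 320-pixel, 32-colour (5-bitplane) scanline:
  # planes = chunky_to_planar(line_pixels, depth=5)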


Ah, I was under the impression that it had a native chunky mode, but it was a built-in C2P routine? Anyhow, it seems it was useful (1) when running on stock CD32s, but not in conjunction with faster machines.

1: https://forum.amiga.org/index.php?topic=51616.msg544232#msg5...


Which brings me to my pet peeve: the already slow 68020 (68EC020) at 14 MHz, even though it had a 32-bit bus, was crippled by being connected to only a 16-bit RAM bus (chip RAM).

This 16-bit memory (2 megs) is also where the framebuffer and audio live, so the stock CPU in the A1200 has to share bandwidth with display signal generation and the graphics and audio processing.

All in all, it meant the Amiga 1200 had only about twice the memory throughput of the Amiga 500 (about 10 MB/s for the A1200 vs. about 5 MB/s for the A500).

If the A1200 had at least some extra 32-bit memory (it existed as a third-party add-on), the CPU could have had its own uncontested memory with a throughput of about 20-40 MB/s.

Imagine the difference it would have made if the machine had just a little extra memory.

That's just a tiny detail. That the chipset wasn't 32-bit was another disappointment.

The bigger problem was that Commodore as a company was aimless.


Yeah, and it took ~7 years to make those marginal improvements over the earlier Amiga chipset! I'm ignoring ECS, since it barely added anything over OCS for the average user.

Commodore so slowly and ineffectually improving on the OCS didn't help, but the original sin of the Amiga was committed in the beginning, with planar graphics (i.e., slow and hard to work with, even setting aside HAM) and TV-oriented resolutions/refresh rates (i.e., users needing to buy a "flicker fixer"). It's like they looked at one of the most important reasons for the PC and Mac's success—a gorgeous, rock-solid monochrome display—and said "Let's do exactly the opposite!"

IIRC, interlaced display and 6 bitplanes were a compromise to allow color graphics in 1985 with the memory bandwidths available at the time.

Whether it's a sin or a feature can of course be debated, but I remember playing games on an Amiga in the early 90s, and until Doom the graphics capabilities didn't look outdated.

By 1992 with AGA, however, I agree: flicker and planar graphics (with 8 bitplanes, any total memory bandwidth gains were gone) were a downside/sin that should've been fixed to stay relevant.


5 sins in 1992:

- 8-bit planar instead of chunky
- progressive display (vs. interlaced)
- sound was not 16-bit
- should have been a 68030 with MMU support (vs. the 68EC020)
- HD mandatory

If they had addressed these, the Doom experience would have been better on the Amiga.


> The CD32 chip actually had byte-per-pixel (chunky) graphics modes, but the omission from the 1200 was fatal.

I agree. Unfortunately, even with chunky graphics and/or 3D foresight, 68k would still have been a dead end and Commodore would still have been mismanaged into death. It’s fun to dream though…


Was it necessarily a dead end? Consider the ways Intel and later AMD managed to upgrade/reinvent x86, which until x64 still retained so much of the original x86 instruction encoding/heritage (heck, even x64 retains some of the encoding characteristics).

Had the Amiga retained relevance for longer and without a push for PowerPC, I don't see a reason why 68k wouldn't have been extended. Heck, the FPGA-based Apollo 68080 would've matched late-1990s Pentium IIs, and FPGAs aren't speed monsters to begin with.


The 68060 is pretty good to be fair, but it never ended up being widely used and Motorola definitely saw PPC as the future.

Maybe if these theoretical new 68k Amigas had become a huge market hit, they could have taken the arch further and it could have remained competitive, but all the other 68k shops had already pretty much given up or moved on (Apple was already going PPC, Sun went SPARC, NeXT gave up on their 68k hardware, Atari was exiting the computer business entirely, etc.), so I don't know that the market would have been there to support development against the vast amount of competition from the huge x86 bastion on one hand and the multitude of RISC newcomers on the other.


Right, and I think that is a junction. Had Motorola, as a chip company, not been enamoured with the new shiny, and realized that they already had a huge market that just wanted improved performance for their software, and pushed 68k improvements instead of a new PPC architecture, both Apple and (a better managed) Commodore could've been competitive with improved 68k designs.

Remember, Intel also barked up the wrong tree with Itanium for 64-bit and didn't really let go until AMD forced their hand with x64.


The argument is that 68k is "CISCier" than x86, the addressing modes in particular, so making a performant modern out-of-order superscalar core for it would be harder than for x86.

I believe that. But Commodore could have plunked a cheap 68020 into their machines for backwards compatibility (like how the MSX2 had an MSX1 SoC inside, the PS2 had a PS1 SoC, the PS3 had a PS2 SoC, and so on) and put in another "real" socketed CPU as a co-processor. Or made big-box machines with CPUs on PCI cards, for infinite expansion options. "True" multitasking, perfect for CAD, 3D rendering and non-linear video editing. It would have been very cool to have an architecture where the UI could be rendered with almost hard realtime guarantees and heavy processing happened elsewhere.

This is almost exactly what the plan was, until C= went out of business:

https://en.wikipedia.org/wiki/Amiga_Hombre_chipset

It was going to be HP PA-RISC based and have an AGA Amiga SoC, including a 68k core.


How much of Hombre is myth and legend? Given how little progress was made with OCS->ECS->AGA, it seems unlikely they could even have built an Amiga SoC, never mind designed a new 64-bit chipset.


Don't agree there, considering x86 has ModRM, size prefixes (16/32 and later 64-bit operand sizes), SIB (with a prefix for 32-bit), segment/selector prefixes, etc.

The biggest difference, perhaps, where the 68000 is more complicated, is post-increment addressing. But considering all the cruft 32-bit x86 already inherited from the 8086, compared to the "clean" 32-bit variants of the 68000, I'd call it a toss-up at best, leaning toward the 68000 being easier (stuff like IP-relative addressing also exists on the RISC-y ARM architecture).

Apart from addressing modes, the sheer number of weird x86 instructions and prefixes has always been the bane of low-power x86.


There were no tech problems IMHO; it was all management problems. They could have chosen any of a handful of completely different (edit: mutually exclusive, even!) tech paths and still have won, but instead they chose to do almost nothing except bleed the company dry.

Edit: I don't mean that their success was certain if they had executed better. I mean they did almost nothing and got the guaranteed outcome: failure. (And their engineers were brilliant but had very few resources to work with.)


At that point in time I would not have called it Wintel yet. That started after Windows 95, IIRC.

Yep. 486DX/2 was when I started seriously looking at moving on from the Amiga. I wound up with a DX/4 100 sometime in 1994.

My classmate kept his Amiga 1200 a bit longer! ...eventually he got a PC with Pentium 60 MHz.

Yeah, there were holdouts of course but the DX/2 really seems like the breaking point.

(Also, a Pentium 60 is barely faster than a DX/2 66 at many tasks — it is a Bad Processor — but that’s another conversation ;)


The Pentium was a bad processor? It's way faster than the 486, especially on FP, where it's not even close.

The original Pentiums (Socket 4, 60 or 66 MHz) had the infamous floating-point division bug, had underwhelming performance for anything not FP-bound (most things), ran hot, and were too expensive for what you got. A DX/4 100 was nearly always the more rational choice.

Second gen Pentiums, starting with the 75 MHz, were great.


I had a P60 that had the F0 0F bug; Windows would crash for weird reasons on it, but Linux ran like a champ because it actually had a workaround. Luckily my chip was already recalled for the FDIV bug so it wasn't a total boat anchor. Loved that machine. I had BeOS, QNX, and one time I made Linux look like Solaris with all the Open Look stuff - really enjoyed that aesthetic.

Now we have these amazing displays and graphics cards and there's literally no way to make my Mac have different window titlebars or anything. So boring


Did you try Linux again recently?

Actually, the first generation Socket 4 Pentiums (60 and 66 MHz) had the FDIV bug (and yes, they were bad processors), but the overall system architecture, with the very first PCI bus implementation plus legacy ISA rather than ISA and a single VESA Local Bus expansion, was a huge step forward compared to the 486.

The FDIV bug was actually discovered on the first stepping of the later 90 MHz Pentium (which was released alongside the 100 MHz Pentium, which also suffered from the bug). However, this was corrected with a hardware stepping. The 75 MHz Pentium was released as part of this later stepping, and it was a binned 90/100 MHz part. There were no first-stepping 75 MHz Pentiums.


I don't know if the 75 was really that great though, mostly in that it had a 50 MHz FSB rather than 60 or 66 MHz like most other parts.

Another factor in the later P1s being better, IIRC, was improved chipsets.


We had a 90 overclocked to 100 MHz that served as the family computer. I inherited it when the family computer was upgraded to a K6-II, and it chugged along as my personal computer until ~2001 thanks to Linux, while the GHz barrier had already been broken for a while in the Intel world.

I think my next computer came with an AMD Duron 900 MHz, an entry-level chip at the time, but the jump from the Pentium 100 MHz was such a huge gap it still felt like a Formula 1.


To be more exact, I think the first great Pentium was the 133, but the 75 is the first that was a real, proper jump in performance from a fast 486 and represented decent price/performance.

It didn't help that the earliest P5 Pentiums ran on a 5 V rail. Newer revisions, starting with the P54 core, used 3.3 V, which helped keep the chips cool.

The Pentium was great, but the 60 and 66 MHz versions were not liked; they ran way too hot.

They ran on 5 V supplies, and it was only later that the whole architecture was changed to 3.3 V with the 90 and 100 MHz Pentiums (which were then discovered to have the infamous FDIV bug).

I think given the price, people also expected a performance boost similar to going from 386 to 486. What also made the Pentium confusing is that during this time Intel introduced PCI.

From a 486 with VLB to a Pentium with PCI everything became a lot nicer.


Many tasks perhaps, but running Quake was not one of them.

Yeah, it does alright and is a significant difference from a DX/2, but Quake came out in ’96 and the P60 came out as a super expensive workstation-class CPU in ’93. If you were a gamer in ’96, it is unlikely you were rocking a P60, because it was never good value for money.

You could play 320x200 Quake acceptably on a P60. On a DX4 too, though barely - my family had both in the mid 90s. I'd be surprised if Quake is playable on a DX2.

I haven't mentioned America or any other continent. It is the Europeans who are shouting about sovereignty right now.

Well, no one has mentioned computer hardware until you did.

Surely you understand how "all the motherboards are made in Taiwan" is less of an immediate risk to sovereignty than "all of our business and personal data is stored on American servers and subject to US law".

It would be nice if Europe could produce its own computers, but right now no one can except China, so what is your point? That limited sovereignty efforts undertaken in the realm of reality are futile and that enables you to get some cheap shots in for whatever reason?


Computing is the software and the hardware. So you're right, I feel that it is futile.

Well, you can use the old hardware which you've already got if you get cut off from foreign suppliers. But the same is true for software. It's even more true for software.

If the French government and other Europeans were serious about reducing or eliminating dependency on American cloud services, they should switch to older versions of MS Office and MS Windows and be done with it. No need to retrain your workers, and it's a realistic and speedy way to implement it.


> they should switch to older versions of MS Office and MS Windows and be done with it

That does not make any sense at all. These are full of known security vulnerabilities.


There is one very serious issue with software: it needs updates for security issues that are uncovered. And it might be built to require access to MS cloud services to work. Getting rid of these problems is basically equivalent to adopting open source products.

Unfortunately that’s an unacceptable security risk, especially for a government.

The same is not true in Europe, so there's not a huge Dell, HP, or IBM equivalent.

In the 90s and up until the early 00s we used to have quite a few pretty serious contenders, but they are all dead now: ICL, Siemens-Nixdorf, Tulip, Bull, Olivetti, etc.


Sometimes organizations need to undertake work that is not friction free to achieve longer term goals.

> are there any machines still running on a Z80?

A LOT of them. Zilog only announced its discontinuation in 2024.

But that also means that there are A LOT of them out there, and they are cheap and generally extremely reliable parts. So if you rely on a device with a Z80 in it and you're worried about the CPU failing you can have hundreds of these things on the shelf for ~no money.

So I would say it's of limited utility for industrial applications for now simply because scarcity is not an issue for the real thing. This might change in the future so it's good that projects like this exist.


It's sad Z80 production was discontinued. There are some niche IC manufacturers that specialize in legacy parts (e.g. Rochester Electronics comes to mind). It would have been nice if Zilog had passed on manufacture of the good ol' Z80 to a manufacturer like this. Even if it's just small production batches every couple of months/years or so.

There'll be plenty of hobbyist and/or legacy industrial/niche applications for a looong time to come.


The Commodore REU (RAM Expansion Unit) architecture for the C64/C128 allows for up to 16 MiB: 256 banks of 256 pages of 256 bytes.

Due to the lack of support hardware in the C64 (no hardware RAM bank switching/MMU), this memory is not bank-switched and directly addressable by the CPU; it's copied on request by DMA into actual system RAM. But in some sense, a C64 with a 16 MiB REU is a 6502 with 16 MiB of RAM.

But yeah, you want CPU-addressable RAM with real bank switching. You couldn't really do 16 MiB, since you wouldn't want to bank-switch the entire 64 KiB memory space. The Commander X16 (a modern hobbyist 6502 computer) supports up to 2 MiB by having hardware capable of switching 256 banks into an 8 KiB window (2 MiB / 256 banks = 8 KiB).

Let's say you design something with 32 KiB pages instead -- that seems kind of plausible, depending on what the system does -- you could then do 256 x 32 KiB = 8 MiB and still have 32 KiB of non-paged memory space available. I think this looks like just about the maximum you would want to do without the code or hardware getting too hairy (rough model of such a scheme below).
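
(A Python sketch of that hypothetical scheme: a bank register, latched by an I/O write in real hardware, selects which of 256 banks appears in the upper 32 KiB window, while the lower 32 KiB stays fixed:)

  PAGE = 32 * 1024                     # size of the banked window

  fixed_ram = bytearray(PAGE)          # $0000-$7FFF, always visible
  banked_ram = bytearray(256 * PAGE)   # 256 banks x 32 KiB = 8 MiB
  bank_reg = 0                         # set via an I/O register write

  def read(addr: int) -> int:
      """Model of the CPU's view of the 64 KiB address space."""
      if addr < PAGE:                  # low half: fixed RAM
          return fixed_ram[addr]
      return banked_ram[bank_reg * PAGE + (addr - PAGE)]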

