Just a note about folklore.org. This site is just fantastic. I've spent many hours reading the fun stories that Andy's posted. They have a way of instantly transporting me back to the late 70's/early 80's Silicon Valley.
The site was created and most of its articles were written by Andy Hertzfeld, who was an engineer on the original Mac team. The development of the original Mac is explicitly the subject of the site. Not sure why anyone would expect it to focus on anything outside of Apple...
> We were reluctant to show it to Steve, knowing that he would want to commandeer it, but he heard about it from someone and demanded to see it. We showed it to him, and, unfortunately, he loved it. But he also insisted that Apple owned all the rights to it, even though we had developed it in our spare time.
This kind of greed is so sad. It would have cost Steve pocket lint to compensate them fairly for that contribution. Probably a new car or family vacation would have made them feel like they got something out of this. I think at this point, both were fairly wealthy, so they probably only wanted some token of gratitude.
I wonder how many great products remained toys left on engineers' floppy disks because of this greedy attitude and the environment it created? This is the epitome of "no good deed goes unpunished."
I unreservedly agree that Steve should have compensated these engineers for their work, even if only for the selfish reasons of encouraging Apple's engineers to push engineering boundaries and slowing the rate at which engineers left.
But by the same token, these engineers should have known that, as full-time employees, work they did during "outside hours" belonged to their employer. And I think that's fair, since that work relied upon knowledge and materials acquired through their employer. Not to mention access to people like Bud Tribble whenever you needed it!
The moral of the story is that if you want to engage in skunkworks and you think you should be rewarded for its outcomes, talk to your manager ahead of time (in abstract terms, if necessary) about potential bounties or bonuses for doing such work. If you don't set any expectations ahead of time, expect nothing.
At the time, a CP/M machine was a business machine, because it could run a word processor, a spreadsheet, and dBase. (Literally the name of the database program from Ashton-Tate.) So you could buy a "serious" CP/M machine, but not an Apple II, which was an educational/hobbyist machine.
But if you bought an Apple II with the CP/M card, it was a business machine again.
The first spreadsheet, VisiCalc, was developed and first released on the Apple II (and apparently accounted for lots of its sales) - native CP/M spreadsheets, which came along later, were frankly a bit crap.
But I agree, there was more business software on CP/M.
Exactly. I still have a //e on my desk. I learned how to use VisiCalc because a high school friend's dad was an exec at Symbolics, and we would swap back and forth between playing games and trying to figure out cool things to do with spreadsheets.
I swore I would never leave my Apple //e, at least until Borland released Turbo Prolog, and then I absolutely had to have a PC.
> but not an Apple II, which was an educational/hobbyist machine
Which makes it doubly interesting that the Apple II itself eventually suffered the same fate, ending up as an add-in card so Macintosh LC systems could run old edusoft: https://en.wikipedia.org/wiki/Apple_IIe_Card
I purchased one in 1984 to learn CP/M and Z-80 assembly on the Apple II. Later I used it to learn Fortran and Turbo Pascal, and finally bought a PCPI Applicard to speed things up. The Z-80 SoftCard may be one of the best cards in the Apple II's history, IMHO.
I had that card for my Apple II (still have it in fact). It was solely to run WordStar. Once AppleWorks was released, I never booted into CP/M again. (For some reason, I never used Apple Writer. I can't recall why not.)
This is a fantastic story. For the folks doing a side hustle alongside your daily job, the last few paragraphs regarding ownership of results are worth reading carefully.
I think it's weird that Apple's first few products - which were definitely world-changing - have so many stories around them, but the iPhone, which is arguably a waaaay more influential device, has none that I've seen.
There have been plenty of articles and stories from insiders about the creation of the iPhone, iPod, and other important Apple products. I’ve seen many in Wired and other tech publications. They may not have the same candor and charm of a folklore.org posting, but they are fascinating all the same.
It’s wonderful that Hertzfeld et al. are willing to share so much great technical and personal information about their time in that Apple era. This is the kind of stuff that people like us who read sites like this love.
I fear that similar stories of the modern era will be forever buried under the threat of lawsuits from violating NDAs and that’s a real shame. Stories like this need to be told.
The engineering work was far simpler then. A single person can reasonably understand the whole of how an Apple II computer is designed and works, end to end. A small team of such people interacted in highly productive ways to iterate that innovation, yielding many interesting tales.
The iPhone is many orders of magnitude more complex. It took thousands of engineers to design and build it, each working on a tiny piece. Entire subsystems (e.g. the modem chip) are wholly outsourced to other companies. There are no people who understand fully how more than a small piece of it works. The yield is a series of sprint planning meetings.
Apple was also a much smaller company then, steeped in the (unthinkable today) openness of 70s/80s computing culture. Apple itself grew out of the Homebrew Computer Club, a hobbyist meetup where people swapped designs and technical information freely.
"Doing interesting engineering things with computer designs" was perceived as a very small niche of purely technical interest to a few geeks and nerds. The culture around the engineering wasn't at all locked down with NDAs and battallions of lawyers - why bother, the economic value of the technical information was seen as marginal and not worth bothering over. How times change!
Fewer people worked on these early products, and they were involved basically end-to-end, so the people involved collected a fair bit of interesting stories. iPhone development probably has tons of interesting stories associated with it, but those stories are spread across so many people that you're unlikely to find a single person like Andy who knows many of them first hand.
I have heard some of the funny behind-the-scenes stories around some of the iPhone versions from Apple people, but they are not published anywhere, for obvious reasons.
Many of those people still work there, and those product lines are still being sold.
Are you sure about that? There must be some stories out there... There are some great books about the Xbox written by Dean Takahashi a few years back, in the same general tech era.
Also, if you like folklore.org, you should check out Alex St. John's stories about Windows 95. He's a bit of an unreliable narrator, but his stories are really interesting.
> The 6522 chip had a timer which could generate an interrupt at a specified interval. The problem was synchronizing it with the video, since the video generation was not accessible to the processor. Burrell solved the problem by wiring up the spare flip flop to the low bit of the data bus, using it to latch whatever data the video was displaying so the processor could read it.
> To synchronize with the video, Burrell had me fill the Apple II's frame buffer so the low bit was on most of the time, but set off at the end of the last scan line. I wrote a routine to sit in a tight loop, reading the latch. When the low bit changed, we would know the vertical blanking interval had just begun.
I am confused about this. What is the low bit used for aside from this synchronization scheme? I assumed it would correspond to a color palette index, or an address into a character ROM. Why is it available to be used for synchronization?
The gist is that the last byte of each horizontal raster is not displayed on-screen, but with the hack the 6502 can see it. So if you set the low bit of the last byte to 1 for every horizontal line except the last one, now you know when the hardware has sent out the last byte of the last line. Vertical Blank comes right after that.
Ah, thanks. Like the poster you responded to, probably, the actual bit (hah) that I was missing was that the low bit was apparently not used by the framebuffer in any way, i.e. it's available for synchronization because changes to it are not visible on the screen.
- Everything in the Apple II is synchronized with a master clock, including the video.
- The video chip in some 8-bit systems in this era could generate an IRQ when it was generating a specific raster line--the VIC-II in the Commodore 64 for example. This enabled split screen effects or updating during vertical blank (set interrupt to happen at last scan line).
- Apple II's video did not have this capability.
- This hardware adds a VIA, with the timer-generated interrupt. Since the VIA clock is synced with the same clock that everything else is synced with, including the video, it should be possible to generate an IRQ exactly once per frame. If that IRQ can happen at the right time, it can be used to effectively emulate a raster line IRQ.
- So, starting that timer at the right time: fill the framebuffer with 0x01, except for the last line, and keep peeking at what the video circuitry is putting out. The moment you no longer see 1s, you have a reference point for enabling the VIA timer IRQ at the right place (see the sketch after this list).
- Once you have the timer set you can clear the frame buffer and use it normally.
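Putting the steps above together, here is a rough, hedged sketch in C of how the calibration might look. The real code was 6502 assembly; the latch address, the VIA base address, and the last-scanline helper below are placeholders I made up, since the article doesn't give them.

```c
#include <stdint.h>

/* All I/O addresses below are PLACEHOLDERS: the card's latch and 6522 (VIA)
 * live in whatever slot it occupies, and the article doesn't give addresses. */
#define HIRES_PAGE1   ((volatile uint8_t *)0x2000)   /* Apple II hi-res page 1 */
#define HIRES_SIZE    0x2000u
#define VIDEO_LATCH   (*(volatile uint8_t *)0xC0C0)  /* hypothetical: the spare flip-flop latching what the video scanner is fetching */
#define VIA           ((volatile uint8_t *)0xC0D0)   /* hypothetical: the card's 6522 */
#define VIA_T1C_L     VIA[0x04]                      /* standard 6522 register offsets */
#define VIA_T1C_H     VIA[0x05]
#define VIA_ACR       VIA[0x0B]
#define VIA_IER       VIA[0x0E]

/* NTSC Apple II: 65 CPU cycles per scan line * 262 lines = 17030 cycles per frame. */
#define CYCLES_PER_FRAME 17030u

/* Hypothetical helper: clearing the low bit of the last scan line's bytes
 * needs the Apple II's interleaved hi-res address layout, omitted here. */
void clear_low_bit_of_last_scanline(void);

void sync_via_timer_to_vblank(void)
{
    /* 1. Make the video scanner fetch bytes whose low bit is 1 almost all
     *    of the time, except at the end of the last displayed line. */
    for (uint16_t i = 0; i < HIRES_SIZE; i++)
        HIRES_PAGE1[i] = 0x01;
    clear_low_bit_of_last_scanline();

    /* 2. Spin on the latch.  (In the real 6502 loop, the article says the
     *    loop's cycle count was chosen relatively prime to the frame length
     *    so the samples can't phase-lock and miss the transition.) */
    while (VIDEO_LATCH & 0x01)
        ;   /* low bit dropped: the last displayed byte just went by */

    /* 3. We now have a fixed reference point just before vertical blank.
     *    Start T1 free-running with a one-frame period so it interrupts at
     *    (roughly) the same spot in every subsequent frame. */
    VIA_ACR   = 0x40;                                     /* T1 continuous mode */
    VIA_T1C_L = (CYCLES_PER_FRAME - 2u) & 0xFF;           /* T1 free-run period is N+2 cycles */
    VIA_T1C_H = (uint8_t)((CYCLES_PER_FRAME - 2u) >> 8);  /* writing the high byte starts the timer */
    VIA_IER   = 0x80 | 0x40;                              /* enable the T1 interrupt */

    /* 4. The frame buffer can now be cleared and used normally; an IRQ
     *    handler (not shown) does the per-frame work from here on. */
}
```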
The Apple II has a very weird graphics mode in which each byte stores seven pixels plus a palette bit. On a monochrome monitor, the palette bit is unused and the seven pixel bits just become black or white dots on the screen. On an NTSC color TV there's an additional color signal added to each block of seven pixels, and through the use of artifact color the seven dots per byte becomes three-and-a-half color dots per byte.
At first I thought this might be referring to the palette bit - as this was almost certainly intended for B&W operation that bit is free to be used for other purposes. However, that's the high bit. Using the low bit would mean every seventh column of pixels would need to be white. Either that, or the engineer in question misremembered something and actually meant to say the palette bit.
I can't find any information or examples on it at the moment, but doesn't the palette bit shift the remaining 7 pixels one pixel when viewed as monochrome?
IIRC, it shifts them half a pixel, regardless of whether it's color or monochrome, and that's why it affects the palette.
In color mode, the on/off pixels form a pseudo-sine wave, and the phase of this wave determines the color: if it's on even pixels you get green, if it's on odd you get purple, if it's on even-and-a-half you get orange, and if it's on odd-and-a-half you get blue (give or take which alignments are even/odd and half/whole, since I can't remember, but that's the basic idea).
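A toy decoder makes the phase rule concrete. This is only a sketch of the principle, not a faithful NTSC model: it classifies a single lit hi-res pixel by the parity of its screen column and by the high ("palette") bit, and the particular even/odd-to-color assignment below is the commonly cited one, which may be flipped relative to real hardware conventions.

```c
#include <stdio.h>
#include <stdint.h>

typedef enum { BLACK, GREEN, VIOLET, ORANGE, BLUE, WHITE } Hue;

static const char *hue_name[] = { "black", "green", "violet", "orange", "blue", "white" };

/* Hue of pixel `i` (0..6) of hi-res byte `b` drawn starting at screen column `x0`,
 * ignoring the neighbouring-pixel rules (adjacent lit pixels merge to white, etc.). */
static Hue isolated_pixel_hue(uint8_t b, int i, int x0)
{
    if (!((b >> i) & 1))
        return BLACK;                   /* pixel bit off -> no dot */

    int even    = ((x0 + i) & 1) == 0;  /* phase of the dot within the color carrier */
    int shifted = (b & 0x80) != 0;      /* high bit: delays the whole byte half a pixel */

    if (!shifted)
        return even ? VIOLET : GREEN;   /* "whole pixel" phase pair */
    else
        return even ? BLUE   : ORANGE;  /* "half pixel" phase pair  */
}

int main(void)
{
    /* Same bit pattern, once with the high bit clear and once set: the dots
     * keep their positions (give or take half a pixel) but swap hue pairs. */
    uint8_t plain   = 0x2A;             /* 0101010 -> pixels 1, 3, 5 lit */
    uint8_t shifted = 0x2A | 0x80;

    for (int i = 0; i < 7; i++)
        printf("pixel %d: %-6s vs %s\n", i,
               hue_name[isolated_pixel_hue(plain,   i, 0)],
               hue_name[isolated_pixel_hue(shifted, i, 0)]);
    return 0;
}
```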
The Apple ][ display was effectively a 1 bit per pixel display (not exactly, but close enough). I understood this to mean that while synchronizing they painted the entire screen white except the lower right corner pixel which was black. Once synced the 6522 could generate the timing signal it needed independently and the display could do whatever it needed.
The prime number offset mentioned in the article is the other necessary piece of the hack. And a beautiful one at that.
That's weird though, a screen painted like that isn't very useful. It seems to me that the bit used for synchronization here either has no effect, or a not very visible one.
I think the idea there is that you only paint it that way for one frame (1/60 sec, usually?). Similar to how some old hitscan light guns[0] on arcade machines worked.
The idea is that you do the calibration once (e.g. on program start), then the timer interrupt handles it until and unless it gets desynced. So only for one frame, total (per run of the mouse-using program).
Ah, that makes more sense, thank you. I thought this was constant synchronization, but IIRC the Apple II only has one oscillator from which all clocks are derived (as was common for 8 bit machines).
IIUC, it still only has one clock (oscillator), but the mouse card has a separate timer (counter) in its VIA, which needs to be synchronized with the video hardware because, at power-on, the VIA isn't initialized with the same number-of-clock-ticks-per-(vblank)-interrupt that the video hardware runs at.
Yes, I understand. The point is, with a single oscillator (emphasis, I mean oscillator) being the source of all derived clocks, synchronizing once is sufficient, as all clocks will keep a fixed relation.
If, say, the video card had its own oscillator (comparable to how modern video cards do or did), the clock domains would drift from each other and frequent resynchronization would be necessary (but then you'd just use the interrupt that would most likely be added in this scenario).
But I think you helped me understand the scheme. I'd add that, according to the article, a single frame is not necessarily enough, given the description of how synchronization can be missed on the first attempt, but this is hardly relevant, given that the synchronization still likely happens only once and within a brief timeframe.
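For what it's worth, here's a back-of-the-envelope sketch of the hypothetical separate-oscillator case (the 50 ppm mismatch is an arbitrary figure picked for illustration, not a measured one):

```c
#include <stdio.h>

int main(void)
{
    const double cycles_per_frame = 17030.0;  /* 65 cycles/line * 262 lines (NTSC Apple II) */
    const double cycles_per_line  = 65.0;
    const double mismatch_ppm     = 50.0;     /* assumed error between two independent crystals */

    double drift_per_frame = cycles_per_frame * mismatch_ppm * 1e-6;
    double frames_per_line = cycles_per_line / drift_per_frame;

    printf("drift per frame : %.2f CPU cycles\n", drift_per_frame);
    printf("one scan line off after ~%.0f frames (~%.1f s at 60 Hz)\n",
           frames_per_line, frames_per_line / 60.0);

    /* With a single shared oscillator the mismatch is exactly zero, so the
     * once-calibrated VIA timer never walks away from vertical blank
     * (beyond the fixed rounding of the programmed period). */
    return 0;
}
```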
The Apple didn't even have palettes. Each bit represented an absolute color based on its relationship to other bits and the color phase signal inherent in the NTSC signal.
Loss of color fidelity most likely meant being limited to only a couple of the four actual color shades available at the time, due to the use of the palette bit.
• Engineers work on projects after hours for the sheer joy of engineering.
• Management not compensating these efforts encourages the engineers to leave (and probably the very ones you actually want to keep).
• Clever hack: creating a loop with a prime-number cycle-count relationship to another loop, so as to avoid "phase locking" (see the sketch after this list).
• Clever hack: two-chip peripheral card for the Apple II.
• The utility (albeit slow-speed) of the Apple II design that allowed for a two-chip peripheral card solution in the first place.
• Bill, Bill, Bud, Burrell, Battlestar Galactica.
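On the prime-number hack in the list above, a small demo of why it works: if the polling loop takes P cycles and a frame takes F cycles, consecutive samples land at offsets 0, P, 2P, ... mod F, and when gcd(P, F) = 1 those offsets eventually cover the whole frame, so a narrow window can't be dodged forever. F below is the real NTSC Apple II frame length; the two loop lengths are arbitrary examples, not the article's actual values.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

/* Count how many distinct phase offsets (mod F) a loop of period P visits. */
static unsigned distinct_offsets(unsigned P, unsigned F)
{
    bool *seen = calloc(F, sizeof *seen);
    unsigned count = 0, offset = 0;
    for (unsigned i = 0; i < F; i++) {   /* F steps is enough to traverse the whole orbit */
        if (!seen[offset]) { seen[offset] = true; count++; }
        offset = (offset + P) % F;
    }
    free(seen);
    return count;
}

int main(void)
{
    const unsigned F = 17030;   /* cycles per NTSC Apple II frame (65 * 262) */

    /* 17030 = 2 * 5 * 13 * 131.  A loop length sharing any of those factors
     * keeps hitting the same few offsets (phase lock); a length relatively
     * prime to F eventually samples every single cycle of the frame. */
    printf("P = 23 (coprime with F) : visits %u of %u offsets\n", distinct_offsets(23, F), F);
    printf("P = 26 (shares 2 and 13): visits %u of %u offsets\n", distinct_offsets(26, F), F);
    return 0;
}
```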