
Lynn Conway co-wrote "the book" on VLSI design, "Introduction to VLSI Systems", and created and taught the historic VLSI design course in 1978, the first in which students designed and fabricated their own integrated circuits. Her students included James Clark (SGI), who made the Geometry Engine, and Guy L. Steele (MIT), who made the Scheme microprocessor.

She invented superscalar architecture at IBM, only to be fired in 1968 after she revealed her intention to transition; 52 years later, in 2020, IBM formally apologized to her. She successfully rebooted her life, invented the VLSI design methodology, wrote the book on it, and personally taught it to industry pioneers who went on to found many successful companies based on it. She then became a trans activist who, by telling her story and building an online community, helped many people transition, find each other, avoid suicide, fight abuse and bigotry, and find acceptance.

Lynn Conway receives 2009 IEEE Computer Society Computer Pioneer Award:

https://www.youtube.com/watch?v=i4Txvjia3p0



I ended up teaching Carver Mead's course in its last year at Caltech, which was quite an experience. In many ways the course was (and is) horribly out of date, and not just because students were using open-source Berkeley tools to draw layouts. The real reason the course was obsolete is that Mead and Conway so thoroughly won the argument for creating rigorous abstractions that people no longer learn any other way.

It just seems so obvious today that you can create gates, you can create macros, you can create complex designs, and you can define the interface at every level so you can hook them up and they just work. That idea came out of Conway and the early pioneers of VLSI.

The same ideas are the core of how we work with libraries when doing software engineering, too.


> It just seems so obvious today that you can create gates, you can create macros, you can create complex designs, and you can define the interface at every level so you can hook them up and they just work. That idea came out of Conway and the early pioneers of VLSI.

And you can see the opposite of this in many early microprocessor designs, like the (original, NMOS) 6502 and Z80. There are a lot of highly idiosyncratic gate designs, heavily customized for the physical and electrical context they're used in. I won't deny that they're often very clever and space-efficient, but they were also extraordinarily time-intensive to design and weren't reusable. This made some complex designs possible within the limitations of the era's fabrication technology, but it wasn't an approach that would ever have scaled to larger designs.

One great example of this is this bit of 6502 overflow logic:

http://www.righto.com/2013/01/a-small-part-of-6502-chip-expl...
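For readers unfamiliar with that flag: signed overflow in two's-complement addition happens exactly when both operands have the same sign but the result's sign differs. Here's a minimal Python sketch of that condition (a standard software formulation of what the 6502's V flag computes for ADC, ignoring the carry input for simplicity; it is not the chip's actual gate-level trick, which the linked article describes):

```python
def add_overflows(a: int, b: int) -> bool:
    """True if the 8-bit two's-complement sum of a and b overflows."""
    r = (a + b) & 0xFF
    # Bit 7 of (a ^ r) and (b ^ r) is set only when the result's sign
    # differs from the corresponding operand's sign; overflow requires
    # the result to disagree with BOTH operands.
    return bool((a ^ r) & (b ^ r) & 0x80)
```

For example, 0x50 + 0x50 = 0xA0: two positive numbers producing a "negative" result, so V would be set.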


> There's a lot of highly idiosyncratic designs for gates, heavily customized for the physical and electrical context that they're used in - and I won't deny that they're often very clever and space-efficient, but they were also extraordinarily time-intensive to design, and weren't reusable.

Is this optimization now something that hardware design tools do automatically?


They optimize on a different level. Instead of trying to optimize the arrangement of individual transistors, you start with a set of standard cells which contain optimized transistor-level implementations of individual gates, and have your design tools optimize the placement and routing of those cells within a grid system.
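As a rough illustration of that flow, here's a toy Python sketch of the placement half: gate instances from a netlist are mapped to fixed-height library cells and packed left to right into rows on a grid. Cell names and widths are invented for illustration; real placers optimize wire length and timing, not just packing.

```python
# Hypothetical cell widths in grid units; real libraries characterize
# area, timing, and power per cell.
CELL_WIDTHS = {"NAND2": 3, "NOR2": 3, "INV": 2, "DFF": 8}

def place_in_rows(netlist, row_width):
    """Greedy row packing: fill each row of the grid left to right."""
    rows, current, used = [], [], 0
    for inst, cell in netlist:
        w = CELL_WIDTHS[cell]
        if used + w > row_width:   # row full: start a new one
            rows.append(current)
            current, used = [], 0
        current.append((inst, cell, used))  # (name, cell type, x offset)
        used += w
    if current:
        rows.append(current)
    return rows

rows = place_in_rows(
    [("u1", "NAND2"), ("u2", "INV"), ("u3", "DFF"), ("u4", "NOR2")],
    row_width=10,
)
```

The point of the abstraction is visible even in this toy: the placer never looks inside a cell, it only needs each cell's footprint and pins.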


Does that mean there's an opportunity for increasing performance by bringing collections of gates into scope for optimization? Or does that not actually let you decrease transistors very much?


There is some, but most of that actually gets pulled into the standard cell libraries (the gate libraries), which are very big collections of primitives. Most of them have a lot more than just the standard gates you'd think of: many 3-input gates, adder cells, multiplexer cells, flip-flops of all kinds, and all sorts of other basic building blocks that are micro-optimized. Cells tend to have a standard height of 7 or 9 "tracks," where a track is defined by the pitch of the lowest metal layer, and the optimization comes from reducing each cell's width. They also come in different sizes/drive strengths, so you can use the weak, small version on paths that are not critical and the bigger, faster versions on critical paths.
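The drive-strength selection described above can be sketched in a few lines of Python. The cell names, areas, and delay numbers here are made up for illustration; real tools use characterized timing models, but the shape of the decision is the same: pick the smallest cell that still meets timing under its load.

```python
# Hypothetical inverter variants: name -> (area, intrinsic delay,
# delay added per unit of output load). Numbers are invented.
LIB = {
    "INV_X1": (1.0, 10.0, 8.0),   # small, weak
    "INV_X2": (1.7, 10.0, 4.0),
    "INV_X4": (3.0, 10.0, 2.0),   # big, strong
}

def pick_inverter(load, budget):
    """Smallest-area inverter meeting the delay budget at this load."""
    fits = [(area, name) for name, (area, d0, k) in LIB.items()
            if d0 + k * load <= budget]
    if fits:
        return min(fits)[1]
    # Nothing meets timing: fall back to the fastest cell at this load.
    return min(LIB, key=lambda n: LIB[n][1] + LIB[n][2] * load)
```

On a relaxed path with a light load the small cell wins on area; on a heavily loaded critical path the tool upsizes.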


There is, but given the size of the design space, that's mostly done with a library of gates: synthesis and layout pick cells from the library and place them, often putting connected gates together. You could then merge gates in some smart way to save a few percent in area, but chances are you wouldn't gain much, because you'd have to shuffle all the other gates in that row a bit, and that would mess with timing elsewhere.

Also, routing (the wires between gates) constrains how close many gates can be, so for anything but regular arrays of gates there may be little point.


I'm not sure if you are talking about MAGIC for the layout. That is what I used back in the day. I was surprised to find out later in my professional career that this was created by John Ousterhout, the same person who created Tcl/Tk.

It's really amazing to me how versatile these early hackers were.


I am talking about MAGIC for layout, but there is a whole suite of tools from the same lab. IRSIM is a switch-level simulator that's also very useful, and there are a number of other tools for schematic capture, analog stuff, and a whole digital synthesis flow.

It's all open-source, and if you're building a chip with an SCMOS process or another process with lambda-based design rules, it's still a pretty nice set of tools to use.

Incidentally, if you're building open-source silicon, MAGIC is somewhere in that process, and I assume the Berkeley logic placer is still there even though the front-end is usually something else.


Ousterhout (and/or his students) has kept on hacking: Homa, Raft, RAMCloud...

I also like his software design book.

https://cs.stanford.edu/~ouster


In the book Dealers of Lightning by Michael Hiltzik (about Xerox PARC), chapter 21, "The Silicon Revolution," details the work Lynn Conway, Carver Mead, and Doug Fairbairn did on VLSI at PARC.

Excerpts - text in double parentheses provided for context:

"Lynn Conway and I," Fairbairn remembered, "were the ones who said, 'This VLSI is hot shit.'"

For the next year, Caltech and PARC educated each other. Mead transferred his theories about microelectronics and computer science, and Conway and Fairbairn paid him back by developing design methods and tools giving engineers the ability to create integrated circuits of unprecedented complexity on Alto-sized workstations.

...If the computer lab -- particularly ((Butler)) Lampson, who commanded management's respect -- continued to carp at the money being spent on the hazy potential of VLSI, who knew how long she could survive at PARC?...

While discussing this one day with Mead and Fairbairn she realized the problem was not just scientific, but cultural. VLSI had not been around long enough even to generate textbooks and college courses -- the paraphernalia of sound science that, she was convinced, would force everyone else to take it seriously.

"We should write the book," she told Mead. "A book that communicates the simplest, most elegant rules and methods for VLSI design would make it look like a mature, proven science, like anything does if it's been around for the ten or fifteen years you normally have behind a textbook."

Mead was skeptical...

That's where you're wrong, she replied. What was the aim of all the technology that surrounded them at PARC, if not to facilitate just the project she was proposing? They had Altos ((computer workstations)) running Bravo ((word processor)), a network to link long-distance collaborators, and high-speed laser-driven Dover printers to produce professional-looking manuscripts.

Their collaboration that summer on what became the seminal text of the new technology was only one of Conway's efforts to distill and spread the VLSI gospel. The same year she agreed to teach a guest course at MIT (using the first few chapters of the still-maturing textbook), then printed up her lecture notes for instructors at an ever-enlarging circle of interested universities. By mid-1979 she was able to offer an additional incentive to a dozen schools: If they would transmit student designs to PARC over the ARPANET, PARC would arrange to have the chips built, packaged, and returned to the students for testing.

((Jim)) Clark understood at once that the computing efficiency VLSI offered was the key to expanding the potential of computer graphics. That summer he essentially relocated to PARC, taking over a vacant office next door to Conway's and steeping himself in VLSI lore. Within four months he had finished the Geometry Engine chip, the product of that summer's total immersion.





