.init is not necessarily 0. For int it is, but for float it's NaN, for char it's 255, and for an enum it's whatever you have decided it is (the first member):
enum thing {
    first = -1,
    second,
    // ...
}
This way, a variable of type thing has -1 as its init value.
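A minimal, self-contained sketch that prints these defaults (std.math.isNaN is only there because NaN never compares equal to itself):

import std.stdio;
import std.math : isNaN;

enum thing { first = -1, second }

void main() {
    int i;    // int.init == 0
    float f;  // float.init is NaN
    char c;   // char.init == 0xFF (255)
    thing t;  // thing.init == thing.first == -1
    writeln(i);                      // 0
    writeln(f.isNaN);                // true
    writeln(cast(int) c);            // 255
    writeln(t, " = ", cast(int) t);  // first = -1
}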
What makes the choice of < > for template parameters bad becomes apparent when someone tries to nest templates:
a<b<arg>>
and now what was meant as two closing angle brackets gets parsed as a right-shift operator.
That's one of the reasons they had the genius idea of finding something else in D:
a(template params)(runtime params) at declaration
a!(template params)(runtime params) at invocation, with type deduction and parenthesis omission often making the template syntax disappear completely.
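For illustration, a minimal sketch of the three forms (the twice template is made up):

import std.stdio;

// a(template params)(runtime params) at declaration
T twice(T)(T x) { return x + x; }

void main() {
    writeln(twice!(int)(21));  // a!(template params)(runtime params) at invocation
    writeln(twice!int(21));    // single-token argument: parentheses omitted
    writeln(twice(21));        // type deduced, the template syntax disappears completely
}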
I just wrote a somewhat bigger program in D, and I mostly used UFCS chains in the toString() overrides to get better debug output.
There are also some frequent idiomatic forms, like conversion:
bar = foo.to!int is much more readable than bar = to!int(foo)
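A small example of both forms, plus a longer UFCS chain (foo and bar as above; the chain itself is made up):

import std.conv : to;
import std.algorithm : filter, map;
import std.array : array;
import std.stdio;

void main() {
    string foo = "42";
    int bar = foo.to!int;        // reads left to right
    assert(bar == to!int(foo));  // same call, nested form

    // UFCS chains compose left to right too
    auto squares = [1, 2, 3, 4]
        .filter!(x => x % 2 == 0)
        .map!(x => x * x)
        .array;
    writeln(squares);            // [4, 16]
}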
It slowed down to 1 MHz for I/O and Apple ][ compatibility.
I wouldn't call it a disaster; it was one mainly sales- and marketing-wise, and that also had a lot to do with the IBM PC coming out around the same time.
It was probably the most complex 6502 design, and it mainly consisted of discrete logic chips rather than the custom chips that other manufacturers were starting to use. It had advanced features like an additional addressing mode to access up to 512K of RAM without bank switching. (Plus two-speed arrow keys.)
It was a disaster for a lot of reasons but not because it was a bad architecture.
It overheated, unseated chips, had a non-functional clock chip, and suffered other kinds of terrible quality-control problems. It also had to compete against the IBM PC while Apple still hadn't even added lowercase input to the II+.
It was not just a bandwidth issue. I remember my first encounter with the Internet was on an HP workstation in Germany connected to South Africa with telnet. The connection went over a Datex-P (X.25) 2400 baud line. The issue with X.25 nets was that they were expensive. The monthly rent was around 500 DM, and each packet sent also had to be paid for at a few cents apiece. You would really try to optimize the use of the line, and interactive rsh or telnet traffic was definitely not ideal.
The NEC V20/V30 not so much, but the 80186 and all its specialized embedded variants from Intel (80186EA/EB/EC) and AMD (Am186EM) were extremely appreciated, as they allowed the use of normal MS-DOS compilers and software.
The Am186EM, we loved that one: 100-pin PQFP with an unmultiplexed bus, CMOS up to 40 MHz, including UART, SPI, etc.
Yes, the Am186EM was doubleplus good. I used it in an early interface between a Kodak DC-20 camera (early digital) and IrDA at high speed (1 Mb/s in those days).
Also liked the V20/V30; built a PC-card comm controller with those. You are correct, it was normal to use MS-DOS compilers, although I did have to get a special-purpose debugger (code; the interface was a UART) for the '186.
With those kinds of products, it's 'annoying' that Intel sort of gave away the embedded space.
(agree: 8051 is high up there in the 'microcontroller' space.)
Motorola's embedded CPUs had strange part numbers that made it difficult to recognize which CPU family a chip belonged to: the 68705 was 6800-derived, while the 68302 was 68000-derived.
As for the 6809, it looks like there were no embedded derivatives of it. The 68HC16 seems to be a 16-bit extended 6800, using a similar technique to the one the 65816 used to extend the 6502.