Tsiklon's comments | Hacker News

The DualShock/Dual Analog were not quite the same as the DualShock 2; the face buttons on the DualShock 2 were advertised as being pressure sensitive, and some games were capable of using this.


Funnily enough, this subsequently caused issues with PS2 games ported to Xbox. Metal Gear Solid 2 made heavy use of the pressure-sensitive buttons for weapon aiming vs. shooting. I recall the Xbox didn't have pressure-sensitive buttons, so it had to do something different to achieve this (I'd need someone else to fill in the gaps here, I never owned an Xbox!)


Original Xbox had the pressure-sensitive buttons, but 360 did not, which specifically caused issues for MGS 2 and 3 in the HD Collection. Twin Snakes on the Gamecube suffered similarly, requiring awkward combinations of Y and A to lower your pistol or raise your automatic weapon without firing.


THAT'S WHY. As an avid Metal Gear enthusiast during the release of MGS2, I remember having a nearly impossible time finding MGS2 Substance for PS2 when I wanted to do my first real replay way back in the day. I imagine it was the more popular version since it had working pressure-sensitive buttons.


The original Xbox actually did have pressure sensitive face buttons. Off the top of my head, the only game I know that used them is Vexx (which strangely didn't use them on the PS2...)


The PS3 had those too, but they were dropped for the PS4 and PS5. I did read that it caused a few headaches for the classics ported forward.

Speaking of oddball controller features, I was a bit surprised the PS5 retained the little trackpad, given how little use it seemed to get on the PS4, even in obvious situations like Assassin's Creed where you're moving an on-screen cursor around a map, but only with the thumbstick.


Renaissance fairs don’t really exist in Europe. Most countries have their own traditional festivals and customs which date back to that time period.


I don't know what a renaissance fair is like exactly, but the Netherlands has several medieval and fantasy themed festivals.

Castle Fest is centered around music (I think it was originally organized by the folk band Omnia), but has tons of combat and archery demonstrations, a market with clothes and weapons (blunt steel and boffer), esoteric stuff and anything else that vaguely fits in there.

Elfia, formerly known as the Elf Fantasy Fair, is a fantasy-themed cosplay festival held twice a year.

Keltfest (10,000 visitors, apparently) is about ancient crafts, music, and archery, with workshops and demonstrations.

I've also been to a few smaller ones I forgot the names of, but these are all pretty big, I think.


> Renaissance fairs don’t really exist in Europe.

We have them in Slovenia. In the summer local castles will put on a fair for the tourists and kids. Sometimes multiple!

Typically you get a visiting troupe or two from Italy, they like to do flag twirling shows. There's always a group of folks from Czechia who put on an armored fighting competition. I've seen jousting tournaments too. And then you have a slew of local artisans selling wares roughly inspired by the time period. There's often an outdoor play or two, maybe belly dancing.

It’s all great fun.


There were at least a dozen this past summer. There was one in Žovnek Castle two weeks ago. A couple of weeks before that, in Ptuj. Before that, in Celje. Before that, there was a Roman-era festival in Ptuj, which was seriously cool. And the best medieval-themed festival was in Žužemberk.

I went to all of them because I have young kids and they love that stuff. It's a fun day out. Also gives us an excuse to wear all of the arms and armor we've collected, lol.

There are lots of these in Austria, too, BTW. There was a two-week-long joust fair/festival by Millstätter See in early August!

In truth, these festivals are all over Europe. They're just not well advertised, so you have to look for them.

The next one in the region, by the way, is the medieval-themed Advent festival at the castle in Friesach, Austria. I'll be there for sure.


> In truth, these festivals are all over Europe. They're just not well advertised, so you have to look for them.

Considering how crowded they feel, I’d say the advertising is plenty sufficient. They’re local events with limited space.


That's true of some of them. Some of the other ones -- usually the ones in out-of-the-way castles (but also, surprisingly, both of the events in Ptuj) -- could have been twice as busy, and they'd still have had plenty of room.

Ah, anyway, it's nice that there's always something to do on a summer weekend...


At least here in Spain we do have medieval markets not tied to a specific festivity.

Not sure how much they resemble American ren fairs, but they tend to have falconry exhibitions, smiths forging stuff, sellers of wooden toys and artisan jewelry...


At a surface level, that sounds very similar. There’s also a lot of selling of art, swords, axes, period clothing, and such. Oh, and it’s the US so there’s lots and lots of food.


I wonder if some part of it is because in Europe it's local history, while in America it's something imported from an idealized past.


All 3 are appliance manufacturers. Miele and Henry are predominantly vacuum cleaner brands.

Henry hoovers are ubiquitous in the professional market in the UK and well regarded for durability, performance, and the cute face all their cleaners have. Essentially anyone in the UK will have used one, or seen one being used.


I like how he describes the mood or feeling of the chimneys he’s climbed. Villainous or friendly.


I think Itanium was a remarkable success in some other ways. Intel utterly destroyed the workstation market with it. HP-UX, IRIX, AIX, Solaris.

Itanium sounded the death knell for all of them.

The only Unix to survive with any market share is macOS (arguably because of its lateness to the party), and it has only relatively recently gone back to a more bespoke architecture.


I'd argue it was Linux (on x86) and the dot-com crash that destroyed the workstation market, not Itanium. The early 2000s was awash in used workstation gear, especially Sun. I've never seen anyone with an Itanium box.


While Linux helped, I'd argue the true factor is that x86 failed to die as projected.

The common attitude in the 80s and 90s was that legacy ISAs like 68k and x86 had no future. They had zero chance to keep up with the innovation of modern RISC designs. But not only did x86 keep up, it was actually outperforming many RISC ISAs.

The true factor is out-of-order execution. Some contemporary RISC designs were out-of-order too (especially Alpha, and PowerPC to a lesser extent), but both AMD and Intel were forced to go all-in on the concept in a desperate attempt to keep the legacy x86 ISA going.

Turns out large out-of-order designs were the correct path (mostly because OoO has the side effect of being able to reorder memory accesses and execute them in parallel), and AMD/Intel had a bit of a head start, a pre-existing customer base, and plenty of revenue for R&D.

IMO, Itanium failed not because it was a bad design, but because it was on the wrong path. Itanium was an attempt to achieve roughly the same end goal as OoO, but with a completely in-order design, relying on static scheduling. It had massive amounts of complexity that let it re-order memory reads. In an alternative universe where OoO (aka dynamic scheduling) failed, Itanium might actually be a good design.

Anyway, by the early 2000s, there just wasn't much advantage to a RISC workstation (or RISC servers). x86 could keep up, was continuing to get faster and often cheaper. And there were massive advantages to having the same ISA across your servers, workstations and desktops.


Bob Colwell mentions originally doing out of order design at Multiflow.

He was a key player in the Pentium Pro out of order implementation.

https://www.sigmicro.org/media/oralhistories/colwell.pdf

"We should also say that the 360/91 from IBM in the 1960s was also out of order, it was the first one and it was not academic, that was a real machine. Incidentally that is one of the reasons that we picked certain terms that we used for the insides of the P6, like the reservation station that came straight out of the 360/91."

Here is his Itanium commentary:

"Anyway this chip architect guy is standing up in front of this group promising the moon and stars. And I finally put my hand up and said I just could not see how you're proposing to get to those kind of performance levels. And he said well we've got a simulation, and I thought Ah, ok. That shut me up for a little bit, but then something occurred to me and I interrupted him again. I said, wait I am sorry to derail this meeting. But how would you use a simulator if you don't have a compiler? He said, well that's true we don't have a compiler yet, so I hand assembled my simulations. I asked "How did you do thousands of line of code that way?" He said “No, I did 30 lines of code”. Flabbergasted, I said, "You're predicting the entire future of this architecture on 30 lines of hand generated code?" [chuckle], I said it just like that, I did not mean to be insulting but I was just thunderstruck. Andy Grove piped up and said "we are not here right now to reconsider the future of this effort, so let’s move on"."


> Bob Colwell mentions originally doing out of order design at Multiflow.

Actually no, it was Metaflow [0] who was doing out-of-order. To quote Colwell:

"I think he lacked faith that the three of us could pull this off. So he contacted a group called Metaflow. Not to be confused with Multiflow, no connection."

"Metaflow was a San Diego group startup. They were trying to design an out of order microarchitecture for chips. Fred thought what the heck, we can just license theirs and remove lot of risk from our project. But we looked at them, we talked to their guys, we used their simulator for a while, but eventually we became convinced that there were some fundamental design decisions that Metaflow had made that we thought would ultimately limit what we could do with Intel silicon."

Multiflow, [1] where Colwell worked, has nothing to do with OoO; its design is actually way closer to Itanium. So close, in fact, that the Itanium project is arguably a direct descendant of Multiflow (HP licensed the technology, and hired Multiflow's founder, Josh Fisher). Colwell claims that Itanium's compiler is nothing more than the Multiflow compiler with large chunks rewritten for better performance.

[0] https://en.wikipedia.org/wiki/Metaflow_Technologies

[1] https://en.wikipedia.org/wiki/Multiflow


I thoroughly acknowledge and enjoy your clarification.


> The true factor is out-of-order execution.

I'm pressing X: the doubt button.

I would argue that speculative execution/branch prediction and wider pipelines, both of which OoO largely benefited from, contributed more than OoO itself did; it was never the sole factor. In fact I believe improvements in the semiconductor manufacturing process node contributed more to the IPC gains than OoO itself.


To be clear, when I (and most people) say OoO, I don't mean just the act of executing instructions out-of-order. I mean the whole modern paradigm of "complex branch predictors, controlling wide front-ends, feeding schedulers with wide back-ends and hundreds or even thousands of instructions in flight".

It's a little annoying that OoO is overloaded in this way. I have seen some people suggesting we should be calling these designs "Massively-Out-of-Order" or "Great-Big-Out-of-Order" in order to be more specific, but that terminology isn't in common use.

And yes, there are some designs out there which are technically out-of-order, but don't count as MOoO/GBOoO. The early PowerPC cores come to mind.

It's not that executing instructions out-of-order benefits from complex branch prediction and wide execution units; OoO is what made it viable to start using wide execution units and complex branch prediction in the first place.

A simple in-order core simply can't extract that much parallelism; the benefits drop off quickly after two-wide superscalar. And accurate branch prediction is of limited usefulness when the pipeline is that short.

There are really only two ways to extract more parallelism. You either do complex out-of-order scheduling (aka dynamic scheduling), or you take the VLIW approach and try to solve it with static scheduling, like the Itanium. They really are just two sides of the same "I want a wide core" coin.
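As a rough illustration (a toy C sketch of my own, not taken from either design): a loop with two independent dependency chains is exactly the kind of code a wide core wants to overlap. An OoO core discovers the independence dynamically at runtime; a VLIW/EPIC design needs the compiler to prove it statically and pack the operations into the same bundle.

    #include <stdio.h>

    /* Toy example: one loop with two independent dependency chains. */
    long sum_pairs(const long *a, const long *b, int n) {
        long sum_a = 0, sum_b = 0;
        for (int i = 0; i < n; i++) {
            sum_a += a[i];  /* chain 1: depends only on the previous sum_a */
            sum_b += b[i];  /* chain 2: depends only on the previous sum_b */
        }
        /* A dynamically scheduled (OoO) core overlaps the two chains, and the
           loads feeding them, on its own at runtime; a statically scheduled
           VLIW/EPIC core only gets that overlap if the compiler can prove the
           independence and bundle the operations together ahead of time. */
        return sum_a + sum_b;
    }

    int main(void) {
        long a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40};
        printf("%ld\n", sum_pairs(a, b, 4));  /* prints 110 */
        return 0;
    }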

And we all know how badly the Itanium failed.


> I mean the whole modern paradigm of "complex branch predictors, controlling wide front-ends, feeding schedulers with wide back-ends and hundreds or even thousands of instructions in flight".

Ah, the philosophy of having the CPU execute out of order, you mean.

> A simple in-order core simply can't extract that much parallelism

While yes, it is also noticeable that it has no data hazards when a pipeline simply doesn't exist at all, and thus there is no need for implicit pipeline bubbles or delay slots.

> And accurate branch prediction is of limited usefulness when the pipeline is that short.

You can also use a software virtual machine to turn an out-of-order CPU into something that is basically running in-order code, and you can see how slow that goes. That's why JIT VMs such as HotSpot and GraalVM for the JVM platform, RyuJIT for CoreCLR, and TurboFan for V8 are so much faster: once you compile to native instructions, the branch predictor can finally kick in.

> like the Itanium

> And we all know how badly the Itanium failed.

Itanium is not exactly VLIW. It is an EPIC [^1] fail though.

[1]: https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...


I think the idea there is that it's less direct. Intel's lack of interest in a 64-bit x86 spawned AMD64. The failure of Itanium then let Linux/AMD64 kill off the workstation market, and the larger RISC/Unix market. Linux on 32-bit x86 or 64-bit RISC alone was making some headway there, but the Linux/AMD64 combo is what enabled the full kill-off.


Intel's lack of interest in delivering 64-bit for "peons" running x86 was also part of it. I remember when the first discussions of amd64 showed up in popular computer magazines, Intel's proposed timeline was discussed, and it very much indicated a wish to push "buy our super expensive stuff" and try to squeeze out money.

Meanwhile, the decision to keep Itanium in the expensive but lower-volume market meant that there simply wasn't much market growth, especially once the non-technical part of killing off the other RISCs failed. Ultimately Itanium was left as the recommended way to run Oracle databases in some markets (due to the partnership between Oracle and HP) and not much else, while shops that used other RISC platforms either migrated to AMD64 or moved to other RISC platforms (even forcing HP to resurrect Alpha for one last generation).


Yup. I had a front row seat. So many discussions with startups in the 2Ks that boiled down to "we can get a Sun/HP/DEC machine, or we can get 4-5 nice Wintel boxes running Linux for the same price". So at the point where everyone figured out Linux was a 'good enough' Unix for dev work and porting to the incumbents was a reasonable prospect, it was "so do we all want to share one machine or go find 500% more funding just to have the marquee brand". Once you made that leap, "we don't need the incumbents" became inevitable.


It was amazing how fast that happened. I remember one startup that mainly supported Sun, late 90's/early 2000's. This was for a so-called "enterprise" app that would run on-prem. They wanted me to move the app to Linux (Red Hat, I think?) so they could take it to a trade show booth without reliable Internet access. It was a pretty simple port.


Looking back, I think we can now conclude that it was largely inevitable for the other designs to fade sooner or later – and that is what has happened.

The late 90's to the early aughts' race for highest-frequency, highest-performance CPUs exposed not a need for a CPU-only, highly specialised foundry, but a need for sustained access to the very front of process technology – continuous, multibillion-dollar investment and a steep learning curve. Pure-play foundries such as TSMC could justify that spend by aggregating huge, diverse demand across CPU's, GPU's and SoC's, whilst only a handful of integrated device manufacturers could fund it internally at scale.

The major RISC houses – DEC, MIPS, Sun, HP and IBM – had excellent designs, yet as they pushed performance they repeatedly ran into process-cadence and capital-intensity limits. Some owned fabs but struggled to keep them competitive; others outsourced and were constrained by partners’ roadmaps. One can trace the pattern in the moves of the era: DEC selling its fab, Sun relying on partners such as TI and later TSMC, HP shifting PA-RISC to external processes, and IBM standing out as an exception for a time before ultimately stepping away from leading-edge manufacturing as well.

A compounding factor was corporate portfolio focus. Conglomerates such as Motorola, TI and NEC ran diversified businesses and prioritised the segments where their fab economics worked best – often defence, embedded processors and DSP's – rather than pouring ever greater sums into low-volume, general-purpose RISC CPU's. IBM continued to innovate and POWER endured, but industry consolidation steadily reduced the number of independent RISC CPU houses.

In the end, x86 benefited from an integrated device manufacturer (i.e. Intel) with massive volume and a durable process lead, which set the cadence for the rest of the field. The outcome was less about the superiority of a CPU-only foundry and more about scale – continuous access to the leading node, paid for by either gigantic internal volume or a foundry model that spread the cost across many advanced products.


Yes. AFAIU the cost of process R&D and of building and running leading-edge fabs massively outweighs the cost of CPU architecture R&D. It's just a world of its own, largely outside the comfort zone of software people, hence we endlessly debate the merits of this or that ISA, or this or that microarchitecture, a bit like the drunkard searching for his keys under the streetlamp.

It's also interesting to note that back then the consensus was that you needed your own in-house fab with tight integration between the fab and CPU design teams to build the highest performance CPU's. Merchant fabs were seen as second-best options for those who didn't need the highest performance or couldn't afford their own in-house fab. Only later did the meteoric rise of TSMC to the top spot on the semiconductor food chain upend that notion.


If you're counting all desktop/server computers, Linux has way more market share than all of the Unices ever did. It's probably even true for desktop Linux. If you count mobile phones, Android is a Linux derivative, and iOS is a BSD derivative. The fundamental issue for the workstation vendors was simply that with the P6, Intel was near parity or even ahead of the workstation vendors in performance, and it cost something like 1/4 as much.


HP-UX was one of the most popular operating systems to run on Itanium though?


HP was also one of the few companies to actually sell Itanium systems! They were also the last to stop selling them. They ported both OpenVMS and HP-UX to Itanium.


HP also ported NonStop to Itanium.


Well, largely because they made it difficult for customers to stay on PA-RISC, then later, because their competitors were dying off...and if you were in the market for stodgy RISC/Unix there weren't many other choices.


As for RISC/Unix, in the enterprise, IBM's POWER/AIX is still around. I know some die hard IBM shops still using it.

I guess Oracle / Sun SPARC is also still hanging on. I haven't seen a Sun shop since the early 2000's...


There's still a lot of AIX around and the LoB is seeing revenue growth. You just don't hear about it on HN because it's mostly doing mundane, mission critical stuff buried in large orgs.

I still run into a number of Solaris/SPARC shops, but even the most die hard of them are actively looking for the off-ramp. The writing is on that wall.


I believe it! For a few years, I worked on fairly large system deployed to an AIX environment. The hardware and software were both rock solid. While I haven't used it, the performance of the newer POWER stuff looks incredible.


Oracle sales would push you towards HP-UX on Itanium as the recommended platform.

To the point that once that ended with Oracle's purchase of Sun, there was a lawsuit between Oracle and HP. And a lot of angry customers, as HP-UX was being pushed right up to the moment the acquisition was announced.


That's what we ran. The core system was written in PICK Basic in the 80's and it just kept going on and on. I was buying HP Integrity (Itanium line) spare parts on eBay up until about 10 years ago.


Absolutely not. Sun destroyed itself and Solaris, not Intel. The others were even more also-rans than Solaris.


If Sun had been more liberal with Solaris licensing on x86 in the early years (before, say, 2000), we might all be running Solaris servers today. Sun / Solaris was the Unix for most of the 90's through the dot-com crash.

Almost all early startups I worked with were Sun / Solaris shops. All the early ISPs I worked with had Sun boxes for their customer shell accounts and web hosts. They put the "dot in dot-com", after all...


To quote @swiftonsecurity - https://x.com/swiftonsecurity/status/1650223598903382016 :

> DO NOT TAKE HOME THE FREE 1U SERVER YOU DO NOT WANT THAT ANYWHERE A CLOSET DOOR WILL NOT STOP ITS BANSHEE WAIL TO THE DARK LORD AN UNHOLY CONDUIT TO THE DEPTHS OF INSOMNIA BINDING DARKNESS TO EVEN THE DAY


This 1000%; and some 1Us are extra 666. I had a SPARC T2000 at one point, it was so much louder than a 1U Supermicro. Or whatever was in the Microsoft HW labs, those you could hear from multiple hallways over… There were non-optional earplugs at the doors.


It seems it's only US English keyboards that Apple prints the name of the key on.

All ISO layouts, plus a few of the other ANSI layouts (like Korean), make use of the symbols.


Logitech appear to have made the Unifying receiver legacy tech now, preferring their Bolt receiver going forward. Bolt does have a USB-C receiver available, but it's not supplied with their devices:

https://www.amazon.com/dp/B0F9DWFSHP


Hmm... $30 and only being sold by third-party sellers there. That's not encouraging. Hope that's a temporary inventory shortage because of pent-up demand and not a sign that they barely intend to make any of these. Because I have a half drawer worth of various of the compact USB-A receivers and have literally never seen any USB-C equivalents in real life yet... It's time.


Generally, PCI-E lanes and memory bandwidth tend to be the big differences between mobile and proper desktop workstation processors.

Core count used to be a big difference, but the ARM processors in the Apple machines certainly meet the lower-end workstation parts now. To exceed them you're spending big, big money to get high core counts in the x86 space.

Proper desktop processors have lots and lots of PCI-E Lanes. The current cream of the crop Threadripper Pro 9000 series have 128 PCI-E 5.0 Lanes. A frankly enormous amount of fast connectivity.

The M2 Ultra, the current closest workstation processor in Apple's lineup (at least in a comparable form factor, in the Mac Pro), has 32 lanes of PCI-E 4.0 connectivity that's enhanced by being slotted into a PCI-E switch fabric on the Mac Pro. (This, I suspect, is actually why there hasn't been a rework of the Mac Pro to use the M3 Ultra: they'll ditch the switch fabric for direct wiring on the next one.)

Memory bandwidth is a closer thing to call here. Using the Threadripper Pro 9000 series as an example, we have 8 channels of 6400 MT/s DDR5 ECC. According to Kingston the bus width of DDR5 is 64b, so that'll get us ((6400 * 64)/8) = 51,200 MB/s per channel, or 409.6 GB/s when all 8 channels are loaded.

On the M4 Max the reported bandwidth is 546 GB/s, but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a bus width of 64b seems to point towards 68,264 MB/s per channel; the reported speed doesn't neatly slot into those numbers).

In short, the memory bandwidth bonus workstation processors traditionally have is met by the M4 Max, but the PCI-E extensibility is not.

In the Mac world, though, that's usually not a problem, as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in macOS. You can however load your machine with some high-bandwidth NICs or HBAs, I suppose (but I've not seen what's available for this platform).


The M4 Max's bus width is 512 bits, not 64.


Aha! That'll definitely get you to 546 GB/s.
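For what it's worth, a quick back-of-the-envelope check (a sketch assuming the figures quoted above: 8 x 64-bit DDR5-6400 channels on the Threadripper Pro, and a single 512-bit LPDDR5X-8533 bus on the M4 Max; not official specs):

    #include <stdio.h>

    int main(void) {
        /* peak bandwidth (bytes/s) = transfers per second * bus width in bytes */
        double tr9000 = 6400e6 * (64 / 8.0) * 8;   /* 8 channels x 64-bit DDR5-6400 */
        double m4max  = 8533e6 * (512 / 8.0);      /* one 512-bit LPDDR5X-8533 bus  */
        printf("Threadripper Pro: %.1f GB/s\n", tr9000 / 1e9);  /* ~409.6 GB/s */
        printf("M4 Max:           %.1f GB/s\n", m4max / 1e9);   /* ~546.1 GB/s */
        return 0;
    }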


Recently purchased a Sony Android TV. And in the interest of giving it a fair shout, I connected it to my network and signed in. The very first thing I saw was a near-as-makes-no-difference full-screen advert for the Minecraft movie.

Nothing remotely related to selecting an input source or a channel. Just a straight advertisement pane. There is no opt out. No way to turn off the noise that I could see.

This is a plain statement of what their purpose is - sell advertisements first, display what you want second.

Disgusted with it, I've factory reset the TV and MAC-address banned it from my network.

I curse the whole Smart Tv industry.

