Apparently they announced this in October. They are kicking Altera, which they acquired for $16.7 billion, back out into the world to live or die on its own. Which feels a bit weird given AMD's apparent relative success with Xilinx.
It also dampens their "US developed" technology pitch (which they had been pushing pretty hard vs other solutions that were fabbed at TSMC). I wonder if they will also give up on third-party use of their fabs.
Intel was the first "real" job I ever had, and it was during Andy Grove's tenure as CEO, when his "paranoid" attitude was really pervasive. I had been working on graphics chips, which were the next big thing until the chip recession, and then they weren't. And it followed a long series of Intel sending out tendrils outside its core microcomputer winners (eventually x86/x51 only), only to snap them back at the first sign of challenges.
I wonder if we can get better open source tool support from the new entity. That would be a win.
> only to snap them back at the first sign of challenges.
It's really quite impressive how bad Intel's track record has been. Would you say it's a broad cultural problem within Intel? Bad incentives?
(I'd be interested if someone could come up with a list of all of Intel's failed ventures. StrongARM, Optane, Altera... Obviously some of these are par for the course for a company as large as Intel, but Intel still seems to stand out)
(I know one reason Apple is so secretive about its ventures is because it knows many of them won't pan out, and it knows that due to its size anything it does has huge repercussions, and wants to avoid that)
I agree with 1-6 that they have always been really focused on their margins, they were never a company that wanted to go into debt for something that was 'unproven.'
They have had lots of things they stopped doing: the graphics chip I was involved in (the 82786) was one, plus the speech chips, the iAPX 432, the modem chips; their digital watch was an early one.
I always saw it as an exceptionally low tolerance for risk. When I was there, the 386 had just fabbed and I was one of the people watching when the very first chip was powered up and it actually worked (which amazed pretty much everyone). Sun Microsystems was a little startup in Mountain View using the 68000 and getting some traction. I suggested to Intel that if they used the graphics chip I was working on and the 386, they could build an Intel-branded "Workstation," both replacing the VAXstations they were using for CAD work internally and selling it as part of Intel's initiative to become a "systems company." I got a response to my memo[1] from Dave House, who went on to be a CEO elsewhere but at the time was just the head of Microprocessor Operation (MIPO) Marketing, explaining to this lowly NCG (new college grad) that workstations were not much of a market and Intel was only interested in pursuing the "big" opportunities. Six months later I joined Sun (just after they IPO'd, sadly) and didn't look back. :-)
[1] Yes, a memo, as in typed up and printed out and put in a manila envelope, where the last line on the front had it coming from Dave and addressed to me. And yes, it started as a memo I had printed out and given to my manager, because while Intel had "heard" of electronic mail, they preferred the "durability" of actual mail, and forcing people to use actual mail cut down on "frivolous" communications.
>I always saw it as an exceptionally low tolerance for risk.
Big laugh at the claim of Intel having an exceptionally low tolerance for risk when they're the only semi company left still running their own fabs, decades after AMD sold off their own for being too risky a venture to keep running by themselves. And every fab company out there will vouch for how risky that line of business is. Even more so to pour billions into a business like this while knowing Samsung/TSMC might beat you.
More correct would be that Intel has a low tolerance to risk for things that are way outside of their core competencies which have traditionally been designing X86 CPUs and fabbing them, and that's mostly it.
I commend them for not giving up on advancing their fab nodes despite the massive issues and setbacks (they could have spun it off like AMD did), and for building the Arc GPUs and continuing to improve them despite low sales.
>> when they're the only semi company left still running their own fabs
Yes, and even within that they are risk averse; no need to look beyond 10nm and their reluctance to adopt EUV.
So they are really risk averse and just want their success to continue indefinitely. In fact, contrary to Pat G claiming that Nvidia got lucky, it's really Intel that was lucky.
>> commend them for not giving up on advancing their fab nodes despite the massive issues and setbacks (they could have spun it off like AMD did), and for building the Arc GPUs
Even if turnaround times in the semi market are longer, the way they faltered on their Arc timelines, with no encouraging signs from their new fab nodes, is not commendable!
>Even if turnaround times in the semi market are longer, the way they faltered on their Arc timelines, with no encouraging signs from their new fab nodes, is not commendable!
Sure, Intel fucked up along the way, and it's easy to point fingers and laugh from the comfort of the armchair at Intel tripping over massively complex engineering challenges, but if designing and building successful GPUs and sub-7nm fabs was easy, everyone would be doing it.
It is unfortunate that you are equating pointing to the facts with finger pointing and laughing. It would be better if you instead substantiated with facts and data that imply/prove otherwise.
Further, neither the facts nor my comment in any way imply that building GPUs is easy. OTOH, when a company like Intel, with the massive competitive advantage (and resulting financial muscle) it had when its fabs were the market leaders, fritters it away in what is supposedly its strength (fabs), don't you think they in fact deserve to be laughed at?
>It is unfortunate that you are equating pointing to the facts with finger pointing and laughing. It would be better if you instead substantiated with facts and data that imply/prove otherwise.
Is it that unfortunate? Because if you knew how difficult semi fabrication or GPU design is, you probably wouldn't say these failures are such big deals worth pointing out as negatives, but part and parcel of the process of solving such complex problems only 3 players worldwide have mastered.
Do you see Nvidia trying to fab chips? Do you see TSMC trying to design GPUs? Imagine Intel is doing both at the same time. Of course they will fail at points along the way, the important thing is they stay in the game to provide us with competition.
The billions Biden threw their way isn't really that much when their fabs cost billions to build. I don't think the grants cover the full cost of their new fabs - it was about $50 billion total, right? Divided among all the corporations including TSMC, IIRC.
>> Yes, and even within that they are risk averse; no need to look beyond 10nm and their reluctance to adopt EUV.
Actually, from what I heard, their failure with Intel 7 was partly due to trying too many unproven technologies - i.e., they pulled a Zumwalt. With the current Intel 4 & 3 I heard they were more conservative.
Intel actually had trouble getting 14nm off the ground and it was late to ramp up; that should have been a signal that past progress might be coming to an end. I believe that when things got difficult, management's response was to try to catch back up by going crazy on the 10nm jump, and it went horribly.
What was more shocking is how long they seemed to be in denial before putting contingency plans in place.
>> I thought AMD had to spin off their fabs due to cost and cash flow issue because Intel was too busy playing dirty?
IMHO Ruiz cut back on engineering and increased marketing. Then he purchased ATI, which I thought was an odd or even dumb move (it was not). But that expensive purchase, along with the failure of Bulldozer, almost killed the company. That's why they spun off GlobalFoundries - to stay alive.
"Real men have fabs!" - AMD founder and CEO at the time, Jerry Sanders [1]
He said this some time in the 80's to early 90's as a defence for their expensive commitment to still be running their own fabs when most big semi companies like Motorola were selling off their fabs and going fabless because it was eating their resources and it was impossible to compete with the likes of TSMC on node shrinks and value.
So even without Intel screwing with them in anti-competitive ways, they could not have kept their fabs competitive much longer. The writing was already on the wall way back then but Jerry Sanders was just being stubborn.
I mean... Maybe? But Intel proved the success of owning fabs throughout the 90s and 2000s; only 20-30 years after that statement was Intel even considered to be possibly disadvantaged by owning its fabs. But that was also a very different market.
So I don't think it's as direct a link as you seem to claim. Maybe it was the right decision even if AMD had the money to keep it going. But it will likely forever be one of those questions that has no solid answer either way.
Intel had the margin and volume to be the exception to the rule, meanwhile Qualcomm, Broadcom, and a ton of other fabless design firms were making good money unencumbered by the risk of investing in a fab division, and with the ability to switch to the best fab for each chip at a given time.
> More correct would be that Intel has a low tolerance to risk for things that are way outside of their core competencies which have traditionally been designing X86 CPUs and fabbing them, and that's mostly it.
As a former Intel-er myself (during the Craig Barrett and Paul Otellini times) I would disagree. Intel took bold and foolish bets: buying Dialogic, selling XScale, betting on WiMAX mobile technology while fighting LTE, which was backed by all the telcos. I would characterize Intel's risk management as "passive-aggressive".
I have wondered if leadership didn't understand the Gartner Hype Cycle, because a lot of projects seem to get killed after the investments but before significant success was even possible (the Trough of Disillusionment).
I worked at Intel for 10 years, mainly the Otellini years. To myself I called it Corporate Attention Deficit Disorder. I think it's a symptom of bad management, always moving on to the next shiny rock, no long term vision.
During my tenure I witnessed several failed initiatives. Itanium, Wimax, Digital Home, x86 phone (android), LTE modem. Imagine the billions wasted.
* They got their LTE modem into the iPhone 11 but just couldn't compete.
* Arc GPUs would have been competitive if they had been launched a year or two sooner, and their stagnation is largely due to lack of funding, since the market was dead at launch and we were in a massive market pullback where they were ultra hesitant to spend ANY money. (Massive layoffs too.)
* Optane was a great product and had a good roadmap, but they were burning cash waiting for adoption, and their customers had decoupled their memory and storage from the physical layer to mitigate the limitations of NAND SSDs so much that they didn't see the price as worth it, despite the obvious latency and endurance benefits it had over even SLC NAND drives.
Sometimes they just miss the market for otherwise good products.
> They got their LTE modem into the iPhone 11 but just couldn't compete
it wasn't "just couldn't compete", those modems were horrible. the speed was lower than the Qualcomm that managed to be shipped in some of the iphones (from 8-11), and the connections were extremely flaky. I had a 9 with an intel modem, and it was abysmal compared to the previous iPhones I had owned.
That Apple went back to Qualcomm, bought the Intel modem division, and is still shipping Qualcomm modems says a lot -- it's still just not there.
I can’t entirely blame WiMax on Intel. I only saw it from the sidelines, but I had a laptop with integrated WiMax (complete with Intel chipset and an apparently functional antenna) in a market where Sprint had their much hyped SDR network, and they had WiMax working enough for their trade show folks to demonstrate it. But it was not one of their launch markets, so they… didn’t launch. They literally refused to take my money and sell me service.
I don’t care how well Intel executes and how well your launch market functions, if you’re selling a product that caters to business travelers and users, you had better actually be willing to sell the thing, across a large swath of the country, or it will fail regardless of how good it is.
Don't forget Ultra Wide Band (another project that set fire to a pile of money used for hardware and software development), and the jettisoning of StrongARM.
I have a hard time believing that Intel will succeed at being a foundry this time around given how many failed prior attempts they have made. The foundry model is incompatible with the high margin duopoly CPU vendor model. What will prevent Intel from scheduling their own high margin products in fabs over foundry work from other companies? They will always have a business incentive to deprioritize their foundry customers when push comes to shove.
Reading some of the comments made by TSMC are enlightening. One of the reasons TSMC provided for their ability to succeed when Intel was flailing with their 10nm+++++ process is that TSMC runs experiments 24/7. This allows TSMC to run more experiments per day/week/month than Intel. I would really like to know when Intel starts running 3 shifts of process development engineers at a single fab to iterate faster, as that would be a true sign that meaningful change has occurred inside the beast.
Good luck finding enough engineers who want to work those shifts. That might work in Taiwan, or mainland China (where 996 work is normal), but not in the US.
Anyone know why Intel / HP were so quick to productize Itanium?
Given the strong assumptions it was making about compile-time instruction scheduling, I'd have expected them to de-risk the project first. I.e., get a suitable compiler working, and get some benchmark results on a simulator.
"Anyway this chip architect guy is standing up in front of this group promising the moon and stars. And I finally put my hand up and said I just could not see how you're proposing to get to those kind of performance levels. And he said well we've got a simulation, and I thought Ah, ok. That shut me up for a little bit, but then something occurred to me and I interrupted him again. I said, wait I am sorry to derail this meeting. But how would you use a simulator if you don't have a compiler? He said, well that's true we don't have a compiler yet, so I hand assembled my simulations. I asked "How did you do thousands of line of code that way?" He said “No, I did 30 lines of code”. Flabbergasted, I said, "You're predicting the entire future of this architecture on 30 lines of hand generated code?" [chuckle], I said it just like that, I did not mean to be insulting but I was just thunderstruck. Andy Grove piped up and said "we are not here right now to reconsider the future of this effort, so let’s move on". I said "Okay, it's your money, if that's what you want."
Suddenly this came up again later in another guise but again Andy shut me off, he said "we're not here to discuss it". Gordon Moore is sitting next to me and hasn’t said a word, he looks to all intents and purposes like he's asleep. He's got his eyes closed most of the time, you think okay, the guy's tired, he's old. But no, 20 minutes into this, he suddenly opens his eyes and he points to me and he asks, "did you get ever get an answer to your question?" and I said, "actually no, none that I can understand". Gordon looked around and says, "how are we planning to move ahead with this, if the answers don't make sense?" and this time Andy Grove said to him "We’re not here to discuss that, Gordon"."
When Intel does something like this, it reveals how tight their operating margin is. This is not a good sign. Intel has never been able to push away from the fact that they'll always be a hardware first company.
It's a very tight market out there. TSMC was able to make it with very tight partnerships with OEMs and I doubt Intel can pull off something like that.
When Intel acquired Altera, they scrapped the old catalog of Altera FPGAs. A few corps I was working with, who had heavy lock-in to Altera due to those FPGAs, got royally screwed (they suddenly had to redesign and re-certify a bunch of boards they were selling to happy customers). Of course they went with Xilinx for the redesign, because f*ck you, and it didn't hurt that the SoC offerings of Xilinx were far superior at the time. I don't think it was a cost-reduction issue; Altera was selling those old FPGAs at a BIG markup.
The amount of market I saw dissolve for Altera in a puff was incredible. To this day one of the most shortsighted corp decisions I have ever witnessed.
yeah, FPGAs are commonly used in market segments which value long-term availability and support of components (>10 years at least). Definitely not something to churn like consumer CPUs.
I don't know - I don't have the credentials to be an engineer at Intel - but I think they know it's a bad idea to end or spin off their foundry service, especially now that they have a new client that will know the ins and outs of their fabrication process.
I believe I've read that design rules give away parts of the process, so usually you have to sign an exclusivity agreement not to work with another vendor for a while, or firewall the teams that work with different vendors.
I can't imagine what you could take from one place to another that isn't trivial if they've got to the point of being even able to provide the technology. In the grand scheme of things, these rules are relatively high level. There are NDAs but that's more about non-public roadmaps, timelines and specialised IP i.e. future things that a competitor could then have time to work towards.
The point being, there's no world where a fab would worry about sharing their design rules.
This is old and not the one I read, but what I could find through search:
> Engineers needed manufacturing process information to verify that designs were free from systematic yield limiters, but foundries were reluctant to distribute process information that could reveal best practices and trade secrets. Encryption enabled the protection of selected design rules, process data and other information while still giving customers the details they needed.
> In this discussion, you must understand that the required protection is much more than just an obfuscation of rule definitions so they are no longer human readable. The process also includes the encryption of related files and control over all the outputs from the tool environment that provide information about any checks.
Sometimes they don't even bother to sell it. In the case of Barefoot Networks, they bought it, spent three years developing the next-generation product, got it to within 80-90% of being ready for tapeout, and then just pulled the plug.
One of my relatives is a high level Barefoot employee who was integrated into Intel leadership after the acquisition. I was told that it was killed because of worries about internal competition with Intel IPUs. It still seems like a ridiculously foolish idea, considering how much money Intel spent acquiring Barefoot. Not that the founders and CEO care, they got their money!
The whole thing definitely seems symptomatic of Intel's extreme caution. I wonder why they didn't apply the same caution when buying Barefoot. Intel obviously didn't have a great idea on how to leverage them.
This is all in stark contrast to AMD buying Pensando, a company that my SO works for. It makes sense for AMD to acquire Pensando to expand their data center offerings and compete directly with Intel. I think AMD made a smart buy.
I was a part-time contractor for Barefoot, and came along for the ride during the acquisition, so I have some first-hand knowledge of this. I am very much out of the managerial loop so I have no insight into the actual motives for killing the project, but I can tell you that at least one of the founders cared very much and fought tooth and nail to keep it alive.
> It still seems like a ridiculously foolish idea, considering how much money Intel spent acquiring Barefoot
Acquiring companies in order to kill them is a horrible business practice but not unusual. The thing that makes no sense to me is why they kept it going for three years before killing it. If they bought it in order to kill it, they should have done that before spending another hundred million on it.
They might have discouraged alternatives from popping up. 'Oh, Intel has this in the pipeline, it's hopeless to compete, let's invest in something else.' I've heard this so many times, only to see Intel kill said product or product line.
At this point, if it's not about x86 cpus, listening to Intel's roadmaps seems foolish.
Why on earth should Intel care about having plausible deniability about that? Buying a company in order to kill it is not illegal. It isn't even considered unethical except by a few utopian idealists. It's a common and accepted practice in the business world.
You get rid of your competition either way, though - either the competing products are now part of your offerings, or they're no longer sold. Either situation leads to the same level of reduced competition.
The customer reaction to this decision spoke volumes. The official reason given is margins compared to hardware such as the IPU but, personally, I chalk it up to turf wars. Barefoot had a distinct culture to Intel's other networking units. There is a reason many of its engineers ended up at FAANGs and not other semiconductor companies.
> I wonder if we can get better open source tool support from the new entity. That would be a win.
Unfortunately, this seems unlikely. The FPGA vendor tools are a complete shit show and open source tools would be greatly welcomed - they're doing really well in the Lattice/ICE40 space but those are small FPGAs. But Altera and Xilinx don't seem at all inclined to encourage the development of open source alternatives.
That's not where the money is. They don't care about hobbyists, startups or schools. The FPGA world is about big contracts, big design wins. Networking, defense, space, industrial control, etc. It's why the design tools are so bad too: there's no reason for the FPGA vendors to invest in those tools when the design wins come from size and performance. In fact, giving customers an escape hatch from vendor lock-in would be a Very Bad Idea.
Indeed. I was working at a startup doing some FPGA work about 10 years ago. We ran into several bugs in the Xilinx software. Initially we could submit bug reports, but about 6 months in Xilinx suddenly changed their policy and decreed that only tier 1 customers (definitely not us) could submit bugs - everyone else had to look for help on the forums. We spent a lot of our time just working around the bugs in their tools. At that point I determined that, though FPGA development was kind of fun and interesting, I would not go into a field where I was going to be dependent on such buggy software and went back to the software side (where pretty much all the development tools are open source and if you run into trouble you're going to be able to get help).
It does beg the question whether this is simply a chicken-and-egg problem.
When it is almost impossible to develop for, you only get big contracts because nobody else has the resources to design products for it. On the other hand, with good and free design tools the toy projects done by hobbyists and schools can serve as the catalyst for using it in medium-scale projects.
There are plenty of applications imaginable for something like Intel's SmartNIC platform - but you're not going to see any of them unless tinkerers can get their hands on them.
Xilinx, for example, used to be very friendly about making really cheap or even free FPGAs available to academic users. This is because those users lead either directly to design wins (some academic designs go big, NSF style) or they become corporate users who are familiar with Xilinx gear.
But the software was crap, and they would have gotten more wins if the software wasn’t so awful.
Xilinx/AMD doesn't make overt donations any more AFAIK, but they still subsidize hardware as a loss leader and for academics. For example, academic groups can buy the RFSoC 4x2 board for $2149 USD, which is a small fraction of the volume price of the chip that's on it.
"EDA software is crap" is so close to an axiom around here that I think Vivado doesn't get the credit it deserves. The synthesis flow has been rapidly modernizing and supports a chunk of VHDL-2019. The bundled simulator still lags on support and features, for reasons that are pretty easy to understand.
> Xilinx/AMD doesn't make overt donations any more AFAIK
That’s too bad. I once got a small pile (8?) of max-spec Virtex-5 chips, for free, FedExed from Xilinx. Xilinx apparently valued them so little that they didn’t even bother putting the full address on the envelope. Tracking it down was fun.
The only remotely supportable way to synthesize images for them was to run a primitive CentOS 5 container on a big server — the Windows version of the tools were still 32-bit, and 4G of address space was too little for the synthesis workflow. Even with the magic 64-bit RHEL/CentOS-only build, it would take 45 minutes or so for the tools to notice any errors during synthesis.
Of course, that's kind of also why nVidia is trouncing AMD in ML/GPGPU. AMD chased after the big iron, but nVidia got stuff working on consumer hardware… (their early lead definitely helped too, of course)
>> The FPGA vendor tools are a complete shit show and open source tools would be greatly welcomed - they're doing really well in the Lattice/ICE40 space but those are small FPGAs.
Thanks for the stock tip. The size of a chip can change, and will. Stupidity really tends not to. Great dev tools are incredibly important, and the people using them actually can influence decisions on which parts to use.
Side question: how do you see the future of Intel? Do you share the doom perspective, or think that they continue to have opportunities to catch up? Thanks!
Every year Intel announces a new failure, and every year I feel more ashamed for having Intel as the bulk of my software engineering experience on my resume. I saw the signs when I was still there and attending quarterly updates. It felt like in one breath they admitted to missing the mobile boat and in another they said only gamers need dedicated/discrete powerful gpus.
Intel can and does produce phenomenal technical products. Their compilers and HPC toolsets have done enormous good. Their chips are second to none (or perhaps tied for first, depending on who you ask). They are not a shameful company. They are just yesterday's giant, and as such, are due for a Microsoft-like reinvention.
Intel is my daily driver for gaming, due to its (sadly cancelled) Extreme Compute Element form factor with its incredible customization.
What they're struggling with is adapting to a changing world and to find new business centers. But this is a hardware/software world built on Intel, even if it changes daily.
> Their chips are second to none (or perhaps tied for first, depending on who you ask).
No. They're definitely behind AMD. AMD has significantly better power efficiency[1], better performance in gaming for most of the titles[2], and vastly more cores and better multithreaded performance (with Threadrippers[3]).
I’ve been on the extreme of maxing out fps in games for a decade.
GamersNexus is great! But they and the other YouTube benchmarkers don't tell the whole story. When you are playing games that are bottlenecked by single-thread performance, single-core performance and therefore memory speed become important. This is very difficult to benchmark, as most of the games where it matters (CoD Warzone) don't have an easy way to benchmark. When I was testing my OC I would literally change the RAM frequency in the BIOS, drop into a PUBG match, screen record my fps, and manually add it to a spreadsheet.
So take those benchmarks and then add the performance gains from memory overclocking; I (and others) have been able to get meaningful gains in fps beyond just overclocking the CPU. 5-10% more fps is not amazing, but it's definitely worth it to me for games like Warzone or Valorant.
AMD's support for high-end RAM has always lagged. Happy to use AMD for non-gaming, but if you're trying to build "the best gaming rig" I'm not sure AMD can take that crown. And until people are willing to benchmark single-threaded-bottleneck games like Warzone with different RAM speeds, I doubt this argument will be settled.
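For what it's worth, the manual spreadsheet workflow described above boils down to something like this - a minimal Python sketch, with made-up frequencies and fps numbers, of averaging per-RAM-speed samples and computing the relative gain:

    # Rough sketch of summarizing manually recorded fps samples per RAM frequency.
    # The frequencies and fps values below are made up for illustration.
    from statistics import mean

    fps_samples = {
        "3200 MHz": [138, 142, 140, 136],
        "3800 MHz": [149, 151, 147, 150],
    }

    baseline = mean(fps_samples["3200 MHz"])
    for freq, samples in fps_samples.items():
        avg = mean(samples)
        print(f"{freq}: avg {avg:.1f} fps ({(avg / baseline - 1) * 100:+.1f}% vs 3200 MHz)")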
This was true until X3D. Only very few titles benefit from fast memory over fitting ~all the memory a typical game accesses in a single frame into screaming fast L3.
Memory speed is much less relevant when you have a giant L3 due to 3D stacked cache. I don't think you're trying to mislead anyone, but I'm pretty skeptical that you're getting 5-10% more than an X3D, even with memory overclocking, given how close the non-3D stacked AMD chips get to Intel here. [0]
The gap between those non-3D cache chips and the newer 3D-cache ones is generally very big for games because they touch huge amounts of data every frame and thrash memory.
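A back-of-the-envelope average-memory-access-time calculation illustrates why a big L3 blunts RAM-speed sensitivity; the latencies and hit rates below are illustrative guesses, not measurements, but the shape of the result holds:

    # Back-of-the-envelope AMAT (average memory access time) sketch.
    # Latencies (ns) and hit rates are illustrative guesses, not measurements.
    def amat(l3_hit_rate, l3_latency_ns=10.0, dram_latency_ns=70.0):
        return l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * dram_latency_ns

    # Compare a modest L3 against a big stacked L3, then see how much a 15%
    # faster DRAM helps in each case.
    for name, hit_rate in [("modest L3", 0.80), ("big stacked L3", 0.95)]:
        base = amat(hit_rate)
        fast = amat(hit_rate, dram_latency_ns=70.0 * 0.85)
        print(f"{name}: {base:.1f} ns -> {fast:.1f} ns ({(1 - fast / base) * 100:.1f}% better)")

With these toy numbers, faster DRAM is worth roughly twice as much to the small-cache chip as to the big-cache one, which matches what the benchmarks show.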
> Based on the numbers we've given you, from our data, and the prices we have today, the decisions for the most part are pretty clear: The best gaming CPU is the 7800X3D (that's an objective fact), the most efficient part is the 7980X, the 5800X3D is the best upgrade path, and Intel makes the strongest showing in the i5-13600K or 14600K (whichever is cheaper) for a balanced build, or the 12100F for an ultra-budget build.
Compare to Tom's Hardware's 2023 listing from earlier in the year.
Category | Winner | Alternate(s)
Overall Best CPU for Gaming | Intel Core i5-13400 | AMD Ryzen 5 7600, Ryzen 5 5600X3D
High-Performance Value Best CPU for Gaming | AMD Ryzen 7 7800X3D | Intel Core i7-14700K, Ryzen 7 5800X3D
Highest Performance Best CPU for Gaming | AMD Ryzen 9 7950X3D | Intel Core i9-13900K
Mid-Range Best CPU for Gaming | Intel Core i5-13600K | AMD Ryzen 5 7600X
Budget Best CPU for Gaming | Intel Core i3-12100F | AMD Ryzen 5 5600
Entry-Level Best CPU for Gaming | AMD Ryzen 5 5600G | -
(summarizing: Intel best CPU, Best mid and budget entries, so winner in 3/5 categories. And what does it mean to give AMD "Best high performance" and "Highest performance?").
This is a neck-and-neck race, with AMD a fan favorite, and year over year changes in leadership. No way is Intel out of this as a viable competitor of equal-ish standing. I don't mean to denigrate AMD in my original post, but I do mean to say that Intel is still producing top-tier results.
Toms has always had a slight bias towards Intel in their assessments, I think. Their picks are pretty reasonable, but the "highest performance for gaming" category with the 7950X3D and 13900k doesn't make a lot of sense to me, when the 7800X3D which won the "High perf" category beats the 13900k significantly in most games benchmarks, while drawing much less power. Intel has some great value offerings, but the 7800X3D is the real champ in gaming right now. The fact they have Intel winning 3/5 of their categories seems like a demonstration of their subtle bias. Other sources like Anandtech have historically done a better job of neutral reporting.
> This is a neck-and-neck race, with AMD a fan favorite, and year over year changes in leadership.
> (summarizing: Intel best CPU, Best mid and budget entries, so winner in 3/5 categories. And what does it mean to give AMD "Best high performance" and "Highest performance?").
Intel has much more brand recognition than AMD. So calling AMD the fan favorite is kinda strange to me.
There are only two x86-64 chip designers right now, and from a pure market-strategy perspective it doesn't make sense for AMD to offer customers much more value than what they can get from the only other competition in town. If you want to really understand how Intel is struggling you have to dig into the execution speed of both companies, the profit margin on each SKU, etc., etc.
Intel is only neck and neck with AMD if you don't look at things like power efficiency and how much money they are actually making per chip.
> In particular, Intel still has a 5x lead on AMD in server market, which, of course, is a pretty big market.
This is a function of market inertia more than anything. From a technical perspective, AMD's Zen 4 workstation and server offerings seem to be much better these days.
But design was also where Intel failed to innovate for many years.
For example, the chiplet approach that AMD embraced allowed AMD to put together far more powerful CPUs, i.e. with more cores and at a competitive price, than Intel could.
Trying to be measured here: depending on who you ask, you'll find that for gaming, AMD or Intel alternate pretty regularly on technical benchmarks. At the moment, AMD is most power efficient, on average, and can eke out higher overall benchmark performance (as of late 2023). Throughout 2023, AMD and Intel were both listed as "best" on Tom's Hardware, PC Mag, etc.
For server market share, Intel vastly dominates (80%?). For gaming, Intel still has 60-70% of the market share.
Even if it's a mixed bag, if given an Intel (or AMD) processor there's no way I would throw it in the trash, and for any current-gen chip you can find things each is better at than the other.
Both are top-tier accomplishments. AMD has a significant fan following though, which tends to distort the story a little.
In terms of gaming CPUs, they definitely are neck and neck on performance.
But yes, you are right, Intel's chips aren't something you would throw away in the trash. They are good chips. They need to stick around. It's just that you can't say market share makes them the best.
I didn't say marketshare makes them the best. I said there were lots of metrics, that on any given metric, AMD and Intel trade first place spots frequently, and marketshare implies there's something special still about Intel.
Intel’s engineering is pretty solid, I think anyone who looks down on Intel’s engineers because they “only” managed to overcome their management’s wasteful dithering for… like… 30 years is not worth working for.
These sound largely like management/product direction issues and are not really indicative of bad engineering quality. I wouldn't be concerned about the quality of your resume.
That's a pretty personal take; don't take that shame on yourself. Stick to the usual, comfortingly impersonal "Obviously, Intel is dead and buried and all their management sucks and they ruined the world" whenever Intel has announced a setback any time in the last 50 years. No one has a blanket "we don't hire losers from Intel" policy; it's not like you worked at CA or Oracle.
This was actually a bright-spot in Intel's lineup that had a chance to be a moonshot. I don't know why Intel is giving up this early especially when AMD acquired Xilinx recently. Is intel trying to emulate Nvidia? I don't think they should try to become another Nvidia. They should be building programmable AI chips with FPGA. Heck, LLMs are able to code Verilog. There are many many possibilities.
They acquired Altera in 2015, which in the tech world is when the dinosaurs roamed the earth. How much more time would they need to see a profitable synergy? Bearing in mind they can probably see what's in the pipeline for the next couple of years.
I'm not an expert... But in my naive opinion it seems entirely reasonable to expect an acquisition to start paying off within ten years?
A big issue is that sticking an FPGA onto the CPU die itself doesn't really make sense. Those are some really expensive transistors, and you're essentially wasting them when they are not being used for cores or cache. Besides, you'd be spending significant amounts of money building a niche product.
It would be a lot more viable to use a chiplet approach: bake the FPGAs on a slightly-cheaper node, and just glue them right next to the CPU itself. Unfortunately Intel is still miles behind AMD when it comes to chiplets, and their tech is only just now barely starting to ship.
Because intel messes up everything with their bureaucracy. Look at the NUC, they made it extremely hard to get because of their convoluted distribution strategy. They could have easily done direct to consumer and made it easy to get.
Till they fix their culture problem, they should be unloading these business units and profit by holding majority stakes in them.
> They should be building programmable AI chips with FPGA
Eh, people have been saying this for over a decade, and I think that opportunity has passed.
FPGAs are not going to beat GPUs anytime soon due to the software ecosystem (among other things), and ultimately they are not going to outrun ASICs that are now economically viable (especially in the embedded space).
> FPGAs are not going to beat GPUs anytime soon due to the software ecosystem
Completely true as long as the chip companies stonewall open-source toolchains.
If Xilinx or Altera published the bitstream format for their highest-volume high-end chip (i.e. one or two generations back from the bleeding edge) you'd see the effect on the entire AI space within twelve months. I'm not kidding.
The interest in this kind of access from developers is enormous, and the problems in this space are extremely regular. There are massive opportunities for FPGAs to avoid spilling to DRAM or even to SRAM -- you have an ocean of tiny register-speed memories in LUTRAM mode.
But it will never happen. And so FPGAs will continue to be trinkets for weapons manufacturers and not much else.
If they adopted open-source toolchains, Xilinx & Altera would be signing their own death warrants. The hardware would be reproduced by third parties (the open source code would contain all you need to know to produce the chips) and it would become a commodity like DRAM. Profit margins for the incumbent companies would go to zero and the users would profit immensely.
Instead, FPGAs are sold to big entities like governments and big corps who can throw bodies at the crappy toolchains until they produce what they want.
Now I'm wondering about a world where OpenAI can treat their compute not as "dedicate this time slice of an A100 to generating tokens for customer A's query" but rather "there is a custom pipeline at a gate level that exactly matches our model architecture, hardcoding weights at the gate level, and can treat parallel requests the same way a processor would pipeline instructions." I don't think any human could write the TensorFlow compiler for this, but theoretically a generative AI could learn how to, experimenting on its own compute hardware? I don't know that we've seen the last of FPGAs.
Intel bought Altera in 2015 when it still thought 10nm would be on time and an advanced node. That did not work out. The idea was to get a better FPGA and have more customers to justify fab build out expenses.
Gelsinger more recently said he does not want to force products to be on Intel fabrication. Use Intel processes where it makes sense. No reason to push Altera FPGA to Intel 10/7/4. No reason to push NICs to Intel 10/7/4. And so on.
This is pretty funny. Intel bought Altera, which forced AMD to buy Xilinx with all the zero-interest-rate money floating around. AMD's purchase of Xilinx made less sense because AMD is fabless, but Intel didn't end up doing anything with Altera. It's not clear if Altera even started using Intel fabs for its chips. AMD's Xilinx has been comparatively more successful, but I don't think that had anything to do with AMD.
Maybe we can look forward to all the ZIRP semiconductor consolidations to unwind.
Well, it "made" sense for intel because this allowed Altera to switch to Intel fabs. Not saying it's a huge strategic value or anything, just saying that's the difference.
Altera switched to Intel fabs from TSMC in 2013, so yeah it makes sense for Intel to pick up the folks you don't have to do much to integrate into your manufacturing process. But AMD and Xilinx both already being fabless meant it was a wash...no process integration to be done.
The only "synergy" that has come from AMD-Xilinx is that AMD took a (relatively simple) DSP for machine learning that Xilinx had built and put it into their newer CPU lines. That's still better than Intel-Altera, which basically didn't integrate at all, despite having grandiose plans.
Intel marketing built the market of x86 server + tightly integrated FPGA. Large customers built products based on their promises. Then they didn't deliver.
Then AMD bought Xilinx and started shipping to the market.
This is sad news; I was hoping one day we'd see chips with substantial on-die FPGA fabrics, ideally in ways that we could program with open tools. This announcement makes that less likely.
Red Hat supported a lot of research into this and there's some really interesting stuff, but nothing that is very compelling for commercial use. What uses do you have in mind?
Acceleration of new hashing and cryptographic algorithms, new compression algorithms, new codecs, and many other similar things, without adding special-purpose instructions for them.
Implementation of fast virtual peripherals. Accurate emulation of special-purpose hardware.
Implementation of acceleration modules for well-established software. Imagine if popular libraries or databases or other engines didn't just come with acceleration using SIMD and other instruction sets, they also came with modules loadable on an attached FPGA if you have one.
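Purely as a sketch of what that last idea could look like from the library side (every name here is hypothetical; there is no such driver or vendor API), the dispatch would be something like:

    # Hypothetical sketch of a library shipping an optional FPGA acceleration module.
    # None of these names correspond to a real driver or vendor API.
    import zlib

    def load_fpga_module(bitstream_name):
        """Pretend to program an attached FPGA with a library-shipped bitstream.
        A real implementation would talk to a vendor driver; here we just report
        that no FPGA is present, so the software fallback gets exercised."""
        return None  # no accelerator available in this sketch

    def compress(data):
        accel = load_fpga_module("libfoo_deflate.bit")  # hypothetical bitstream name
        if accel is not None:
            return accel.compress(data)   # offload to the FPGA fabric
        return zlib.compress(data)        # fall back to the usual SIMD/software path

    print(len(compress(b"hello " * 1000)))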
The problem is that these kind of applications suck for multi-tenant clouds which FPGAs turn out to be poorly suited for (high costs to switch out programs) and are too expensive for the consumer. So the applications are quite niche and limited to traditional use-cases of prototyping ASIC designs rather than actual algorithm accelerators which benefit more from dedicated circuitry/instructions in terms of adoption.
As (very much) a layman I've hoped to see something like dynamic hardware acceleration. E.g., for a game that has advanced hair simulation, or 3D sound simulation: reconfigure the FPGA and offload the calculations. Maybe even more pedestrian things like FPGA-driven bullet trajectory physics might be implemented at the game engine level.
More optimistically, something along the lines of hands-off offloading, where if the scheduler sees enough of the same type of calculation (e.g., sparse matrix multiplication) it can reconfigure the FPGA and offload.
I think the problem is this misunderstands what FPGAs are good at. They're actually very bad (or at least painfully slow) at calculations like your examples. GPUs are good at that.
FPGAs excel in parallelizing very simple operations. One example where FPGAs are good is where you might want to look for a specific string in a network packet. Because FPGAs are electronic circuits you can replicate the "match a byte" logic thousands of times and have those comparisons all run in parallel and the results combined with AND & OR gates into a yes/no decision. (I think the HFT crowd do this sort of thing to preclassify network packets before forwarding candidate packets up to software layers to do the full decision making).
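A rough software model of that parallel match-a-byte idea (the function name is made up; on the FPGA all the per-offset comparators run in the same clock cycle rather than in a loop):

    def packet_matches(packet, pattern):
        """Software model of an FPGA-style parallel pattern matcher.
        In hardware, one small 'compare pattern at offset i' circuit is replicated
        for every offset and the per-offset results are OR-ed together in a tree
        of gates, so the whole check finishes in a fixed number of clock cycles.
        In software we can only iterate, but the structure is the same."""
        n, m = len(packet), len(pattern)
        # One "comparator" per possible offset; all would run concurrently in hardware.
        per_offset_hits = [packet[i:i + m] == pattern for i in range(n - m + 1)]
        # OR-reduction tree in hardware; any() in software.
        return any(per_offset_hits)

    # Example: pre-classify a packet before handing it to slower software layers.
    print(packet_matches(b"GET /quote HTTP/1.1", b"/quote"))  # True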
FPGAs excel at heterogeneous tasks that are branching-heavy but only use small amounts of local memory. You can call that simple, but those are the bane of CPUs and GPUs.
The moment you are memory-heavy the FPGA loses its edge, because CPUs and GPUs are designed around hiding memory latency really well and they have higher memory bandwidths overall.
> FPGAs excel in parallelizing very simple operations.
I don't think that's a good characterization. FPGAs are good at whatever they end up being programmed for. And in the final analysis, everything a chip does is broken down into simple operations.
The FPGA selling point has always been perf/W and efficiency with regard to a "set of tasks". An ASIC will always be faster for a specific task, and a CPU will always be faster on average at everything. However, when considering, say, "compression" or "checksum" as a general class of algorithm, an FPGA with a set of predefined configurations could be better and cheaper.
Good at what they're being programmed for is a bit of a tautology.
FPGAs being selected for performance per watt is only a fairly recent phenomenon, when they were deployed on a semi-large scale as password crackers/cryptocurrency miners.
Their real strength is ultimately real time processing (DSP or networking), with reconfigurability often quite valuable for networking applications. For DSP applications it's usually because a MOQ of custom silicon can't be justified.
I was thinking it would be a good fit for crypto. New algorithms could be implemented with better-than-software performance, constant-time algorithms could be ensured etc.
I honestly have more hope for cheap Efinix FPGAs driving one of those 5 Gbps FTDI chips to get 400 MB/s at a price that doesn't cost an arm and a leg (a custom PCB design for less than 100€).
I don't know if this is the common perception and I'm definitely a bit biased, but it seems like a bad sign for Intel. Without a paradigm shift I don't foresee anything improving for them. Their sales team must be insane though. I feel a bit "good riddance" about it too and wish ARM or RISC would become the standard for gaming and productivity (read: rendering, compiling, not Microsoft Word) PCs.
Maybe I'm too stuck in my own sector, but I haven't seen an Altera FPGA in the wild in forever. All the Altera FPGAs I've seen are over 10 years old, and anything newer is a Xilinx FPGA. Granted, my field has moved purely to MPSoCs, but it's crazy to me that I've never seen an Altera FPGA even in the conversation.
Counterpoint to this: we use them almost exclusively. Their Agilex 7 and Stratix 10 devices make up most of the compute on our acceleration cards. The fact that they have market-ready CXL IP when no one else does has been a recent win for us. We work closely with Microsoft, and they mainly use Intel/Altera too.
The first thing this reminded me of was when Intel got rid of StrongARM/XScale because they didn't think it would amount to much in the long run. Hopefully they don't regret this particular spinoff in the future.
I take exception to the usage of the word "spinoff". Intel is selling a portion of Altera. If this was a true spinoff, Intel shareholders would get shares in the new entity.
I know some Altera folks personally. From my understanding this is happening - new shares and all. They'll stay on Intel stock until the new entity is finalised (some time next year). Apparently the plan is to go private and do a new IPO 3 years from now, like they did with Mobileye.
Right. In a true spinoff, there would be an event in which all Intel shareholders of record in the ex-date would receive a tax-free distribution of Y shares of the new entity for each Intel share owned.
Large acquisitions rarely seem to pan out well in the tech sector, especially when big companies try to acquire their way into an adjacent market.
Also, some companies seem to be significantly worse than others; MSFT/Microsoft/Dell come to mind. My suspicion is that those types of acquisitions are mainly driven by C-level/executive employees as a way to hide the real struggles of the company.
Is there a report analyzing big tech acquisitions, say for the last 30 years, and their economic impact? That would be an interesting read.
Maybe it's time for a new form of regulation around acquisitions
Maybe they will finally make VHDL-2008 part of the Lite edition and stop kneecapping it. Maybe create some updated parts that better fit the mid-price range instead of solely focusing on mega Agilex parts.
Their competitors don't seem to do this sort of thing.
Look up Agilex 5 and 3. Announced some time last year/early this year. Low to mid range devices. Early access customers are working on designs for 5 now.
Yeah, it's interesting to compare Nvidia's strategy to Intel's. I'm sure there are quite a few Nvidia projects that have been cancelled or even acquisitions liquidated, but they all seem to be small. Every significant part of Nvidia that I can remember is something they are committed to, sometimes over multiple decades, even when the market is not there and sales are near zero. This seems to come from actually having a consistent, stable long-term vision and buy-in all the way up to Jensen Huang serving as a driving force behind acquisitions and projects, unlike Intel where the driving force seems to be bean counting and market domination related.
To give credit to Pat Gelsinger, his stated goal is to shed non-essential units and refocus on the fundamentals. But I'm not sure how well that's going.
Part of this I'm sure is management style and culture (Nvidia famously has a very strong get-it-done-at-all-costs culture for example) but I think the reason these ventures were persisted is because the founder (Jensen Huang) is still CEO and controls the majority of the company.
Intel now, without Andy Grove and the founders steering the culture[0], seems to have no one able to steer the ship past quarterly results anymore.
AMD suffered decades of mishaps and near bankruptcy before finally finding competent leadership in recent years, which turned the company around.
[0]: After all, one core virtue that Andy Grove had was "only the paranoid survive", which in context, was all about never getting too comfortable with your market position. Granted, they also engaged in many illegal practices as essentially a monopoly in the 90s. I guess they lost his second virtue of Competitive Mindset which as Grove saw it viewed competition as the key driver of innovation and progress. This was out the window by then, though I don't know if he was effectively leading the company by 96. He stepped down in 98, but he had health problems before that - eventually diagnosed with prostate cancer. This is all to say that Intel's culture is a mixed bag of unhealthy paranoia and being used to being #1 in their categories, which is all incoming executives and major shareholders see when they think Intel. There's no-one to faithfully steer the ship back to clear waters it seems
I think you might be attributing too much agency to the CEO and management of both companies. IMO there are a lot of external factors too.
It might be the case that Andy Grove's "only the paranoid survive" was really a factor. Or simply the fact that under Andy, the tech sector was really different: the x86 market was the dominant and growing compute platform, and they pretty much had a strong set of patents around x86. The initial military and government contracts gave them enough runway money to build a moat around having a fab.
Also, let's not forget that despite all those advantages, Intel was also fined several times, for more than a billion each time, for very anti-competitive practices. It's unclear to me that Intel was ever the great company they think they are.
Fast forward to now: x86-64 is no longer growing as fast, having a fab is no longer a moat with TSMC around (it might even be seen as a liability), the Wintel monopoly doesn't have the strength it had, etc. The game is just much harder now.
For Nvidia, they had a triple-boom market: first PC gaming, then crypto, now AI... Hard to distinguish CEO performance from market performance.
Companies tend to take their culture from the founders establishing it with early employees, and it grows out from there - so it does say something.
Now, the market moving to mobile ARM chipsets definitely damaged Intel, though Intel had years to come up with a viable alternative and ways to re-capitalize its fab infrastructure, etc. They let their own moat run dry. Theories abound as to why, and since I'm not CEO nor do I have access to Intel's internal records, I can only speculate. However, from the available information I've been able to read, it seems that they became more risk averse as time went on, in part because there is immense pressure for any new thing to quickly produce margins similar to those of their main CPU business. Which of course is a conundrum, as barring any major breakthrough that puts them well above any alternative, these sorts of efforts to diversify into strong businesses often take a lot of time.
Compare that to Nvidia: market dynamics are what is allowing them to capitalize on two booms that ultimately leveraged GPUs for compute over traditional processors, but the foundation for all this was laid around 2007 (CUDA first came out then), and Nvidia had to both improve CUDA over time and invest in research around parallelized computing. If it were not for their long-horizon investment in this, they would not have been primed to take advantage of both crypto and AI.
I'd argue that gaming, while no doubt lucrative, is comparatively modest next to what they are raking in from crypto and AI, all while still investing in integrated CPU/GPU chipsets[0] and CPUs tailored to massive data center requirements around AI specifically[1], none of which they would be able to really take advantage of if not for their long-term investment in things like CUDA and parallelized computing. I suspect quite strongly that without the founder at the helm, there would have been little internal cover to keep investing in these things at the rate Nvidia was prior to the boom times.
Yep. Nvidia spent 20 years maneuvering and creating the trends and markets they are now benefiting from. Anyone saying they've just been benefiting from a boom like it was some sort of complete externality has not been paying attention. If that was the case, ATI would still be an independent company, and Nvidia would have overextended themselves trying to chase the crypto mining market. No, it took 20 years of courting academics and iterating on technologies capable of replacing pre-CUDA "supercomputers" to get Nvidia and the AI industry to where it is today. 20 years of cajoling TSMC to take the next leap of faith on scaling processes for them. 20 years of maneuvering to avoid overextending themselves while also avoiding platforms and integrators trying to suck margin away from them. And if you look at their stock price over those 20 years, there were times when the market took a very dim view of these efforts. Times that many other CEOs would have thrown in the towel and shut down projects.
Without CUDA and Nvidia silicon, I'm not convinced the AI boom would have happened any time within this decade.
FPGAs have never made sense. They're way too expensive to use in volume. There's no practical use case for "cool, I can reprogram the chip in the field to implement different functionality". Nobody has figured out how to usefully integrate them with a CPU to make a low-volume SoC. CPUs became so fast that most applications don't need custom hardware. Regular gate arrays are cheaper and faster above minimal volume.
They seem to only have been useful for prototyping and military applications (low volume and infinite budget).
I see them used in pro/prosumer audio equipment, synthesizers, and effects, which is relatively low volume and medium-to-high budget. FPGAs (and CPLDs, µC+AFE, etc) are great for these applications because they have great capabilities you might otherwise need a pile of discrete components or a custom chip for, but it doesn’t make sense to design fully custom silicon if you’re only ever going to sell about 50-500 of something.
So sure, prototyping and military, but there are other uses as well. But none of them are super high-volume because once you’re selling millions of something you should be designing your own chips.
Consumer applications and FPGAs are an oxymoron. FPGAs are used in applications requiring special interfaces, special computing units or other custom requirements. If there is enough demand, SoCs are developed for these applications, but this is only useful in mid- to high-volume production.
Areas like the ones you gave and many more are making heavy use of FPGAs. I work in medical for example. We are using custom designed chips for special detection purposes. But when it comes to data processing and interfacing with computers, we use FPGAs.
This. I work at a research particle accelerator. All the hard real-time, super low-level signal processing and process control stuff is done on FPGAs; there is no way around it.
You somehow managed to write a post where every single sentence is absolutely wrong. FPGAs clearly make sense for prototyping ASICs. That alone makes FPGAs make sense even if it is a tiny niche market. After all, the budget of ASIC companies is big. A few hundred FPGAs for developers are a drop in the bucket compared to the cost of an ASIC.
Too expensive in volume only applies to Xilinx and Altera, and with every node shrink, the number of designs that fit on an FPGA grows while the non-recurring development costs for ASICs grow. Because of this, the maximum volume at which FPGAs remain cost-competitive keeps growing with every generation.
Smart NICs make extensive use of reconfiguration because things such as protocols are not set in stone. They can change all the time. It is also possible to build designs that make extensive use of partial reconfiguration.
MPSoCs have been a thing for a long time. If you want to get into those, Google "Kria 260" and you will be pleasantly surprised.
When I take a look at Efinix FPGAs, those are designed specifically for vision applications and massive amounts of I/O. A CPU would struggle with multiple camera streams and consume too much power.
Yeah, OK, but a 100k-LUT FPGA chip from Efinix costs what, 25€? You're going to need a really high volume before an ASIC gives an overall cost saving. 40,000 FPGAs is only a million euros. The masks for a 22nm ASIC cost 1.5 million dollars, without the rest of the development costs.
And finally, your last sentence contradicts your first. Next time, stick to one story.
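To make the break-even arithmetic explicit, here is a minimal sketch using the ballpark figures from this thread; the per-unit ASIC cost is a pure assumption added for illustration:

    # Break-even sketch using the ballpark figures above (the ASIC unit cost is assumed).
    fpga_unit_cost = 25.0          # EUR per 100k-LUT FPGA (rough figure from the comment)
    asic_mask_cost = 1_500_000.0   # cost of the 22nm mask set alone, excluding other NRE
    asic_unit_cost = 5.0           # per packaged ASIC (assumption, for illustration)

    # Volume at which the ASIC's mask cost is paid back by the cheaper unit price:
    break_even = asic_mask_cost / (fpga_unit_cost - asic_unit_cost)
    print(f"Break-even volume: {break_even:,.0f} units")  # ~75,000 units with these inputs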