Pat seemed to understand how critical process leadership is these days. Hence his push to invest in IFS, plus the effort to win government funding to sustain it.
In short, a bad or subpar chip design/architecture can be masked by fabricating the chip on a leading-edge node, but not the other way around. Hence everyone is vying for capacity on TSMC's newest nodes - especially Apple, which tries to secure all of that capacity for itself.
Certainly feels like preempting news that Intel 18A is delayed.
Restoring Intel's foundry lead starting with 18A was central to Pat's vision, and he essentially staked his job on it. 18A is supposed to enter production next year, but recent rumors are that it's broken.
The original "5 Nodes in 4 Years" roadmap released in mid-2021 had 18A entering production in 2H 2024. So it's already "delayed". The updated roadmap has it coming in Q3 2025, but I don't think anyone ever believed that. This comes after 20A was canceled, Intel 4 is only used for the Compute Tile in Meteor Lake, Intel 3 only ever made it into a couple of server chips, and Intel 7 was just 10nm renamed.
I have next to zero knowledge of semiconductor fabrication, but “Continued Momentum” does sound like the kind of corporate PR-speak that means “people haven't heard from us in a while and there's not much to show”.
I also would never have realized the 20A process was canceled were it not for your comment, since this press release has one of the most generous euphemisms I've ever heard for canceling a project:
“One of the benefits of our early success on Intel 18A is that it enables us to shift engineering resources from Intel 20A earlier than expected as we near completion of our five-nodes-in-four-years plan.”
The first iteration of Intel 10nm was simply broken -- you had Ice Lake mobile CPUs in 2019, yes, but desktop and server processors took another two years to be released. In 2012 Intel said they would ship 14nm in 2013 and 10nm in 2015. Not only did they fail to deliver 10nm Intel CPUs, they failed Nokia's server division too, nearly killing it off in 2018, three years after their initial target. No one in the industry forgot that, so it's hardly a surprise they have such trouble getting customers now.
And despite this total failure they spent many tens of billions on stock buybacks https://ycharts.com/companies/INTC/stock_buyback -- no less than ten billion in 2014, and over forty billion in 2018-2021. That's an awful, awful lot of money to waste.
Most of the stock buybacks happened under Bob Swan, though. Krzanich dug Intel's grave, but it was Swan who kicked the company into it by wasting forty billion. (No wonder he landed at a16z.)
Yes, yes, yes, of course, the infamous CPU released just so Intel middle managers could get their bonuses. GPU disabled, CPU gimped, the whole thing barely worked at all. Let's call it the 0th iteration of 10nm; it was not real, there was like one laptop in China, the Lenovo IdeaPad 330-15ICN, which was a paper launch.
Agree with OP that Intel was probably too deep into its downward spiral. While it seems Pat tried to make changes, including expanding into GPUs, it either wasn't enough or it was too much for the Intel board.
Splitting Intel is necessary but probably infeasible at this point in the game. The simple fact is that Intel Foundry Services has nothing to offer against the likes of TSMC and Samsung - perhaps only cheaper prices, and even then it's unproven at fabbing any non-Intel chips. So the only way to keep it afloat is by continuing to fab Intel's own designs until the 18A node becomes viable/ready.
That means either he knew and allowed it to happen, which is bad, or he didn't know and allowed the GPU division to squander the resources, which is even worse. Either way, it was an adventure Intel couldn't afford.
Disagree on "failed on GPU" as it depends on the goal.
Sure, Intel GPUs are inferior to both Nvidia's and AMD's flagship offerings, but they're competitive on price-to-performance. I'd argue that for a 1st-gen product it was quite successful at opening up the market and enabling cross-selling opportunities with Intel's CPUs.
That all said, I suspect the original intent was to fabricate the GPUs on IFS instead of TSMC in order to soak up idle capacity. But plans changed along the way (likely for performance reasons), which added to IFS's poor perception.
The issue with the GPUs is that their transistor-to-performance ratio is poor. The A770 has about as many transistors as a 3070 Ti but only performs as well as a 3060 (a 3060 Ti on a good day).
On top of that, they are outsourcing production of these chips to TSMC on nearly cutting-edge processes (Battlemage is being announced tomorrow and will use either TSMC 5 or 4), and the dies are pretty large. That means they are paying for dies the size of a 3080's and retailing them at 3060 prices.
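To make the die-cost point concrete, here's a back-of-the-envelope sketch in C using the standard dies-per-wafer approximation. The wafer price, yield, and die areas are placeholder assumptions for illustration, not actual TSMC, Nvidia, or Intel figures:

    /* Back-of-the-envelope: why die area drives cost so hard.
       All numbers below are illustrative assumptions, not real foundry pricing. */
    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979

    /* Classic dies-per-wafer approximation for a circular wafer. */
    static double dies_per_wafer(double wafer_diameter_mm, double die_area_mm2) {
        double r = wafer_diameter_mm / 2.0;
        return PI * r * r / die_area_mm2
             - PI * wafer_diameter_mm / sqrt(2.0 * die_area_mm2);
    }

    int main(void) {
        const double wafer_cost = 15000.0; /* assumed $ per leading-edge wafer */
        const double yield = 0.85;         /* assumed functional yield */
        const double small_die = 276.0;    /* assumed ~3060-class die area, mm^2 */
        const double large_die = 406.0;    /* assumed ~A770-class die area, mm^2 */

        double dpw_small = dies_per_wafer(300.0, small_die);
        double dpw_large = dies_per_wafer(300.0, large_die);

        printf("small die: ~%.0f dies/wafer, ~$%.0f per good die\n",
               dpw_small, wafer_cost / (dpw_small * yield));
        printf("large die: ~%.0f dies/wafer, ~$%.0f per good die\n",
               dpw_large, wafer_cost / (dpw_large * yield));
        return 0;
    }

With these made-up but plausible inputs, the larger die yields roughly a third fewer candidates per wafer and costs on the order of 50% more per good chip, before even accounting for defect density scaling with area - and it still has to be sold at the smaller competitor's price point.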
It has taken Nvidia decades to figure out how to use transistors as efficiently as it does. It was unlikely for Intel to come close with their first discrete GPU in decades.
That said, it is possible that better drivers would increase A770 performance, although I suspect that reaching parity with the RTX 3070 Ti would be a fantasy. The RTX 3070 Ti has both more compute and more memory bandwidth. The only advantage the A770 has on its side is triple the L2 cache.
To make matters worse for Intel, I am told that games tend to use vendor-specific extensions to improve shader performance, and those extensions are of course not going to be available to Intel GPUs running the same game. I am under the impression that this is one of the reasons why DXVK cannot outperform the native Direct3D stack on Nvidia GPUs. The situation is basically what Intel did to AMD with its compiler and MKL, in reverse.
Specifically, information on these extensions is here:
Also, I vaguely recall that Doom Eternal used some AMD extension that was later incorporated into Vulkan 1.1, but unless id Software updated the game, only AMD GPUs will be using that. I remember seeing AMD advertising the extension years ago, but I cannot find a reference when I search for it now. I believe the DXVK developers would know what it is if asked, as they are the ones who told me about it (as per my recollection).
Anyway, Intel entered the market with the cards stacked against it because of these extensions. On the bright side, it is possible for Intel to level the playing field by implementing the Vulkan extensions that its competitors use to get an edge, but that will not help it in Direct3D performance. I am not sure if it is possible for Intel to implement those, as they are tied much more closely to its competitors' drivers. That said, this is far from my area of expertise.
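For a sense of how that gating works on the application side, here is a minimal C sketch (my own illustration, not from any particular engine) that checks whether the device advertises a vendor extension before enabling a vendor-tuned path. VK_NV_mesh_shader is just a stand-in example of a vendor extension, and the VkPhysicalDevice is assumed to have been selected already:

    /* Illustrative only: how an engine might gate a vendor-specific fast path. */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include <vulkan/vulkan.h>

    /* Returns true if the physical device advertises the named extension. */
    static bool device_has_extension(VkPhysicalDevice phys, const char *name) {
        uint32_t count = 0;
        vkEnumerateDeviceExtensionProperties(phys, NULL, &count, NULL);
        if (count == 0)
            return false;

        VkExtensionProperties *props = malloc(count * sizeof(*props));
        vkEnumerateDeviceExtensionProperties(phys, NULL, &count, props);

        bool found = false;
        for (uint32_t i = 0; i < count; i++) {
            if (strcmp(props[i].extensionName, name) == 0) {
                found = true;
                break;
            }
        }
        free(props);
        return found;
    }

    /* Hardware that exposes the vendor extension gets the tuned shaders;
       everything else falls back to the generic path. */
    void pick_shader_path(VkPhysicalDevice phys) {
        if (device_has_extension(phys, "VK_NV_mesh_shader")) {
            /* enable vendor-tuned rendering path */
        } else {
            /* portable fallback path */
        }
    }

The point is just that the branch is per vendor: a game shipped before Arc existed will never take the tuned path on Intel hardware, no matter how good the driver gets.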
I will never understand this line of reasoning. Why would anyone expect an initial offering to match or best similar offerings from the industry leader? Isn't it understood that leadership requires several revisions to get right?
Oh, the poor multi-billion-dollar company. We should buy its poor-value product just to make it feel better.
Intel had money and decades of integrated GPU experience. Any new entrant to the market must justify the value to the buyer. Intel didn't. It could have sold them cheap to try to carve out a position in the market, though I think that would have been a poor strategy (Intel didn't have the financials to make it work).
I think you misunderstood me. I wasn't calling for people to purchase a sub-par product, but rather for management and investors to be less fickle and ADHD when it comes to engineering efforts that one should reasonably expect to take several product cycles.
Honestly, even with their iGPU experience, Arc was pretty impressive for their first dGPU since the i740. The pace of their driver improvement and their Linux support have both been impressive. They've offered some niche features like https://en.wikipedia.org/wiki/Intel_Graphics_Technology#Grap... which Nvidia limits to their professional series.
I don't care if they have to do the development at a loss for half a dozen cycles, having a quality GPU is a requirement for any top-tier chip supplier these days. They should bite the bullet, attempt to recoup what they can in sales, but keep iterating toward larger wins.
I'm still upset with them for cancelling the Larrabee uarch, as I think it would be ideal for many ML workloads. Who needs CUDA when it's just a few thousand x86 threads? I'm sure it looked unfavorable on some balance sheet, but it enabled unique workloads.
> I don't care if they have to do the development at a loss for half a dozen cycles,
And here is the problem. You are discussing a dream scenario with unlimited money. This thread is about how the CEO of Intel has retired/been kicked out (far more likely) over business failures.
In the real world, Intel was in bad shape (see margins, stock price, etc.) and couldn't afford to squander resources. Intel couldn't commit, so it should have adjusted its strategy. It didn't. Money was wasted that Intel couldn't afford to waste.
Well, seeing as GPUs are important across all client segments, in workstation and datacenter, in consoles where AMD has been dominant, and in emerging markets like automotive self-driving, not having one means exiting the industry in a different way.
I brought up Intel's insane chiplet [non-]strategy elsewhere in the thread as an example where it's clear to me that Intel screwed up. AMD made one chiplet and binned it across their entire product spectrum. Intel made dozens of chiplets, sometimes mirror images of otherwise identical chiplets, which provides none of the yield and binning benefits of AMD's strategy. Having a GPU in house is a no-brainer, whatever the cost. Many other decisions going on at Intel were not. I don't know of another chip manufacturer that makes as many unique dies as Intel, or has as many SKUs. A dGPU is only two or three of those and opens up worlds of possibility across the product line.
Pulling out of a vital long-term project because it can't deliver a short-term return would be a bigger waste. Unless you think Intel is already doomed and the CEO should be pursuing managed decline?
It's worth mentioning that IIRC the team responsible for the Arc GPU drivers was located in Russia, and after the invasion of Ukraine they had to deal with relocating the entire team to the EU and lost several engineers in the process. The drivers were the primary reason for the absolute failure of Arc.
Intel deserves a lot of blame but they also got hit by some really shit circumstances outside of their control.
He was CEO. Chief executive officer. It's literally his job to execute, i.e. fix that stuff / ensure it doesn't happen. Get them out of Russia, poach new devs, have a backup team, delay the product (i.e. no HVM until good drivers are in sight). That's literally his job.
This only reinforces my previous point. He had good ideas, but couldn't execute.
They chose to outsource the development of their core products to a country like Russia to save costs. How was that outside of their control? It's not like it was the most stable or reliable country to do business in even before 2022...
Individual Russian software developers might be reliable, but that's hardly the point. They should've just moved them to the US, or even Germany or somewhere like that, if they were serious about entering the GPU market, though...
e.g. there are plenty of talented engineers in China as well, but it would be severely idiotic for any Western company to move their core R&D there. The same applied to Russia.
I doubt they began working on Arc/Xe drivers back in 2000. If the entire driver team being in Russia (i.e. Intel trying to save money) was truly the main reason why Arc failed at launch, they really only have themselves to blame...
Not just in hindsight -- but by 2011 it was clear to anyone paying attention where Russia was heading (if not to war, then certainly to a long-term dictatorship). Anyone who failed to see the signs, or chose to intellectualize past them - did so willingly.
I think if you're the CEO of Intel, some foresight might be in order. Or else the ability to find a solution fast when things turn unpredictably sour. What did he get a $16M salary for?
It had been obvious for quite a while even before 2022. There were the Chechen wars, and Georgia in 2008, and Crimea in 2014. All the journalists and opposition politicians killed over the years, and the constant concentration of power in the hands of Putin. The Ukraine invasion was difficult to predict, but Russia was a dangerous place long before that. It’s a CEO’s job to have a strategic vision, there must have been contingency plans.
Wars involving the US in the 21st century:
War in Afghanistan (2001–2021)
US intervention in Yemen (2002–present)
Iraq War (2003–2011)
US intervention in the War in North-West Pakistan (2004–2018)
Second US Intervention in the Somali Civil War (2007–present)
Operation Ocean Shield (2009–2016)
Intervention in Libya (2011)
Operation Observant Compass (2011–2017)
US military intervention in Niger (2013–2024)
US-led intervention in Iraq (2014–2021)
US intervention in the Syrian civil war (2014–present)
US intervention in Libya (2015–2019)
Operation Prosperity Guardian (2023–present)
Wars involving Russia in the 21st century:
Second Chechen War (1999–2009)
Russo-Georgian War (2008)
Russo-Ukrainian War (2014–present)
Russian military intervention in the Syrian Civil War (2015–present)
Central African Republic Civil War (2018–present)
Mali War (2021–present)
Jihadist insurgency in Burkina Faso (2024–present)
I don’t know what you are trying to say. If you have a point to make, at least be honest about it.
Also, I am not American and not an unconditional supporter of their foreign policy. And considering the trajectory of American politics, it is obvious that any foreign multinational doing development in the US should have contingency plans.
My point was that great powers are always in some kind of military conflict, so it's not really a deciding factor when choosing where to build an R&D center.
Putin's concentration of power has been alarming, but only since around 2012, to be honest. It was relatively stable between 2000 and 2012 in general (minus isolated cases of mysterious deaths and imprisonments). Russia was business-friendly back then, open to foreign investors, and most of Putin's authoritarian laws were yet to be issued. Most of the conflicts Russia was involved in were viewed as local conflicts in border areas (Chechen separatism, disputed Georgian territories, the frozen East Ukrainian conflict, etc.). Only in 2022 did the Ukraine war escalate to its current scale, and few people really saw it coming (see: the thousands of European/American businesses operating in Russia by 2022 without any issue).
So I kind of see why Intel didn't do much about it until 2022. In fact, they even built a second R&D center in 2020... (20 years after the first one).
The wars or military conflicts themselves are kind of tangential. It's the geopolitical risks that come along with them.
i.e. if you are an American/European company doing business in Russia, you must account for the risk of suddenly being cut off. The sanctions after 2014 were a clear signal, and Intel had years to take that into account.
> So I kind of see why Intel didn't do much about it until 2022.
I'm pretty sure that the consensus (based on pretty objective evidence) is that Intel was run by incompetent hacks prior to 2021 (and probably still is).
> thousands of European/American businesses operating in Russia by 2022
Selling your products there or having local manufacturing is not quite the same as outsourcing your R&D there, for obvious reasons...
Weren't they pretty good (price/performance) after Intel fixed the drivers during the first year or so after release? The real failure was taking so long to ship the next gen...
Are there new hardware features announced for the 16 Pro? Apple would definitely love to add an exclusive feature, but it seems like slim pickings. The "Fusion" or "tetraprism" camera is the only other one that comes to mind.
Fundamentally, Apple wants to leverage its supply chain to maximize shared parts between the Pro and base iPhones. The lack of hardware innovation makes it hard to create product differentiation.
Just as Standard Oil used its position to force railroads and other distributors to carry only its oil and not its competitors', it's the same case here with Google.
Arguably all these antitrust cases, while better late than never, are at least a decade late. If they had been filed in the early 2010s, there could possibly have been viable competitors to Google, Apple, Amazon, and even Meta. But now these tech titans all have unassailable positions.
> Apple probably had much more freedom because of their size and power and I don't really understand why it is not possible to add a custom search engine. There is no advantage for Apple to not allow this.
I think you're giving Apple too much credit. They are too myopic and too focused on optimizing their current financials, especially under Tim Cook. To build a new search engine would mean 1) tossing away the $20B Google offers, and 2) spending potentially billions to build or acquire something viable.
That would be unacceptable to Apple's institutional shareholders. Akin to what Meta tried to do with Reality Labs.
If this happens, then it suggests that Amazon expects fulfilling AI-related queries to be significantly more taxing than the current state of things. Also that the cost can't be offset enough to be included in the Prime subscription.
As an end user this sounds appealing, since it lets me opt in or out.
I may be too skeptical, but this seems 1) grossly overpriced and 2) to be overhyping AI features.
Easy to forget that Microsoft has had 5 years since the release of the Qualcomm SQ1-powered Surface Pro X back in 2019. Sure, these Nuvia-built cores are much superior, but Windows on ARM remains a WIP at best.