
This is a smart move, reflecting Intel's own, with an eye to the datacenter where the FPGA is seen as having a bright future.


Has Intel done much with Altera? I haven’t heard much of anything come out of that partnership. (Then again, I’m not plugged in to this stuff.)


I don't use FPGAs (tooling is too poor, languages are bad, up-front costs are high) but I hang out on FPGA forums, and the overwhelming consensus on the acquisition has been negative. Chipmakers, and especially high-performance chipmakers, have always focused on high-volume and/or high-margin customers, but the Intel acquisition made Altera worse in that regard. Their sales and support teams were integrated into Intel and now you can't get any support from them whatsoever, even if you spend $MM/yr. You have to funnel even basic questions and bug reports through a distributor contact to have any chance. I forget the specifics, but they also made the tooling even more restrictive/expensive. The only new products out of it are a few Xeons with a built-in FPGA ($$$$$), good for HFT guys I guess.


Can you expand on why Intel’s move was smart (what did the Altera acquisition do for them) and why FPGAs have a bright future in the datacenter?

From what little I’ve seen in this space, FPGAs have not made large inroads in the ML space or datacenters in general. This seems partly due to their inefficiency compared to ASICs, and partly due to their software.

Unless AMD is planning something really ambitious (e.g., true software-based hardware reconfiguration that doesn’t require HDL knowledge) and are confident they’ve figured it out, I’m not sure what they hope to achieve here.


> true software-based hardware reconfiguration that doesn’t require HDL knowledge

This has been a holy grail for at least two decades. It's very much like asking for a programming language that can be used by non-programmers.
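
For context, high-level synthesis (HLS) is the closest shipping approximation: you write restricted C++ and the tool generates hardware. A minimal sketch in the style of Xilinx's Vitis HLS (the pragmas are real Vitis HLS directives; the kernel itself is a made-up example):

    // Hypothetical vector-add kernel, Vitis-HLS-style C++.
    // The tool compiles this to RTL; pragmas guide the generated hardware.
    extern "C" void vadd(const int *a, const int *b, int *out, int n) {
    #pragma HLS INTERFACE m_axi port=a   bundle=gmem
    #pragma HLS INTERFACE m_axi port=b   bundle=gmem
    #pragma HLS INTERFACE m_axi port=out bundle=gmem
        for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1  // aim for one loop iteration per clock
            out[i] = a[i] + b[i];
        }
    }

Even in this "software" flow you still end up reasoning about clock cycles, interfaces, and pipelining, which is why "no HDL knowledge required" keeps slipping out of reach.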


> From what little I’ve seen in this space, FPGAs have not made large inroads in the ML space or datacenters in general.

I don't know that they have actually made large inroads into those spaces, but Xilinx is indeed pushing hard for that. For years now.


I'd love to know why Intel chose to buy Altera instead of the industry leader Xilinx.


Both Altera and Xilinx were on TSMC. Altera wanted an edge over Xilinx, and at the time Intel was committing (on paper) to its Custom Foundry, so Altera switched and bet on Intel Custom Foundry. Nothing ever worked out with Intel Custom Foundry because Intel was not used to working with others on its foundry process. Intel thought the problem was Altera not being part of the company, and it had too much cash, so it might as well buy them for better synergy. And it did help: getting internal access seems to have (on paper or in slides) sped up product launches and the roadmap, until they hit the Intel 10nm fiasco.


I believe Altera was already manufacturing chips on an Intel process prior to the acquisition, while Xilinx was using TSMC?


Yes, I believe they switched off of TSMC in 2013 or so.


Altera Stratix V FPGAs actually had more market share than Virtex 7s. They were better chips. That said, the production delays around Arria 10 and Stratix 10 and the time lag caused by the Intel acquisition totally killed their market position. The only reasons to use Intel FPGAs now are (1) 64-bit floating point support or (2) if your Intel salesman gives you a really good deal.


It may have been Xilinx not wanting to get into bed with Intel. Xilinx may have wanted a degree of technical independence or freedom to carry out their own strategy that was not forthcoming from Intel.


Wonder what AMD told them.


"We will use TSMC."


Probably worth looking at where the different FPGA brands were being fabbed. Xilinx is a better fit for AMD.


> This is a smart move, reflecting Intel's own, with an eye to the datacenter where the FPGA is seen as having a bright future.

What in the world do FPGAs have to do in a datacentre?


Microsoft have been using them in Bing and other projects for a while: https://www.microsoft.com/en-us/research/project/project-cat...


Word on the street is that this was a vanity project of a VP, and never resulted in performance levels that couldn't be achieved with a little bit of focused optimization of boring old CPU work (threading + SIMD).
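
For reference, the kind of "boring" CPU optimization meant here looks something like this (a hedged sketch with AVX2/FMA intrinsics; the function is illustrative, not anything from the Catapult work):

    #include <immintrin.h>
    #include <cstddef>

    // Illustrative AVX2 dot product: the sort of SIMD rewrite that,
    // combined with threading, often closes much of the gap before
    // an FPGA is worth considering. Assumes AVX2/FMA support and
    // that n is a multiple of 8.
    float dot(const float *a, const float *b, std::size_t n) {
        __m256 acc = _mm256_setzero_ps();
        for (std::size_t i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            acc = _mm256_fmadd_ps(va, vb, acc);  // acc += va * vb
        }
        // Horizontal sum of the 8 lanes.
        __m128 lo = _mm256_castps256_ps128(acc);
        __m128 hi = _mm256_extractf128_ps(acc, 1);
        __m128 s  = _mm_add_ps(lo, hi);
        s = _mm_hadd_ps(s, s);
        s = _mm_hadd_ps(s, s);
        return _mm_cvtss_f32(s);
    }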


There's been a trend for a while now of moving more compute capability into NICs, and it has gained a new dimension with cloud providers. For example, with their "Nitro" system, AWS can more or less run their hypervisor entirely on the NIC and completely offload network and storage virtualization from their servers. This development is likely to continue. FPGAs are going to play a significant part in it because they allow customers to reconfigure the hardware according to their needs.
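
As a toy illustration of what "reconfigure the hardware" can mean on an FPGA-based SmartNIC, here is a hedged HLS-style C++ sketch of a trivial inline packet filter (the rule, the struct, and all names are hypothetical):

    #include <cstdint>

    // Toy match-action stage: drop packets to one blocked UDP port,
    // pass everything else. On an FPGA NIC a stage like this sits
    // inline on the wire and is re-synthesized when the rules change.
    struct PacketMeta {
        uint32_t dst_ip;
        uint16_t dst_port;
        uint8_t  proto;  // 17 = UDP
    };

    bool keep_packet(const PacketMeta &m) {
    #pragma HLS PIPELINE II=1  // one filtering decision per clock
        const uint16_t kBlockedPort = 4789;  // hypothetical rule
        return !(m.proto == 17 && m.dst_port == kBlockedPort);
    }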


Aren’t they already widely used as NICs? And many places are beginning to offer them for ML workloads and such.


I don't think "widely", maybe in a few niches? "SmartNICs" are becoming a bit of a thing again, but those are mostly not FPGA-based as far as I know.


Virtual machines are very much a thing now, and virtualisation has made it into network cards reasonably well ... but pretty well nothing else.

In our future datacentre we want to say how many cores, connected to how much RAM, how much GPU resource, some NVMe, etc., and there's going to be a whole lot of very specialised switching and tunnelling going on. This needs to be as close to the cores/cache as possible and a good order of magnitude faster than our present networking, and it's probably an area with a significant pace of development, i.e. a software-defined solution would be nice.

So, a software-defined northbridge, in essence. And an FPGA is pretty much the only thing we have right now that could do the job.


Because an FPGA lets you optimize your "hardware" solution to a computing problem without the hassle of fabricating a chip of your own (although the performance with an FPGA is much lower than with a custom chip).


The "datacentre" mentioned is not necessarily an enterprise datacenter or a web app backend datacenter.

Think ML, networking and other such uses...



