Hacker News

AMD hasn't shipped its "high compute" SOMs, so there is little point in building inference around them. Using programmable logic for machine learning is largely a waste anyway, since Xilinx never shied away from sprinkling lots of "AI Engines" onto its bigger FPGAs, to the point where buying the FPGA just for the AI Engines might be worth it: hundreds of VLIW cores pack a serious punch for running numerical simulations.


AMD actually also does inference on the PL, with reasonable commercial success. Have a look at FINN.
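For context, the core trick behind FPGA inference frameworks like FINN is aggressive quantization: with weights and activations reduced to a few bits, a matrix-vector product becomes pure integer arithmetic that maps efficiently onto programmable fabric. A minimal, self-contained sketch of that idea (plain Python, not FINN's actual API, and the bit widths and scales are illustrative assumptions):

```python
# Illustrative sketch of quantized integer inference, the kind of kernel
# FINN-style tools unroll into FPGA dataflow. Not FINN's real API.

def quantize(x, bits, scale):
    """Uniform quantization of a real value to a signed integer of `bits` width."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return max(qmin, min(qmax, round(x / scale)))

def quant_matvec(weights, activations):
    """Integer-only matrix-vector product: cheap adders/multipliers in fabric."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

# Hypothetical 2-bit weights and 8-bit activations with fixed scales.
w_scale, a_scale = 0.5, 0.02
W = [[quantize(w, 2, w_scale) for w in row]
     for row in [[0.4, -0.6], [0.9, 0.1]]]   # 0.9 clamps to the 2-bit max of 1
x = [quantize(a, 8, a_scale) for a in [0.5, -0.3]]
acc = quant_matvec(W, x)                      # integer accumulators: [40, 25]
y = [v * w_scale * a_scale for v in acc]      # rescale back to real values
```

The point is that nothing here needs floating-point hardware; at 1-2 bit weights the multipliers degenerate into muxes and adders, which is why inference on the PL can be competitive.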




