Facebook is looking to hire engineers with experience in ASIC and FPGA, two custom silicon approaches that companies can gear toward specific use cases.
There's been plenty of speculation about Facebook's custom hardware ambitions, especially given the massive graph of personal data the company possesses. Facebook already has its own AI framework, Caffe2 (http://www.caffe2.com), which it deploys across its infrastructure.
FPGAs are designed to be more flexible and modular, an approach championed by Intel as a way to adapt to a fast-changing machine learning landscape. The commonly cited downside of FPGAs is that they are complex to program and modify, as well as expensive, making them less of a catch-all solution for machine learning projects.
Facebook's director of AI research tweeted about the job this morning, noting that he had earlier worked in chip design.
While the whispers about Facebook's potential hardware efforts grow louder and louder, this job listing is only a partial data point about what the company is looking for. Custom AI silicon would mostly live on the server side, though Facebook is also reportedly exploring consumer devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense for the company to look into customized hardware rather than relying on off-the-shelf components like those from Nvidia.
(The wildest rumor we've heard about Facebook's approach is that it's a diurnal system, flipping between training and inference depending on the time of day and whether people are, well, asleep in that region.)
Most of the other large players have pursued their own customized hardware. Google has deployed its TPU for its machine learning operations, while Amazon is reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could eventually displace Intel in its line of computers. Microsoft is also diving into FPGAs as a potential approach to machine learning problems.
Still, Facebook would be entering a crowded field with FPGAs and ASICs. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. There is also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic and Graphcore (and that's not even getting into the amount of activity coming out of China).
One significant problem is that machine learning is moving so quickly that a chip designed today may not stay relevant in perpetuity. A related criticism of ASICs is that they are highly specialized, and it's not yet clear how far Facebook would go with full hardware customization for its operations.
But nonetheless, this seems like more confirmation of Facebook's custom hardware ambitions, even if the company is still early in the process.
We've sent Facebook a request for comment.