Every year there are two trade shows that I love. Computex in June is great because of the scale of the industry it covers, and Taipei is a wonderful location. Hot Chips in August is the other show, amazing for the depth it offers on the latest technology as well as upcoming releases. This year, the list of Hot Chips presentations is almost overwhelming, and we look forward to seeing it all.
Hot Chips is now in its 31st year, and provides a wide range of technical detail on the latest chip logic innovations. Almost all of the major chip providers and IP licensees involved in semiconductor logic design are participating: Intel, AMD, NVIDIA, Arm, Xilinx, and IBM are on this year's list, and developers of major accelerators such as Fujitsu and NEC presented last year. The foundries and some of the big cloud providers are also taking part: Google showed its security chips last year, which nobody outside Google will ever see. One notable absence is Apple, which, although represented on the committee, last presented many years ago.
We look to Hot Chips as the venue for microarchitecture disclosures, as well as broader ecosystem explorations of the intricacies of hardware that, for most of us, happen behind the scenes. There are also parts of the conference dedicated to new designs, open source hardware, and security vulnerabilities. This year, a large and wide range of impressive headlining talks is planned.
Hot Chips 31 will be held at Stanford University from August 18th to 20th, 2019. The first day, Sunday the 18th, is usually an introductory/tutorial day. The detailed talks begin on the 19th.
**Hot Chips 31 (2019) Schedule: Day 1**

|Time|Session|Company|Talk|Product|
|---|---|---|---|---|
|09:00|General Compute|AMD|Zen 2|Matisse|
|||Arm|A Next-Generation Cloud-to-Edge Infrastructure SoC with the Arm Neoverse N1 CPU and System Products|Neoverse N1|
|||IBM|IBM's Next Generation POWER Processor|POWER9 with I/O|
|11:00|Memory|Upmem|True Processing In Memory with DRAM Accelerator||
|||Princeton|A Programmable Embedded Microprocessor for Bit-Scalable In-Memory Computing||
|||Intel|Intel Optane|Optane DCPMM|
|13:45|Keynote|AMD, Dr. Lisa Su|Delivering the Future of High-Performance Computing with System, Software, and Silicon Co-Optimization||
|14:45|Methodology and ML Systems|Stanford|Creating an Agile Hardware Flow||
|||MLPerf|MLPerf: A Benchmark Suite for Machine Learning from an Academic-Industry Cooperative||
|||Facebook|Zion: Facebook's Next-Generation Unified Training Platform with Large Memory||
|16:45|ML Training|Huawei|A Scalable Unified Architecture for Neural Network Computing from Nano-Level to High-Performance Computing|Da Vinci|
|||Intel|Deep Learning Training at Scale – Spring Crest Deep Learning Accelerator|Spring Crest|
|||Cerebras|Wafer Scale Deep Learning||
|||Habana|Habana Labs Approach to Scaling AI Training||
The first day begins with a general compute session featuring AMD, Arm, and IBM. AMD's talk will focus on its latest Zen 2 microarchitecture, which underpins both the Matisse-based Ryzen 3000 series desktop processors and the Rome series server processors. We do not expect much news from this presentation, as AMD is expected to launch these products before mid-August. After AMD comes Arm with its Neoverse N1 platform, which we reported on when it was announced a few weeks ago. IBM's talk will be interesting as it covers the latest POWER processor, which is probably the optimized version of POWER9 that focuses on I/O support.

The memory session centers on in-memory computing, which targets a fundamental impediment to performance: why move data out of memory when simple ALU operations can be performed locally, where the data resides? The goals are to save energy, potentially reduce computation time, and remove memory and DRAM accesses as bottlenecks. The third talk in this session comes from Intel, on its new Optane DC Persistent Memory products, which we are currently testing in-house.
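As a rough illustration of the in-memory computing argument, consider a back-of-envelope energy model. The figures below are purely illustrative assumptions (in the spirit of commonly cited order-of-magnitude numbers, where an off-chip DRAM access costs hundreds of times more energy than a simple ALU operation), not data from any vendor's talk:

```python
# Back-of-envelope sketch: why processing-in-memory (PIM) can save energy.
# The energy figures below are hypothetical, illustrative assumptions.
DRAM_ACCESS_PJ = 640.0   # assumed energy to fetch one word from off-chip DRAM
ALU_OP_PJ = 1.0          # assumed energy for one simple ALU operation

def energy_cpu(n_ops: int) -> float:
    """Conventional model: fetch each operand from DRAM, then compute."""
    return n_ops * (DRAM_ACCESS_PJ + ALU_OP_PJ)

def energy_pim(n_ops: int) -> float:
    """PIM model: compute next to the DRAM arrays, no off-chip fetch."""
    return n_ops * ALU_OP_PJ

ops = 1_000_000
print(f"Conventional: {energy_cpu(ops) / 1e6:.1f} uJ")
print(f"PIM:          {energy_pim(ops) / 1e6:.1f} uJ")
```

Under these assumed numbers, the data movement, not the arithmetic, dominates the energy budget, which is exactly the bottleneck the Upmem and Princeton talks aim to attack.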
After lunch follows the first keynote of the event, from AMD CEO Dr. Lisa Su. The talk will focus on how AMD achieves its next generation of performance improvements with Zen 2 and Navi, both built on 7 nm. This presentation is likely to be more of an overview than a disclosure, though we may see an indication of a roadmap or two. A number of key AMD partners are expected to attend; with a Hot Chips keynote on top of keynotes at CES and Computex this year, this is a headline year in AMD's history.
The Methodology and ML (Machine Learning) Systems session will feature a talk by MLPerf, the organization creating an industry-standard machine learning benchmark suite. By then, MLPerf will still be in its early revisions, and hopefully the additional discussions on-site will push the suite forward. Facebook's talk about its training platform also looks interesting, not least because whatever Facebook does, it does at scale.
The last session of the day covers machine learning training. Most of the time we write about inference, but training still accounts for half of the hardware industry's machine learning revenue (according to Intel at its recent Investor Day). Huawei is expected to release information about its Da Vinci platform, while Intel will announce details of its Spring Crest family. Some of the newer AI hardware companies, Cerebras and Habana, will also present here.
**Hot Chips 31 (2019) Schedule: Day 2**

|Time|Session|Company|Talk|Product|
|---|---|---|---|---|
|08:30|Embedded and Auto|Cypress|CYW89459: High-Performance and Low-Power WiFi and BT5.1 Combo Chip||
|||Alibaba|Ouroboros: A WaveNet Inference Engine for TTS Applications on Embedded Devices||
|||Tesla|Compute and Redundancy Solution for Tesla's Full Self-Driving Computer|Tesla FSD|
|10:30|ML Inference|MIPS/Wave|The MIPS Triton AI Processing Platform|Triton AI|
|||NVIDIA|A 0.11 pJ/Op, 0.32-128 TOPS Scalable MCM-based DNN Accelerator|NVIDIA NPU|
|||Xilinx|Xilinx Versal AI Engine|Versal|
|||Intel|Spring Hill – Intel's Data Center Inference Chip|Spring Hill|
|13:45|Keynote|TSMC, Dr. Philip Wong|What Will the Next Node Offer Us?||
|14:45|Interconnects|HPE|Gen-Z Chipset for Exascale Fabrics|Gen-Z|
|||Ayar Labs|TeraPHY: A Low-Power, High-Bandwidth Optical I/O Chiplet Technology|TeraPHY|
|16:15|Packaging and Security|Intel|Hybrid Cores in a Three-Dimensional Package|Lakefield|
|||Tsinghua|Jintide: A Hardware Security Enhanced Server CPU with Xeon Cores|Jintide|
|17:15|Break||||
|17:45|Graphics and AR|NVIDIA|RTX ON: The NVIDIA Turing GPU Architecture|Turing|
|||AMD|7 nm Navi GPU|Navi|
|||Microsoft|The Silicon in the Heart of HoloLens 2.0|HoloLens|
The second day gets the juices flowing with numerous hot topics.
The first session, titled Embedded and Auto, should be really interesting, as Tesla presents information about its Full Self-Driving (FSD) chip, developed under Jim Keller during his time there. Tesla described the chip in detail at a recent event; we hope Tesla will provide even more information about it at Hot Chips. Alibaba also has a talk in this session, focusing on its embedded inference engine.
The Machine Learning Inference session is another ML session at the event, further reinforcing the shift of compute toward ML over the next decade. In this session we should get acquainted with NVIDIA's dedicated multi-chip-module inference design, showing technology beyond its GPU heritage. Xilinx will talk more about its Versal platform, and Intel will talk about Spring Hill, its data center inference chip, which we already know contains Ice Lake cores and was built in partnership with Facebook.
After lunch, TSMC has the keynote for the second day. Dr. Philip Wong will talk about future technology nodes, fitting given that both TSMC and Samsung have recently held events discussing the next generations of their foundry processes.

Hot Chips already hosts a sister conference called Hot Interconnects, which takes place a few days before Hot Chips this year. As a result, the interconnect session at Hot Chips has only two talks. The more interesting of the two, at first glance, comes from HPE (Hewlett Packard Enterprise), which is introducing its new Gen-Z chipset and adapter for large-scale fabric implementations. Gen-Z is one of the future interconnects competing with CCIX, CXL, and others.
The Packaging and Security session also looks very interesting. Intel will discuss its Lakefield processor, which uses its new Foveros packaging technology. Intel has promised to offer Lakefield in products by the end of the year, but we hope the Hot Chips talk will provide a more comprehensive disclosure than we've heard before. The second talk in the session will be from Tsinghua University in China, on its new Jintide CPU. If you've never heard of Jintide, I can't blame you, as I hadn't either. However, the title of the talk indicates that it is a custom CPU design with Intel Xeon cores, suggesting a custom SoC platform developed in collaboration with Intel. Very interesting!
Hot Chips usually ends with a bang, so at the end of a long day we'll get up to date on the latest graphics and AR technologies. NVIDIA will talk about Turing, which is already well known, so we do not expect anything new there, but AMD is set to talk about Navi. We expect AMD to launch Navi between now and Hot Chips, so there is a chance that AMD brings nothing new either. The final talk, however, should provide lots of new information: Microsoft will talk about the silicon in its new HoloLens 2.0 design. I'm looking forward to it!
The Hot Chips conference takes place from August 18th to 20th. I will be there, hopefully live-blogging as many sessions as possible.