It’s that time of year again: Hot Chips will soon be upon us. Taking place as a virtual event on August 21–23, the conference will once again present the very latest in microprocessor architectures and system innovations.
As EE Times’ AI reporter, I will of course be on the lookout for new and interesting AI chips. As in recent years, this year’s program has a clear focus on AI and accelerated computing, but there are also sessions on networking chips, integration technologies, and more. Chips presented will run the gamut from wafer-scale devices to multi-die high-performance computing GPUs to mobile phone processors.
The first session on day 1 will host the biggest chip companies in the world as they present the biggest GPU chips in the world. Nvidia is up first to present its flagship Hopper GPU, AMD will present the MI200, and Intel will present Ponte Vecchio. Presenting these one after another contrasts their form factors: Hopper is a monolithic die (plus HBM), the MI200 has two large compute chiplets, and Ponte Vecchio has dozens.
Alongside the big three is a surprise entry in the at-scale GPU class: Biren. The Chinese general-purpose graphics processing unit (GPGPU) maker, founded in 2019, recently brought up its first-gen 7-nm GPGPU, the BR100. All we know so far is that the company uses chiplets to build the GPGPU with “the largest computing power in China,” according to its website. Biren’s chip has been hailed as a breakthrough for the domestic IC industry, as it “directly benchmarks against the latest flagships recently launched by international manufacturers.” Hopefully, the company’s Hot Chips presentation will reveal whether this really is the case.
The main machine learning processor session is on day 2. We will hear from Groq’s chief architect on the startup’s inference accelerator for the cloud. Cerebras will also present a deep dive on the hardware-software co-design for its second-gen wafer-scale engine.
There will also be two presentations from Tesla in this category, both on its forthcoming AI supercomputer, Dojo. Dojo has been presented as “the first exascale AI supercomputer” (1.1 EFLOPS at BF16/CFP8) and uses the company’s specially designed Tesla D1 ASIC in modules the company calls Training Tiles.
Data center AI chip company Untether will present its brand-new second-gen inference architecture, called Boqueria. We don’t know the details yet, but we do know the chip has at least 1,000 RISC-V cores (will it take Esperanto’s crown as the largest commercial RISC-V design?) and that it relies on an at-memory compute architecture similar to that of the first generation.
AI folks may also want to look out for the tutorial session on Aug. 21 on the topic of compiling for heterogeneous systems with MLIR.
The other tutorial session is on the CPU/accelerator/memory interconnect standard Compute Express Link (CXL). The CXL consortium just announced the third version of its technology, which looks set to become the industry standard now that previously competing standards have thrown their weight behind CXL.
Elsewhere in the program, we’ll hear from Lightmatter about its Passage device, a wafer-scale programmable photonic communication substrate. Ranovus will present its monolithic integration technology for photonic and electronic dies.
I’ll also be looking out for Nvidia’s presentation on its Grace CPU, a presentation on a processing fabric for brain-computer interfaces from Yale University, and keynotes from Intel’s Pat Gelsinger and Tesla’s Ganesh Venkataramanan.
The advance program for Hot Chips 34 can be found here.