Micron has just released specifications for new AI-focused high-bandwidth memory, and it appears to outperform the fastest HBM currently available.
Top memory manufacturer Micron Technology (MU -0.88%) and its competitors are suffering through one of the worst memory downturns in history, but the hope is that memory-hungry artificial intelligence (AI) applications will help the sector recover.
As the first company to produce 1-beta DRAM and 232-layer NAND last year, Micron remarkably caught up to and surpassed Samsung and SK Hynix in the most widely used types of DRAM and NAND flash. Yet in the important high-bandwidth memory (HBM) market for AI, Micron has lagged behind its rivals.
HBM, a form of high-capacity stacked DRAM, is essential for training and running inference on AI models quickly, and memory bandwidth is seen as one of the major bottlenecks to building more powerful AI. Thanks to demand from AI processing, HBM, which currently accounts for just 1% of the DRAM industry, is projected to grow at an average rate of 45% or more per year for several years.
Since this is one of the only growth drivers in the DRAM market at the moment, investors have not failed to notice that Micron has lagged behind. SK Hynix is considered the industry pioneer in HBM, having started its research in 2013.
However, on July 26, Micron unveiled its newest HBM product, which appears to outperform the competition.
A new HBM3 Gen2 from Micron
In the July 26 release, Micron said customers are now sampling its second-generation HBM3, and it disclosed some outstanding specifications.
Micron appears to have discovered the secret to fitting more memory capacity into a condensed "cube" of stacked memory modules. The new HBM3 device packs 24GB of memory into eight layers of DRAM, with a pin speed of over 9.2 Gb/s and a bandwidth of over 1.2 TB/s.
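As a quick sanity check, the quoted bandwidth follows directly from the pin speed, assuming the standard 1,024-bit HBM interface (1,024 data pins per stack, per the JEDEC HBM family of standards) and treating the per-pin figure as 9.2 gigabits per second:

```python
# Rough check of the HBM3 Gen2 bandwidth claim.
# Assumes the standard JEDEC 1,024-bit HBM interface (1,024 data pins per stack).
pins = 1024
pin_speed_gbps = 9.2  # gigabits per second, per pin

total_gbits_per_s = pins * pin_speed_gbps          # 9,420.8 Gb/s across the stack
bandwidth_tb_per_s = total_gbits_per_s / 8 / 1000  # bits -> bytes, then GB -> TB
print(f"{bandwidth_tb_per_s:.2f} TB/s")            # prints "1.18 TB/s"
```

At exactly 9.2 Gb/s this works out to about 1.18 TB/s; since Micron quotes the pin speed as "over" 9.2 Gb/s, a rate a little higher (roughly 9.4 Gb/s) is what pushes the total past the 1.2 TB/s mark.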
Those figures outperform the 12-layer, 16GB HBM cubes currently available, which top out at 6.4 Gb/s per pin. Possibly even more significant, Micron's new technology surpasses even the latest HBM3 from SK Hynix.
That model, introduced only in April, promised to cram 24GB into a 12-layer cube. Since Micron's new memory cube claims the same capacity with only eight layers, that represents a 50% improvement in capacity per layer, which reduces power consumption and speeds up both inference and training. Micron also announced that it would sample a 12-layer, 36GB HBM cube in early 2024.
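The per-layer arithmetic behind that 50% figure is simple:

```python
# Capacity per DRAM layer: Micron's 8-high, 24GB cube vs. a 12-high, 24GB cube.
micron_gb_per_layer = 24 / 8   # 3.0 GB per layer
rival_gb_per_layer = 24 / 12   # 2.0 GB per layer

gain = micron_gb_per_layer / rival_gb_per_layer - 1
print(f"{gain:.0%} more capacity per layer")  # prints "50% more capacity per layer"
```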
Micron's recent outperformance is all the more impressive because SK Hynix is believed to have held a significant lead and market share in HBM for AI for a number of years.
As for what this means for AI applications, Micron claims the new memory will enable 30% faster model training. On the inference side, faster processing will allow more queries per day per model, letting AI users get more out of their pricey models. Micron also estimates that the new memory's better energy efficiency will save data center operators $550 million over five years for every 10 million GPUs deployed.
Does Micron have a technological advantage?
Producing smaller transistors in more intricate patterns is exceedingly challenging these days. Yet Micron seems to have an advantage over its rivals that is letting it advance to the next technology node more quickly.
Micron said this HBM product's technological edge stems from its 1-beta DRAM process, which it was the first to commercialize last year. Micron also pointed to several other contributing technologies.
One of these is an increase in the number of through-silicon vias (TSVs), the vertical channels an interconnect passes through to reach each layer of the HBM cube. By building more efficient data paths and increasing metal density within the chip by a factor of five, Micron also reduced the new HBM's thermal impedance, allowing the stack to shed heat more easily.
All of this is significant, and it suggests some kind of engineering breakthrough under CEO Sanjay Mehrotra, who took the helm in 2017. Despite some bumps along the way through several difficult memory cycles, it is noteworthy that Micron went from technical laggard to having the most cutting-edge products on the market during Mehrotra's tenure.
Micron's portfolio also appears able to advance through successive nodes: the company reached 232-layer NAND ahead of competitors in July of last year, 1-beta DRAM in November, and now, it would appear, this best-in-class HBM3 product.
But is it sufficient?
Again, HBM makes up only 1% of a memory market that is in dire straits, so this breakthrough alone is unlikely to reverse Micron's current losses.
However, there are only three major players in the DRAM market, and all of them have drastically cut output and capital spending in response to the downturn. With all three cutting production and demand appearing to have bottomed, the foundation for the next upturn has likely been laid.
Shareholders should be increasingly enthusiastic about Micron’s future given that it appears to have a technological process edge in this consolidating industry.
Should you invest $1,000 in Micron Technology right now?
Consider this before you buy Micron Technology stock.
The Motley Fool Stock Advisor analyst team recently announced its list of the 10 best stocks to buy now, and Micron Technology wasn't one of them.
Since 2002, Stock Advisor has beaten the stock market's return by a factor of three, and the team believes the ten stocks on its list are better buys right now.