In the early 2000s, NAND flash manufacturers decided they were approaching a flash scaling brick wall. They assessed that 60nm was the smallest process they could ever achieve and began looking into other creative solutions to increase capacity without shrinking NAND cells. More than a decade later we still have not hit that wall, although it’s common knowledge that we are getting close to the scaling limits of flash. The manufacturers can hear the familiar drumbeat of progress getting ever fainter with each new generation of NAND flash.
The problem is a simple one of process shrink colliding with physics. NAND flash is a semiconductor, and for years the flash foundries have been making floating gate transistors smaller and smaller. Each shrink yields more NAND cells from the same amount of raw material, lowering cost, and because the cells are physically smaller, more can be crammed into the same space for increased capacity, a benefit that multiplies with multi-level (MLC) and triple-level (TLC) cells. Increased density and lower cost are a definite win, but they come with a steep penalty in reliability. It finally appears that the flash scaling wall may be right around the corner.
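To see how cell count and bits-per-cell multiply together, here is a minimal sketch. The die size and cell count are made-up round numbers for illustration only; the point is the arithmetic, not any real part:

```python
# Illustrative sketch (hypothetical numbers): how the number of NAND
# cells and the bits stored per cell combine to set raw die capacity.

def raw_capacity_gib(cells: int, bits_per_cell: int) -> float:
    """Raw capacity in GiB for a die holding `cells` NAND cells."""
    return cells * bits_per_cell / 8 / 2**30

CELLS = 64 * 2**30  # a hypothetical die with 64 Gi cells

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {raw_capacity_gib(CELLS, bits):.0f} GiB")
# SLC: 8 GiB, MLC: 16 GiB, TLC: 24 GiB
```

The same silicon triples in capacity going from SLC to TLC, which is exactly why TLC is so attractive despite its reliability penalty.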
A flash scaling wall is especially bad for the all-flash array, where physical space is critical. The enterprise storage landscape has been shifting more and more workloads onto flash, with many people suggesting the days of spinning media are numbered. That is only possible if the price of flash storage continues to drop, closing the cost gap between flash and traditional hard drives. For prices to drop, either the cost of raw materials must fall or more data must be crammed into the same amount of space. The demand for silicon is not going away any time soon, so no major price drops are expected. That leaves us with the need to break down the flash scaling wall.
Could manufacturers continue to push the limits and make the distance between the source and drain of the floating gate transistor even smaller? Sure. It just doesn’t make sense to do so. Every flash foundry would need to be retooled, and retooling is a very costly process that only pushes the problem out. To overcome this limitation we have to look to a brand new dimension. The concept is simple: take an existing NAND cell string and stand it vertically on its end, so you get the same number of cells as traditional 2D NAND while consuming far less wafer area. This 3D NAND approach effectively results in higher density.
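The density win from standing strings on end can be sketched with some back-of-the-envelope arithmetic. The node sizes, layer count, and the assumption that a cell occupies roughly pitch × pitch of wafer area are all simplifications for illustration, not real process geometry:

```python
# Hypothetical comparison: wafer-area density of planar NAND at an
# aggressive node vs. 3D NAND at a relaxed node. All numbers are
# made-up round figures for illustration.

def cells_per_mm2_planar(cell_pitch_nm: float) -> float:
    # Planar: each cell occupies roughly pitch x pitch of wafer area.
    area_nm2 = cell_pitch_nm ** 2
    return 1e12 / area_nm2  # 1 mm^2 = 1e12 nm^2

def cells_per_mm2_3d(cell_pitch_nm: float, layers: int) -> float:
    # 3D: a vertical string uses a single pitch x pitch footprint,
    # but stacks `layers` cells on top of each other within it.
    return cells_per_mm2_planar(cell_pitch_nm) * layers

planar = cells_per_mm2_planar(16)    # aggressive 16nm planar process
stacked = cells_per_mm2_3d(40, 32)   # relaxed 40nm process, 32 layers
print(f"3D density advantage: {stacked / planar:.1f}x")
# 3D density advantage: 5.1x
```

Even with a much larger (and therefore more reliable) cell, the stack comes out ahead, and adding layers scales density without shrinking the cell at all.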
Flash manufacturers have been working for decades to make NAND cells smaller and smaller. Unfortunately, each successive generation introduces new problems with reliability and write endurance. These problems have been overcome by making the controller better at error correction and endurance monitoring, but that comes at a cost, too. With 3D NAND, larger cells offset these problems and enhance write endurance, while the vertical stacking allows for greater capacity. 3D NAND technology will finally give triple-level cells enough reliability for the enterprise landscape. In fact, 3D NAND will make TLC a normal fixture in the enterprise flash world.
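To make the error-correction point concrete, here is a toy sketch of the idea. Real NAND controllers use far stronger codes (BCH or LDPC); this is a classic Hamming(7,4) code, shown only to illustrate how redundant parity bits let a controller repair a bit flipped by a worn cell:

```python
# Toy single-error-correcting code (Hamming(7,4)): 4 data bits are
# stored with 3 parity bits, and any single flipped bit can be
# located and repaired on read. Real controllers use stronger
# BCH/LDPC codes, but the principle is the same.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (parity at 1, 2, 4)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # repair the flipped bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                  # simulate a worn cell flipping a bit
assert decode(codeword) == data   # controller recovers the data
```

The trade-off the article describes is visible here: correction costs extra storage (3 parity bits per 4 data bits in this toy) and extra work on every read, which is exactly the overhead that larger 3D NAND cells help avoid.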
Flash storage will continue to get cheaper, more reliable, and better performing, which will lead to even greater enterprise flash adoption. The idea of an all-flash datacenter may seem far-fetched now, but in the years to come it may be a reality, with the capacity tier built from massive 3D NAND triple-level cells rather than nearline SAS drives. The question then becomes “what is next?” for high performance, enterprise-grade storage.