IT Times reporter Lin Fei
In today's digital age, the performance of storage devices has a direct impact on productivity and entertainment. Solid-state drives (SSDs) are an indispensable component of today's computers, and their technological advancements continue to improve the user experience of data storage.
However, the market offers both cached and cache-free SSDs, a choice that often confuses buyers. The IT Times reporter breaks down the differences between them, from technical principles to practical use, so that consumers can choose according to their needs.
Explanation of terms——
What is a cached SSD
As the name suggests, a cache is storage that holds data temporarily. It sits between a fast and a slow storage medium and speeds up access by keeping recently or frequently used data close at hand. The idea first appeared in CPU design, in the form of L1, L2, and L3 caches.
In products such as SSDs, "cache" actually covers two different concepts. One is at the physical-component level: the DRAM cache chip on the SSD. The other is the SLC (Single-Level Cell) cache, implemented in the controller firmware. Both the DRAM cache chip and SLC caching can significantly improve an SSD's read and write performance, but only drives equipped with a DRAM cache chip are called cached SSDs.
Principle and technical analysis——
DRAM cache chips are "transit stations"
DRAM (Dynamic Random Access Memory) is volatile: the data it holds is lost when power is cut. The NAND flash chips in an SSD, by contrast, are non-volatile memory and retain their data after a power failure.
The NAND chips in SSDs are commonly TLC (Triple-Level Cell) or QLC (Quad-Level Cell). These do well on storage density, but their raw read and write speeds (typically a few hundred MB/s for reads and tens to hundreds of MB/s for writes at the flash level) cannot match DRAM's several GB/s.
One of the core jobs of the DRAM cache in a cached SSD is to host the FTL (Flash Translation Layer) mapping table. When the system boots, the table is loaded from the NAND chips into DRAM. It acts as a detailed map, recording the correspondence between logical and physical addresses, so the SSD can respond to data-access requests promptly and address data efficiently.
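The role of the mapping table can be illustrated with a toy sketch. This is a deliberately simplified, hypothetical model (a real FTL also handles wear leveling, garbage collection, and far larger tables held in DRAM):

```python
# Toy sketch of an FTL (Flash Translation Layer) mapping table.
# Illustrative only: real SSD firmware is far more complex.

class ToyFTL:
    def __init__(self):
        self.mapping = {}        # logical block address -> (physical page, data)
        self.next_free_page = 0  # naive allocator: always program a fresh page

    def write(self, lba, data):
        # NAND pages cannot be overwritten in place, so each write
        # goes to a fresh page and the map entry is updated.
        page = self.next_free_page
        self.next_free_page += 1
        self.mapping[lba] = (page, data)
        return page

    def read(self, lba):
        # One dictionary lookup stands in for the DRAM-resident map:
        # logical address -> physical location, answered without
        # scanning the NAND itself.
        return self.mapping[lba]

ftl = ToyFTL()
ftl.write(lba=42, data=b"hello")
ftl.write(lba=42, data=b"world")  # update: new page, same logical address
print(ftl.read(42))               # -> (1, b'world'), from the remapped page
```

The point of the sketch: the lookup cost is constant regardless of where the data physically landed, which is why keeping this table in fast DRAM matters.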
In addition, the DRAM cache acts as an efficient "broker". Unlike a traditional HDD, which can overwrite data in place, NAND flash has a more complex write mechanism: old data is read out first, the storage block is erased, and only then is the new data written. The DRAM cache optimizes this process by quickly staging the data to be written and the old data to be migrated, serving as a high-speed transit area that accelerates data handling. This mechanism brings a significant improvement to the SSD's overall read and write performance.
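A crude way to see why staging helps is to count erase cycles. The model below is a hypothetical simplification with made-up numbers, not a description of any real controller, but it captures the idea that coalescing small writes in a buffer saves whole-block erase work:

```python
# Toy model contrasting unbuffered NAND writes with cache-staged writes.
# All figures are illustrative assumptions.

BLOCK_SIZE = 4  # pages per erase block (tiny, for illustration)

def direct_writes(n_pages):
    # Worst case without staging: each small write that lands in an
    # already-programmed block forces read -> erase -> rewrite.
    return n_pages  # one erase cycle per page written

def buffered_writes(n_pages):
    # With a cache staging the data, writes are coalesced so a full
    # block is programmed at once: one erase per BLOCK_SIZE pages.
    return -(-n_pages // BLOCK_SIZE)  # ceiling division

print(direct_writes(16))    # 16 erase cycles
print(buffered_writes(16))  # 4 erase cycles
```

Fewer erase cycles means both faster writes and less wear, which is consistent with the longevity point made later in the article.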
SLC caching is the "fast lane"
The SLC cache is like a temporary "fast lane" on the data path: under firmware control, part of the NAND capacity is temporarily run in SLC mode. Whereas TLC and QLC store 3 bits and 4 bits per memory cell respectively, SLC stores just 1 bit per cell, which greatly simplifies reading and writing, reduces the access complexity of each cell, and delivers faster performance.
When the computer performs a large sequential write, the SLC cache area is the first to receive and process the data, like a vehicle speeding down an uncongested expressway. Once the SLC cache fills up, incoming data must be written to the TLC or QLC area at the native rate, and the SSD's write speed falls back to normal.
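This fill-then-slow behavior is easy to model. The figures below are assumptions chosen for illustration (cache size and speeds vary widely between drives), not measurements of any real product:

```python
# Toy simulation of SLC-cache write behavior: fast while the cache
# has room, dropping to native TLC speed once it fills.
# All figures are illustrative assumptions.

SLC_CACHE_GB  = 100   # assumed SLC cache size
SLC_SPEED_MBS = 5000  # assumed write speed inside the SLC cache
TLC_SPEED_MBS = 1500  # assumed native TLC write speed after the cache fills

def write_time_seconds(total_gb):
    """Time to sequentially write `total_gb` of data, in seconds."""
    in_cache = min(total_gb, SLC_CACHE_GB)
    overflow = max(total_gb - SLC_CACHE_GB, 0)
    return (in_cache * 1024) / SLC_SPEED_MBS + (overflow * 1024) / TLC_SPEED_MBS

for size in (50, 100, 300):
    t = write_time_seconds(size)
    print(f"{size:>3} GB -> {t:6.1f} s (avg {size * 1024 / t:.0f} MB/s)")
```

Under these assumptions, writes up to the cache size average full SLC speed, while a 300 GB transfer averages well under half of it, which is the "speed cliff" reviewers observe on sustained writes.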
By intelligently managing data migration according to usage frequency and access patterns, SLC caching not only guarantees instantaneous performance bursts under high load, but also reduces the number of erase/write cycles on the NAND chips, thereby extending the SSD's life.
Different SSD manufacturers reserve different fixed proportions of SLC cache in their products, usually in the range of 10%~50%; on a 1TB SSD, for example, roughly 100GB~500GB of space may be set aside as SLC cache.
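The arithmetic behind those figures is straightforward; a quick check of the quoted 10%~50% range on a nominal 1 TB drive:

```python
# SLC-cache sizing from the percentages quoted above.
# The range is the article's; actual allocation is vendor-specific.

DRIVE_GB = 1000  # a nominal 1 TB drive

for frac in (0.10, 0.50):
    print(f"{frac:.0%} of {DRIVE_GB} GB -> {DRIVE_GB * frac:.0f} GB SLC cache")
```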
Hands-on experience –
Cached SSDs are faster, cache-free SSDs run cooler
Compared with cache-free SSDs, cached SSDs hold clear advantages in sequential read, sequential write, and 4K random read/write speeds, especially in scenarios such as operating-system startup and application loading, making them well suited as system disks and game disks. The DRAM cache chip also reduces the number of direct writes to the NAND chips, prolonging the SSD's service life.
In sustained reads and writes of large files, however, there is little difference between a cache-free SSD and a cached one; what matters in this scenario is the SLC cache policy. A mature SLC cache strategy not only improves burst write performance, but also brings a leaner FTL mapping structure, lower latency, and better mixed read/write performance.
The difference in power consumption and heat between cached and cache-free SSDs is more pronounced. Lacking a DRAM cache chip and running at slightly lower speeds, cache-free SSDs do well on temperature control: a thermal pad on the surface is usually enough to dissipate the heat. Cached SSDs run hotter in daily use and need thicker thermal pads.
Buying advice –
Cached SSDs suit game consoles and desktops; cache-free SSDs suit drive enclosures and laptops
Cache-free SSDs perform stably in office work, home entertainment, and light gaming while staying cool, and they are relatively inexpensive, making them a good fit for external drive enclosures, laptops, and NUC-style mini PCs.
The faster read and write speeds of cached SSDs make them suitable for heavy gaming and professional applications. It is advisable to fit a cached SSD with a metal heatsink and to use the airflow inside the device to assist cooling wherever possible, avoiding the performance throttling or damage that an overheated SSD can suffer.
At present, for drives of the same capacity and interface from the same manufacturer, the price difference between cached and cache-free SSDs on the market is no more than about 20%.
Cached SSDs cost more not only because of the extra DRAM cache, but also because their controller chips differ and their NAND chips are of a higher grade. Manufacturers also tend to fit them with better thermal solutions.
For example, Samsung's PCIe 4.0 cached SSD, the 990 PRO 2TB, is currently priced at 1,450 yuan, while the cache-free 980 PRO 2TB with the same capacity and interface is priced at 1,250 yuan.
Seagate's PCIe 4.0 cached SSD, the BarraCuda 530 2TB, is priced at 1,100 yuan, while the cache-free BarraCuda 530R 2TB with the same capacity and interface is priced at 900 yuan.
The gap is smaller between domestic-brand cached and cache-free SSDs built on domestic NAND flash. Zhitai, the consumer brand of Yangtze Memory, prices its high-end PCIe 4.0 cached SSD, the TiPro7000 2TB, at 1,160 yuan, while the cache-free TiPlus7100 2TB is priced at 1,100 yuan.