Test Principles and Key Points of SSD Power-Loss Protection


Introduction

A solid-state drive uses an FTL (Flash Translation Layer) to translate between logical and physical addresses. If power is cut improperly while the SSD is reading, writing, or erasing, the mapping table may be lost because there was no time to update it, and the system may then fail to detect the SSD at all.
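To make the role of the mapping table concrete, here is a minimal illustrative sketch (not a real FTL): reads must go through the logical-to-physical table, so losing the in-RAM table makes data unreachable even though it still physically sits in flash.

```python
# Minimal illustrative FTL sketch: a page-level table mapping logical
# block addresses (LBAs) to physical pages. All names are invented.
flash = {}            # physical page -> data actually stored in NAND
mapping = {}          # LBA -> physical page (kept in RAM for speed)
next_free_page = 0

def ftl_write(lba, data):
    """Write data out-of-place to a fresh page, then update the mapping."""
    global next_free_page
    flash[next_free_page] = data
    mapping[lba] = next_free_page   # if this update is never persisted,
    next_free_page += 1             # the new data becomes unreachable

def ftl_read(lba):
    """Translate the LBA through the mapping table, then read flash."""
    return flash[mapping[lba]]

ftl_write(0, b"hello")
ftl_write(0, b"world")              # update goes to a new physical page
assert ftl_read(0) == b"world"

# Simulate an abnormal power loss that destroys the in-RAM table:
mapping.clear()
# The bytes are still in `flash`, but the SSD no longer knows which
# physical page belongs to LBA 0 -- the drive appears "lost".
```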


In addition, to improve read and write performance, SSDs usually use SDRAM as a cache. If power is cut improperly during reads or writes, data held in SDRAM may be lost before it can be written to NAND flash, or the mapping table may be lost because it was not flushed to NAND flash in time.
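The write-back behavior described above can be sketched as follows (an illustrative model, not controller firmware): writes are acknowledged once they land in the SDRAM cache, and a later flush commits them to NAND, so a power cut between the two loses the dirty data.

```python
# Illustrative write-back cache sketch. `cache` stands in for SDRAM,
# `nand` for NAND flash; both names are invented for this example.
cache = {}   # LBA -> dirty data, acknowledged but not yet durable
nand = {}    # LBA -> data durably committed to NAND flash

def cached_write(lba, data):
    """Host sees the write as complete as soon as it reaches SDRAM."""
    cache[lba] = data

def flush():
    """Background flush: commit all dirty cache entries to NAND."""
    nand.update(cache)
    cache.clear()

cached_write(0, b"report.doc")
flush()                          # this write survives a power loss
cached_write(1, b"draft.doc")    # still only in SDRAM...

cache.clear()                    # ...abnormal power cut: dirty data vanishes
assert 0 in nand and 1 not in nand
```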


Failure phenomena caused by abnormal power loss


An abnormal SSD power loss usually produces three kinds of failure:


The system can no longer identify the SSD; the mapping table must be repaired or, more crudely, rebuilt from scratch.


After repeated power losses, the SSD shows a large number of "new bad blocks".


The mechanism behind new bad blocks is that when a read, write, or erase operation on a block fails, the SSD marks that block as bad. These blocks are not actually defective; the operation only failed because of the abnormal power loss, so the blocks are misjudged.


SDRAM data loss

Power failure protection mechanism


Vendors understand power-loss protection differently, and different users need different levels of protection, so the mechanisms vary widely. There are usually two approaches:


 Save all data in SDRAM

On an abnormal power loss, all data in SDRAM must be completely written to NAND flash. SDRAM capacity is generally set to about one-thousandth of the SSD's exposed capacity, so for a small-capacity SSD the amount of data SDRAM must write to NAND is relatively small, and a supercapacitor or tantalum capacitors can keep the SSD powered long enough to finish writing.

However, if the SSD capacity is large, for example 8 TB, the amount of data SDRAM must write to NAND flash becomes very large. If a supercapacitor or tantalum capacitors still supply the hold-up power, three thorny problems inevitably arise:

a. Many more tantalum capacitor packages are needed to provide the protection. In actual engineering practice this is a severe test: engineers run into thickness limits, standard form-factor limits, and PCB space that is simply not sufficient.


b. Even if there is enough capacitance to provide the protection, the SSD may fail to start on a quick "restart": after detecting a power loss the SSD must first fully discharge the tantalum capacitors while flushing its data, so power must remain off for some time before it is reapplied.


c. After a few years of use the tantalum capacitors or supercapacitor age, and can no longer deliver the energy assumed in the original design. The user then again faces the risk of data loss, or of the SSD not being recognized, after a power failure. If the initial design simply adds extra capacitors as margin, it falls back into problem a, a vicious cycle.


Fortunately, problems b and c have sound solutions; solving these thorny problems only requires engineers with sufficient experience.
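To get a feel for why large-capacity SSDs strain capacitor-based hold-up (problem a above), here is a back-of-the-envelope sizing sketch. Every number below is an assumption chosen for illustration, not vendor data; the physics is just the capacitor energy formula E = ½·C·(V₁² − V₂²).

```python
# Back-of-the-envelope hold-up sizing sketch. All figures are assumed
# for illustration; real designs depend on the specific controller,
# NAND, and power architecture.
cache_bytes = 8 * 1024**3    # assumed SDRAM cache of an 8 TB SSD (~1/1000 ratio)
flush_bw    = 2 * 1024**3    # assumed sustained NAND flush bandwidth, bytes/s
power_w     = 10.0           # assumed SSD power draw while flushing, watts
v_start     = 12.0           # supply voltage when power loss is detected
v_min       = 9.0            # minimum voltage the regulators can run from

flush_time_s = cache_bytes / flush_bw              # seconds to empty the cache
energy_j     = power_w * flush_time_s              # joules the capacitors must hold
# From E = 1/2 * C * (v_start^2 - v_min^2), solve for C:
cap_farads   = 2 * energy_j / (v_start**2 - v_min**2)

print(f"flush time: {flush_time_s:.1f} s")         # 4.0 s
print(f"energy needed: {energy_j:.1f} J")          # 40.0 J
print(f"required capacitance: {cap_farads:.2f} F") # ~1.27 F
```

Around a farad of hold-up is supercapacitor territory; individual tantalum capacitors offer only millifarads, which is why a large SSD would need so many packages that board thickness and PCB area become the limiting factors.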


Save only the user data in SDRAM, not the mapping table

This reduces both the amount of SDRAM and the number of tantalum capacitors needed. "Not saving the mapping table" does not mean the mapping table is lost; only the mapping updates for the most recently written data go unsaved. When the SSD restarts, it loads the last saved mapping table and scans the newly written data to rebuild the rest. The disadvantage is that if the rebuild mechanism is not well designed, rebuilding the mapping table takes longer, and the SSD needs some time before it can be accessed normally.
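The rebuild step can be sketched as follows. This assumes (as many designs do, though details vary by controller) that each NAND page records its LBA and a monotonically increasing sequence number in out-of-band metadata, so the newest write for each LBA wins; the function name and data layout are invented for this example.

```python
# Illustrative mapping-table rebuild: start from the last persisted
# table, then replay pages written since, newest sequence number wins.
pages = [
    # (physical_page, lba, seq, data) written after the last saved table
    (100, 0, 7, b"old"),
    (101, 1, 8, b"x"),
    (102, 0, 9, b"new"),   # later write of LBA 0 supersedes page 100
]

def rebuild_mapping(saved_mapping, pages):
    """Rebuild the LBA -> physical page table after an unclean shutdown."""
    mapping = dict(saved_mapping)
    best_seq = {}                       # highest sequence seen per LBA
    for phys, lba, seq, _data in pages:
        if seq > best_seq.get(lba, -1):
            best_seq[lba] = seq
            mapping[lba] = phys         # newer write replaces older one
    return mapping

mapping = rebuild_mapping({2: 50}, pages)   # LBA 2 came from the saved table
assert mapping == {2: 50, 0: 102, 1: 101}
```

The scan over recently written pages is exactly why this scheme trades capacitor cost for a longer start-up time after an unclean shutdown.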


For DRAM-less controllers, all data is written directly to NAND flash, and completion is returned to the host only after the data has actually reached NAND, so there is no cached data to lose. For applications with high reliability requirements, a DRAM-less design is king; its best-known representative is a German industrial brand. Its only drawback is that performance is not as good, but in fact many applications do not need the highest performance, only performance that is "sufficient".


Test methods and principles

In the actual test, the SSD must be tested both as the system (boot) disk and as a data disk. The only difference between the two methods is that when the SSD is the system disk, power must be cut to the whole computer, while as a data disk only the SSD itself needs to be powered off.


Test the SSD as a blank disk, and again with 25%, 50%, 85%, and 100% of its capacity filled. In each state, cut power abnormally while data is being written, for 3000 cycles, with a 3-second interval between each power-off and power-on.


The reason for writing different amounts of data to the disk is this: once the SSD has written a certain amount of data, background garbage collection starts. Garbage collection means moving data, and moving data means updating the mapping table; an abnormal power loss at exactly that point is what usually causes problems.
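The garbage-collection hazard can be shown in a small sketch (illustrative, with invented names): every valid page copied out of a victim block requires a mapping-table update, so a power cut between the copy and the update leaves the table pointing at a block that is about to be erased.

```python
# Illustrative garbage collection: copy valid pages out of a victim
# block, update the mapping per page, then erase the victim.
flash   = {("blk0", 0): b"A", ("blk0", 1): b"B"}   # (block, page) -> data
mapping = {10: ("blk0", 0), 11: ("blk0", 1)}       # LBA -> (block, page)

def collect(victim_block, free_block):
    """Move every valid page out of the victim, then erase it."""
    for lba, (blk, page) in list(mapping.items()):
        if blk == victim_block:
            flash[(free_block, page)] = flash[(blk, page)]  # copy the data
            mapping[lba] = (free_block, page)  # power loss between these two
                                               # steps strands stale mappings
    for key in [k for k in flash if k[0] == victim_block]:
        del flash[key]                         # erase the victim block

collect("blk0", "blk1")
assert mapping == {10: ("blk1", 0), 11: ("blk1", 1)}
```

Because GC runs in the background once enough data has been written, filling the drive to 25%, 50%, 85%, and 100% before cutting power deliberately provokes power loss during exactly these mapping updates.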


Cut the SSD's power abnormally while data is being written normally.


Cut power abnormally while data is being deleted.


In Windows, deleting data also involves write operations, just like creating a file, so the mapping table must be updated as well.


Cut power abnormally while the SSD is reading files; repeat 3000 times with a 3-second power-off interval.


Cut power abnormally during a normal operating-system shutdown; repeat 3000 times.


Cut power abnormally while the operating system is booting normally; repeat 3000 times.
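The power-cycling procedure described above can be sketched as a test harness loop. The stubs below are stand-ins: in a real rig, the power functions would drive a programmable relay or PDU, and the check would verify that the SSD enumerates, its data is intact, and no new bad blocks appeared; all function names here are invented.

```python
# Sketch of the abnormal-power-loss test loop. The hardware-facing
# functions are stubs that only log; a real rig replaces them.
import time

CYCLES   = 3000   # abnormal power losses per workload (from the article)
INTERVAL = 3.0    # seconds between power-off and power-on

log = []
def power_off():            log.append("off")     # stub: cut the supply rail
def power_on():             log.append("on")      # stub: restore power
def start_workload(name):   log.append(name)      # stub: write/delete/read/boot
def check_ssd():            return True           # stub: enumerate, verify data

def run_test(workload, cycles=CYCLES, interval=INTERVAL):
    """Cut power mid-workload, wait, power on, verify -- repeatedly."""
    for _ in range(cycles):
        start_workload(workload)    # kick off the operation to interrupt
        power_off()                 # abnormal power loss mid-operation
        time.sleep(interval)        # the article's 3-second off interval
        power_on()
        if not check_ssd():         # SSD must come back with data intact
            raise RuntimeError(f"SSD failed after power loss during {workload}")

run_test("write", cycles=3, interval=0.0)   # tiny demo run
```

Running the same loop once per workload (write, delete, read, shutdown, boot) reproduces the full test matrix described above.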

