SSD tests



Revision History
Published: 27 Dec 2013 - first published
Updated  : 28 Dec 2013 - add TODO list of Samsung 840 and Crucial M500
Updated  : 29 Dec 2013 - add Editor's note after slashdot article

Editor's note, 29 Dec 2013
Thank you for everyone's input from the Slashdot story.
The additional drives suggested for consideration are extremely useful, but
they will have to go through the same process of cost-benefit - followed only
then by reliability - analysis that the other drives went through, with the
additional handicap that the Intel S3500 has already "won" and been selected
for live deployment.
Which brings me to a key point that is difficult to express when there are
275 slashdot comments to contend with. The belief that Intel paid for this
report comes through loud and clear. Those who believe that are severely
mistaken. Let's look at it again.
Statement of fact: the S3500 happens to be the sole drive which
a) is cost-effective
b) passed all the extreme tests
c) is within budget
d) was clearly marked in the online marketing as "having power loss protection"
e) is not end-of-life
So let us be absolutely clear:
Fact: the Intel S3500 was the only drive which matched the requirements.
That it did so comprehensively despite the extreme nature of the testing,
which lasted several days whilst all other drives failed within minutes, is
the real key point of this report.
However that point - that success - is itself also completely irrelevant
beside the fact that the testing itself provided the company that commissioned
the work with an extremely high level of confidence in "an SSD", despite the
paranoia which had driven them to commission the testing in the first place.
To make that clear:
The company doesn't care about Intel: they care about a reliable drive.
If there were other drives that had passed, or were known about, or could
have been found, they would have been added to the list already.
Analysis of SSD Reliability during power-outages

This report was originally commissioned after the remote deployment of over
200 32GB OCZ SSDs resulted in severe data corruption in over 50% of the
units. The recovery costs were far in excess of the costs saved by purchasing
the cheaper OCZ units. Over a period of years they were replaced by Intel
320 SSDs, where, despite remote deployment of over 500 units, there have only
ever been three unrecoverable failures.
However, the Intel 320 has reached end-of-life, so a replacement was sought.
Due to the paranoia over the OCZs, an in-depth analysis was requested.
Around the time that the paranoia was hitting, a report covering
power-related corruption had appeared on slashdot. It made sense therefore
to attempt to replicate that report, as it was believed that the data
corruption of the OCZs was related to power loss.
This report therefore covers the drives selected and the testing that was
carried out. We follow up with a conclusion (summary: if you care about
power loss, don't buy anything other than Intel SSDs - end of story) and
some interesting twists.
Picking drives for testing

The scenario for deployment is one where huge amounts of data simply are not
required. An 8GB drive would be able to store one month's worth of sensor
data, as well as have room for a 1.5GB OS deployment; a 16GB drive stores
over two months. Bizarrely, except in the industrial arena, the focus is on
constant increases in storage capacity rather than on data reliability. The
fact that shrinking geometries automatically result in higher susceptibility
to data corruption is left for another time, however.
Additionally, due to the aforementioned paranoia and the assumption that the
data loss was occurring due to loss of power, "Power Loss Protection" was
made a mandatory requirement. Power Loss Protection is usually found in
industrial and server-grade SSDs, which are typically more expensive.
So, finding a low-cost, low-capacity, reliable SSD reported to have
"Power Loss Protection" proved... challenging. After an exhaustive search,
the following candidates were found:
* Crucial M4 128GB
* The unpronounceable Toshiba THNSNH060GCS 60GB
* The new Intel S3500
* The Innodisk 3MP SATA Slim (8GB and 16GB)
The Innodisk units came in at around £30, whilst all the other drives came
in at between £60 and £90. Also added to the testing were the original 32GB
OCZ Vertex and the Intel 320.
Test procedure

The original report at the FAST conference was quite hard to replicate: it
is a summary rather than containing detailed procedures or source code. A
best effort was made and then extended. Three tests were devised (a sketch
of the direct-disk write/verify loop follows this list):
* OS-based test. The first test was to boot up a full OS and power-cycle it
  using a mains timer. This test turned out to be completely lame, except
  for its negative result proving that simply switching power on and off was
  not the root cause of the problems.
* OS-based huge parallel writes. The second test was to write huge numbers
  of files and subdirectories in parallel. Thousands of directories and
  millions of small files, as well as one large one, were copied, sync'd and
  then deleted using 64 parallel processes. Power was not pulled during this
  test.
* Direct disk writing. This test was closer to the original FAST report,
  except simplified in some ways and extended in others.
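To make the third test concrete, here is a minimal sketch of a
write-sync-read-back loop of the kind described. It is not the actual
torture program: the device path, block size, region size and pass count are
hypothetical, and an external mains timer is assumed to be pulling power at
random intervals while it runs.

    # Minimal sketch of a direct-disk write/sync/read-back loop (not the
    # actual torture program).  DEVICE, BLOCK, SPAN and the pass count are
    # hypothetical; the scratch device is overwritten.  An external mains
    # timer is assumed to pull power at random intervals while this runs;
    # verify failures or I/O errors after power returns are the signal.
    # A fuller test would use O_DIRECT or re-read after a reboot so that
    # the read-back cannot be satisfied from the page cache.
    import os, random

    DEVICE = "/dev/sdX"        # hypothetical scratch device
    BLOCK  = 64 * 1024         # 64 KiB per write
    SPAN   = 4 * 1024 ** 3     # exercise the first 4 GiB only

    def torture_pass(seed, writes=256):
        rng = random.Random(seed)
        fd = os.open(DEVICE, os.O_RDWR)
        try:
            for _ in range(writes):
                offset = rng.randrange(0, SPAN // BLOCK) * BLOCK
                data = bytes(rng.getrandbits(8) for _ in range(BLOCK))
                os.pwrite(fd, data, offset)
                os.fsync(fd)                 # drive must commit this block
                if os.pread(fd, BLOCK, offset) != data:
                    print("VERIFY FAILED at offset", offset)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        for seed in range(1000000):
            torture_pass(seed)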
Crucial M4

The Crucial M4 was tested with an early prototype version of the SSD torture
program. It was power-cycled approximately 1,900 times over a 48-hour period.
Data was randomly written, sync'd and then read back, whilst power was pulled
at a random point between 8 and 25 seconds into each write-sync-read cycle.
Every 30 seconds the geometry was checked and a smartctl report obtained.
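As an illustration of that kind of periodic SMART polling (again, not the
actual harness), the following sketch records the CRC error counter every 30
seconds; the device path is hypothetical.

    # Poll smartctl every 30 seconds and log the CRC error counter
    # (SMART attribute 199, UDMA_CRC_Error_Count).  Hypothetical device.
    import subprocess, time

    DEVICE = "/dev/sdX"

    def crc_error_count():
        out = subprocess.run(["smartctl", "-A", DEVICE],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == "199":   # attribute ID
                return int(fields[-1])          # raw value is the last column
        return None

    if __name__ == "__main__":
        while True:
            print(time.strftime("%H:%M:%S"), "CRC errors:", crc_error_count())
            time.sleep(30)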
After approximately 1,600 power-cycles, the Crucial M4's SMART report showed
over 20,000 CRC errors. Within 1,900 power-cycles, that number had jumped to
40,000 CRC errors and had been joined by serious LBA errors.
Conclusion: epic fail. Not fit for purpose: returned under warranty.
Toshiba THNSNH060GCS 60GB

This drive turned out to be a little more interesting. It passed the OS-based
parallel writes test with flying colours: running for over 20 minutes, several
million files and directories were created and deleted, and in between each
run no filesystem corruption was observed.
Then came the direct disk writing. It turns out that if the write speed is
kept below around 20 MB/s, the Toshiba THNSNH060GCS is perfectly capable of
retaining data integrity when power is pulled, even with 64 parallel threads
all writing at the same time. However, when the write speed exceeds that
threshold, all bets are off: at higher write speeds, data loss when power is
pulled is only a matter of time (minutes).
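Below is a minimal sketch of one way such a write-rate cap can be imposed
when probing that threshold: sleep whenever the running throughput gets
ahead of the target. It is illustrative only; the device path, block size
and duration are hypothetical, and the 20 MB/s target is simply the figure
from the observation above.

    # Throttled writer: hold sustained write throughput at about TARGET
    # bytes/sec by sleeping whenever we get ahead of schedule.
    # DEVICE, BLOCK and the duration are hypothetical.
    import os, time

    DEVICE = "/dev/sdX"
    BLOCK  = 1024 * 1024          # 1 MiB writes
    TARGET = 20 * 1024 * 1024     # ~20 MB/s cap
    REGION = 1024 * BLOCK         # cycle over the first 1 GiB

    def throttled_writer(seconds):
        fd = os.open(DEVICE, os.O_WRONLY)
        written, start = 0, time.time()
        try:
            while time.time() - start < seconds:
                os.pwrite(fd, os.urandom(BLOCK), written % REGION)
                os.fsync(fd)
                written += BLOCK
                # sleep if we are ahead of the target rate
                ahead = written / TARGET - (time.time() - start)
                if ahead > 0:
                    time.sleep(ahead)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        throttled_writer(60)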
We conclude from this that the Toshiba THNSNH060GCS does have power-loss
protection circuitry and firmware, but that the internal power reservoir
(presumably supercapacitors) simply isn't large enough to cover saving the
entire outstanding cache of writes.
Conclusion: close, but no banana.
Innodisk 3MP SATA Slim

There were high hopes for these drives, based on the form factor and low
cost. Unfortunately, however, they turned out to have rather interesting
firmware issues.
The observed write-then-read speeds (a write followed by a verify step) were
adversely affected by the number of parallel writes. With no parallel writes
(only one thread) it was possible to write and then read back at a sustained
18 MB/s. (The data was probably being written at around 30 MB/s and read at
around 45 MB/s, but the timer was started at the beginning of the write and
stopped at the end of the read, so the combined figure works out lower:
1 / (1/30 + 1/45) = 18 MB/s.)
However, with even just two parallel write-read threads, that speed was
sustained for approximately 15 seconds and then dropped to 1 (one!) MB/s.
The more threads were introduced, the less time it took for the
write-then-read speed to drop to a crawl.
Paradoxically, if the torture program was suspended, even for a few seconds,
then on resuming, the speed would shoot back up to 18 MB/s and then once
again plummet.
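A rough sketch of how such a per-thread write-then-read throughput
measurement can be taken (with the timer spanning the whole
write+fsync+read, as above) is given below. It is not the actual torture
program; the file names, block size, thread count and duration are all
hypothetical.

    # Measure combined write-then-read throughput per thread.  Raising
    # THREADS is what exposed the slowdown described above.  File names,
    # sizes and duration are hypothetical.
    import os, threading, time

    BLOCK    = 4 * 1024 * 1024    # 4 MiB per write-then-read cycle
    THREADS  = 2
    DURATION = 60                 # seconds

    def worker(path, results, idx):
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        done, start = 0, time.time()
        while time.time() - start < DURATION:
            data = os.urandom(BLOCK)
            os.pwrite(fd, data, 0)
            os.fsync(fd)                          # timer covers write + read
            assert os.pread(fd, BLOCK, 0) == data
            done += BLOCK
        results[idx] = done / (time.time() - start)
        os.close(fd)

    if __name__ == "__main__":
        results = [0.0] * THREADS
        threads = [threading.Thread(target=worker,
                                    args=("testfile%d" % i, results, i))
                   for i in range(THREADS)]
        for t in threads: t.start()
        for t in threads: t.join()
        for i, rate in enumerate(results):
            print("thread %d: %.1f MB/s write-then-read" % (i, rate / 1e6))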
We conclude from this that either the CPU on the Innodisk SATA Slim or the
algorithms being used are just too slow to deal with parallel writes. There
is clearly a RAM cache which is being filled up: the speed of writing to the
NAND itself is not the issue (if it were, single-threaded writes would be
slow as well). So it really is a firmware / CPU issue: when the cache is
full of random parallel data, the firmware / CPU goes into meltdown, cannot
cope, and the write speed suffers as a result.
To Innodisk's credit, they actually responded and were given a copy of the
SSD torture program along with instructions on how to replicate the issue.
It will be interesting to see how they solve this one: updates will be
provided.
Conclusion: wait and see.
OCZ Vertex 32GB

This was also interesting. The OS-based test (which was ordered to be run,
despite reservations that it would be ineffective) showed absolutely ZERO
data corruption. Let's repeat that: taking one of the worst drives with the
worst smartctl report ever seen that was still functional, from a batch with
over 50% failure rates, installing an OS on it and then leaving it to
power-cycle over 100 times, produced ZERO data corruption.
What we can conclude from this is that power loss had absolutely nothing to
do with the data loss. It was then necessary to devise a test which would
show where the problem actually was. That test was the "OS-based huge
parallel writes" test. Running it for a mere 5 minutes (bear in mind that
there was no power-cycling) resulted in immediate data corruption.
Further investigation was therefore warranted. OCZ (before they went into
liquidation) had been advising - without explanation - to upgrade the
firmware. After working out how this can be done on GNU/Linux systems, and
after observing in passing that the firmware upgrade system was using
syslinux and FreeDOS, the firmware was instead "downgraded" to Revision 1.6.
The exact same OCZ drive - with an incredible array of failures, CRC errors
and lost sectors as reported by smartctl - when downgraded to firmware
Revision 1.6, then showed ZERO data corruption when the exact same OS-based
parallel write testing was carried out. Which is fascinating in itself.
Further investigation then dug up an interesting nugget: it turns out that
OCZ had apparently been warned by Sandforce not to enable a switch in the
firmware which would result in "increased speed", because doing so would
result in data corruption. OCZ, in their desperate attempt to remain "king
of the speed wars", ignored that advice. The results correlate with it: at
higher speeds, data corruption is guaranteed to occur.
The hypothesis here is that at higher speeds there is a bug in the firmware
which results in the data being written incorrectly. What was not determined
was whether that data was simply... not written at all, or whether it was
written in the wrong place. Given that a number of the 50% of failed drives
could not even be seen on the SATA bus at all, it seems likely that at high
speeds, OCZs with the faulty firmware are actually capable of overwriting
their own firmware! However, actually demonstrating this is beyond the scope
of the tests carried out, not least because it would require wiping an
entire drive, carrying out some parallel writes, then checking the entire
drive to see where the writes actually ended up. This test may be added to
the suite at a later date.
Once the firmware was downgraded to Revision 1.6, the drive-level testing
was carried out (there had been no point doing so while the drive's firmware
could not maintain data integrity even with power applied). Surprisingly,
the drive fared pretty well. Sustained random write speeds were good, but
data was lost intermittently when power was pulled, especially (like the
Toshiba) at higher speeds.
Conclusion: buy cheap, and flash the firmware to 1.6 if power loss is not
important.
Intel 320 and S3500

As already hinted at, these drives simply could not be made to fail, no
matter what was thrown at them. The S3500 was power-cycled some 6,500 times
over several days: several terabytes of random data were written to and read
from that drive, and not a single byte of data was lost. Despite even the
reads being interrupted, there was not a single time - not even once - when
the S3500 failed to verify the data that had been written.
The only strange behaviour observed was that the write-then-read cycle
speeds tended to fluctuate: around 25 to 30 MB/s of write-then-read speed
would be sustained continuously for several minutes, then after 10 or so
minutes it would drop to 20 or even 12 MB/s for one (and only one)
write-read cycle. The most plausible explanation is some housekeeping going
on in the firmware, taking up CPU cycles for short durations.
Conclusion: don't buy anything other than Intel SSDs.
Conclusion

Right now, there is only one reliable SSD manufacturer: Intel.
That really is the end of the discussion. It would appear that Intel is the
only manufacturer of SSDs that provides sufficiently large on-board temporary
power (probably in the form of supercapacitors) to cover writing back the
entire cache when power is pulled, even when the on-board cache is completely
full.
The Toshiba drives have some power-loss protection, but it is not enough to
cover an entire cache. The Innodisk team have tried hard: their datasheet
shows that they also provide power-loss protection, as well as detection of
when power and current drop to unsustainable levels. Given how difficult it
is to even find out whether manufacturers provide this kind of capability at
all, Innodisk deserve credit for at least making that information publicly
accessible.
The OCZ management deserve everything that has happened to OCZ. Had they
listened to Sandforce, the history of SSDs would have been a radically
different story. The sad thing is that when the firmware is downgraded, the
drives are no worse than any other consumer-grade SSD.
The Crucial M4 is probably okay for general use, as are all the other drives
(except the Innodisk, until they fix the firmware issues and get the
sustained write speeds back). And so, if it is possible to buy them cheap,
and power loss is not an issue, getting hold of second-hand OCZ Vertex
drives and downgrading the firmware would not be that bad an option.
However, if data integrity is really important, even when power could be
pulled at any time, then there really is absolutely no question: get an
Intel SSD. It's as simple as that.
Future

On the TODO list is to write the test which wipes the drive, carries out
random writes, then checks the entire drive to see whether the writes went
to the correct places. On the face of it this seems such an obvious thing
for drives to get right, but the OCZ Vertex drives show that it is an
assumption that cannot be made. A sketch of what that test might look like
follows.
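Here is a minimal sketch of such a write-placement test, under the assumption
that a whole scratch device (a hypothetical /dev/sdX) can be destroyed: lay
down a known background pattern, perform random writes at recorded offsets,
then scan the entire region and confirm that data changed only where a write
was issued.

    # Write-placement test sketch: background pattern, recorded random
    # writes, then a full scan.  Anything that is neither the background
    # pattern nor the recorded write for that block suggests the drive put
    # data in the wrong place.  Device and sizes are hypothetical; a real
    # run would drop caches or reboot before the scan.
    import os, random

    DEVICE = "/dev/sdX"                  # hypothetical scratch device
    BLOCK  = 64 * 1024
    BLOCKS = 16 * 1024                   # 1 GiB region for illustration
    FILL   = b"\xA5" * BLOCK             # recognisable background pattern

    def placement_test(seed=42, writes=200):
        rng = random.Random(seed)
        fd = os.open(DEVICE, os.O_RDWR)
        # 1. lay down the background pattern
        for i in range(BLOCKS):
            os.pwrite(fd, FILL, i * BLOCK)
        os.fsync(fd)
        # 2. random writes at recorded block numbers
        written = {}
        for _ in range(writes):
            blk = rng.randrange(BLOCKS)
            data = bytes([rng.getrandbits(8)]) * BLOCK
            os.pwrite(fd, data, blk * BLOCK)
            written[blk] = data
        os.fsync(fd)
        # 3. full scan: every block must be either untouched or exactly
        #    what was recorded for it
        for i in range(BLOCKS):
            found = os.pread(fd, BLOCK, i * BLOCK)
            if found != written.get(i, FILL):
                print("block %d: unexpected contents" % i)
        os.close(fd)

    if __name__ == "__main__":
        placement_test()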
The Innodisk drives are ones to watch: the price and tiny size make it well
worth continuing to work with Innodisk to see if they can solve the problem
of parallel-write cache overload.
Other drives may prove to be as good as the Intel S3500; however, they were
not tested during this research because they were either way outside the
budget, or it was impossible to find out - from even exhaustive Internet
searches as well as speaking to suppliers - whether the other potential
candidates had any form of power-loss protection.
If anyone would like to find out if a particular make or model of drive is
reliable under extreme torturing and power-interruption, contact
lkcl@lkcl.net: a contract can be arranged and this report updated.
Lastly, it is worth noting that this testing only covered a maximum of a few
days of sustained writing: long-term viability has obviously not been tested.
However, given that over 500 Intel 320 SSDs have been deployed with only 3
failures observed over several years, it would be reasonable to conclude
that the Intel S3500 can be trusted long-term as well - bearing in mind, as
a cautionary note, that smaller geometries mean more unreliability for the
firmware to contend with.
TODO Updated: 28 Dec 2013

Thank you to everyone who has recommended drives since this report was
published. The initial investigation is basically over: the Intel S3500 came
top of the list as it was the only drive that passed. However, based on unit
cost it could well be the case that the investigation is reopened.
Recommended drives for consideration at a later date:
* Samsung 840
* Crucial M500 (first Crucial drive with power-loss capacitors)
* Intel 540 series (which are apparently made differently from the S3500 and 320)
Recommended tests:
* Use new linux kernel 3.8 "cmd flush disable" option to check data integrity
* "Power brown-outs" (reducing current intermittently) as an advanced test
