
SSD products: SATA/SAS SSDs, PCI-E SSDs, Fusion-io, other SSDs.
Older SATA/SAS SSD material is archived in SATA SSD old (pre-2012?) and SATA SSD 2012 (2012-ish).

Update 2014-03-08 Much has happened and much has not happened since this was last updated. I will summarize when I get the chance.

SATA and SAS SSDs Today

Below is a quick survey of SSD either currently available or expected in the near-term.

SSD performance is determined by several factors. One is the NAND type: single-level or multi-level cell (SLC, MLC). The interface generation (ONFI 1, 2, etc.) determines the bandwidth a single channel can support. The other aspects are the number of channels and the number of chips per channel. One would think the larger-capacity server-grade SSDs could employ more channels and chips for better performance, if the controller were designed for this. It could also be that HP is thinking server-oriented storage systems will have a massive array of SSDs, and hence the bandwidth per channel is limited, so there is no pressing need to push the performance of an individual unit.

Note: the convention in memory products is to cite capacity in binary, so 1KB = 2^10 = 1,024, 1MB = 2^20 = 1,048,576 and 1GB = 2^30 = 1,073,741,824.
The convention in storage products is to cite capacity in decimal: 1MB = 10^6 and 1GB = 10^9. So a 256GB (256 x 10^9) SSD composed of 256GB (256 x 2^30) of NAND has about 7.4% of capacity for over-provisioning.
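As an illustrative check (not from the original article), the over-provisioning implied by the decimal/binary discrepancy can be computed directly:

```python
# Spare capacity implied by decimal (user) vs binary (raw NAND) capacity.
def over_provisioning_pct(user_gb_decimal, raw_gb_binary):
    """Percent of spare capacity relative to the user-visible capacity."""
    user_bytes = user_gb_decimal * 10**9   # storage convention: 1GB = 10^9
    raw_bytes = raw_gb_binary * 2**30      # memory convention: 1GB = 2^30
    return 100.0 * (raw_bytes - user_bytes) / user_bytes

# A 256GB (decimal) SSD built from 256GB (binary) of NAND:
print(round(over_provisioning_pct(256, 256), 1))  # -> 7.4
```

The same ratio holds at any capacity, since 2^30 / 10^9 is a constant 1.0737.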


I have not used SanDisk SSDs (to my knowledge), but since they focus on OEM sales, they could be sold under system/storage vendors' labels. Their new CloudSpeed models are below.

Product Name        Interface  Endurance      Capacities         Seq Read/Write  Random Read/Write
CloudSpeed Eco      SATA       1 DWPD x 3yr   240/480/960GB      450/400MB/s     80K/15K IOPS
CloudSpeed Ascend   SATA       1 DWPD x 5yr   240/480/960GB      450/400MB/s     80K/15K IOPS
CloudSpeed Ultra    SATA       3 DWPD x 5yr   200/400/800GB      450/400MB/s     80K/25K IOPS
CloudSpeed Extreme  SATA       10 DWPD x 5yr  100/200/400/800GB  450/400MB/s     75K/25K IOPS

The CloudSpeed datasheet mentions the use of "Advanced Signal Processing, which dynamically adjusts flash parameters throughout the life of the SSD, to reliably extract significantly more life from cost-effective MLC flash ..." Still, the Eco and Ascend at 1 full data write per day is a lot for ordinary MLC, assuming the 240GB (decimal) model has 256GB (binary) raw capacity. The Eco line is rated for 3 years, and the Ascend for 5 years. I wonder whether their products have a 9-channel controller, so the 240GB model is comprised of 9 x 32GB = 288GB of NAND instead of 8 x 32GB = 256GB?

The 3 DWPD for 5 years in the Ultra model might be due to the greater use of reserved capacity (less user capacity). The Extreme might be high-endurance MLC?

Note that SanDisk cites CloudSpeed endurance in full drive writes per day. So the Eco and Ascend 240GB models have an endurance of 240GB per day and the 480GB models support 480GB per day. Other current-generation SSDs typically cite 50-70GB per day for all models (128-512GB), which seems rather strange: one would expect the 512GB model to have similar relative endurance (% of capacity), meaning greater absolute endurance (in GB).
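To compare a DWPD rating against drives rated in GB/day or total bytes written, a sketch of the conversion (my own arithmetic, not from any datasheet):

```python
# Convert a DWPD (drive writes per day) rating into total TB written
# over the warranty period.
def dwpd_to_tb_written(capacity_gb, dwpd, years):
    return capacity_gb * dwpd * 365 * years / 1000.0

# CloudSpeed Eco 240GB at 1 DWPD for 3 years:
print(dwpd_to_tb_written(240, 1, 3))   # -> 262.8 TB
# A consumer drive rated 66GB/day for 3 years, for comparison:
print(66 * 365 * 3 / 1000.0)           # -> 72.27 TB
```

So the Eco's rating is several times the total-bytes endurance of a typical consumer drive of similar capacity.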



Crucial currently lists the M500 (mid-2013) and M550 (early 2014) consumer SSDs on their website. The P400e is no longer listed at Crucial. There are also enterprise class SSDs listed on the Micron website. See Anandtech Crucial M550 Review (18-Mar-2014) for the newer model and Crucial/Micron m500 Review (9-Apr-2013) for the older model.

Crucial m550 specs (Released 2014-03-xx?) SATA

Crucial m550        128GB     256GB     512GB     1TB
Raw Capacity        128?      256?      512?      1024?
Seq Read            550MB/s   550MB/s   550MB/s   550MB/s
Seq Write           350MB/s   500MB/s   500MB/s   500MB/s
Random Read (4K)    90K IOPS  95K IOPS  95K IOPS  95K IOPS
Random Write (4K)   75K IOPS  80K IOPS  85K IOPS  85K IOPS
Endurance           72TB (~66GB/day)?
Price 2014-03       $100      $169      $337      $531

The M550 uses the Marvell 88SS9189 controller while the M500 uses the 88SS9187.

Crucial m500 specs (Released 2013-04-10?) SATA, 20nm MLC, MTTF 1.2M-hours

Crucial m500         120GB     240GB     480GB     960GB
Raw Capacity         128?      256?      512?      1024?
Seq Read (128K)      500MB/s   500MB/s   500MB/s   500MB/s
Seq Write (128K)     130MB/s   250MB/s   400MB/s   400MB/s
Random Read (4K)     62K IOPS  72K IOPS  80K IOPS  80K IOPS
Random Write (4K)    35K IOPS  60K IOPS  80K IOPS  80K IOPS
Read latency (max)   5ms       5ms       5ms       5ms
Write latency (max)  25ms      25ms      25ms      25ms
Read latency (typ)   160µs     160µs     160µs     160µs
Write latency (typ)  40µs      40µs      40µs      40µs
Form Factors         All       All       All       2.5in
Price 2012-12        $130      $220      $400      $600

Below are specs for Crucial and Micron current and upcoming SATA and SAS SSDs. The P400e, using regular MLC but with more over-provisioning plus RAIN, is also sold on the Crucial website. The P400m and P410m with endurance MLC are listed on the Micron website as in production, but perhaps only for OEM customers?

The older m4 on 25nm MLC uses 64Gbit NAND with 8Kbyte pages. The new m500 on 20nm uses 128Gbit NAND with 16Kbyte pages. I have some concerns about the 16KB page being used for SQL Server, which writes in 8KB pages. Of course this is a consumer product. Perhaps the enterprise products will stay with 25nm or older NAND and 8KB pages? Or is this a non-issue?

Below is an image from the Anandtech M550 review that is helpful in understanding NAND organization.


Intel SSD DC S3500 Series Update 1H-2013?

Sometime after Intel launched the DC S3700 with 25nm HET MLC, the DC S3500 followed (Intel's datasheet says April 2013) with 20nm MLC.

Intel DC S3500           80/120/160/240/300/480/600/800GB
Sequential Read          500MB/s
Sequential Write         450MB/s
Random 4KB Read (QD32)   75K IOPS
Random 4KB Write (QD32)  11.5K IOPS
Random 8KB Read (QD32)   47.5K IOPS
Random 8KB Write (QD32)  5.5K IOPS
Write Endurance          45/70/100/140/170/225/275/330/450 TBW

Endurance works out to just over 500 write cycles?
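A back-of-envelope check of that cycle count (my own arithmetic, ignoring write amplification and over-provisioning):

```python
# Rough NAND write-cycle estimate: rated total TB written / user capacity.
def write_cycles(tb_written, capacity_gb):
    return tb_written * 1000.0 / capacity_gb

# Intel DC S3500 80GB model, rated 45 TBW:
print(write_cycles(45, 80))   # -> 562.5
```

The larger models work out to similar figures, consistent with ordinary 20nm MLC rated for a few hundred program/erase cycles plus margin.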

Intel SSD DC S3700 Series Update 2012-12

Intel is stressing consistency of performance. It is possible to achieve spectacular performance with an array of NAND flash, and more by deferring garbage collection. However, with continued write activity, eventually it will be necessary to perform garbage collection. For many consumer-oriented SSDs, performance falls off a cliff during GC, even if it is only brief in duration. By giving up the peak performance of deferred GC, it is possible to avoid a severe fall-off during GC.

In a proper database server, our expectation is to have a massive array of SSDs. We do not need anywhere near the peak performance capability of SATA SSDs today, so this would be a very good strategy. Given that this is an enterprise class drive, would not a SAS interface have been better?

See Intel website Intel Solid-State Drive DC S3700 Series, AnandTech The Intel SSD DC S3700 (200GB) Review and The Intel SSD DC S3700: Intel's 3rd Generation Controller Analyzed

Intel DC S3700            100GB       200GB       400GB       800GB
Raw Capacity              132?        264         528?        1056?
Seq Read (up to)          500MB/s     500MB/s     500MB/s     500MB/s
Seq Write (up to)         200MB/s     365MB/s     460MB/s     460MB/s
Random 4KB Read (up to)   75K IOPS    75K IOPS    75K IOPS    75K IOPS
Random 4KB Write (up to)  19K IOPS    32K IOPS    36K IOPS    36K IOPS
Random 8KB Read (up to)   47.5K IOPS  47.5K IOPS  47.5K IOPS  47.5K IOPS
Random 8KB Write (up to)  9.5K IOPS   16.5K IOPS  19.5K IOPS  20K IOPS
Write Endurance           1.825PB     3.65PB      7.3PB       14.6PB
Price 2012-12             $235        $470        $940        $1880

The Intel SSD DC S3700 specification document cites typical latency of 50µs read, 65µs write. Uncorrectable bit error rate is 1 sector per 10^17 bits read. MTBF is 2 million hours. The NAND flash is Intel 25nm high-endurance technology (HET) MLC. Write endurance is cited as 10 drive writes per day for 5 years. This would imply 18,250 write cycles at the user capacity, and somewhat fewer at raw capacity.
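The endurance figures in the table can be checked against the 10 DWPD rating (simple arithmetic using the values above):

```python
# Verify the S3700 endurance claim: 10 drive writes per day for 5 years,
# 100GB model.
capacity_gb, dwpd, years = 100, 10, 5
total_gb = capacity_gb * dwpd * 365 * years
print(total_gb / 1e6)        # PB written -> 1.825, matching the table
print(dwpd * 365 * years)    # write cycles at user capacity -> 18250
```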

Quality of Service at 99.9% is read/write 500µs at queue depth 1, and 1ms read / 10ms write at QD 32. QoS at 99.9999% is read/write 5ms at QD 1, and 5ms read / 20ms write at QD 32.

All capacities are available in 2.5in FF, with the 200 and 400GB models also available in 1.8in. See AnandTech Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs. The Intel SSD 710 (200GB) Review has a discussion of HET MLC NAND.


OCZ Intrepid 3600 and 3800 2014-03

I have not tested the new OCZ Enterprise products (last was the ZDrive2?). The Intrepid 3000 series has a SATA 6Gbps interface and 100/200/400/800GB capacities, presumably with 128/256/512/1024GB raw capacities. The 3600 is regular 19nm MLC and the 3800 is enterprise MLC (eMLC, not to be confused with embedded MLC). Sustained 4K random write ranges from 20-40K IOPS, better than the consumer-oriented Vector and Vertex. Endurance on the 3600 is 1 complete data write per day (DWPD) and on the 3800 is 4 DWPD, significantly higher than the consumer products.

OCZ Intrepid 3000                 100GB     200GB     400GB     800GB
Raw Capacity                      128?      256?      512?      1024?
Seq Read (up to)                  510MB/s   530MB/s   540MB/s   540MB/s
Seq Write (up to)                 250MB/s   420MB/s   480MB/s   480MB/s
Sustained Random Read (4K QD32)   91K IOPS  91K IOPS  89K IOPS  89K IOPS
Sustained Random Write (4K QD32)  20K IOPS  31K IOPS  38K IOPS  40K IOPS


OCZ Vector 150 and Vertex 460 Update 2014-03

In the last year, OCZ filed for bankruptcy, perhaps due in part to trying to do too many products. Toshiba acquired OCZ's assets and is retaining the brand. Product-wise, the Vector 150 supersedes the original Vector. The Vertex 4 was superseded first by the Vertex 450 and then the 460.

The Vector 150 specification lists 19nm MLC flash and the Vertex 460 lists 19nm Toshiba MLC. Both use the Barefoot 3 controller. The Vertex 450 was 20nm flash, also on the Barefoot 3 controller. The Vector 150 cites 50GB/day write endurance over 5 years, which is about 91TB (1,826 days), and 2.3M-hour MTTF. The Vertex 460 cites 20GB/day over 3 years, about 22TB, and 2M-hour MTBF.

OCZ Vector 150          120GB     240GB     480GB
Raw Capacity            128?      256?      512?
Seq Read (up to)        550MB/s   550MB/s   550MB/s
Seq Write (up to)       450MB/s   530MB/s   530MB/s
Random Read (4K QD32)   80K IOPS  90K IOPS  100K IOPS
Random Write (4K QD32)  95K IOPS  95K IOPS  95K IOPS
Steady State Write      12K IOPS  21K IOPS  26K IOPS


OCZ Vertex 460          120GB     240GB     480GB
Raw Capacity            128?      256?      512?
Seq Read (up to)        530MB/s   540MB/s   545MB/s
Seq Write (up to)       420MB/s   525MB/s   525MB/s
Random Read (4K QD32)   80K IOPS  85K IOPS  95K IOPS
Random Write (4K QD32)  90K IOPS  90K IOPS  90K IOPS
Steady State Write      12K IOPS  21K IOPS  23K IOPS

At first I thought the higher write endurance of the Vector 150 over the original Vector was due to signal processing in the controller: as NAND cell voltage levels change over time, the controller adjusts for this. But the Vector 150 and Vertex 460 use the same controller, so it must be cherry-picking of the NAND?


Samsung SM843T/SV843 Data Center SSDs

The SAMSUNG SSD website is Samsung SSD, then click Data Center SSD.

                              120/240/480GB    960GB
Form Factor                   2.5 & 1.8in      2.5in
MTBF                          2M hours         2M hours
Uncorrectable Bit Error Rate  1 in 10^17       1 in 10^17
Read Latency (99.9% QoS)      170µs            170µs
Write Latency (99.9% QoS)     <3ms (<500µs)    <3ms (<500µs)
Random Read                   89K IOPS         89K IOPS
Random Write                  14K IOPS (35K)   14K IOPS (35K)
Seq Read                      530MB/s          530MB/s
Seq Write                     360MB/s          430MB/s
4K Random WPD                 1.8 WPD (5.4)    3.6 WPD (10.5)
64K Sequential WPD            11 WPD           22 WPD

Latency measured with FIO, 4K random, QD=8. Values in parentheses apply when over-provisioning usable capacity down to 100/200/400/800GB. WPD = drive writes per day for 5 years.

The Samsung web site has more technical details on the 830 SSD.