SSD products: SATA/SAS SSDs, PCI-E SSDs, Fusion-io, other SSDs
Below is a quick survey of SSDs either currently available or expected in the near term.
Intel has previewed its PCIe 3.0-based RAID cards. This looks interesting; I can't wait to see it.
See the AnandTech Micron P320h PCIe SSD (700GB) review for details, and The SSD Review's Micron P320h HHHL 700GB PCIe enterprise SSD review. The Micron P320h is SLC. Pricing from CDW is about $15K per TB. This is definitely an interesting product.
| P320h | 350GB model | 700GB model |
|---|---|---|
| Raw Capacity (GB) | 512 | 1024 |
| Write Endurance (PB) | 25 | 50 |
| Random Read 4K | 785K IOPS | 785K IOPS |
| Random Write 4K | 205K IOPS | 205K IOPS |
It is too bad Micron does not have an MLC product. For a transaction processing database, we might isolate a hot table to fit within SLC pricing. However, with MLC prices approaching $5K per TB for enterprise grade (or just better-than-consumer grade) products, it is feasible to put the entire database (except for archival tables) on SSD, in which case SLC write endurance is not necessary.
Below is the Micron controller, interfacing PCI-E directly to the NAND. There are 32 channels on the NAND side. Of course, a SATA SSD can have 8 NAND channels, so 4 SATA SSDs would also have 32 NAND channels. The difference is that ONFI 2.0 allows up to 133MB/s per NAND channel and ONFI 2.1 allows 166 or 200MB/s, so 8 channels provide more bandwidth than a single 6Gbps SAS lane (600MB/s net). The real benefit of interfacing directly from PCI-E to NAND is avoiding the extra bandwidth matching step.
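As a back-of-envelope check of the bandwidth mismatch (my arithmetic, using the per-channel rates quoted above):

```python
# Aggregate NAND-channel bandwidth vs one 6Gbps SAS lane (rates as quoted above).
ONFI_MBPS = {"2.0": 133, "2.1": 200}   # MB/s per NAND channel
SAS_LANE_MBPS = 600                    # net bandwidth of a single 6Gbps SAS lane
channels = 8                           # typical SATA/SAS SSD controller

for ver, rate in ONFI_MBPS.items():
    agg = channels * rate
    print(f"ONFI {ver}: {channels} x {rate} = {agg} MB/s "
          f"({agg / SAS_LANE_MBPS:.1f}x one SAS lane)")
```

Even at ONFI 2.0 rates, 8 channels already exceed the SAS lane; behind a SATA/SAS interface that NAND bandwidth is simply throttled.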
Using the figure below and the 350GB net capacity of the smaller model, the 0.875 factor for RAIN (RAID-style redundancy on NAND instead of disks) implies 400GB of capacity prior to RAIN. The 0.78 factor for over-provisioning then implies a raw capacity of 512GB. This would imply that the 350GB net capacity is binary, not decimal?
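The capacity chain can be verified with a quick calculation (treating the 0.78 over-provisioning factor as exactly 400/512 = 0.78125, which is my assumption):

```python
# Capacity chain: raw -> over-provisioning -> RAIN -> net.
# The 0.78 factor from the figure is treated as exactly 400/512 = 0.78125.
raw_gb = 512
after_op = raw_gb * 0.78125    # over-provisioning leaves 400GB
net_gb = after_op * 0.875      # RAIN (7 of 8 channels hold data) leaves 350GB
print(after_op, net_gb)        # 400.0 350.0
```

The numbers work out exactly, which supports reading the 512GB raw figure as a binary (power-of-two) capacity.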
Micron also has a PCI-E SSD in a 2.5in HDD form factor at 175 and 350GB.
| 2.5in model | 175GB | 350GB |
|---|---|---|
| Write Endurance (PB) | 25 | 50 |
| Random Read 4K | 415K IOPS | 415K IOPS |
| Random Write 4K | 145K IOPS | 145K IOPS |
The Intel 910 Series finally came out earlier this year. The 910 uses an LSI PCI-E to SAS controller followed by SAS to NAND controllers.
| Intel 910 | 400GB model | 800GB model |
|---|---|---|
| Raw Capacity (GB) | 768? | 1536? |
| Write Endurance (PB) | 5-7 | 10-14 |
| Random Read 4K | 90K IOPS | 180K IOPS |
| Random Write 4K | 38K IOPS | 75K IOPS |
Some technology enthusiasts like to talk about the theoretical "advantages" of not being encumbered by the extra interface transition through SAS. I would point out that the LSI SAS controller is a very mature server product. Too many product vendors cite impressive performance numbers for their PCI-E SSD on the basis of a single adapter in the system.
In the server environment, performance from a single card is only one of several metrics. It is also critically important that the device have very low overhead. The objective is for the application (the SQL Server database engine in our case) to run with as little disruption as possible. A complete server system will have several PCI-E SSD devices; if the IO performance does not scale over multiple devices, then it is of limited value in servers. LSI has deep experience on the server side, as LSI SAS controllers are used in almost all server systems. They are the controller of choice for TPC benchmarks, with a proven ability to scale.
The NAND is 25nm with High Endurance Technology (HET). Write endurance is 2.5PB per 200GB module at 4KB write IO and 3.5PB at 8KB. This is 30X over consumer SSDs?
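For perspective, that endurance rating works out as follows (my own arithmetic, not an Intel figure; the 5-year service life is an assumption):

```python
# Rough endurance arithmetic (my derivation; the 5-year service life is assumed).
endurance_pb = 2.5                # rated write endurance per 200GB module, 4KB IO
module_gb = 200
full_writes = endurance_pb * 1e15 / (module_gb * 1e9)  # full-module writes
years = 5
dwpd = full_writes / (years * 365)                     # drive writes per day
print(f"{full_writes:.0f} full-module writes, about {dwpd:.1f} DWPD over {years} years")
```

Nearly 7 full drive writes per day for 5 years is comfortably in enterprise territory.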
Read and write latency is quoted as < 65us. The 800GB model can support 1500MB/s large block write if higher power is available. The architecture of the 800GB is shown below (from Intel Solid-State Drive 910 Series Product Specification).
The PCI-E bridge chip has a SAS interface on the back end. There are 4 SAS channels to the SAS-to-NAND ASICs. Each NAND module is 200GB. So the 400GB model has 2 SAS channels? It would seem that the Intel 910 uses half of the x4 SAS at 400GB and the full x4 SAS at 800GB.
There are several sub-brands of the LSI Nytro WarpDrive.
The Nytro MegaRAID Application Acceleration card has NAND flash plus software to cache hard drives on the x4 SAS port. The Nytro WarpDrive Application Acceleration card is the straight PCI-E SSD, in 200 and 400GB SLC and 400, 800 and 1600GB MLC versions. LSI also has the Nytro XD Application Acceleration Storage Solution, a software product for caching SAN or direct-attach storage on SSD.
| Nytro WarpDrive | | |
|---|---|---|
| Raw Capacity (GB) | ? | ? |
| Write Endurance (PB) | ? | ? |
| Random Read 4K | 238K IOPS | 218K IOPS |
| Random Write 4K | 133K IOPS | 75K IOPS |
| Random Read 8K | 189K IOPS | 183K IOPS |
| Random Write 8K | 137K IOPS | 118K IOPS |
I think the LSI WarpDrive came out in 2011, but I was not paying attention. The SLP-300 is rated at 1,400MB/s read and 1,200MB/s write for 64K sequential, and 150K read and 190K write IOPS for 4K random. The interface is x8 PCI-E gen 2. Latency is < 50usec.
OCZ PCI-E SSDs have been moved to a separate page.
AnandTech details the Micron P320h PCIe SLC SSD in 350 and 700GB capacities. Specifications are 3GB/s read, 2GB/s write, and 750K read and 298K write IOPS (140K for the 350GB and 200K for the 700GB model?). The IOPS numbers vary by source, so it is probably a complicated matter that is not being fully documented. Endurance is 25 and 50 petabytes respectively.
A feature of the P320h is Redundant Array of Independent NAND (RAIN). I believe this to be the correct evolutionary shift from HDD RAID, as the SSD is not fundamentally a single device, but rather multiple devices. So the right place for redundancy is inside the SSD. Computerworld reports RAIN is implemented as 7 channels of data plus 1 channel for parity. Read and Write latency is 50 μsec.
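A toy sketch of 7+1 parity striping of this kind, purely illustrative and not Micron's actual implementation:

```python
# Toy 7+1 parity striping: the parity channel is the XOR of the 7 data channels,
# so the contents of any single failed channel can be rebuilt. Illustrative only;
# not Micron's actual RAIN implementation.
from functools import reduce

data_channels = [bytes([i * 7 + j for j in range(4)]) for i in range(7)]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_channels))

# Lose channel 3, then rebuild it from the six survivors plus parity.
survivors = data_channels[:3] + data_channels[4:] + [parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data_channels[3]
```

The 7-of-8 data layout is also where the 0.875 capacity factor comes from: one channel in eight is given up to parity.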
The table below is from the Micron P320h datasheet. I am not sure how the numbers work out. I am thinking there are currently 8Gbit SLC NAND chips in production. It is possible to put 4 or 8 die in one package. Then 8 die of 8Gbit each is 64Gbit or 8GB (so presumably NAND density is at the package level?). It would seem reasonable for an enterprise class SSD to be over-provisioned, plus ECC and RAIN, so 64 x 8GB (= 512GB) packages in a 350GB product. Then the 700GB part would have to be 128 x 8GB or 64 x 16GB.
| Capacity | NAND Process | NAND Density | Package Count | Die per Package |
|---|---|---|---|---|
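The package arithmetic above can be checked quickly (the 8Gbit die and 8 die per package are this article's working assumptions, not datasheet figures):

```python
# Package arithmetic (8Gbit SLC die, 8 die per package, per the reasoning above).
die_gbit = 8
die_per_package = 8
package_gb = die_per_package * die_gbit / 8    # 8Gbit = 1GB per die -> 8GB package
packages = 64
raw_gb = packages * package_gb                 # 64 x 8GB = 512GB raw
print(package_gb, raw_gb)                      # 8.0 512.0
# The 700GB part would then need 1024GB raw: 128 x 8GB or 64 x 16GB packages.
```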
The Micron website says the P320h is sampling as of June 2011 with production in Q3. Computerworld reports that Micron intends to price around $16 per GB or $16K per TB. There is no mention of an MLC product.
Various websites, AnandTech among them, discuss specifications for the upcoming Intel 710 and 720 series SSDs, but the Intel website does not yet mention these on its SSD products page. The 720 is a PCIe 2.0 SLC product with 2.2GB/s read, 1.8GB/s write, 180K IOPS read, and 56K IOPS write.
The HP specification sheet for their PCIe IO Accelerators also provides information on the Fusion-IO driver memory usage.
The amount of free RAM required by the driver depends on the size of the blocks used when writing to the drive. The smaller the blocks, the more RAM required. Here are the guidelines for each 80GB of storage:
| Block Size (bytes) | RAM per 80GB |
|---|---|
Give credit to the HP technical people, who really know what key information should be documented. This is too important to leave to the simpletons in !@#$%^&*. (IBM also puts out great Redbooks; give IBM credit for spending money on important material, not just marketing rubbish.)