Wikipedia defines SSD as solid-state drive; I have also seen it referred to as solid-state disk. But there is no drive and no disk, so I am calling it a solid-state device.

Update 2011-10

Solid-State Devices (not disks, not drives) Technology

After years of anticipation and false starts, the SSD is finally ready to take a featured role in database server storage. There were false starts because NAND flash is very different from hard disks and cannot simply be dropped into a storage device and infrastructure built around hard disk characteristics. Too many simple(ton) people became entranced by the headline specifications of NAND-based SSD, usually random IOPS and latency. It is always the details in the small print (or omitted outright) that are critical. Now, enough of the supporting technologies to use NAND-based SSD in database storage systems are in place, and more are coming soon.

Random IO performance has long been the laggard in computer system performance. Processor performance has improved along the 40% per year rate of Moore's law. Memory capacity has grown at around 27% per year (memory bandwidth has kept pace, but not memory latency). Hard disk drive capacity for a while grew at 50%-plus per year. Even HDD sequential transfer rates have increased at a healthy pace, from around 5MB/s to 200MB/s over the last 15 years. However, random IOPS has only roughly tripled over the same 15-year period as spindle speeds rose from 5400RPM to 15K. The wait for SSD to finally break the random IOPS stranglehold has been long, but it is finally taking place.
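A rough back-of-envelope check on the HDD random IOPS claim, using my own illustrative figures rather than any particular drive datasheet: per-disk random IOPS is approximately 1 / (average seek + half rotation + transfer time).

    # Rough per-disk random IOPS estimate: 1 / (avg seek + half rotation + transfer).
    # The inputs below are typical illustrative figures, not from a specific datasheet.
    def hdd_random_iops(rpm, avg_seek_ms, transfer_mb_s, io_kb=8):
        half_rotation_ms = 0.5 * 60000.0 / rpm                   # average rotational latency
        transfer_ms = io_kb / 1024.0 / transfer_mb_s * 1000.0    # time to move one IO
        return 1000.0 / (avg_seek_ms + half_rotation_ms + transfer_ms)

    print(round(hdd_random_iops(5400, 12.0, 5)))      # ~50 IOPS, mid-1990s class drive
    print(round(hdd_random_iops(15000, 3.5, 200)))    # ~180 IOPS, current 15K drive

Mechanical latency dominates both cases, which is why 15 years of progress only roughly tripled random IOPS while sequential transfer rates grew forty-fold.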

We should expect three broad lines of progress in the next few years. One is the use of SSD to supplement or replace HDD in key functions. Second is a complete redesign of storage system architecture around SSD capabilities, with consideration that high-capacity HDD is still useful. Third, it is time to completely rethink the role of memory and storage in server system architecture, and perhaps database architecture with respect to data and log.

A quick survey of SSD products is helpful to database professionals because of the critical dependency on storage performance. However, it quickly becomes apparent that it is also necessary to provide at minimum a brief explanation of the underlying NAND flash, including the proliferation of SLC, MLC and eMLC variants. Next are the technologies necessary to implement high-performance storage from NAND flash; the Open NAND Flash Interface (ONFI) industry workgroup is important in this regard. This progresses to the integration of SSD in storage systems, including form factor and interface strategies. From here we can form a picture of the SSD products available, and develop a plan to implement SSD where appropriate.

Non-Volatile Memory

To take the place of hard drives in a computer system, the storage technology needs to be non-volatile memory, in which information is retained on power shutdown. Of the NV-memory technologies, NAND flash is the most prevalent in hard-disk alternative/replacement storage devices. NOR flash has special characteristics suitable for in-place code execution. Other non-volatile memories include Magneto-resistive RAM, Spin-Torque Transfer, and the Memristor. Phase-Change Memory shows promise in finer granularity and lower read latency.

NAND Flash

The Micron NAND website is a good source of information on NAND. Wikipedia has a description of Flash Memory, explaining the fundamentals and the difference between NAND and NOR. The diagrams below from Cyferz show NOR wiring on the left and NAND wiring on the right.

[Figures: NOR flash wiring (left) and NAND flash wiring (right), from Cyferz]

A key difference is that NAND has fewer bit (signal) and ground lines, allowing for higher density, hence lower cost per bit (though today it makes more sense to talk about price per Gbit, which eliminates the leading zeros).

Multi-Level Cell

Sometime around 1997(?), Intel published a paper on multi-level cell for NOR flash, called StrataFlash. At some point, MLC made its way to NAND, supporting 2 bits per cell. There is currently a 3-bit cell in development, but this may be more for low-performance applications. MLC has significantly longer program (write) time than SLC.

Intel's 3rd-generation SSD, with 25nm NAND from the Intel-Micron Flash Technologies (IMFT) joint venture, will be out soon. The IMFT 34nm 2-bit-per-cell 4GB 172mm² and 25nm 2-bit-per-cell 8GB 167mm² die (from Anandtech) are shown below.

[Figures: IMFT 34nm 2-bit-per-cell 4GB 172mm² die and 25nm 2-bit-per-cell 8GB 167mm² die]

[Figure: IMFT 34nm 3-bit-per-cell 4GB 126mm² die]

A significant portion of the die is for logic?

Numonyx SLC and MLC NAND Specifications

Numonyx (now Micron) has public specification sheets for their NAND chips.

                                  ------------ x8 organization ------------   ----------- x16 organization -----------
Page Size        Type  Density    Page       Spare    Block   Block Spare     Page        Spare     Block      Block Spare
Small page       SLC   128M-1G    512 bytes  16 B     16K     512             256 words   8 words   8K words   256 words
Large page       SLC   1G-16G     2 Kbytes   64 B     128K    4K              1K words    32 words  64K words  2K words
Very Large page  SLC   8G-32G     4 Kbytes   128 B    256K    8K(?)           -           -         -          -
Very Large page  MLC   16G-64G    4 Kbytes   224 B    512K    28K             -           -         -          -

Type  Density  Random Access  Page Program  Block Erase  ONFI
SLC   128M-1G  12μs           200μs         2ms          ?
SLC   2-16G    25μs           200μs         1.5ms        1.0
SLC   8-64G    25μs           500μs         1.5ms        ?
MLC   16-64G   60μs           800μs         2.5ms        ?

The transfer time for each subsequent byte/word is cited as 25ns, corresponding to a 40MHz clock frequency. SLC is typically rated for 100K program/erase cycles, and MLC for 5,000 cycles. The (older) lower-capacity SLC chips have 512-byte pages.
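A quick calculation from these figures (my arithmetic, for illustration only) shows that the legacy asynchronous interface, not the NAND array itself, is often the limiting factor:

    # Per-die timing from the figures above: 25ns per byte on the 40MHz asynchronous bus,
    # large-page SLC with 2KB pages, 25us random access, 200us page program.
    BUS_NS_PER_BYTE = 25

    def read_page_us(page_bytes, random_access_us):
        return random_access_us + page_bytes * BUS_NS_PER_BYTE / 1000.0

    def program_page_us(page_bytes, program_us):
        return page_bytes * BUS_NS_PER_BYTE / 1000.0 + program_us

    page = 2048
    read_us = read_page_us(page, 25)          # 25 + 51.2 = 76.2us per page
    write_us = program_page_us(page, 200)     # 51.2 + 200 = 251.2us per page
    print(page / read_us, "MB/s read per die")      # ~27 MB/s
    print(page / write_us, "MB/s program per die")  # ~8 MB/s

Per-die throughput in the tens of MB/s is why SSDs gang many die across many channels, as discussed in the ONFI section below.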

NAND Organization

I am not sure about this, but my understanding is that the NAND package is referred to as a target, and each die within it is a logical unit. A single package could have one or more (up to 8?) die, so each die is addressed as a LUN? The die is divided into planes; the die in the pictures above have 4 or 8 planes(?), and two planes may support interleaved addressing. Below the plane is the block, and below the block is the page. So the organization appears to be: target (package), one or more logical units (die), planes, blocks, pages.

[Figure: NAND organization - target, logical unit, plane, block, page]

See Micron Choosing the Right NAND

[Figure: from Micron, Choosing the Right NAND]

The two figures below are from the Micron document NAND 201, by Jim Cooke, September 2011. The first is a 2Gb NAND from 2006. The second is a 32Gbit NAND in 2010.

[Figure: 2Gb NAND organization, 2006 (Micron NAND 201)]

In the figure below, the 32Gb 25nm Micron SLC NAND flash is one LUN, comprised of 2 planes. Each plane is 16Gbit, comprised of 2048 blocks. Each block is 8Mbit or 1M bytes + 56K additional, comprised of 128 pages. Each page is 8K bytes (or 64K bits) + 448 bytes additional.
[Figure: 32Gb 25nm Micron SLC NAND organization (Micron NAND 201)]
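The arithmetic behind that organization, as I read the figure:

    # 32Gb 25nm SLC organization from the Micron NAND 201 figure above.
    page_data, page_spare = 8192, 448        # bytes per page (data + additional)
    pages_per_block = 128
    blocks_per_plane = 2048
    planes_per_lun = 2

    block_bytes = pages_per_block * page_data           # 1,048,576 bytes = 1MB = 8Mbit
    block_spare = pages_per_block * page_spare          # 57,344 bytes = 56KB additional
    plane_gbit = blocks_per_plane * block_bytes * 8 / 2**30    # 16 Gbit per plane
    lun_gbit = planes_per_lun * plane_gbit                     # 32 Gbit per LUN
    print(block_bytes, block_spare, plane_gbit, lun_gbit)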

Block Erase, Garbage Collection and Write Amplification

After NAND became the solid-state component of choice, the industry started to learn the many quirks and nuances of NAND SSD behavior. NAND must be erased an entire block at a time (roughly 2,000μs?), and a write (program) can only be made to pages of an erased block.


The block erase requirement has a significant impact on write performance. Writes to MLC are far slower than to SLC. The write performance issues of MLC can be mitigated with over-provisioning.

The Wikipedia Write Amplification article explains in detail the additional write overhead due to garbage collection: Write Amplification = flash writes / host writes. Small random writes increase WA. Write amplification can be kept to a minimum with over-provisioning.
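A hypothetical illustration of the formula (my numbers, chosen only to show the mechanism):

    # Write amplification = bytes physically written to flash / bytes written by the host.
    # Hypothetical case: an 8KB host write that forces garbage collection to relocate
    # 96 still-valid 8KB pages out of a mostly-full block before it can be erased.
    host_write = 8 * 1024
    valid_pages_relocated = 96
    flash_write = host_write + valid_pages_relocated * 8 * 1024
    print(flash_write / host_write)    # WA = 97 in this poor case; generous
                                       # over-provisioning keeps relocations far lower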


The block erase requirement has a significant impact on write performance. Writes to SLC were already not fast to begin with, writes to MLC are much slower than SLC (800 versus 200-500μs), and on top of this, the implications of the block erase requirement can result in erratic write performance depending on the availability of free blocks. The write performance issues caused by the block erase requirement can be solved with over-provisioning.

Below are slides from the Intel Developer Forum 2010 presentation "Enterprise Solid State Drive (SSD) Endurance" by Scott Doyle and Ashok Narayanan.

[Slides: Intel Developer Forum 2010, Enterprise Solid State Drive (SSD) Endurance]

NAND SSD may exhibit a "bathtub" effect in read-after-write performance. The intuitive expectation is that mixed read-write performance should be close to a linear interpolation between the read and write performance specifications. Without precautions, the mixed performance may be sharply lower than both the pure read and pure write performance specifications.
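For reference, the "intuitive" expectation for a mix is a linear interpolation of the service times, which works out to a weighted harmonic combination of the pure read and pure write IOPS (my formula sketch, not from the STEC report):

    # Naive expectation for a read/write mix: interpolate the service times, which is
    # the weighted harmonic combination of the pure read and pure write IOPS.
    # The "bathtub" effect is when the measured mix falls well below this line.
    def expected_mixed_iops(read_iops, write_iops, read_fraction):
        service = read_fraction / read_iops + (1.0 - read_fraction) / write_iops
        return 1.0 / service

    print(expected_mixed_iops(30000, 10000, 0.5))   # 15,000 IOPS expected for a 50/50 mix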

This example is cited in a STEC Benchmarking Enterprise SSDs report.

[Figure: mixed read/write performance, from the STEC Benchmarking Enterprise SSDs report]

Wear and MTBF

NAND flash also has wear limits. Originally this was 100,000 program/erase cycles for SLC and 5-10K for MLC. The write longevity issues of MLC seem to be sufficiently solved with wear leveling and other strategies, so SLC SSD may become relegated to a specialty market.

The fact that NAND SSD has a write-cycle limit suggests that database administration could be adjusted to accommodate this characteristic. If there were some means of determining that an SSD is near the write-cycle limit, active data could be migrated off, and the SSD could be assigned to static data. In an OLTP database, tables could be partitioned to split active and archival data. In data warehouses, the historical data should be static.
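A rough endurance estimate along these lines, with illustrative inputs rather than vendor figures:

    # Rough SSD lifetime estimate from rated program/erase cycles.
    # All inputs are illustrative assumptions, not vendor specifications.
    def years_to_wear_out(capacity_gb, pe_cycles, host_write_gb_per_day, write_amp=2.0):
        total_flash_writes_gb = capacity_gb * pe_cycles
        flash_writes_per_day_gb = host_write_gb_per_day * write_amp
        return total_flash_writes_gb / flash_writes_per_day_gb / 365.0

    print(years_to_wear_out(200, 5000, 500))     # ~2.7 years for 5K-cycle standard MLC
    print(years_to_wear_out(200, 30000, 500))    # ~16 years for 30K-cycle enterprise MLC

An estimate like this, combined with the drive's own wear indicators, is what would tell the DBA when to rotate a device over to static data.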

Flash Translation Layer

Because of the characteristics of NAND flash, such as block erasure and wear limits, a simple direct mapping of logical to physical pages is not feasible. Instead there is a Flash Translation Layer (FTL) in between. Numonyx provides a brief description here. The FTL is implemented in the SSD controller(?), and determines the characteristics of the SSD. Below is a block diagram of the FTL between the file system and the NAND.

[Figure: the Flash Translation Layer between the file system and NAND]

Another diagram is from the Micron/Numonyx NAND Flash Translation Layer (NFTL) 4.5.0 document. This document has a detailed description of the Flash Abstraction Layer, or Translation Module, which incorporates functionality for bad block management, wear leveling and garbage collection.

[Figure: Micron/Numonyx NFTL block diagram]

The strategy for writing to NAND somewhat resembles the database log, and the NetApp Write Anywhere File Layout (WAFL), which is an indication that perhaps a complete redesign of the database data and log architecture would be better suited to solid-state storage.
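A minimal sketch of the page-mapping idea behind an FTL (heavily simplified and hypothetical; real FTLs add wear leveling, bad-block management and background garbage collection):

    # Toy page-level FTL: logical pages map to physical pages, every write goes to a
    # fresh physical page (out-of-place, log-style), and the superseded page is marked
    # stale for later garbage collection and block erase.
    class ToyFTL:
        def __init__(self, num_physical_pages):
            self.mapping = {}                      # logical page -> physical page
            self.free = list(range(num_physical_pages))
            self.stale = set()                     # pages awaiting block erase

        def write(self, logical_page, data):
            if not self.free:
                raise RuntimeError("no free pages: garbage collection needed")
            new_phys = self.free.pop()
            old_phys = self.mapping.get(logical_page)
            if old_phys is not None:
                self.stale.add(old_phys)           # NAND cannot overwrite in place
            self.mapping[logical_page] = new_phys
            # ... program 'data' into new_phys ...

    ftl = ToyFTL(num_physical_pages=1024)
    ftl.write(7, b"row data")
    ftl.write(7, b"updated row")    # the rewrite relocates; the old page becomes stale

The append-only, relocate-on-update pattern is exactly what a database log or WAFL does, which is the resemblance noted above.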

Error Detection and Correction

NAND is currently at 128 or even 256Gbit density per die for 2-bit cells; 128Gbit means 64G cells, or 16GB on one die! SLC is now at 128Gbit? (Never mind, apparently the Numonyx SLC 64Gbit product is 8 x 8Gbit die stacked. Still very impressive at both the die and package level.) One aspect of such high densities is that raw bit error rates are high. All (high-density?) NAND storage requires sophisticated error detection and correction, and the degree of EDC varies between the enterprise and consumer markets.

High Endurance Enterprise NAND

The Micron website describes High-Endurance NAND as

"Enterprise NAND is a high-endurance NAND product family optimized for intensive enterprise applications. Breakthrough endurance, coupled with high capacity and high reliability (through low defect and high cycle rates), make Enterprise NAND an ideal storage solution for transaction-intensive data servers and enterprise SSDs.

Our MLC Enterprise NAND offers an endurance rate of 30,000 WRITE/ERASE cycles, or six times the rate of standard MLC, and SLC Enterprise NAND offers 300,000 cycles, or three times the rate of standard SLC. These parts also support the ONFI 2.1 synchronous interface, which improves data transfer rates by four to five times compared to legacy NAND interfaces."

Enterprise MLC is available up to 256Gbit, and SLC up to 128Gbit. I will try to get more information on this.

eMMC?

Below is an interesting combination of SLC and MLC.

[Figure: combination of SLC and MLC NAND]

Open NAND Flash Interface

The ONFI workgroup's charter is to "define standardized component-level interface specifications as well as connector and module form factor specifications for NAND Flash."

ONFI 1.0

The Micron presentation by Michael Abraham, ONFI 2: Source Synchronous Interface Breaks the I/O Bottleneck, explains both the ONFI 1.0 (2006) and 2.x versions (if the above link does not work, try ONFI presentations). Below is a summary of the Abraham presentation.

In the original ONFI specification, the NAND array had a parallel read that could support 330MB/s bandwidth (8KB in 25μs) with SLC(?), but the interface bandwidth was 40MB/sec (the slide deck mentions a 25ns clock, corresponding to 40MHz, though the ONFI website says 1.0 is 50MB/s). Accounting for Array Read plus Data Output, a read takes 25 + 211μs for SLC and 50 + 211μs for MLC, for net bandwidth of 34 and 30MB/s respectively. Net write bandwidth is 17MB/s and 7MB/s respectively. Below is the single-channel IO.

                                ------------- Read -------------   ------------- Write ------------
Device          Planes  Data    Array     Data       Total         Data      Array       Total
                        Size    Read      Output     Read          Input     Program     Write
SLC 4KB page    2       8KB     25μs      211μs      34MB/s        211μs     250μs       17MB/s
MLC 4KB page    2       8KB     50μs      211μs      30MB/s        211μs     900μs       7MB/s
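The net bandwidth figures in the table can be reproduced with simple arithmetic (8KB moved per operation across the two interleaved planes; my calculation):

    # Reproduce the single-channel ONFI 1.0 numbers above: 8KB per array operation,
    # 211us to clock 8KB across the ~40MB/s asynchronous interface.
    def net_mb_per_s(data_kb, array_us, bus_us):
        return data_kb * 1024 / (array_us + bus_us)   # bytes per microsecond ~= MB/s

    print(net_mb_per_s(8, 25, 211))    # SLC read  ~34 MB/s
    print(net_mb_per_s(8, 50, 211))    # MLC read  ~31 MB/s
    print(net_mb_per_s(8, 250, 211))   # SLC write ~17 MB/s
    print(net_mb_per_s(8, 900, 211))   # MLC write ~7 MB/s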

Note that the write latency is very high relative to hard disk sequential writes, which is what transaction log writes are. I believe one purpose of the DRAM cache on the SSD controller is to hide this latency.

While the bandwidth and latency for NAND at the chip level are not spectacular, both could be substantially improved at the device level with more die per channel, more channels, or both, as illustrated below.

[Figure: scaling NAND bandwidth with more die per channel and more channels]

Note: I am puzzled by the tables below; I think the number-of-channels and die-per-channel axes were inadvertently switched. If the signaling bandwidth is 40MB/s per channel, then 4 channels are required for a maximum of 160MB/s, but it takes multiple die per channel to saturate the 40MB/s channel bandwidth.

SLC 2-Plane Performance: Die per channel vs. # of channels

                 ---------- Read (MB/s) ----------   ---------- Write (MB/s) ---------
# of channels       1        2        4        8        1        2        4        8
1 die per ch       34       40       40       40       19       38       40       40
2 die per ch       68       80       80       80       38       76       80       80
4 die per ch      136      160      160      160       76      152      160      160

MLC 2-Plane Performance: Die per channel vs. # of channels

                 ---------- Read (MB/s) ----------   ---------- Write (MB/s) ---------
# of channels       1        2        4        8        1        2        4        8
1 die per ch       30       40       40       40        7       14       28       40
2 die per ch       60       80       80       80       14       28       56       80
4 die per ch      120      160      160      160       28       56      112      160

SLC could achieve near peak performance with 4 channels and 2 die per channel. MLC could also achieve peak read performance with 4 channels and 2 die per channel, but peak write performance required 8 die per channel.
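The scaling in these tables follows a simple rule: each channel delivers the lesser of (per-die bandwidth × die per channel) and the channel interface bandwidth, and channels then add up. A sketch of that rule, reading the rows as channel count and the columns as die per channel per my note above:

    # Aggregate device bandwidth: per-channel throughput is capped by the interface,
    # then multiplied across channels. die_mb_s is the net per-die rate (e.g. about
    # 34 read / 19 write for ONFI 1.0 SLC), channel_mb_s the interface rate (40).
    def device_bw(die_mb_s, die_per_channel, channel_mb_s, channels):
        return channels * min(die_mb_s * die_per_channel, channel_mb_s)

    print(device_bw(34, 2, 40, 4))    # 160 MB/s: SLC read, 4 channels x 2 die each
    print(device_bw(19, 2, 40, 4))    # 152 MB/s: SLC write, 4 channels x 2 die each
    print(device_bw(19, 8, 40, 4))    # 160 MB/s: SLC write hits the channel cap at 8 die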

ONFI 2.x Specification

ONFI 2.0 defines a synchronous interface, improving the IO channel to 200MB/sec and allowing 16 die per channel. Version 2.0 (2008) allowed speeds greater than 133MB/s. Version 2.1 (2009) increased this to 166 and 200MB/s, plus other enhancements, including in ECC. (The current Micron NAND parts catalog lists 166MT/s as available.) Read performance is improved for a single die and for multiple die. Write performance did not improve much for a single die, but did for multiple die on the same channel. Version 2.2 added other features. ONFI 2.3 adds EZ-NAND to offload ECC responsibility from the host controller.

Below are the net bandwidth calculations for ONFI 2.x.

                                ------------- Read -------------   ------------- Write ------------
Device          Planes  Data    Array     Data       Total         Data      Array       Total
                        Size    Read      Output     Read          Input     Program     Write
SLC 4KB page    2       8KB     25μs      43μs       120MB/s       43μs      250μs       28MB/s
MLC 4KB page    2       8KB     50μs      43μs       88MB/s        43μs      900μs       8MB/s

Synchronous SLC 2-Plane Performance: Die per channel vs. # of channels

                 ---------- Read (MB/s) ----------   ---------- Write (MB/s) ---------
# of channels       1        2        4        8        1        2        4        8
1 die per ch      120      200      200      200       28       56      112      200
2 die per ch      240      400      400      400       56      112      224      400
4 die per ch      480      800      800      800      112      224      448      800

Synchronous MLC 2-Plane Performance: Die per channel vs. # of channels

                 ---------- Read (MB/s) ----------   ---------- Write (MB/s) ---------
# of channels       1        2        4        8        1        2        4        8
1 die per ch       88      176      200      200        8       16       32       64
2 die per ch      176      352      400      400       16       32       64      128
4 die per ch      352      704      800      800       32       64      128      256

Almost all SSDs on the market in 2010 are ONFI 1.0. SSDs using ONFI 2.0 are expected soon(?) with >500MB/s capability?

ONFI 3.0 Specification

The future ONFI 3.0 will increase the interface to 400MT/s.

Non-Volatile Memory Host Controller Interface

The existing interfaces to the storage system were all designed around the characteristics of disk drives, naturally, because the storage system was comprised of disk drives. As expected, this is not the best match to the requirements and features of non-volatile memory storage. The Non-Volatile Memory Host Controller Interface (NVMHCI) specification will define "a register interface for communication with a non-volatile memory subsystem" and "also defines a standard command set for use with the NVM device." The NVMHCI specification should be complete this year, with product in 2012.

A joint Intel and IDT presentation by Amber Huffman and Peter Onufryk at Flash Memory Summit 2010 discusses Enterprise NVMHCI. In the storage system today, there is a controller on the hard drive (the chip on the hard drive PCB), with a SAS or SATA interface to the HBA.

[Figure: today's storage path - drive controller with SAS/SATA interface to the HBA]

The argument is that the HBA and drive controller should be integrated into a single controller on the SSD, with a PCI-E interface upstream. Curiously, IDT mentions nothing about building a native PCI-E flash controller, considering that they are a specialty silicon controller vendor.

[Figure: SSD controller integrating the HBA function, with PCI-E upstream]

Below is the Enterprise NVMHCI view. The RAID controller now has PCI-E interfaces on both the upstream and downstream sides. I had previously proposed that RAID functionality should be pushed into the SSD itself.

[Figure: Enterprise NVMHCI view - RAID controller with PCI-E on both upstream and downstream sides]

SSD with PCI-E Interface

Kam Eshghi, also of Integrated Device Technology, has an FMS 2010 presentation, "Enterprise SSDs with Unrivaled Performance: A Case for PCIe SSDs," endorsing the PCI-E interface. The diagrams below are useful for illustrating the form factor. Below is a RAID PCI-E implementation using a standard RAID controller with PCI-E on the front end and SATA or SAS on the back end, a flash controller with a SATA interface, and NAND chips.

[Figure: PCI-E RAID SSD built from a standard RAID controller and SATA flash controllers]

In the next example, the host provides management services, consuming resources,

[Figure: PCI-E flash with host-based management]

and finally, a Flash controller with native PCI-E interface (and RAID capability?).

[Figure: flash controller with native PCI-E interface]

The desire to connect solid-state storage directly to the PCI-E interface is understandable. My issue is that the current standard PCI-E form factor is not suitable for easy access. There is the CompactPCI form factor (not yet defined for PCI-E?) where the external and PCI connections are at opposite ends of the card, instead of on two adjacent sides. This would be much more suitable for storage devices. Some provision should also be made for greater flexibility in storage capacity expansion with the available PCI-E ports.

SSD SATA/SAS Form Factor

There is a joint presentation by LSI and Seagate arguing that the SAS/SATA interface does not limit SSD performance, and has excellent infrastructure for module expansion and ease of access.

The current trend for SSDs with SATA/SAS interfaces is the 2.5in HDD form factor. The standard 3.5in HDD form factor is far too large for SSD; for that matter, the 3.5in form factor has become too big for HDD as well. The standard defined heights for 2.5in drives are 14.8mm, 9.5mm, and 7mm. Only enterprise drives now use the 14.8mm height, as notebook drives are all 9.5mm or thinner.

The 7mm-height drives used in thin notebooks have limited capacity (250GB?), but the height might be ideal for SSD. A standard 2U enclosure holds 24 x 14.8mm drives, but could perhaps hold 50 x 7mm SSD units?

(Update) Apparently Oracle/Sun has already implemented the high-density strategy. The F5100 holds up to 80 Flash Modules (2.5in, 7mm form factor, SATA interface) in a 1U enclosure for 1.92TB capacity. I suppose the Flash Modules are mounted two deep. A hard drive enclosure is already heavy enough with one rank of disks, but two deep for a flash enclosure is very practical. And to think there are still storage vendors peddling 3U 3.5in enclosures!

Gary Tressler of IBM proposes that SSD should actually adopt the 1.8in form factor. Presumably there would be only a single SSD capacity. The storage enclosure would have very many slots, and we could just plug in however many we need.

SSD Controllers Today

I believe STEC is one of the component suppliers for Enterprise-grade SSD, especially with SAS interface, while most SSDs are SATA. EMC just announced Samsung as a second source. SandForce seems to be a popular SSD controller source for many SSD suppliers.

The Storage Review SSD Reference Guide provides a helpful list of SSD controller vendors. These include the Intel PC29AS21BA0, JMicron, Marvell, Samsung, SandForce and Toshiba.


The Intel SSD controller below.

[Figure: Intel SSD controller]

SandForce SSD Processor

SandForce makes SSD processors used by several SSD vendors. The client SSD processor is the SF-1200: random write IOPS is 30K for bursts and 10K sustained, both at 4K blocks. The SF-1500 is the enterprise controller; the performance numbers are similar. Both support ONFI at 50MT/s and SATA 3Gbps, and can correct 24 bytes (bits?) per 512-byte sector. The SF-1500 is listed as also supporting eMLC, has unrecoverable read errors of less than 1 in 10^17 bits, a reliability MTTF of 10M operating hours, and supports a 5-year enterprise life cycle (100% duty). The SF-1200 has unrecoverable read errors of less than 1 in 10^16 bits, a reliability MTTF of 2M operating hours, and supports a 5-year consumer life cycle at 3-5K cycles.
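To put those unrecoverable read error rates in perspective (my arithmetic):

    # Average data read before one unrecoverable error at the quoted bit error rates.
    def tb_read_per_unrecoverable_error(one_in_bits):
        return one_in_bits / 8 / 1e12      # bits -> bytes -> TB

    print(tb_read_per_unrecoverable_error(1e16))   # SF-1200 class: ~1,250 TB read per error
    print(tb_read_per_unrecoverable_error(1e17))   # SF-1500 class: ~12,500 TB read per error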

SSD vendors with the SandForce processor include Corsair and OCZ.

SandForce SF-2500 & SF-2600 Enterprise SSD Processors

The new SandForce 2000 processor line became available in early 2011. The SF-2000 series supports ONFI 2 at 166MT/s. The enterprise processors are the SF-2500 and SF-2600. SATA 6Gbps and below are supported. The SF-2500 is SATA, supporting only 512B sectors(?). The SF-2600 also supports 4K sectors, has a SATA interface, but can work behind a SAS/SATA bridge.

              -- Sequential (128K) --    ------------ Random (4K) ------------
Processor     Read        Write          Read     Write (Burst)   Write (Sustained)
SF-1200       260MB/s     260MB/s        30K      30K             10K
SF-1500       260MB/s     260MB/s        30K      30K             30K?
SF-2500       500MB/s     500MB/s        60K      60K             60K?

The SandForce 2000 controller diagram.

[Figure: SandForce SF-2000 controller block diagram]

Notes on the SandForce SF-2500 and SF-2600 from the SandForce web site:
"RAISE (Redundant Array of Independent Silicon Elements) technology. RAISE provides the protection and reliability of RAID on a single drive without the 2x write overhead of parity."
Max capacity: 512GB using 32Gb or 64Gb/die components.
Performance (sustained): 500MB/s at 128KB blocks, up to 60K IOPS at 4KB, read and write.
Flash type: MLC, eMLC, SLC, 3xnm, 2xnm (Asynch, Toggle, ONFi2 up to 166MT/s).
Sector size: SF-2500 512B; SF-2600 520, 524, 528 and 4K+DIF.
Reliability: ECC up to 55 bits correctable per 512-byte sector (BCH). Unrecoverable read errors: less than 1 sector per 10^17 bits read.

The client SF-2200 version cites mostly the same specifications, except unrecoverable read errors: less than 1 sector per 10^16 bits read.


Here is an article referenced on the ONFI website: Advances in Nonvolatile Memory Interfaces Keep Pace with the Data Volume.