
Tom's Hardware: Samsung Releases SZ985 Z-SSD In 240GB And 800GB Capacities (2018-01). Samsung Z-NAND has low latency, about 12µs vs. 10µs for Intel Optane (3D XPoint). Prior high-performance NAND is in the 90-115µs range.

Toshiba has XL-Flash (FMS 2018, Aug)

The Memory Guy: 64-Layer 3D NAND Chips Revealed at ISSCC (2017-02). All are 3 bits per cell.


This will be the parent page for storage material. It is being reorganized into topic-specific sections.

Storage 2017 (2017-03),

Transaction IO Performance on Violin (2015-02),

SAN IO Performance Problems (2015-02),   Storage Update 2014 (2014-06),

Storage Performance 2013 (2013-03),   Enterprise Storage including V-Max (2013-05),   VNX (2013-02)

IO Queue Depth Control (2013-10),   IO Queue Depth Strategy (2010-10),   IO Cost Structure (2008-09)

Storage by Topic

Protocols: PCI-E, SAS, FC (and FC HBA)

Components: Hard Disk Drives, SSD Technology, RAID Controllers

SSD products: SATA/SAS SSDs (2012-12 Updated),  PCI-E SSDs, Fusion-io, other SSDs

Storage Systems: Direct-Attach,  SAN (General)

Hitachi AMS, VSP

Intel C5500 (Jasper Forest), Storage 2010 developments.

Vendors: Dell, HP, IBM, EMC, Hitachi, others
This will be the same material as in the Direct-Attach and SAN sections above, except it will be organized by vendors. I will do this after the category sections are complete.

RAID: there is plenty of material from other sources on RAID levels, so I will not cover this for now.

Storage Articles - Older

Below are the older storage articles. Excerpts from the originals, along with new updates, will be incorporated into the new topic sections above.

Storage Overview, System View of Storage, SQL Server View of Storage, File Layout,

Storage Performance for Data Warehouse (2010-10)

Storage Configuration: Part I, Part II, Part III, Part IV, Appendix (2010-04?)

Storage Performance (2009)

HP StorageWorks 2000sa and 2000fc G2 Modular Storage Arrays

Unorganized stuff from 2011 and 2012.

NVM Express

Sometime in the next year, we should start seeing products with the NVM Express interface. A significant part of NVMe is the overhaul of the IO software stack. An IO API call to SATA storage via AHCI (the SATA ports on a PC motherboard) squanders CPU cycles on non-cacheable register reads and other inefficiencies. None of this was an issue with SATA hard disk performance. Even a single recent-generation SATA SSD, capable of 550MB/s and 90K IOPS, does not overload the limits of AHCI.
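To put the stack overhead in perspective, consider how much host CPU gets burned just issuing IO at SSD rates. The per-IO cost below is an assumed illustrative number, not a measurement from this article:

```python
# Illustrative host-CPU cost of a legacy AHCI-era IO stack.
# cpu_us_per_io is an assumed figure for illustration only.
cpu_us_per_io = 10        # assumed CPU microseconds of stack overhead per IO
iops = 90_000             # one SATA SSD near its AHCI-era limit (per the text)

# Fraction of CPU cores consumed purely by the IO software stack.
cores_consumed = iops * cpu_us_per_io / 1_000_000
print(f"{cores_consumed:.2f} CPU cores consumed servicing the IO stack")
```

At hard-disk IOPS rates this overhead was negligible; at SSD rates, nearly a full core per device disappears into the stack, which is the motivation for NVMe's redesign.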

For a large array of HDDs, the more efficient software stack of SAS and FC HBAs is very important. It is not just a matter of being able to drive very high IOPS, but of doing so with minimal impact on the host, so that the host CPUs can perform their primary mission, such as running the database engine, without being heavily disrupted by interrupts. This was sufficient to handle the IOPS possible from very large HDD arrays: one thousand disks generating 200K IOPS in aggregate is not a problem.
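The aggregate figures above are simple multiplication; a back-of-envelope sketch (device counts and per-device rates are illustrative assumptions, except the 90K IOPS SSD figure from the text):

```python
# Back-of-envelope aggregate IOPS, assuming IO spreads evenly across devices.
HDD_IOPS = 200        # assumed random IOPS for one enterprise hard disk
SSD_IOPS = 90_000     # random IOPS for one recent SATA SSD (per the text)

def aggregate_iops(per_device_iops, device_count):
    """Total random IOPS from an array of identical devices."""
    return per_device_iops * device_count

print(aggregate_iops(HDD_IOPS, 1000))  # 1,000 HDDs -> 200,000 IOPS
print(aggregate_iops(SSD_IOPS, 16))    # 16 SATA SSDs -> 1,440,000 IOPS
```

Sixteen SSDs already exceed the 1M IOPS mark that a thousand hard disks could never approach, which is why the legacy stack becomes the bottleneck.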

But for SSD arrays, 1M+ IOPS is not difficult, so the time has come to overhaul the software stack. There is an emphasis on not just efficiency but also scaling on multi-core processors and NUMA system architecture.


Open question: does the Intel Xeon E5 with the X540 10GbE controller support Data Direct IO (DDIO), delivering incoming data directly to cache rather than through memory?