News:
Anandtech, The Samsung 983 ZET (Z-NAND) SSD Review: ... (2019-02).
48L SLC Z-NAND, 64Gb, 3µs read latency, 100µs program, 2/4KB page size
64L TLC, 512Gb, 60µs read latency, 700µs program, 16KB page size
"The random read latency stats for the 983 ZET clearly set it apart from the rest of the flash-based SSDs
and put it in the same league as the Optane SSD.
The Optane SSD's average latency of just under 9µs is still better than the 16µs from the 983 ZET,
but the tail latencies for the two are quite similar."
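Little's Law ties the average latency figures quoted above to the throughput a request stream can sustain. A quick sketch of that arithmetic (the 9µs and 16µs latencies are from the review; the queue depths are illustrative assumptions):

```python
# Little's Law: sustainable IOPS = queue_depth / per-IO latency.
# Latencies are the average random-read figures quoted above;
# queue depths are illustrative assumptions.

def iops(queue_depth, latency_s):
    """Throughput sustainable at a given queue depth and per-IO latency."""
    return queue_depth / latency_s

optane_lat = 9e-6    # Optane SSD average read latency, ~9 us
znand_lat = 16e-6    # Samsung 983 ZET average read latency, ~16 us

for name, lat in [("Optane", optane_lat), ("983 ZET", znand_lat)]:
    for qd in (1, 4, 16):
        print(f"{name:8s} QD={qd:2d}: {iops(qd, lat):>10,.0f} IOPS")
```

At queue depth 1 the 16µs device is bounded near 62.5K IOPS while the 9µs device can exceed 110K IOPS, which is why average latency matters even when both devices have similar tail behavior.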
Tom's Hardware Samsung 983 ZET SSD Review: ... (2019-03).
Tom's Hardware Samsung Releases SZ985 Z-SSD In 240GB And 800GB Capacities (2018-01). Samsung Z-NAND has low latency, 12-20µs vs. 10µs for Intel Optane (3D XPoint). Prior high-performance NAND SSDs are in the 90-115µs range.
Toshiba has XL-Flash (FMS 2018, Aug)
The Memory Guy 64-Layer 3D NAND Chips Revealed at ISSCC (2017-02),
All are 3-bits per cell.
IEEE EDS – SCV Seminar, Recent Trends in Memory Technology Reliability, August 8, 2017, Bob Gleixner, Micron Technology, Inc
Storage
This will be the parent page for storage material. It is being reorganized into topic-specific sections.
Storage 2017 (2017-03),
Transaction IO Performance on Violin (2015-02),
SAN IO Performance Problems (2015-02), Storage Update 2014 (2014-06),
Storage Performance 2013 (2013-03), Enterprise Storage including V-Max (2013-05), VNX (2013-02)
IO Queue Depth Control (2013-10), IO Queue Depth Strategy (2010-10), IO Cost Structure (2008-09).
Storage by Topic
Protocols: PCI-E, SAS, FC (and FC HBA)
Components: Hard Disk Drives, SSD Technology, RAID Controllers,
SSD products: SATA/SAS SSDs (2012-12 Updated), PCI-E SSDs, Fusion iO, other SSD
Storage Systems: Direct-Attach, SAN (General)
| Vendor\Class | Entry | Mid-range | Enterprise |
|---|---|---|---|
| Dell | MD3200 |   |   |
| EMC | CLARiiON AX4 | CLARiiON CX4, VNX(2) | V-Max |
| HP | P2000 | EVA | P9000 |
| Hitachi |   | AMS | VSP |
| IBM | DS3400 | V7000 | DS8000 |
| NetApp | FAS2000 | FAS3100 | FAS6000 |
Intel C5500 (Jasper Forest), Storage 2010 developments.
Vendors: Dell, HP, IBM, EMC, Hitachi, others
This will be the same material as in the Direct-Attach and SAN sections above,
except it will be organized by vendors.
I will do this after the category sections are complete.
RAID: there is plenty of material from other sources on RAID levels, so I will not cover this for now.
Storage Articles - Older
Below are the older storage articles. Excerpts from the originals, along with new updates, will be incorporated into the new topic sections above.
Storage Overview, System View of Storage, SQL Server View of Storage, File Layout,
Storage Performance for Data Warehouse (2010-10)
Storage Configuration: Part I, Part II, Part III, Part IV, Appendix (2010-04?)
Storage Performance (2009)
HP StorageWorks 2000 sa and fc G2 Modular Storage Arrays
Unorganized stuff from 2011 and 2012.
NVM Express
Sometime in the next year, we should start seeing products with the NVM Express interface. A significant part of NVMe is the overhaul of the IO software stack. An IO API call to SATA storage via AHCI (the SATA ports on a PC motherboard) squanders CPU cycles on non-cacheable register reads and other inefficiencies. None of this was an issue with SATA hard disk performance. Even a single recent-generation SATA SSD, capable of 550MB/s and 90K IOPS, does not overload the limits of AHCI.
For a large array of HDDs, the more efficient software stack of SAS and FC HBAs is very important. It is not just a matter of being able to drive very high IOPS, but of doing so with minimal impact on the host, so that the host CPUs can perform their primary mission, such as running the database engine, without being heavily disrupted by interrupts. This was sufficient to handle the IOPS possible from very large HDD arrays: one thousand disks at 200K IOPS is not a problem.
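The one-thousand-disk figure above works out as follows, assuming roughly 200 random IOPS per drive (a typical figure for a 15K RPM disk; the per-drive number is an assumption, not from the original text):

```python
# Back-of-envelope for the HDD array figure above.
# ~200 random IOPS per drive is an assumed 15K RPM figure.
iops_per_hdd = 200
drives = 1000

array_iops = iops_per_hdd * drives
print(f"{drives} drives x {iops_per_hdd} IOPS = {array_iops:,} IOPS")
```

Even this worst case for a pure-HDD array stays well within what a SAS or FC software stack can service without swamping the host.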
But for SSD arrays, 1M+ IOPS is not difficult, so the time has come to overhaul the software stack. The emphasis is not just on efficiency, but also on scaling across multi-core processors and NUMA system architectures.
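To see why 1M IOPS forces the overhaul, consider the CPU cycle budget per IO. The core count and clock below are illustrative assumptions, not figures for any specific system:

```python
# Rough CPU budget per IO at a high aggregate IOPS rate.
# Core count, clock speed, and per-IO stack cost are assumptions.
cores = 16
clock_hz = 2.5e9          # 2.5 GHz
total_iops = 1_000_000

cycles_per_io = cores * clock_hz / total_iops
print(f"Budget: {cycles_per_io:,.0f} cycles per IO across the whole system")

# If a legacy stack were to burn, say, 20,000 cycles per IO (assumed),
# half the machine would be consumed just issuing and completing IOs,
# leaving little for the database engine itself.
legacy_cycles_per_io = 20_000
cpu_fraction = legacy_cycles_per_io / cycles_per_io
print(f"Stack overhead: {cpu_fraction:.0%} of total CPU")
```

The same arithmetic shows why interrupt and completion handling must also spread across cores: funneling 1M completions per second through one core's share of the budget is not viable.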
Does the Intel Xeon E5 with the X540 10GbE adapter use Data Direct IO (DDIO) to deliver data directly into cache, bypassing memory?