
Intel C5500/C3500 (Jasper Forest) Storage Processors

When Jasper Forest was first mentioned as a Nehalem with integrated PCI-E gen2, I did not see the point. The standard single-socket Nehalem system consists of the processor, the high-speed IOH and the legacy ICH. Jasper Forest reduces the part count to an integrated processor plus high-speed IOH, with the PCH (a special ICH for the C5500 series) as a separate part. Memory is additional in both cases. Sure, there might be some savings in cost and board size as well.

But Intel processors are full-custom designs, meaning a very large team of mask designers spends months doing hand layout. Are minor cost savings worth the effort of a custom die? Intel does not even like to make multiple die with different cache sizes. It is extraordinarily unusual for Intel to produce a custom version of its flagship processor, the exceptions being the Xeon EX line, and perhaps Westmere in dual-core and six-core flavors.

The C5500 has 1, 2 and 4-core SKUs, but perhaps there are only two actual distinct die. The three special features of the C5500, listed below, are aimed at storage systems and should not be needed in server and workstation systems.

  1. PCI-E Non-Transparent Bridge
  2. DMA with CRC and RAID 5/6 assist
  3. Asynchronous DRAM Refresh

Intel C5500/C3500 (Jasper Forest) Overview

In the Processors section, I described the Intel Nehalem and Westmere microprocessors. Jasper Forest is a special variant of the 45nm Nehalem for embedded devices, now branded Xeon C5500 and C3500, with PCI-E integrated.

Below is a representation of a single-socket system. The system has a single quad-core processor with 3 memory channels, 16 PCI-E gen2 lanes, and 8 effective PCI-E gen1 lanes. This is sufficient to power an entry system with 4 dual-port 8Gbps FC HBAs (4 ports front-end and 4 back-end). Of course, today the SAN should discard FC on the back-end, so perhaps one x4 SAS back-end, and an option of SAS, FC or iSCSI on the front-end.

[Diagram: single-socket C5500 system]
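
As a rough sanity check on the bandwidth claim above (my numbers, not from the Intel data sheet), the short C sketch below walks through the arithmetic: roughly 500 MB/s per PCI-E gen2 lane per direction after 8b/10b encoding, and roughly 800 MB/s per 8Gbps FC port per direction.

/* Rough bandwidth arithmetic for the entry system described above.
 * Assumptions (mine, not from the Intel data sheet):
 *   PCI-E gen2: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s per lane per direction
 *   8 Gbps FC:  8b/10b encoding                  -> ~800 MB/s per port per direction
 */
#include <stdio.h>

int main(void)
{
    const double pcie_gen2_mb_per_lane = 500.0;   /* MB/s per lane, one direction */
    const double fc8_mb_per_port       = 800.0;   /* MB/s per 8Gbps FC port, one direction */

    int pcie_lanes    = 16;                       /* gen2 lanes on the C5500 */
    int fc_hbas       = 4;                        /* dual-port 8Gbps FC HBAs */
    int ports_per_hba = 2;

    double pcie_bw = pcie_lanes * pcie_gen2_mb_per_lane;
    double fc_bw   = fc_hbas * ports_per_hba * fc8_mb_per_port;

    printf("PCI-E gen2 x16 : %.0f MB/s per direction\n", pcie_bw);
    printf("8 FC ports     : %.0f MB/s per direction\n", fc_bw);
    printf("Headroom       : %.0f MB/s\n", pcie_bw - fc_bw);
    return 0;
}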

Below is a 2-way C5500 system.

[Diagram: 2-way C5500 system]

PCI-E Non-Transparent Bridge (NTB)

What is interesting is the description of the non-transparent PCI-E bridge. The IDF 2009 slide deck says this enables 1) failover memory-mirroring for redundant systems and 2) daisy chaining.

PCI Express* non-transparent bridge (NTB) acts as a gateway that enables high performance, low overhead communication between two intelligent subsystems, the local and the remote subsystems. The NTB allows a local processor to independently configure and control the local subsystem, provides isolation of the local host memory domain from the remote host memory domain while enabling status and data exchange between the two domains.

With a normal PCI-E transparent bridge, all devices on both sides are discovered and configured by the local host. Discovery stops at the NTB. Below are two independent UP systems connected through the NTB port. The NTB port can also be connected to a Root (non-NTB) port.

[Diagram: two independent UP systems connected through the NTB port]
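
To make the point that discovery stops at the NTB more concrete: the NTB presents itself to each host as an ordinary endpoint (type 0 header) rather than as a bridge (type 1 header), so a standard depth-first bus scan never recurses into the other host's domain. The sketch below is a toy enumeration loop, not real PCI code.

/* Simplified, hypothetical PCI enumeration sketch illustrating why discovery
 * stops at a non-transparent bridge: the NTB presents a type 0 (endpoint)
 * header, so the scan does not recurse into the remote host's domain. */
#include <stdio.h>

enum hdr_type { HDR_ENDPOINT = 0, HDR_BRIDGE = 1 };

struct pci_dev {
    const char *name;
    enum hdr_type hdr;
    int secondary_bus;      /* valid only for bridges */
};

/* Toy topology: bus 0 has a transparent bridge (to bus 1) and an NTB. */
static struct pci_dev bus1[] = {
    { "8Gb FC HBA", HDR_ENDPOINT, -1 },
};
static struct pci_dev bus0[] = {
    { "transparent PCI-E bridge", HDR_BRIDGE,   1 },
    { "NTB (presents as endpoint)", HDR_ENDPOINT, -1 },
};

static void scan_bus(int bus, struct pci_dev *devs, int n)
{
    for (int i = 0; i < n; i++) {
        printf("bus %d: found %s\n", bus, devs[i].name);
        if (devs[i].hdr == HDR_BRIDGE)
            scan_bus(devs[i].secondary_bus, bus1, 1);  /* recurse behind bridges only
                                                          (toy: bus 1 is the only downstream bus) */
        /* an endpoint, including the NTB, terminates the scan on this branch */
    }
}

int main(void)
{
    scan_bus(0, bus0, 2);
    return 0;
}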

I am assuming that this capability was put in for SAN systems. The EMC CLARiiON CX3 and CX4 systems were already employing a component with similar capability for the CMI link between service processors. The Intel web site says HP is using it in their next-generation storage systems, but does not say in which line. Hopefully this will replace their EVA lineup. I am expecting EMC and other SAN vendors to adopt the C5500 as well.

The Intel Core 2 architecture processors in the current CLARiiON CX4 line have adequate core performance (if EMC did not use specially crippled versions) but lack IO bandwidth for today's server systems, and have no scaling beyond two service processors.

Of course, as always, getting new functionality to work is not a simple matter, and it probably takes time to get right. The Intel C5500 data sheet and the IDF 2009 session EMBS001 "Take the Lead with Jasper Forest, the Future Intel Xeon Processor for Embedded and Storage", presented by Lerie Kane and Hang Nguyen, describe these features in more detail, so I will only summarize.

NTB Redundant System Diagram

[Diagram: NTB redundant system]

NTB enables failover memory mirroring between systems. Recall that SAN systems are really two or more computer systems with redundant or mirrored memory capability. Ideally the storage system is always available, but above all, the contents of the write cache must be protected. Before NTB, and whatever EMC was using in the CLARiiON, this traffic would have had to be sent over FC or InfiniBand.
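
A minimal sketch of the mirroring idea, assuming a memory-mapped NTB window into the partner's DRAM (my illustration, not EMC's or Intel's code): the local service processor copies each dirty write-cache block through the window and only then acknowledges the host write.

/* Hypothetical write-cache mirroring sketch over an NTB window.
 * peer_window stands in for the memory-mapped NTB aperture that the
 * hardware translates into the partner controller's DRAM. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CACHE_BLOCK 4096

/* In a real controller this would be a mapped NTB BAR; here it is
 * just local memory so the sketch can run. */
static uint8_t peer_window[CACHE_BLOCK * 8];

/* Mirror one dirty cache block to the partner before acknowledging the write. */
static int mirror_and_ack(const uint8_t *block, size_t slot)
{
    memcpy(peer_window + slot * CACHE_BLOCK, block, CACHE_BLOCK);
    /* A real implementation would confirm delivery here (e.g. a doorbell or
     * sequence number from the peer) before returning. */
    return 0;  /* 0 = safe to acknowledge the host write */
}

int main(void)
{
    uint8_t dirty_block[CACHE_BLOCK];
    memset(dirty_block, 0xAB, sizeof dirty_block);

    if (mirror_and_ack(dirty_block, 0) == 0)
        printf("write acknowledged after mirror to peer SP\n");
    return 0;
}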

The NTB feature also allows systems to be chained, as shown in the diagram below. A detailed explanation was not given.

[Diagram: NTB daisy-chained systems]

There is a Direct Address Translation protocol to allow hosts on each side to access the address space (memory) of the host on the other side.

[Diagram: NTB direct address translation]
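
Below is a minimal sketch of what the translation amounts to (register names and layout are invented for illustration): an access that lands at some offset inside the local NTB aperture is redirected to a base address programmed in a translation register, plus that offset.

/* Hypothetical sketch of NTB direct address translation:
 * local window address -> offset -> remote physical address.
 * Field names and layout are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

struct ntb_window {
    uint64_t local_base;    /* where the window appears in local address space */
    uint64_t size;
    uint64_t xlat_base;     /* translation base programmed for the remote side */
};

/* Translate a local address inside the window to the remote physical address. */
static int ntb_translate(const struct ntb_window *w, uint64_t local, uint64_t *remote)
{
    if (local < w->local_base || local >= w->local_base + w->size)
        return -1;                       /* not inside the NTB aperture */
    *remote = w->xlat_base + (local - w->local_base);
    return 0;
}

int main(void)
{
    struct ntb_window w = { 0xC0000000ULL, 0x100000ULL, 0x7F0000000ULL };
    uint64_t remote;

    if (ntb_translate(&w, 0xC0001234ULL, &remote) == 0)
        printf("local 0xC0001234 -> remote 0x%llx\n", (unsigned long long)remote);
    return 0;
}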

NTB functionality is accessed through a software stack with NTB drivers.

[Diagram: NTB software stack and drivers]

Enhanced DMA and RAID Assist

Two other features are the Enhanced Crystal Beach 3 (CB3) DMA engine and hardware RAID 5/6 assist. The DMA function can calculate CRC outside of the processor core. It can also do block fills and zero or fill pages, again outside of the processor core. RAID 5 and 6 calculations are also performed here.
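
To give a sense of the work being offloaded (the generic textbook calculation, not Intel's implementation): RAID 5 parity is a byte-wise XOR across the data strips, which an engine like CB3 can compute without consuming processor-core cycles. RAID 6 adds a second, Galois-field-weighted syndrome, which is omitted here.

/* Generic RAID 5 parity calculation (byte-wise XOR across data strips),
 * the kind of work the CB3 engine offloads from the cores. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define STRIP 8            /* tiny strip size for the example */
#define DATA_DISKS 3

static void raid5_parity(uint8_t strips[DATA_DISKS][STRIP], uint8_t parity[STRIP])
{
    memset(parity, 0, STRIP);
    for (int d = 0; d < DATA_DISKS; d++)
        for (int i = 0; i < STRIP; i++)
            parity[i] ^= strips[d][i];
}

int main(void)
{
    uint8_t strips[DATA_DISKS][STRIP] = {
        { 1, 2, 3, 4, 5, 6, 7, 8 },
        { 8, 7, 6, 5, 4, 3, 2, 1 },
        { 0xFF, 0, 0xFF, 0, 0xFF, 0, 0xFF, 0 },
    };
    uint8_t parity[STRIP];

    raid5_parity(strips, parity);

    printf("parity:");
    for (int i = 0; i < STRIP; i++)
        printf(" %02x", parity[i]);
    printf("\n");
    return 0;
}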

Asynchronous DRAM Refresh

There is a pin on the processor that should be triggered on detection of a power failure or system lockup. Once triggered, DRAM is put into self-refresh, with battery backup, preserving the contents of memory.