
Figure 6: HPE Apollo 4200 System rear view with the “2 small form factor rear drive cage plus 2 PCIe card expander” option

Configuration guidance 

This section covers how to build a SUSE Enterprise Storage cluster that fits your business needs. The basic strategy is this: with a desired
capacity and workload in mind, understand where the performance bottlenecks are for the use case and what failure domains the cluster
configuration introduces. After choosing hardware, the SUSE Enterprise Storage Administration and Deployment Guide is an excellent place
to start for instructions on installing the software.

General configuration recommendations 

 

The slowest performer is the weakest link for performance in a pool. Typically, OSD hosts should be configured with the same quantity, type, 
and configuration of storage. There are reasons to violate this guidance (pools limited to specific drives/hosts, federation being more 
important than performance), but it’s a good design principle. 
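
To see why one slower host matters, note that CRUSH spreads placement groups roughly evenly across the hosts backing a pool, so sustained pool throughput tends to be gated by the slowest host rather than by the sum of all hosts. The sketch below is a minimal illustration of that effect; the host names and throughput figures are hypothetical, not measurements of any particular configuration.

```python
# Rough illustration of the "weakest link" effect in a mixed pool.
# Host names and throughput figures are hypothetical.

host_write_mbps = {
    "osd-host-1": 900,  # uniformly configured nodes
    "osd-host-2": 900,
    "osd-host-3": 900,
    "osd-host-4": 450,  # one slower, differently configured node
}

# CRUSH distributes placement groups roughly evenly, so sustained pool
# throughput tends toward (host count) x (slowest host), not the sum.
weakest_link_estimate = len(host_write_mbps) * min(host_write_mbps.values())
naive_sum = sum(host_write_mbps.values())

print(f"Sum of per-host bandwidth: {naive_sum} MB/s")
print(f"Weakest-link estimate    : {weakest_link_estimate} MB/s")
```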

 

A cluster of the minimum recommended size would have at least six compute nodes. The additional nodes provide more room for unstructured
data to scale, distribute the per-node load for operations, and make each component less of a bottleneck. When considering rebuild scenarios, look at
the capacity of a node in relation to the available bandwidth. Higher-density nodes work better in larger, faster clusters, while less dense nodes
should be used in smaller clusters.
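
A simple way to reason about a rebuild scenario is to compare how much data the failed node held against the bandwidth the surviving nodes can devote to recovery. The sketch below makes that estimate under simplifying assumptions (recovery is network-bound and spreads evenly across the remaining nodes); the capacity and bandwidth figures are hypothetical, and real recovery is also throttled by Ceph recovery settings and competing client I/O.

```python
# Rough estimate of how long re-replicating a failed node's data could take.
# All figures are hypothetical.

node_capacity_tb = 100        # usable data stored on the failed node
surviving_nodes = 5           # nodes left to carry recovery traffic
per_node_recovery_gbps = 10   # network bandwidth each node can devote

# 10 Gb/s is 1.25 GB/s; convert aggregate bandwidth to TB per hour.
aggregate_tb_per_hour = surviving_nodes * per_node_recovery_gbps * 0.125 * 3600 / 1000

hours_to_recover = node_capacity_tb / aggregate_tb_per_hour
print(f"Estimated recovery time: {hours_to_recover:.1f} hours")
```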

 

If the minimum recommended cluster size sounds large, consider whether SUSE Enterprise Storage is the right solution. Smaller amounts of
storage that don’t grow at unstructured-data scales could stay on traditional block and file storage or leverage an object interface on a file-focused
storage target.

 

SUSE Enterprise Storage clusters can scale to hundreds of petabytes, and you can easily add storage as needed. However, failure domain
impacts must be considered as hardware is added. Design with the assumption that elements will fail at scale.

SSD journal usage 

If data requires significant write or PUT bandwidth, consider SSDs for data journaling. 

Advantages 

 

Separation of the highly sequential journal data from object data—which is distributed across the data partition as RADOS objects land in their
placement groups—means significantly less disk seeking. It also means that all of the spinning media’s bandwidth goes to data I/O,
approximately doubling the bandwidth of PUTs/writes.
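
The "approximately doubling" comes from the write path: with the journal colocated on the spinning drive, every client write hits that drive twice (once for the journal, once for the data partition), so clients see at most half of the spindle's bandwidth. The sketch below just works through that arithmetic; the drive bandwidth figure is hypothetical.

```python
# Why moving the journal to an SSD roughly doubles write/PUT bandwidth
# per spindle. The 150 MB/s figure is a hypothetical sequential rate;
# real drives also lose throughput to seeks, which the SSD journal avoids.

spindle_mbps = 150

# Journal colocated on the spindle: each client byte is written twice
# (journal write + data write), so clients see at most half the bandwidth.
colocated_client_mbps = spindle_mbps / 2

# Journal on a separate SSD: the spindle carries only the data write.
ssd_journal_client_mbps = spindle_mbps

print(f"Colocated journal: ~{colocated_client_mbps:.0f} MB/s per OSD")
print(f"SSD journal      : ~{ssd_journal_client_mbps:.0f} MB/s per OSD")
```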

 

Using an SSD device for the journal keeps storage relatively dense because multiple journals can go to the same higher bandwidth device 
while not incurring rotating media seek penalties. 
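
When several journals share one SSD, the SSD's sustained write bandwidth needs to cover the combined write rate of the OSDs behind it, or the journal becomes the new bottleneck. A minimal check of that ratio, with hypothetical bandwidth figures rather than vendor specifications:

```python
# Check that one SSD can front several OSD journals without bottlenecking.
# Figures are hypothetical; use measured sustained-write numbers in practice.

ssd_sustained_write_mbps = 500
hdd_write_mbps = 150
journals_per_ssd = 4

demand_mbps = journals_per_ssd * hdd_write_mbps   # worst-case aggregate writes
headroom_mbps = ssd_sustained_write_mbps - demand_mbps

print(f"Aggregate journal demand: {demand_mbps} MB/s")
print("SSD has headroom" if headroom_mbps >= 0
      else f"SSD is short by {-headroom_mbps} MB/s")
```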

Disadvantages 

 

Each SSD in this configuration is more expensive than a spinning drive that could be put in the slot. Journal SSDs reduce the maximum 
amount of object storage on the node. 
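
The capacity cost is easy to quantify: every bay given to a journal SSD is a bay that no longer holds object storage. A minimal sketch with hypothetical bay counts and drive sizes:

```python
# Raw object capacity given up to journal SSDs on a single node.
# Bay counts and drive sizes are hypothetical.

total_data_bays = 24
journal_ssd_bays = 4
hdd_size_tb = 8

all_hdd_tb = total_data_bays * hdd_size_tb
with_journals_tb = (total_data_bays - journal_ssd_bays) * hdd_size_tb

print(f"All-HDD node      : {all_hdd_tb} TB raw")
print(f"With journal SSDs : {with_journals_tb} TB raw "
      f"({journal_ssd_bays * hdd_size_tb} TB given up)")
```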

 

Tying a separate device to multiple OSDs as a journal and using XFS—the default file system the ceph-deploy tool uses—means that loss of
the journal device is a loss of all dependent OSDs. With a high enough replica and OSD count this isn’t a significant additional risk to data
durability, but it does mean architecting with that expectation in mind.
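
One way to frame that expectation is to treat a journal SSD failure as the simultaneous loss of every OSD journaling to it, and look at what fraction of the cluster that represents. The sketch below does that with hypothetical counts; with enough replicas, the affected placement groups still have copies on other hosts.

```python
# Blast radius of losing one journal SSD: all OSDs journaling to it go down.
# Counts are hypothetical.

osds_per_journal_ssd = 5
osds_per_node = 20
nodes_in_cluster = 6

total_osds = osds_per_node * nodes_in_cluster
fraction_lost = osds_per_journal_ssd / total_osds

print(f"OSDs lost with one journal SSD: {osds_per_journal_ssd} "
      f"({fraction_lost:.1%} of the cluster)")
```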

 
