
Reference guide 


 

• OSDs with separate data and journal devices cannot be hot swapped.

 

• Additional setup and planning are required to make efficient use of SSDs.

 

• Small-object I/O tends to benefit much less than large-object I/O.

Configuration recommendations 

 

• For bandwidth, four spinning disks to one SSD is a recommended performance ratio for block storage. Higher spinning-to-solid-state ratios improve capacity density, but they also increase the number of OSDs affected by a single SSD failure.

 

• SSDs can become a bottleneck at high disk-to-journal ratios; balance the SSD ratio against peak spinning-media performance. Ratios above eight spinning disks per SSD are typically inferior to simply co-locating the journal with the data.

 

• Even where application write performance is not critical, it may make sense to add an SSD journal purely for the rebuild/rebalance bandwidth improvement.

 

• Journals do not require much capacity, but larger SSDs do provide extra wear leveling. The journal space reserved by SUSE Enterprise Storage should cover 10–20 seconds of writes for the OSD the journal is paired with.

 

• A RAID 1 of SSDs is not recommended outside of the monitor nodes. Wear leveling makes it likely that both SSDs in a mirror will wear out at similar times. Doubling the SSDs per node also reduces storage density and increases price per gigabyte. At massive storage scale, it is better to expect drive failures and plan so that a failure is easily recoverable and tolerable.

 

• Erasure coding offers great flexibility in trading storage efficiency against data durability. The sum of your data and coding chunks should typically be less than or equal to the OSD host count, so that no single host failure can cause the loss of multiple chunks.

 

• Keeping cluster nodes single-function makes it simpler to plan CPU and memory requirements for both typical operation and failure handling.

 

• Extra RAM on an OSD host can boost GET performance on smaller I/Os through file system caching.
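Two of the guidelines above reduce to simple arithmetic: journal space as 10–20 seconds of writes, and keeping the sum of erasure-coding chunks within the OSD host count. A minimal sketch follows; the function names and the 150 MiB/s disk figure are illustrative assumptions, not values from this guide.

```python
def journal_size_gib(write_mib_s: float, seconds: float = 20.0) -> float:
    """Journal capacity needed to absorb `seconds` of sustained OSD writes, in GiB."""
    return write_mib_s * seconds / 1024.0

def ec_fits_hosts(k: int, m: int, osd_hosts: int) -> bool:
    """True when data (k) plus coding (m) chunks fit within the host count,
    so a single host failure cannot destroy more than one chunk of an object."""
    return k + m <= osd_hosts

# A hypothetical 150 MiB/s spinning disk needs roughly a 3 GiB journal:
journal_size_gib(150)       # ~2.93 GiB
# A 4+2 erasure-code profile fits a six-host cluster; 8+3 does not fit ten hosts:
ec_fits_hosts(4, 2, 6)      # True
ec_fits_hosts(8, 3, 10)     # False
```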

Choosing hardware 

The SUSE Enterprise Storage Administration and Deployment Guide provides minimum hardware recommendations. This section expands on that information and focuses it on the reference configurations and customer use cases.

Choosing disks 

Choose the number of drives needed to meet performance SLAs. That may simply be the number of drives required to meet capacity targets, but more spindles may be needed for performance or cluster-homogeneity reasons.

Object storage requirements tend to be driven primarily by capacity, so plan how much raw storage will be needed to meet usable-capacity and data-durability targets. Replica count and the data-to-coding-chunk ratio for erasure coding are the biggest factors determining usable storage capacity. There will be additional usable-capacity loss from journals co-located with OSD data, XFS/Btrfs overhead, and logical-volume reserved sectors. A good rule of thumb for three-way replication is a 1:3.2 usable-to-raw storage capacity ratio.
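The 1:3.2 rule of thumb above is just the replica count plus a few percent of overhead. A minimal sketch, where the 6.25% overhead figure is an assumption chosen to reproduce the 1:3.2 ratio rather than a number from this guide:

```python
def raw_needed(usable: float, replicas: int = 3, overhead: float = 0.0625) -> float:
    """Raw capacity required for a usable-capacity target under N-way replication.

    `overhead` models co-located journals, XFS/Btrfs metadata, and reserved
    sectors; 6.25% with three replicas matches the 1:3.2 rule of thumb.
    """
    return usable * replicas / (1.0 - overhead)

# 100 TB of usable capacity under three-way replication:
raw_needed(100)    # 320.0 TB raw, i.e., the 1:3.2 usable-to-raw ratio
```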

Some other things to remember around disk performance: 

 

• Replica count or erasure-coding chunks mean multiple media writes for each object PUT.

 

• Peak write performance of spinning media without separate journals is roughly halved, because journal and data partition writes go to the same device.

 

• With a single 10GbE port, the bandwidth bottleneck on any fully disk-populated HPE Apollo 4510 Gen9 server node is at the port rather than at the controller or drives.

 

• At smaller object sizes, the bottleneck tends to be the object gateway's ops/sec capability before the network or disks. In some cases, the bottleneck can be the client's ability to issue object operations.

 
