Light Path Diagnostics 

Light path diagnostics enables a technician to quickly identify and locate a failed or failing 
system component, such as a specific blower module or memory DIMM. This enables quick 
replacement of the component, which helps increase server uptime and lower servicing costs.  

The front of each blade server—and the chassis itself—has an LED indicator light to show 
possible component failures. This lets the servicer identify the failing component without the 
need to remove the blade server from the chassis. The light path diagnostics panel tells the 
servicer which component of the affected server requires attention.  

In addition, many components have their own identifying LEDs. For example, each of the memory modules has an LED next to the socket, as do both processors. This allows the servicer to easily identify exactly which component needs servicing. By following the “light path,” the component can be replaced quickly and without guesswork. (Note: In the event of a failed DIMM, the system will restart and mark the DIMM as bad while offline, thus allowing the system to continue running, with reduced memory capacity, until serviced.) 
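
If physical access is delayed, the same failure information can typically also be read remotely from the blade's event log over IPMI. The sketch below is a minimal, generic illustration and not an IBM procedure from this guide; it assumes the open-source ipmitool utility is installed, and the management address and credentials shown are placeholders.

    # Minimal sketch: read the blade's event log remotely to confirm which component
    # (for example, a DIMM) has been flagged. Assumes the open-source ipmitool
    # utility is installed; the host and credentials below are placeholders.
    import subprocess

    IMM_HOST = "192.0.2.10"   # placeholder management address
    IMM_USER = "admin"        # placeholder user name
    IMM_PASS = "password"     # placeholder password

    def read_event_log() -> str:
        # "sel elist" prints the System Event Log; memory events identify the DIMM slot.
        cmd = ["ipmitool", "-I", "lanplus", "-H", IMM_HOST,
               "-U", IMM_USER, "-P", IMM_PASS, "sel", "elist"]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(read_event_log())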

 

Gigabit Ethernet Controller 

The HX5 includes a dual-port integrated Broadcom BCM5709S Gigabit Ethernet controller for up to 10X higher maximum throughput than a 10/100 Ethernet controller. The controller offers TOE (TCP Offload Engine) support, as well as failover and load balancing for better throughput and system availability. It also supports highly secure remote power management using IPMI 2.0, plus Wake on LAN® and PXE (Preboot Execution Environment) Flash interface. 
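
To illustrate the Wake on LAN support mentioned above, the following minimal, generic sketch (not IBM-supplied code) sends a standard WoL magic packet, which is six 0xFF bytes followed by the target MAC address repeated 16 times; the MAC address used is a placeholder.

    # Minimal Wake on LAN sketch: broadcast a "magic packet" to a NIC with WoL enabled.
    # The MAC address below is a placeholder, not a real HX5 address.
    import socket

    def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16   # magic packet format
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    wake_on_lan("00:1a:64:00:00:01")   # placeholder MAC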

If 2 ports aren’t enough, optional 2-port or 4-port Ethernet expansion cards can be used for additional ports. For example, the CIOv slot can hold a 2-port card, the CFFh slot supports a 4-port card, and a bridge module can add another 2 ports, for a total of 10 Gigabit Ethernet ports per HX5 blade. 

   

High-Performance Adapter Slots 

The HX5 blade server includes two PCIe adapter slots. They support CFF (compact form factor) cards: one standard-speed CIOv and one high-speed CFFh card. Note: The SSD expansion card is installed in the upper I/O slot. When SSDs are installed, both I/O slots are still available 
for use. 

Adding a second HX5 blade for a 2-wide 4-socket server doubles the number of PCIe card slots.  

The BladeCenter PCI Express I/O Expansion Unit 3 (BPE3) adds 2 standard full-height/full-length x16 physical/x8 electrical (4GBps) PCIe Gen 1 expansion card slots, supporting adapters of up to 25W apiece, to an HS22. One BPE3 can be connected per HS22 blade, for a total of 3 available slots (1 in the blade and 2 in the expansion unit). Note: The BPE3 reserves the high-speed CFFh expansion connector in the HS22, leaving only the CIOv slot available. 

 

Similarly, the optional BladeCenter PCI Express Gen 2 Expansion Blade (BPE4) adds 1 standard full-height/full-length and 1 standard full-height/half-length x16 physical/x8 electrical (8GBps) PCIe Gen 2 expansion card slot per HS22 blade. These slots support two industry-standard PCIe adapters, up to 75W per adapter. The BladeCenter PCI Express Gen 2 Expansion Blade offers a unique stacking feature that allows clients to stack up to 4 expansion blades per HS22 blade, offering up to an additional 8 PCIe slots per HS22 blade. Note: Unlike the BPE3, the BPE4 does not reserve the high-speed CFFh expansion connector in the HS22, leaving both slots available in the server. 
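
As a rough cross-check of the quoted slot bandwidths, the short calculation below reproduces the 4GBps and 8GBps figures under the assumption that they describe aggregate (both directions combined) throughput of an x8 link after 8b/10b encoding overhead; it is an illustrative sketch, not text from this guide.

    # PCIe lane rates: Gen 1 is 2.5 GT/s per lane, Gen 2 is 5.0 GT/s per lane,
    # both with 8b/10b encoding (8 data bits per 10 bits transferred).
    def x8_aggregate_gbytes_per_sec(transfer_rate_gts: float) -> float:
        per_lane = transfer_rate_gts * (8 / 10) / 8   # GB/s per lane, one direction
        return per_lane * 8 * 2                       # 8 lanes, both directions combined

    print(x8_aggregate_gbytes_per_sec(2.5))   # 4.0 -> the "4GBps" Gen 1 figure (BPE3)
    print(x8_aggregate_gbytes_per_sec(5.0))   # 8.0 -> the "8GBps" Gen 2 figure (BPE4)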

If I/O slots are a greater need than processors or memory, attaching multiple I/O expansion units 
to one blade server is much more cost-effective than installing multiple blade servers for the 
same number of adapter slots.

 

Adapters can be used to add fabrics that connect through BladeCenter switch modules, including 10Gb Ethernet, additional Gigabit Ethernet controllers, CNA (Converged Network Adapter), Fibre Channel, InfiniBand, SAS, and others. 

 

  

 

 

BladeCenter Chassis 

IBM’s blade architecture offers five choices of compatible and interoperable chassis in which to use various blade servers. Each chassis serves different customer needs. The BladeCenter S is a small, entry-level chassis designed for office environments. The original BladeCenter E chassis (refreshed with the latest Advanced Management Modules and power supply modules) offers maximum density, great flexibility and a wide variety of expansion options at an entry-level price. The next-generation BladeCenter H chassis offers all of BladeCenter’s capabilities and adds high-performance features, including 10Gb fabric support. If you need a ruggedized chassis (for example, for government/military or telecom environments), BladeCenter T offers special features optimized for those environments. The next-generation BladeCenter HT is a high-performance ruggedized telecommunications platform, also supporting 10Gb fabrics. 

HX5 is supported in the BladeCenter H, HT and S chassis. 
