EMC® VMAX3 Family

Product Guide

VMAX 100K, VMAX 200K, VMAX 400K
with HYPERMAX OS

REVISION 6.5

Contents

Page 1: ...EMC VMAX3 Family Product Guide, VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS, REVISION 6.5...

Page 2: ...PUBLICATION AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA. EMC C...

Страница 3: ... Disaster Restart GDDR 51 SMI S Provider 51 VASA Provider 51 eNAS management interface 52 ViPR suite 52 ViPR Controller 52 ViPR Storage Resource Management 52 vStorage APIs for Array Integration 53 SRDF Adapter for VMware vCenter Site Recovery Manager 54 SRDF Cluster Enabler 54 EMC Product Suite for z TPF 54 SRDF TimeFinder Manager for IBM i 55 AppSync 55 Open systems features 57 HYPERMAX OS suppo...

Страница 4: ...s 88 FAST SRDF coordination 89 FAST TimeFinder management 90 External provisioning with FAST X 90 Native local replication with TimeFinder 91 About TimeFinder 92 Interoperability with legacy TimeFinder products 92 Targetless snapshots 96 Secure snaps 96 Provision multiple environments from a linked target 96 Cascading snapshots 97 Accessing point in time copies 98 Mainframe SnapVX and zDP 98 Remot...

Страница 5: ...1 SRDF AR 3 site solutions 161 TimeFinder and SRDF A 162 TimeFinder and SRDF S 163 SRDF and EMC FAST coordination 163 Data Migration 165 Overview 166 Data migration solutions for open systems environments 166 Non Disruptive Migration overview 166 About Open Replicator 170 PowerPath Migration Enabler 172 Data migration using SRDF Data Mobility 172 Data migration solutions for mainframe environments...

Страница 6: ...Individual licenses 196 Ecosystem licenses 197 Mainframe licenses 198 License packs 198 Individual license 199 CONTENTS 6 Product Guide VMAX 100K VMAX 200K VMAX 400K with HYPERMAX OS ...

Страница 7: ...cle mode 129 SRDF A SSC cycle switching legacy mode 130 SRDF A MSC cycle switching multi cycle mode 131 Write commands to R21 devices 132 Planned failover before personality swap 137 Planned failover after personality swap 138 Failover to Site B Site A and production host unavailable 138 Migrating data and removing the original secondary array R2 142 Migrating data and replacing the original prima...

Страница 8: ...rt error message format Disk Adapter failure 184 z OS IEA480E service alert error message format SRDF Group lost SIM presented against unrelated resource 184 z OS IEA480E service alert error message format mirror 2 resynchronization 185 z OS IEA480E service alert error message format mirror 1 resynchronization 185 eLicensing process 188 54 55 56 57 58 59 60 61 62 FIGURES 8 Product Guide VMAX 100K ...

Страница 9: ... requirements 36 eNAS configurations by array 38 Unisphere tasks 48 ProtectPoint connections 61 VVol architecture component management capability 69 VVol specific scalability 70 Logical control unit maximum values 75 Maximum LPARs per port 76 RAID options 85 Service Level compliance legend 88 Service Levels 89 SRDF 2 site solutions 103 SRDF multi site solutions 105 SRDF features by hardware platfo...

Страница 10: ...License suites for mainframe environment 198 52 TABLES 10 Product Guide VMAX 100K VMAX 200K VMAX 400K with HYPERMAX OS ...

Страница 11: ... representatives Related documentation The following documentation portfolios contain documents related to the hardware platform and manuals needed to manage your software and storage system configuration Also listed are documents for external components which interact with your VMAX3 Family array EMC VMAX3 Family Site Planning Guide for VMAX 100K VMAX 200K VMAX 400K with HYPERMAX OS Provides plan...

Страница 12: ...base Storage Analyzer concepts and functions EMC Unisphere 360 for VMAX Release Notes Describes new features and any known limitations for Unisphere 360 for VMAX EMC Unisphere 360 for VMAX Installation Guide Provides installation instructions for Unisphere 360 for VMAX EMC Unisphere 360 for VMAX Online Help Describes the Unisphere 360 for VMAX concepts and functions EMC Solutions Enabler VSS Provi...

Страница 13: ...rn codes EMC ProtectPoint Implementation Guide Describes how to implement ProtectPoint EMC ProtectPoint Solutions Guide Provides ProtectPoint information related to various data objects and data handling facilities EMC ProtectPoint File System Agent Command Reference Documents the commands error codes and options EMC ProtectPoint Release Notes Describes new features and any known limitations EMC M...

Страница 14: ...s how to use the TimeFinder Utility to condition volumes and devices EMC GDDR for SRDF S with ConGroup Product Guide Describes how to use Geographically Dispersed Disaster Restart GDDR to automate business recovery following both planned outages and disaster situations EMC GDDR for SRDF S with AutoSwap Product Guide Describes how to use Geographically Dispersed Disaster Restart GDDR to automate bu...

Страница 15: ...vironment EMC SRDF Controls for z TPF Product Guide Describes how to perform remote replication operations in the z TPF operating environment EMC TimeFinder Controls for z TPF Product Guide Describes how to perform local replication operations in the z TPF operating environment EMC z TPF Suite Release Notes Describes new features and any known limitations Special notice conventions used in this do...

Страница 16: ...alic Used for full titles of publications referenced in text Monospace Used for l System code l System output such as an error message or script l Pathnames filenames prompts and syntax l Commands and options Monospace italic Used for variables Monospace bold Used for user input Square brackets enclose optional values Vertical bar indicates alternate selections the bar means or Braces enclose cont...

Page 17: ...Live Chat: EMC Live Chat opens a chat or instant-message session with an EMC Support Engineer. eLicensing support: to activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.EMC.com as directed on your License Authorization Code (LAC) letter emailed to you. For help with missing or incorrect entitlements after activation (that is, expected functionality...

Page 18: ...PERMAX OS support for mainframe on page 74; VMware Virtual Volumes on page 69; Unisphere 360 on page 49. HYPERMAX OS 5977.810.784, revision 5.4: updated the VMAX3 Family power consumption and heat dissipation table (see VMAX3 Family specifications on page 24); for a 200K dual-engine system, max heat dissipation changed from 30,975 to 28,912 Btu/hr; added a note to the Power and heat dissipation topic. HYPERMAX OS 5977.691.68...

Page 19: ...OS 5977.596.583 plus Q2 Service Pack (ePack), revision 3.0: new content: Data at Rest Encryption on page 39; Data erasure on page 43; Cascaded SRDF solutions on page 108; SRDF/Star solutions on page 109. HYPERMAX OS 5977.596.583, revision 2.0: new content: Embedded NAS (eNAS). HYPERMAX OS 5977.497.471, revision 1.0: first release of the VMAX 100K, 200K, and 400K arrays with EMC HYPERMAX OS 5977.250.189. (a) FAST X requir...


Страница 21: ...This chapter summarizes VMAX3 Family specifications and describes the features of HYPERMAX OS Topics include l Introduction to VMAX3 with HYPERMAX OS 22 l VMAX3 Family 100K 200K 400K arrays 23 l HYPERMAX OS 34 VMAX3 with HYPERMAX OS 21 ...

Страница 22: ...dded NAS eNAS eliminating the physical hardware l Data at Rest Encryption for those applications that demand the highest level of security l Service Level SL provisioning with FAST X for external arrays XtremIO Cloud Array and other supported 3rd party storage l FICON iSCSI Fibre Channel and FCoE front end protocols l Simplified management at scale through Service Levels reducing time to provision...

Страница 23: ... VMAX3 array features include l Hybrid mix of traditional regular hard drives and solid state flash drives or all flash configurations l System bay dispersion of up to 82 feet 25 meters from the first system bay l Each system bay can house either one or two engines and up to six high density disk array enclosures DAEs per engine n Single engine configurations up to 720 6 Gb s SAS 2 5 drives 360 3 ...

Страница 24: ...Fabric 56Gbps per port InfiniBand Dual Redundant Fabric 56Gbps per port InfiniBand Dual Redundant Fabric 56Gbps per port Table 4 Cache specifications Feature VMAX 100K VMAX 200K VMAX 400K Cache System Min raw 512GB 512GB 512GB Cache System Max raw 2TBr with 1024GB engine 8TBr with 2048GB engine 16TBr with 2048GB engine Cache per engine options 512GB 1024GB 512GB 1024GB 2048GB 512GB 1024GB 2048GB T...

Страница 25: ...O module required 3 min of 1 Ethernet I O module required eNAS I O modules supported GbE 4 x 1GbE Cu 10GbE 2 x 10GbE Cu 10GbE 2 x 10GbE Opt FC 4 x 8 Gbs NDMP back up max 1 FC NDMP Software Data Mover GbE 4 x 1GbE Cu 10GbE 2 x 10GbE Cu 10GbE 2 x 10GbE Opt FC 4 x 8 Gbs NDMP back up max 1 FC NDMP Software Data Mover GbE 4 x 1GbE Cu 10GbE 2 x 10GbE Cu 10GbE 2 x 10GbE Opt FC 4 x 8 Gbs NDMP back up max ...

Страница 26: ...Ba b Flash Flash SAS 960GBc b 1 92TBc b Flash 960GBc b 1 92TBc b Flash 960GBc b 1 92TBc b Flash BE interface 6Gbps SAS 6Gbps SAS 6Gbps SAS RAID options all drives RAID 1 RAID 5 3 1 RAID 5 7 1 RAID 6 6 2 RAID 6 14 2 RAID 1 RAID 5 3 1 RAID 5 7 1 RAID 6 6 2 RAID 6 14 2 RAID 1 RAID 5 3 1 RAID 5 7 1 RAID 6 6 2 RAID 6 14 2 a Capacity points and drive formats available for upgrades b Mixing of 200GB 400G...

Страница 27: ...re VMAX 100K VMAX 200K VMAX 400K System bay dispersion Up to 82 feet 25m between System Bay 1 and System Bay 2 Up to 82 feet 25m between System Bay 1 and any other System Bay Up to 82 feet 25m between System Bay 1 and any other System Bay Table 15 Preconfiguration Feature VMAX 100K VMAX 200K VMAX 400K 100 Thin Provisioned Yes Yes Yes Preconfigured at the factory Yes Yes Yes Table 16 Host support F...

Страница 28: ...ompression support option SRDF Feature VMAX 100K VMAX 200K VMAX 400K GbE 10 GbE Yes Yes Yes 8Gb s FC Yes Yes Yes 16Gb s FC Yes Yes Yes VMAX3 with HYPERMAX OS 28 Product Guide VMAX 100K VMAX 200K VMAX 400K with HYPERMAX OS ...

Страница 29: ...ay 64 128 256 10 GbE FCoE ports Maximum engine 32 32 32 Maximum array 64 128 256 10 GbE SRDF ports Maximum engine 16 16 16 Maximum array 32 64 128 GbE SRDF ports Maximum engine 32 32 32 Maximum array 64 128 256 Embedded NAS ports GbE Ports Maximum ports Software Data Mover 8 12 12 Maximum ports array 16 48 96 10 GbE Cu or Optical ports Maximum ports Software Data Mover 4 6 6 Maximum ports array 8 ...

Страница 30: ...1574 55 1879 75 287 86 287 86 575 72 1180 91 a Capacity points and drive formats available for upgrades b Capacity points and drive formats available on new systems and upgrades c Mixing of 200GB 400GB 800GB or 1 6TB Flash capacities with 960GB or 1 92TB Flash capacities on the same array is not currently supported Table 20 3 5 disk drives Platform Support VMAX 100K 200K 400K Nominal capacity GB 2...

Страница 31: ...47 30 008 6 9 9 23 529 30 690 a Power values and heat dissipations shown at 35 C reflect the higher power levels associated with both the battery recharge cycle and the initiation of high ambient temperature adaptive cooling algorithms Values at 26 C are reflective of more steady state maximum values during normal operation b Power values for system bay 2 and all subsequent system bays where appli...

Страница 32: ... PDU a L line or phase N neutral G ground Table 25 Input power requirements three phase North American International Australian Specification North American 4 wire connection 3 L 1 G a International 5 wire connection 3 L 1 N 1 G a Input voltageb 200 240 VAC 10 L L nom 220 240 VAC 10 L N nom Frequency 50 60 Hz 50 60 Hz Circuit breakers 50 A 32 A Power zones Two Two Minimum power requirements at cus...

Страница 33: ...0 4 3 In data centers that employ intentional radiators such as cell phone repeaters the maximum ambient RF field strength should not exceed 3 Volts meter Table 26 Minimum distance from RF emitting devices Repeater power levela Recommended minimum distance 1 Watt 9 84 ft 3 m 2 Watt 13 12 ft 4 m 5 Watt 19 69 ft 6 m 7 Watt 22 97 ft 7 m 10 Watt 26 25 ft 8 m 12 Watt 29 53 ft 9 m 15 Watt 32 81 ft 10 m ...

Страница 34: ...e current snapshot technology Secure snaps prevent administrators or other high level users from intentionally or unintentionally deleting snapshot data In addition secure snaps are also immune to automatic failure resulting from running out of Storage Resource Pool SRP or Replication Data Pointer RDP space on the array Secure snaps on page 96 provides more information Data at Rest Encryption Data...

Страница 35: ...protocol Management IM Separates infrastructure tasks and emulations By separating these tasks emulations can focus on I O specific work only while IM manages and executes common infrastructure tasks such as environmental monitoring Field Replacement Unit FRU monitoring and vaulting N A ED Middle layer used to separate front end and back end I O processing It acts as a translation layer between th...

Страница 36: ...ap persist shared Embedded Management The eManagement container application embeds management software Solutions Enabler SMI S Unisphere for VMAX on the storage array enabling you to manage the array without requiring a dedicated management host With eManagement you can manage a single storage array and any SRDF attached arrays To manage multiple storage arrays with a single control pane use the t...

Страница 37: ... storage in one infrastructure l Eliminate the gateway hardware reducing complexity and costs l Simplify management Consolidated block and file storage reduces costs and complexity while increasing business agility Customers can leverage rich data services across block and file storage including FAST service level provisioning dynamic Host I O Limits and Data at Rest Encryption eNAS solutions and ...

Страница 38: ...ay Maximum 256 TB 1 5 PB 3 5 PB a Data movers are added in pairs and must support the same configuration b One I O module per eNAS instance per standard block configuration c Backup to tape is optional and does not count as a possibility for the one I O module requirement Replication using eNAS The following replication methods are available for eNAS file systems l Asynchronous file system level r...

Страница 39: ...d encryption for VMAX arrays by using SAS I O modules that incorporate AES XTS inline data encryption These modules encrypt and decrypt data as it is being written to or read from disk thus protecting your information from unauthorized access even when disk drives are removed from the array D RE supports either an internal embedded key manager or an external enterprise grade key manager accessible...

Страница 40: ...ption key management functions such as secure key generation storage distribution and audit l RSA BSAFE cryptographic libraries Provides security functionality for RSA eDPM Server embedded key management and the EMC KTP client external key management l Common Security Toolkit CST Lockbox Hardware and software specific encrypted repository that securely stores passwords and other sensitive key mana...

Страница 41: ...ment Unencrypted data Management traffic Encrypted data External KMIP Key Manager IP Unique key per physical drive Key management KMIP Client MMCS Key Trust Platform KTP TLS authenticated KMIP traffic External Key Managers D RE s external enterprise grade key management is provided by Gemalto SafeNet KeySecure and IBM Security Key Lifecycle Manager Keys are generated and distributed using the best...

Страница 42: ...e array causes the SSV tests to fail Compromising the entire MMCS only gives an attacker access if they also successfully compromise SSC There are no backdoor keys or passwords to bypass D RE security Key operations D RE provides a separate unique Data Encryption Key DEK for each drive in the array including spare drives The following operations ensure that D RE uses the correct key for a given dr...

Страница 43: ... purposing an array l EMC Data Erasure Single Drives Overwrites data on individual SAS and Flash drives l EMC Disk Retention Enables organizations that must retain all media to retain failed drives l EMC Assessment Service for Storage Security Assesses your information protection policies and suggests a comprehensive security strategy All erasure services are performed on site in the security of t...

Страница 44: ...erts EMC Customer Support to arrange for corrective action if necessary With the deferred service sparing model often times immediate action is not required Physical memory error correction and error verification HYPERMAX OS corrects single bit errors and report an error code once the single bit errors reach a predefined threshold In the unlikely event that physical memory replacement is required ...

Страница 45: ... and restores the system mirrored cache contents from the saved data while checking data integrity The system resumes normal operation when the SPSes are sufficiently recharged to support another vault If any condition is not safe the system does not resume operation and notifies Customer Support for diagnosis and repair This allows Customer Support to communicate with the array and restore normal...


Страница 47: ...abler 49 l Mainframe Enablers 50 l Geographically Dispersed Disaster Restart GDDR 51 l SMI S Provider 51 l VASA Provider 51 l eNAS management interface 52 l ViPR suite 52 l vStorage APIs for Array Integration 53 l SRDF Adapter for VMware vCenter Site Recovery Manager 54 l SRDF Cluster Enabler 54 l EMC Product Suite for z TPF 54 l SRDF TimeFinder Manager for IBM i 55 l AppSync 55 Management Interfa...

Страница 48: ... system preferences user authorizations and link and launch client registrations Storage View and manage storage groups and storage tiers Hosts View and manage initiators masking views initiator groups array host aliases and port groups Data Protection View and manage local replication monitor and manage replication pools create and view device groups and monitor and manage migration sessions Perf...

Страница 49: ...other If the wizard determines that the target array can absorb the added workload it automatically creates all the auto provisioning groups required to duplicate the source workload on the target array Unisphere 360 Unisphere 360 is an on premise management solution that provides a single window across arrays running HYPERMAX OS at a single site It allows you to l Add a Unisphere server to Unisph...

Страница 50: ...eparate sites l EMC Consistency Groups for z OS Ensures the consistency of data remotely copied by SRDF feature in the event of a rolling disaster l AutoSwap for z OS Handles automatic workload swaps between arrays when an unplanned outage or problem is detected l TimeFinder SnapVX With Mainframe Enablers V8 0 and higher SnapVX creates point in time copies directly in the Storage Resource Pool SRP...

Страница 51: ...c event detection and end to end automation of managed technologies GDDR removes human error from the recovery process and allows it to complete in the shortest time possible The GDDR expert system is also invoked to automatically generate planned procedures such as moving compute operations from one data center to another This is the gold standard for high availability compute operations to be ab...

Страница 52: ...ect to the underlying arrays ViPR exposes the APIs so any vendor partner or customer can build new adapters to add new arrays This creates an extensible plug and play storage environment that can automatically connect to discover and map arrays hosts and SAN fabrics ViPR enables the software defined data center by helping users l Automate storage for multi vendor block and file storage environment...

Страница 53: ...dashboard view displays information to support decisions regarding storage capacity The Watch4net dashboard consolidates data from multiple ProSphere instances spread across multiple locations It gives you a quick overview of the overall capacity status in your environment raw capacity usage usable capacity used capacity by purpose usable capacity by pools and service levels The EMC ViPR SRM Produ...

Страница 54: ...er Clusters software The Cluster Enabler plug in architecture consists of a CE base module component and separately available plug in modules which provide your chosen storage replication technology SRDF CE supports l Synchronous mode on page 123 l Asynchronous mode on page 123 l Concurrent SRDF solutions on page 107 l Cascaded SRDF solutions on page 108 EMC Product Suite for z TPF The EMC Product...

Страница 55: ... IASPs let you control SRDF or TimeFinder operations on arrays attached to IBM i hosts including l Display and assign TimeFinder SnapVX devices l Execute SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices l Present one or more target devices containing an IASP image to another host for business continuance BC processes Access to extended features control operations inclu...

Страница 56: ... VMware VMFS and NFS datastores and File systems l Replication Technologies SRDF SnapVX VNX Advanced Snapshots VNXe Unified Snapshot RecoverPoint XtremIO Snapshot and ViPR Snapshot Management Interfaces 56 Product Guide VMAX 100K VMAX 200K VMAX 400K with HYPERMAX OS ...

Страница 57: ...ures This chapter describes open systems specific functionality provided with VMAX3 arrays l HYPERMAX OS support for open systems 58 l Backup and restore to external arrays 59 l VMware Virtual Volumes 69 Open systems features 57 ...

Страница 58: ...ray l Maximum storage groups port groups and masking views is 64 000 array l Maximum devices addressable through each port is 4 000 HYPERMAX OS does not support meta devices thus it is much more difficult to reach this limit For more information on provisioning storage in an open systems environment refer to Open Systems specific provisioning on page 79 For the most recent information consult the ...

Страница 59: ...nt solution uses Data Domain and HYPERMAX OS features to provide protection On the Data Domain system l vdisk services l FastCopy On the storage array l FAST X tiered storage l SnapVX The combination of ProtectPoint and the storage array to Data Domain workflow enables the Application Administrator to l Back up and protect data l Retain and replicate copies l Restore data l Recover applications Da...

Страница 60: ...is ensures that an application consistent snapshot is preserved on the Data Domain system Application administrators can select a specific backup when restoring data and make that backup available on a selected set of primary storage devices Operations to restore the data and make the recovery or restore devices available to the recovery host must be performed manually on the primary storage throu...

Страница 61: ... a typical ProtectPoint solution The following table lists requirements for connecting components in the ProtectPoint solution Table 31 ProtectPoint connections Connected Components Connection Type Primary Application Host to primary VMAX array FC SAN Primary Application Host to primary Data Domain system IP LAN Primary Recovery Host to primary VMAX array FC SAN Primary Recovery Host to primary Da...

Страница 62: ...nder SnapVX operations that copy the production data to the backup devices for transfer to the Data Domain Restore Devices Native VMAX3 devices used for full LUN level copy of a backup to a new set of devices is desired Restore devices are masked to the recovery host Backup Devices Targets of the TimeFinder SnapVX snapshots from the production devices Backup devices are VMAX3 thin devices created ...

Страница 63: ...n Note The Application Administrator must ensure that the application is in an appropriate state before initiating the backup operation This ensures that the copy or backup is application consistent In a typical operation l The Application Administrator uses ProtectPoint to create a snapshot l ProtectPoint moves the data to the Data Domain system l The primary storage array keeps track of the data...

Страница 64: ...k provides storage storage storage vdisk provides storage vdisk provides storage 1 On the Application Host the Application Administrator puts the database in hot backup mode 2 On the primary storage array ProtectPoint creates a snapshot of the storage device The application can be taken out of hot backup mode when this step is complete 3 The primary storage array analyzes the data and uses FAST X ...

Страница 65: ...es which can be made available to the recovery host For either type of restoration the Application Administrator selects the backup image to restore from the Data Domain system Object level restoration For object level restoration the Application Administrator l Selects the backup image on the Data Domain system l Performs a restore of a database image to the recovery devices The Storage Administr...

Страница 66: ...s OS and application specific tools and commands to restore specific objects Full application rollback restoration For a full application rollback restoration after selecting the backup image on the Data Domain system the Storage Administrator performs a restore to the primary storage restore or production devices depending on which devices need a restore of the full database image from the chosen...

Страница 67: ...ides vdisk provides storage storage 1 The Data Domain system writes the backup image to the encapsulated storage device making it available on the primary storage array 2 The Application Administrator creates a SnapVX snapshot of the encapsulated storage device and performs a link copy to the primary storage device overwriting the existing data on the primary storage 3 The restored data is present...

Page 68: [Figure residue: ProtectPoint restore configuration showing the primary storage restore devices, the encapsulated Data Domain vdisk devices (vdisk-dev0 through vdisk-dev3), and the recovery host]

Страница 69: ...used to logically group VVols SCs are based on the grouping of Virtual Machine Disks VMDKs into specific Service Levels SC capacity is limited only by hardware capacity At least one SC per storage system is required but multiple SCs per array are allowed SCs are created and managed on the array by the Storage Administrator Unisphere and Solutions Enabler CLI support management of SCs l Protocol En...

Страница 70: ...and Solutions Enabler refer to their respective installation guides For instructions on installing the VASA Provider refer to the EMC VMAX VASA Provider Release Notes The steps required to create a VVol based virtual machine are broken up by role Procedure 1 The VMAX Storage Administrator uses either Unisphere for VMAX or Solutions Enabler to create and present the storage to the VMware environmen...

Страница 71: ...c Create the VM Storage policies d Create the VM in the VVol datastore selecting one of the VM storage policies Open systems features VVol workflow 71 ...


Страница 73: ...ecific functionality provided with VMAX arrays l HYPERMAX OS support for mainframe 74 l IBM z Systems functionality support 74 l IBM 2107 support 75 l Logical control unit capabilities 75 l Disk drive emulations 76 l Cascading configurations 76 Mainframe Features 73 ...

Страница 74: ...e Manager Data at Rest Encryption on page 39 provides more information IBM z Systems functionality support VMAX arrays support the latest IBM z Systems enhancements ensuring that the VMAX can handle the most demanding mainframe environments VMAX arrays support l zHPF including support for single track multi track List Prefetch bi directional transfers QSAM BSAM access and Format Writes l zHyperWri...

Страница 75: ...m values Table 34 Logical control unit maximum values Capability Maximum value LCUs per director slice or port 255 within the range of 00 to FE LCUs per VMAX splita 255 Splits per VMAX array 16 0 to 15 Devices per VMAX split 65 280 LCUs per VMAX array 512 Devices per LCU 256 Logical paths per port 2 048 Logical paths per LCU per port see Table 35 on page 76 128 VMAX system host address per VMAX ar...

Страница 76: ...r long distances using a small number of high speed lines called interswitch links ISLs A maximum of two switches may be connected together within a path between the CPU and the VMAX array Use of the same switch vendors is required for a cascaded configuration To support cascading each switch vendor requires specific models hardware features software features configuration settings and restriction...

Страница 77: ...CHAPTER 5 Provisioning This chapter provides an overview of storage provisioning Topics include l Thin provisioning 78 Provisioning 77 ...

Page 78: ...ows you to: create host-addressable thin devices (TDEVs) using Unisphere for VMAX or Solutions Enabler; add the TDEVs to a storage group; and run application workloads on the storage groups. When hosts write to TDEVs, the physical storage is automatically allocated from the default Storage Resource Pool. Thin devices (TDEVs): Note: VMAX3 arrays support only thin devices. Thin devices (TDEVs) have no storage al...
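The following Solutions Enabler (SYMCLI) sketch illustrates the provisioning steps listed above. The array ID (001), storage group name, device count, and size are illustrative assumptions rather than values from this guide, and the exact syntax should be verified against the Solutions Enabler documentation for your release.

    # Create a storage group that will hold the new thin devices
    symsg -sid 001 create finance_sg -srp SRP_1 -slo Gold

    # Create four 100 GB thin devices (TDEVs) and place them in the storage group
    symconfigure -sid 001 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV, sg=finance_sg;" commit

    # Confirm the devices and their thin allocations
    symsg -sid 001 show finance_sg

Because the devices are thin, no physical capacity is consumed from the Storage Resource Pool until the host actually writes to them.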

Page 79: ...rage group. All devices in that storage group share that limit. When applications are configured, you can associate the limits with storage groups that contain a list of devices. A single storage group can only be associated with one limit, and a device can only be in one storage group that has limits associated. Up to 4,096 host I/O limits can be defined. Consider the following when using host I/O limits...
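As a rough illustration of how a host I/O limit is associated with a storage group from SYMCLI; the array ID, group name, and limit values are assumptions, and the option names shown (-iops_max, -bw_max) should be confirmed for your Solutions Enabler version.

    # Cap the storage group at 10,000 IOPS and 500 MB/s of front-end bandwidth
    symsg -sid 001 -sg finance_sg set -iops_max 10000 -bw_max 500

    # Review the limit now associated with the group
    symsg -sid 001 show finance_sg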

Page 80: ...group is either cascaded or stand-alone. Cascaded storage group: a parent storage group comprised of multiple storage groups (parent storage group members) that contain child storage groups comprised of devices. By assigning child storage groups to the parent storage group members and applying the masking view to the parent storage group, the masking view inherits all devices in the corresponding child...
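A hedged SYMCLI sketch of building a cascaded storage group and masking it through the parent. All object names and the array ID are hypothetical, and the port group (app_pg) and initiator group (app_ig) are assumed to exist already.

    # Create two child storage groups and an empty parent, then cascade the children under the parent
    symsg -sid 001 create app_data_sg -srp SRP_1 -slo Gold
    symsg -sid 001 create app_logs_sg -srp SRP_1 -slo Diamond
    symsg -sid 001 create app_parent_sg
    symsg -sid 001 -sg app_parent_sg add sg app_data_sg,app_logs_sg

    # Masking the parent group exposes every device in both child groups to the host
    symaccess -sid 001 create view -name app_mv -sg app_parent_sg -pg app_pg -ig app_ig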

Страница 81: ... 002353 Mainframe specific provisioning In Mainframe Enablers the Thin Pool Capacity THN Monitor periodically examines the consumed capacity of data pools It automatically checks user defined space consumption thresholds and triggers an automated response tailored to the site requirements You can specify multiple thresholds of space consumption When the percentage of space consumption reaches the ...


Страница 83: ...ovides an overview of Fully Automated Storage Tiering Topics include l Fully Automated Storage Tiering 84 l Service Levels 88 l FAST SRDF coordination 89 l FAST TimeFinder management 90 l External provisioning with FAST X 90 Storage Tiering 83 ...

Страница 84: ... higher performance using fewer drives l Factors in RAID protections to ensure write heavy workloads go to RAID 1 and read heavy workloads go to RAID 6 l Delivers variable performance levels using Service Levels The Service Level is set on the storage group to configure the performance expectations for the thin devices on the group FAST monitors the storage group s performance relative to the Serv...

Страница 85: ...ce for all mission critical and business critical applications Maintains a duplicate copy of a device on two drives If a drive in the mirrored pair fails the array automatically uses the mirrored partner without interruption of data availability n Withstands failure of a single drive within the mirrored pair n A drive rebuild is a simple copy from the remaining drive to the replaced drive n The nu...

Страница 86: ...he RAID 1 protection level The benefits are equal or superior to those provided by RAID 10 or striped meta volumes l FAST Storage Resource Pools one default FAST Storage Resource Pool is pre configured on the array This process is automatic and requires no setup Depending on the storage environment SRPs can consist of either FBA or CKD storage pools or a mixture of both in mixed environments FAST ...

Страница 87: ...the Storage Resource Pools used for initial allocation If the preferred drive technology is not available allocation reverts to the default behavior and uses any available Storage Resource Pool for allocation FAST enforces SL compliance within the Storage Resource Pool by restricting the available technology allocations For example the Platinum SL cannot have allocations on 7K RPM disks within the...

Страница 88: ...ey can Provisioning storage takes multiple steps and careful calculations to meet the performance requirements for a workload or application Service Levels dramatically simplify this time consuming and inexact process Service Levels are pre configured service definitions applied to VMAX3 storage groups each designed to address the performance needs of a specific workload VMAX3 arrays are delivered...

Page 89: ...Levels. Table 38: Service Levels
  Service Level         Performance type     Use case
  Diamond               Ultra high           HPC, latency sensitive
  Platinum (a)          Very high            Mission critical, high-rate OLTP
  Gold (a)              High                 Very heavy I/O, database logs, data sets
  Silver (a)            Price/Performance    Database data sets, virtual applications
  Bronze                Cost optimized       Backup, archive, file
  Optimized (default)                        No Service Level is defined. The most active data is placed o...
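To show how a Service Level is applied in practice, here is a minimal SYMCLI sketch. The array ID, storage group name, and workload hint are assumptions; confirm the available Service Levels and the exact option names (-slo, -wl) on your system before use.

    # List the Service Levels available on the array
    symcfg -sid 001 list -slo

    # Assign the Gold Service Level, with an OLTP workload hint, to an existing storage group
    symsg -sid 001 -sg finance_sg set -slo Gold -wl OLTP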

Страница 90: ...X OS features such as SRDF and SnapVX Benefits FAST X provides the following benefits l Simplifies management of virtualized multi vendor or EMC storage by allowing features such as replication to be managed solely through the VMAX3 array l Allows data mobility and migration between heterogeneous storage arrays and between heterogenous arrays and VMAX3 l Offers Virtual Provisioning benefits to ext...

Страница 91: ...ER 7 Native local replication with TimeFinder This chapter describes local replication features Topics include l About TimeFinder 92 l Mainframe SnapVX and zDP 98 Native local replication with TimeFinder 91 ...

Page 92: ...an all be to the same snapshot of the source volume, or they can be multiple target volumes linked to multiple snapshots from the same source volume. Note: a target volume may be linked to only one snapshot at a time. Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked targets. There is no limit to the number of levels of cascading, and the cascade can be broke...
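A brief TimeFinder SnapVX sketch of taking a targetless snapshot and then linking it to a target storage group. The array ID, group names, and snapshot name are illustrative assumptions.

    # Take a targetless snapshot of every device in the source storage group
    symsnapvx -sid 001 -sg prod_sg -name daily_snap establish

    # Link the snapshot to a target storage group to present a usable point-in-time copy
    symsnapvx -sid 001 -sg prod_sg -snapshot_name daily_snap -lnsg test_sg link

    # List the snapshots and their link status
    symsnapvx -sid 001 -sg prod_sg list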

Страница 93: ...essions run on the same device l The management software Solutions Enabler Unisphere for VMAX or Mainframe Enablers used to control local replication n Solutions Enabler and Unisphere for VMAX do not support interoperability between SnapVX and other local replication session on FBA or CKD devices Figure 13 on page 94 provides detailed local replication interoperability support for FBA devices by u...

Страница 94: ...Figure 13 Local replication interoperability FBA devices Native local replication with TimeFinder 94 Product Guide VMAX 100K VMAX 200K VMAX 400K with HYPERMAX OS ...

Страница 95: ...Figure 14 Local replication interoperability CKD devices Native local replication with TimeFinder Interoperability with legacy TimeFinder products 95 ...

Страница 96: ...restored state the restore session can be terminated The snapshot data is preserved during the restore process and can be used again should the snapshot data be required for a future restore Secure snaps Introduced with HYPERMAX OS 5977 Q2 2017 SR secure snaps is an enhancement to the current snapshot technology Secure snaps prevent administrators or other high level users from intentionally or un...

Страница 97: ...ata If accessing through VPLEX ensure that you follow the procedure outlined in the technical note EMC VPLEX LEVERAGING ARRAY BASED AND NATIVE COPY TECHNOLOGIES available on support emc com Once the relink is complete volumes can be remounted Snapshot data is unchanged by the linked targets so the snapshots can also be used to restore production data Cascading snapshots Presenting sensitive data t...

Страница 98: ...ed and host access to the target must be derived from the SnapVX metadata A background process eventually defines the tracks and updates the thin device to point directly to the track location in the source device s Storage Resource Pool Mainframe SnapVX and zDP Data Protector for z Systems zDP is a mainframe software solution that is deployed on top of SnapVX on VMAX3 arrays zDP delivers the capa...

Страница 99: ... track image whenever possible while ensuring they each continue to represent a unique point in time image of the source volume Despite the space efficiency achieved through shared allocation to unchanged data additional capacity is required to preserve the pre update images of changed tracks captured by each point in time snapshot zDP implementation is a two stage process the planning phase and t...


Страница 101: ... solutions This chapter describes EMC s remote replication solutions Topics include l Native remote replication with SRDF 102 l SRDF Metro 145 l RecoverPoint 156 l Remote replication using eNAS 156 Remote replication solutions 101 ...

Страница 102: ...o l Restart operations after a disaster with zero data loss and business continuity protection l Restart operations in cluster environments For example Microsoft Cluster Server with Microsoft Failover Clusters l Monitor and automate restart operations on an alternate local or remote server l Automate restart operations in VMware environments SRDF operates in the following modes l Synchronous mode ...

Страница 103: ...rimary site l RPO seconds before the point of failure l Unlimited distance See Write operations in asynchronous mode on page 127 Primary Secondary Unlimited distance Asynchronous R1 R2 SRDF Metro Host or hosts cluster read and write to both R1 and R2 devices Each copy is current and consistent Write conflicts between the paired SRDF devices are managed and resolved Up to 125 miles 200 km between a...

Страница 104: ...ates in 2 site solutions that use SRDF DM in combination with TimeFinder See SRDF AR on page 160 Site A Host Site B R1 R2 TimeFinder TimeFinder SRDF background copy Host SRDF Cluster Enabler CE l Integrates SRDF S or SRDF A with Microsoft Failover Clusters MSCS to automate or semi automate site failover l Complete solution for restarting operations in cluster environments MSCS with Microsoft Failo...

Страница 105: ...ech Book l EMC SRDF Adapter for VMware Site Recovery Manager Release Notes IP Network SAN Fabric SAN Fabric SRDF mirroring SAN Fabric SAN Fabric Site A primary IP Network Site B secondary vCenter and SRM Server Solutions Enabler software Protection side vCenter and SRM Server Solutions Enabler software Recovery side ESX Server Solutions Enabler software configured as a SYMAPI server SRDF multi sit...

Страница 106: ...dary R21 site to a tertiary R2 site l First hop is SRDF S Second hop is SRDF A See Cascaded SRDF solutions on page 108 Site A Site C Site B SRDF S SRDF A R1 R2 R21 SRDF Star 3 site data protection and disaster recovery with zero data loss recovery business continuity protection and disaster restart l Available in 2 configurations n Cascaded SRDF Star n Concurrent SRDF Star l Differential synchroni...

Страница 107: ...tore the R11 device from either of the R2 devices You can restore both the R11 and one R2 device from the second R2 device Use concurrent SRDF to replace an existing R11 or R2 device with a new device To replace an R11 or R2 migrate data from the existing device to a new device using adaptive copy disk mode and then replace the existing device with the newly populated device Concurrent SRDF can be...

Страница 108: ...F configurations data from a primary R1 site is synchronously mirrored to a secondary R21 site and then asynchronously mirrored from the secondary R21 site to a tertiary R2 site Cascaded SRDF provides l Fast recovery times at the tertiary site l Tight integration with TimeFinder product family l Geographically dispersed secondary and tertiary sites If the primary site fails cascaded SRDF can conti...

Страница 109: ... potential recovery sites Differential resynchronization is used between the secondary and the tertiary sites l Cascaded SRDF Star Data is mirrored first from the primary site to a secondary site and then from the secondary to a tertiary site Both the secondary and tertiary sites are potential recovery sites Differential resynchronization is used between the primary and the tertiary site Different...

Страница 110: ... R2 devices in two remote arrays In the following image l Site B is a secondary site using SRDF S links from Site A l Site C is a tertiary site using SRDF A links from Site A l The normally inactive recovery links are SRDF A between Site C and Site B Figure 20 Concurrent SRDF Star R11 R2 R2 SRDF S SRDF A SRDF A recovery links Site B Active Inactive Site A Site C Concurrent SRDF Star with R22 devic...

Страница 111: ...ails the cascaded SRDF Star solution can incrementally establish an SRDF A session between primary site and the asynchronous tertiary site Cascaded SRDF Star can determine when the current active R1 cycle capture contents reach the active R2 cycle apply over the long distance SRDF A links This minimizes the amount of data that must be moved between Site B and Site C to fully synchronize them The f...

Страница 112: ...irs required to incrementally establish an SRDF A session between Site A and Site C in case Site B fails The following image shows cascaded R22 devices in a cascaded SRDF solution Figure 23 R22 devices in cascaded SRDF Star R11 R22 R21 SRDF S SRDF A SRDF A recovery links Site B Active Inactive Site A Site C Remote replication solutions 112 Product Guide VMAX 100K VMAX 200K VMAX 400K with HYPERMAX ...

Страница 113: ...DF device is configured with TimeFinder to provide local replicas at each site SRDF four site solutions for open systems The four site SRDF solution for open systems host environments replicates FBA data by using both concurrent and cascaded SRDF topologies Four site SRDF is a multi region disaster recovery solution with higher availability improved protection and less downtime than concurrent or ...

Страница 114: ...K arrays running Enginuity 5876 with an Enginuity ePack Note When you connect between arrays running different operating environments limitations may apply Information about which SRDF features are supported and applicable limitations for 2 site and 3 site solutions is available in the SRDF Interfamily Connectivity Information This interfamily connectivity allows you to add the latest hardware pla...

Страница 115: ... 2 4 8 Gb s 16 Gb s on 40K 2 4 8 16 Gb s 16 Gb s 16 Gb s GbE port speed 1 10 Gb s 1 10 Gb s 1 10 Gb s 1 10 Gb s Max SRDF ports director 32 8 16e 6 Min SRDF A Cycle Time 1 sec 3 secs with MSC 1 sec 3 secs with MSC 1 sec 3 secs with MSC 1 sec 3 secs with MSC SRDF Delta Set Extension Supported Supported Supported Supported Transmit Idle Enabled Enabled Enabled Enabled Fibre Channel Single Round Trip ...

Страница 116: ...cted to an array running Enginuity 5876 supports a maximum of 64 RDF groups The director on the HYPERMAX OS side associated with that port supports a maximum of 186 250 64 RDF groups e If hardware compression is enabled the maximum number of ports per director is 12 HYPERMAX OS and Enginuity compatibility Arrays running HYPERMAX OS cannot create a device that is exactly the same size as a device w...

Страница 117: ...devices R1 devices are the member of the device pair at the source production site R1 devices are generally Read Write accessible to the host R2 devices are the members of the device pair at the target remote site During normal operations host I O writes to the R1 device are mirrored over the SRDF links to the R2 device In general data on R2 devices is not available to the host while the SRDF rela...

Страница 118: ...in an SRDF Concurrent Star solution Figure 26 R11 device in concurrent SRDF Site A Source Site B Target Site C Target R11 R2 R2 R21 devices R21 devices operate as l R2 devices to hosts connected to array containing the R1 device and l R1 device to hosts connected to the array containing the R2 device R21 devices are typically used in cascaded 3 site solutions where l Data on the R1 site is synchro...

Страница 119: ...Star solutions to decrease the complexity and time required to complete failover and failback operations l Let you recover without removing old SRDF pairs and creating new ones Figure 28 R22 devices in cascaded and concurrent SRDF Star SRDF S Site A SRDF A Site C SRDF S Site B Active links Host Host Site B Site A Site C SRDF A SRDF A SRDF A Cascaded STAR Concurrent STAR Inactive links R22 R2 R11 R...

Страница 120: ...nly Write Disabled The R1 device responds with Write Protected to all write operations to that device l Not Ready The R1 device responds Not Ready to the host for read and write operations to that device R2 device states An R2 device presents one of the following states to the host connected to the secondary array l Read Only Write Disabled The secondary R2 device responds Write Protected to the h...

Страница 121: ...ce receives the updates propagated across the SRDF links and can accept SRDF host based software commands l Not Ready The R2 device cannot accept SRDF host based software commands but can still receive updates propagated from the primary array l Link blocked LnkBlk Applicable only to R2 SRDF mirrors that belong to R22 devices One of the R2 SRDF mirrors cannot receive writes from its associated R1 ...

Страница 122: ...inder or EMC Compatible flash operations SRDF modes of operation SRDF modes of operation address different service level requirements and determine l How R1 devices are remotely mirrored across the SRDF links l How I Os are processed l When the host receives acknowledgment of a write operation relative to when the write is replicated l When writes owed between partner devices are sent across the S...

Страница 123: ... are in the same SRDF A MSC session Cycle switching is controlled by SRDF host software to maintain consistency Refer to SRDF A MSC cycle switching on page 130 for more information Adaptive copy modes Adaptive copy modes l Transfer large amounts of data without impact on the host l Transfer data during data center migrations and consolidations and in data mobility environments l Allow the R1 and R...

Страница 124: ...cannot successfully mirror data to the R2 device the next host write to the R1 device causes the device to become Not Ready to the host connected to the primary array l SRDF group level link domino mode If the last available link in the SRDF group fails the next host write to any R1 device in the SRDF group causes all R1 devices in the SRDF group become Not Ready to their hosts Link domino mode is...

Страница 125: ...dancy and fault tolerance The relationship between the resources on a director CPU cores and ports varies depending on the operating environment HYPERMAX OS On arrays running HYPERMAX OS l The relationship between the SRDF emulation and resources on a director is configurable n One director multiple CPU cores multiple ports n Connectivity ports in the SRDF group is independent of compute power num...

Страница 126: ...onsistency of devices within a group by monitoring data propagation from source devices to their corresponding target devices If consistency is enabled and SRDF detects any write I O to a R1 device that cannot communicate with its R2 device SRDF suspends the remote mirroring for all devices in the consistency group before completing the intercepted I O and returning control to the application In t...

Страница 127: ... operations in asynchronous mode In asynchronous mode SRDF A host write I Os are collected into delta sets on the primary array and transferred in cycles to the secondary array SRDF A sessions behave differently depending on l Whether they are managed individually Single Session Consistency SSC or as a consistency group Multi Session Consistency MSC n In Single Session Consistency SSC mode the SRD...

Страница 128: ...nly 2 cycles on the R2 side n On the R1 side One Capture One or more Transmit n On the R2 side One Receive One Apply Cycle switches are decoupled from committing delta sets to the next cycle When the preset Minimum Cycle Time is reached the R1 data collected during the capture cycle is added to the transmit queue and a new R1 capture cycle is started There is no wait for the commit on the R2 side ...

Страница 129: ...mit cycles on the R1 side multi cycle mode The following image shows multi cycle mode l Multiple cycles one capture cycle and multiple transmit cycles on the R1 side and l Two cycles receive and apply on the R2 side Figure 31 SRDF A SSC cycle switching multi cycle mode Primary Site Secondary Site Capture cycle Apply cycle N M Transmit cycle Receive cycle Capture N Transmit N M R1 R2 Receive N M Tr...

Страница 130: ...t different times Data in the Capture and Transmit cycles may differ between the two SRDF A sessions SRDF A MSC cycle switching SRDF A MSC l Coordinates the cycle switching for all SRDF A sessions in the SRDF A MSC solution l Monitors for any failure to propagate data to the secondary array devices and drops all SRDF A sessions together to maintain dependent write consistency l Performs MSC cleanu...

Страница 131: ...g Enginuity 5773 to 5876 have only two cycles on the R1 side legacy mode In legacy mode the following conditions must be met before an MSC cycle switch can take place l The primary array s transmit delta set must be empty l The secondary array s apply delta set must have completed The N 2 data must be marked write pending for the R2 devices Write operations in cascaded SRDF In cascaded configurati...

Страница 132: ...ximum cache utilization threshold or the system write pending limit is exceeded the array exhausts its cache By default the SRDF A session drops if array cache is exhausted You can keep the SRDF A session running for a user defined period You can assign priorities to sessions keeping SRDF A active for as long as cache resources allow If the condition is not resolved at the expiration of the user d...

Страница 133: ...nginuity 5876 If the array on one side of an SRDF device pair is running HYPERMAX OS and the other side is running a Enginuity 5876 or earlier the SRDF A session runs in Legacy mode l DSE is disabled by default on both arrays l EMC recommends that you enable DSE on both sides Transmit Idle During short term network interruptions the transmit idle state describes that SRDF A is still tracking chang...

Страница 134: ...ng host I O rates with the SRDF link bandwidth and throughput capabilities when l The host I O rate exceeds the SRDF link throughput l Some SRDF links that belong to the SRDF A group are lost l Reduced throughput on the SRDF links l The write pending level on an R2 device in an active SRDF A session reaches the device write pending limit l The apply cycle time on the R2 side is longer than 30 seco...

Страница 135: ...l pacing may not take effect if all SRDF A links are lost Write pacing and Transmit Idle Host writes continue to be paced when l All SRDF links are lost and l Cache conditions require write pacing and l Transmit Idle is in effect Pacing during the outage is the same as the transfer rate prior to the outage SRDF read operations Read operations from the R1 device do not usually involve SRDF emulatio...

Страница 136: ...a image is the most current The array at the R2 side may not yet know that data currently in transmission on the SRDF links has been sent l If the remote host reads data from the R2 device while a write I O is in transmission on the SRDF links the host will not be reading the most current data EMC strongly recommends that you allow the remote host to read data from the R2 devices while in Read Onl...

Page 137: ...image shows a 2-site SRDF configuration before the R1/R2 personality swap. [Figure 35: Planned failover before personality swap] Before the swap: applications on the production host are stopped; SRDF links between Site A and Site B are suspended; and if SRDF/CG is used, consistency is disabled. The following image shows...
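The planned failover sequence described above is typically driven with the symrdf command. This sketch assumes storage-group based control; the group name, array ID, and RDF group number are hypothetical, and real procedures wrap each step in application and consistency checks.

    # Suspend replication once the applications at Site A have been stopped
    symrdf -sid 001 -sg prod_sg -rdfg 10 suspend

    # Swap R1/R2 personalities so that Site B becomes the production (R1) side
    symrdf -sid 001 -sg prod_sg -rdfg 10 swap

    # Re-establish replication in the new direction, then restart the applications at Site B
    symrdf -sid 001 -sg prod_sg -rdfg 10 establish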

Страница 138: ...sume production processing as soon as the applications are restarted on the failover host connected to Site B Unlike the planned failover operation an unplanned failover resumes production at the secondary site but without remote mirroring until Site A becomes operational and ready for a failback operation The following image shows failover to the secondary site after the primary site fails Figure...

Страница 139: ...F link comes back up at which point writes continue Reads are not affected Note Switching to SRDF S mode with the link limbo parameter configured for more than 10 seconds could result in an application database or host failure if SRDF is restarted in synchronous mode Permanent link loss SRDF A If all SRDF links are lost for more than link limbo or Transmit Idle can manage l All of the devices in t...

Страница 140: ...ent image and can be used to restart the applications After the primary array has been repaired you can return production operations to the primary array by following procedures described in SRDF recovery operations on page 137 If the failover to the secondary site is an extended event the SRDF A solution can be reversed by issuing a personality swap SRDF A can continue operations until a planned ...

Страница 141: ...SRDF In concurrent SRDF topologies you can non disruptively migrate data between arrays along one SRDF leg while remote mirroring for protection along the other leg Once the migration process completes the concurrent SRDF topology is removed resulting in a 2 site SRDF topology Replacing R2 devices with new R2 devices You can manually migrate data as shown in the following image including l Initial...

Страница 142: ...cing the original R1 devices with new R1 devices including l Initial 2 site topology l The interim 3 site migration topology l Final 2 site topology After migration the new primary array is mirrored to the original secondary array EMC support personnel are available to assist with the planning and execution of your migration projects Remote replication solutions 142 Product Guide VMAX 100K VMAX 20...

Страница 143: ... R2 devices at the same time Note Before you begin verify that your specific hardware models and Enginuity or HYPERMAX OS versions are supported for migrating data between different platforms The following image shows an example of replacing both R1 and R2 devices with new R1 and R2 devices at the same time including l Initial 2 site topology Remote replication solutions Migration using SRDF Data ...

Страница 144: ...onality including disaster recovery and other advanced SRDF features In cases where full SRDF functionality is not available you can move your data across the SRDF links using migration only SRDF The following table lists SRDF common operations and features and whether they are supported in SRDF groups during SRDF migration only environments Table 44 Limitations of the migration only mode SRDF ope...

Страница 145: ...s do not affect the migration group they are allowed without suspending migration Out of family Non Disruptive Upgrade NDU Not supported SRDF Metro HYPERMAX OS 5977 691 684 introduced SRDF Metro In traditional SRDF R1 devices are Read Write accessible R2 devices are Read Only Write Disabled In SRDF Metro configurations l R2 devices are Read Write accessible to hosts l Hosts can write to both the R...

Страница 146: ... Solutions Enabler 8 1 or higher or Unisphere for VMAX 8 1 or higher SRDF Metro requires a license on both arrays Storage arrays running HYPERMAX OS can simultaneously support SRDF groups configured for SRDF Metro operations and SRDF groups configured for traditional SRDF operations Key differences SRDF Metro l In SRDF Metro configurations n R2 device is Read Write accessible to the host n Host s ...

Page 147: ...s for intelligently choosing on which side to continue operations when the bias-only method may not result in continued host availability to a surviving non-biased array. The Witness option is the default. SRDF/Metro provides two types of Witnesses, Array and Virtual.
Array Witness: HYPERMAX OS or Enginuity on a third array monitors SRDF/Metro, determines the type of failure, and uses the information to...

Page 148: ...an establish or a restore operation.
- Activate SRDF/Metro: Device pairs transition to the ActiveActive pair state when:
  - Device federated personality and other information is copied from the R1 side to the R2 side
  - Using the information copied from the R1 side, the R2 side sets its identity as an SRDF/Metro R2 when queried by host I/O drivers
  - R2 devices become accessible to the host(s)
When all SR...
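To make the activate step concrete, here is an illustrative SYMCLI sketch that creates SRDF/Metro pairs from a device-pair file and establishes them. The -metro option, SID, RDF group number, and file name are assumptions based on typical Solutions Enabler 8.x syntax; confirm the exact flags in the SRDF/Metro documentation for your release.

    # Create the SRDF/Metro pairs listed in metro_pairs.txt and start
    # synchronization; the pairs move toward the ActiveActive state.
    symrdf createpair -sid 0123 -rdfg 10 -f metro_pairs.txt -type R1 -metro -establish

    # Verify the pair state; ActiveActive (or ActiveBias) indicates that
    # both sides are Read/Write accessible to hosts.
    symrdf -sid 0123 -rdfg 10 -f metro_pairs.txt query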

Page 149: ...bias defaults to the R1 device. After device creation, the bias side can be changed from the default R1 to the R2 side.
- The initial bias device is exported as the R1 in all external displays and commands.
- The initial non-bias device is exported as the R2 in all external displays and commands.
- Changing the bias changes the SRDF personalities of the two sides of the SRDF device pair.
The followi...

Page 150: ...decides which side of the Metro group remains accessible to hosts, giving preference to the bias side. The Array Witness method allows for choosing on which side to continue operations when the Device Bias method may not result in continued host availability to a surviving non-biased array. The Array Witness must have SRDF connectivity to both the R1-side array and the R2-side array. SRDF remote adapters (RA)...

Page 151: ...accessible from both the R1 and R2 arrays, HYPERMAX OS sets the R1 side as the bias side, the R2 side as the non-bias side, and the state of the device pairs becomes ActiveBias.
Virtual Witness (vWitness): Virtual Witness (vWitness) is an additional resiliency option introduced in HYPERMAX OS 5977.945.890 and Solutions Enabler or Unisphere for VMAX V8.3. vWitness has similar capabilities to the Array Witness m...

Page 152: ...perform the following:
- Add a new vWitness to the configuration. This does not affect any existing vWitnesses. Once the vWitness is added, it is enabled for participation in the vWitness infrastructure.
- Query the state of a vWitness configuration.
- Suspend a vWitness. If the vWitness is currently servicing an SRDF/Metro session, this operation requires a force flag; this puts the SRDF/Metro session in a...

Page 153: [Flattened SRDF/Metro Witness failure-scenario diagram. Legend: S1 = R1 side of the device pair, S2 = R2 side of the device pair, W = Witness (Array or vWitness), lines = SRDF links, X = failure. Recoverable outcomes include: S1 remains accessible to the host; S1 failed, S2 remains accessible to the host; S1 remains accessible to the host while S2 suspends; S1 and S2 remain accessible to the host, S1 wins future failures, S2 calls home; S1 and S2 remain accessible to the host, S2 wins future failures, S1 calls home.]

Page 154: [Failure-scenario diagram, continued. Recoverable outcomes include: S1 remains accessible to the host while S2 suspends and S2 calls home; S1 suspends after S2 fails and S1 calls home; S1 and S2 both suspend and both call home.]
Deactivate SRDF/Metro: To terminate an SRDF/Metro configuration, simply remove all the device pairs (deletepair) in the SRDF group.
Note: The devices must be in the Suspended state in order to perform the deletepair operation. When all t...
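A minimal sketch of the deactivation sequence described above, assuming the same SRDF group and device file used in the earlier SRDF/Metro example. Exact options (for example, whether a force flag is required) depend on the Solutions Enabler release and the state of the session, so verify them before use.

    # Suspend the SRDF/Metro pairs; deletepair requires the Suspended state.
    symrdf -sid 0123 -rdfg 10 -f metro_pairs.txt suspend

    # Remove the device pairs, which terminates the SRDF/Metro configuration.
    symrdf -sid 0123 -rdfg 10 -f metro_pairs.txt deletepair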

Page 155: ...in SRDF/Metro configurations have federated personalities. When a device is removed from an SRDF/Metro configuration, the device personality can be restored to its original, native personality. The following restrictions apply to restoring the native personality of a device that has a federated personality as a result of participating in an SRDF/Metro configuration:
- Requires HYPERMAX OS 5977.691.684 o...

Page 156: ...use Fibre Channel infrastructure to replicate data asynchronously. The systems provide failover of operations to a secondary site in the event of a disaster at the primary site. Previous implementations of RecoverPoint relied on a splitter to track changes made to protected volumes. This implementation relies on a cluster of RecoverPoint nodes, provisioned with one or more RecoverPoint storage groups...

Page 157: ...File Auto Recovery Manager (FARM). FARM allows you to automatically fail over a selected sync-replicated VDM on a source eNAS system to a destination eNAS system. FARM also allows you to monitor sync-replicated VDMs and to trigger automatic failover based on Data Mover, File System, Control Station, or IP network unavailability that would cause the NAS client to lose access to data.


Page 159: CHAPTER 9 Blended local and remote replication. This chapter describes TimeFinder integration with SRDF:
- SRDF and TimeFinder 160

Page 160: ...remote images of production data across multiple devices and arrays.
Note: The SRDF/A single-session solution guarantees dependent-write consistency across the SRDF links and does not require SRDF/CG. SRDF/A MSC mode requires host software to manage consistency among multiple sessions.
Note: Some TimeFinder operations are not supported on devices protected by SRDF. For more information, refer to the Soluti...

Page 161: ...the host connected to the secondary array at Site B. In the 2-site solution, SRDF operations are independent of production processing on both the primary and secondary arrays. You can utilize resources at the secondary site without interrupting SRDF operations.
Use SRDF/AR 2-site solutions to:
- Reduce required network bandwidth using incremental resynchronization between the SRDF target sites
- Reduce...

Page 162: ...
- Reduce network cost and improve resynchronization time for long-distance SRDF implementations
- Provide disaster recovery testing, point-in-time backups, decision support operations, third-party software testing, and application upgrade testing, or the testing of new applications
Requirements/restrictions: In a 3-site SRDF/AR multi-hop solution, SRDF/S host I/O to Site A is not acknowledged until Site B...

Page 163: ...requires that both arrays run the same operating environment code family, either 5876 or 5977. Reads are not propagated across the SRDF links; thus, without SRDF FAST coordination, for workloads with heavy read operations the R2 side can be substantially less busy than the R1 side. You can enable coordination on both sides of the SRDF links in 2-site and multi-site SRDF topologies. FAST SRDF coordination...

Page 164: ...Enginuity 5876. With Enginuity 5876, you can enable or disable SRDF FAST VP coordination on a storage group (symfast associate command) even when there are no SRDF devices in the storage group.

Page 165: ...Data Migration. This chapter describes data migration solutions. Topics include:
- Overview 166
- Data migration solutions for open systems environments 166
- Data migration solutions for mainframe environments 176

Page 166: ...an array running Enginuity 5876 with the required ePack (source array) and an array running HYPERMAX OS 5977.811.784 or higher (target array). Consult with Dell EMC for the required ePack for source arrays running Enginuity 5876. In addition, refer to the NDM support matrix, available on eLab Navigator, for array operating system version support, host support, and multipathing support for NDM operations. If regulatory o...

Page 167: ...OS
- Requires no additional hardware in the data path
The following graphic shows the connections required between the host (single or cluster) and the source and target arrays, and the SRDF connection between the two arrays. Figure 49 Non-Disruptive Migration zoning. The application host connection to both arrays uses FC, and the SRDF connection between arrays uses FC or GigE. It is recommended that migration contr...

Page 168: ...accessible to the controlling host that runs the migration commands.
- If the application and NDM commands need to run on the same host, several gatekeeper devices must be provided to control the array. In addition, in the daemon_options file, the gatekeeper use (gk_use) option must be set for dedicated use only, as follows:
1. In the /var/symapi/config/daemon_options file, add the line storapid:gk_use = dedicated_o...
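For reference, a sketch of the daemon_options entry from step 1. The source text is truncated at the value, so the spelling dedicated_only is an assumption inferred from the surrounding sentence ("dedicated use only"); confirm the exact keyword in the Solutions Enabler Installation and Configuration Guide.

    # /var/symapi/config/daemon_options (excerpt)
    # Restrict the base daemon to dedicated gatekeepers so NDM control
    # operations do not compete with application I/O on shared devices.
    storapid:gk_use = dedicated_only

Depending on the release, the storapid daemon may need to be restarted or told to reload its options (the stordaemon command manages Solutions Enabler daemons) before the setting takes effect.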

Page 169: ...source array. If the initiators are already in an IG on the target array, the operation is blocked unless the IG on the target array has the same name as on the source array, and the IG has the exact same initiators, child groups, and port flags as on the source array. In addition, the consistent LUN flag setting on the source array IG must also match the IG flag setting on the target array.
  - The name...

Page 170: ...target array, the following rules apply during migration:
- Any source array device that has an odd number of cylinders is migrated to a device on the target array that has Geometry Compatibility Mode (GCM)
- Any source array meta device is migrated to a non-meta device on the target array
About Open Replicator: Open Replicator enables copying data (full or incremental copies) from qualified arrays within a...

Page 171: ...HYPERMAX OS support up to 512 pull sessions. For pull operations, the volume can be in a live state during the copy process. The local hosts and applications can begin to access the data as soon as the session begins, even before the data copy process has completed. These features enable rapid and efficient restoration of remotely vaulted volumes and migration from other storage platforms. Copy on First Acce...
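A minimal, illustrative symrcopy sequence for the kind of live (hot) pull session described above. The device file name is a placeholder (the file pairs each local control device with the remote device it pulls from), and the exact option set should be verified against the Open Replicator documentation for your Solutions Enabler version.

    # Create a hot pull session with a full copy from the remote volumes.
    symrcopy create -copy -pull -hot -name vault_restore -file devpairs.txt

    # Activate the session; hosts can access the local volumes immediately,
    # even before the background copy completes.
    symrcopy activate -file devpairs.txt

    # Monitor progress, then remove the session once the copy is done.
    symrcopy query -file devpairs.txt
    symrcopy terminate -file devpairs.txt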

Page 172: ...
Note: PowerPath Multipathing must be installed on the host machine. The following documentation provides additional information:
- EMC Support Matrix, PowerPath Family Protocol Support
- EMC PowerPath Migration Enabler User Guide
Data migration using SRDF Data Mobility: SRDF Data Mobility (DM) uses SRDF's adaptive copy mode to transfer large amounts of data without impact to the host. SRDF/DM supports d...
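As a rough illustration of the adaptive copy transfer that SRDF/DM relies on, assuming an existing device group named MigDG and adaptive copy disk mode (the group name and mode choice are placeholders; see the SRDF CLI guide for the full set of modes and options):

    # Put the SRDF pairs into adaptive copy disk mode for bulk data movement.
    symrdf -g MigDG set mode acp_disk

    # Start the transfer to the R2 side and watch invalid tracks drain.
    symrdf -g MigDG establish
    symrdf -g MigDG query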

Page 173: ...personnel are available to assist with the planning and execution of your migration projects. Figure 52 Migrating data and removing the original secondary array (R2). [Figure labels: Site A, Site B, Site C; SRDF migration; R1, R11, R2 devices.]
Replacing R1 devices with new R1 devices: The following image shows replacing the original R1 devices with new R1 devices, including:
- Initial 2-site topolog...

Page 174: ...planning and execution of your migration projects. Figure 53 Migrating data and replacing the original primary array (R1).
Replacing R1 and R2 devices with new R1 and R2 devices: You can use the combination of concurrent SRDF and cascaded SRDF to replace both R1 and R2 devices at the same time.

Page 175: ...assist with the planning and execution of your migration projects. Figure 54 Migrating data and replacing the original primary (R1) and secondary (R2) arrays. [Figure labels: Site A, Site B, Site C, Site D; SRDF migration; R1, R11, R2, R21 devices.]
Migration-only SRDF: In some cases, you can migrate your data with full SRDF functionality, including disaster recovery and other advanced S...

Page 176: ...a replication or migration activity from a regular device to a thin device, in which software tools such as Open Replicator and Open Migrator copied all-zero, unused space to a target thin volume. Space reclamation deallocates data chunks that contain all zeros. Space reclamation is most effective for migrations from standard, fully provisioned devices to thin devices. Space reclamation is non-disrupt...

Page 177: ...functions of z/OS Migrator. Figure 55 z/OS volume migration. Volume-level data migration facilities move logical volumes in their entirety. z/OS Migrator volume migration is performed on a track-for-track basis, without regard to the logical contents of the volumes involved. Volume migrations end in a volume swap, which is entirely non-disruptive to any applications using the data on the volumes. Volum...

Page 178: ...migration functions. z/OS Migrator performs dataset migrations with full awareness of the contents of the volume and the metadata in the z/OS system that describe the datasets on the logical volume. Figure 56 z/OS Migrator dataset migration. Thousands of datasets can be selected either individually or by wildcard. z/OS Migrator automatically manages all metadata during the migration process, while applicat...

Page 179: APPENDIX A Mainframe Error Reporting. This appendix describes mainframe environmental errors:
- Error reporting to the mainframe host 180
- SIM severity reporting 180

Page 180: ...that device are not reported until the failure is fixed. If a second failure is detected for a device while there is a pending error-reporting condition in effect, HYPERMAX OS reports the pending error on the next I/O and then the second error. Enginuity reports error conditions to the host and to the EMC Customer Support Center. When reporting to the host, Enginuity presents a unit check status in the...

Page 181: ...REMOTE FAILED: The Service Processor cannot communicate with the EMC Customer Support Center.
Environmental errors: The following table lists the environmental errors in SIM format for HYPERMAX OS 5977 or higher.
Note: All listed severity levels can be modified via SymmWin.
Table 47 Environmental errors reported as SIM messages (columns: Hex code, Severity level, Description, SIM reference code).
- 04DD, MODERATE, MMCS h...

Page 182: (Table 47, continued)
- ...Pool has exceeded its upper threshold value, 2471
- 0473, SERIOUS, A periodic environmental test (env_test9) detected the mirrored device in a Not Ready state, E473
- 0474, SERIOUS, A periodic environmental test (env_test9) detected the mirrored device in a Write Disabled (WD) state, E474
- 0475, SERIOUS, An SRDF R1 remote mirror is in a Not Ready state, E475
- 0476, SERVICE, Service Processor has been reset, 2476
- 0477, ...

Page 183: (Table 47, continued)
- ...Data Pointer Meta Data Usage reached 100%, E489
- 0492, MODERATE, Flash monitor or MMCS drive error, 2492
- 04BE, MODERATE, Meta Data Paging file system mirror not ready, 24BE
- 04CA, MODERATE, An SRDF/A session dropped due to a non-user request. Possible reasons include fatal errors, SRDF link loss, or reaching the maximum SRDF/A host response delay time, E4CA
- 04D1, REMOTE SERVICE, Remote connection established. Remo...

Page 184: ...0 00000014. (The PC failed to call home due to communication problems.)
Figure 58 z/OS IEA480E service alert error message format (Disk Adapter failure):
  IEA480E 1900 SCU SERIOUS ALERT MT 2107 SER 0509 ANTPC 531
  REFCODE 2463 0000 0021 SENSE 00101000 003C8F00 11800000
  (Disk Adapter = Director 21, 0x2C; one of the Disk Adapters failed into the IMPL Monitor state.)
Figure 59 z/OS IEA480E service alert error message format...

Page 185: ...error message format (mirror-2 resynchronization):
  IEA480E 0D03 SCU SERVICE ALERT MT 3990-3 SER REFCODE E461 0000 6200
  (Channel address of the synchronized device; E461 = Mirror-2 volume resynchronized with Mirror-1 volume.)
Figure 61 z/OS IEA480E service alert error message format (mirror-1 resynchronization):
  IEA480E 0D03 SCU SERVICE ALERT MT 3990-3 SER REFCODE E462 0000 6200
  (Channel address of the synchron...)


Page 187: APPENDIX B Licensing. This appendix provides an overview of licensing on arrays running HYPERMAX OS. Topics include:
- eLicensing 188
- Open systems licenses 190
- Mainframe licenses 198

Page 188: ...
3. The entitled user retrieves the LAC letter on the Get and Manage Licenses page on support.emc.com, and then downloads the license file.
4. The entitled user loads the license file to the array and verifies that the licenses were successfully activated.
5. ...
Note: To install array licenses, follow the procedure described in the Solutions Enabler Installation Guide and the Unisphere for VMAX online Help. Each...

Page 189: ...ring this value depends on the license's capacity type, Usable or Registered. Not all product titles are available in all capacity types, as shown below. Table 48 VMAX3 product title capacity types (columns: Usable, Registered, Other). Entries include: HYPERMAX OS, SRDF/Metro, PowerPath, Base Suite, SRDF Replicator (Registered), Events and Retention Suite, Remote Replication Suite, TF SnapSure (Registered), Local Replication Suite, ProtectPoint...

Page 190: ...license is measured as the sum of the capacity of a device if it is a clone source or target, a snap source, or a SAVE device in a snap pool. If a device meets more than one of the previous criteria, it is only counted once.
  - For virtually provisioned devices, the registered capacity is equal to the total space allocated to the thin device. For devices that have compressed allocations, the uncompressed siz...

Page 191: ...as a source volume
- Use an encapsulated volume as a clone source (symconfigure)
- Enable cache partitions for an array
- Create cache partitions
- Set cache partitions to Analyze mode (symqos cp)
- Enable priority of service for an array
- Set host I/O priority
- Set copy QoS priority (symqos pst)
- Enable Optimizer functionality, including:
  - Manual mode
  - Rollback mode
  - Manual Migration mode
- Schedu...

Page 192: ...nce Period
- Validate or create VLUN migrations (symmigrate)
- Create time window (symoptmz, symtw)
- Create cold pull sessions (symrcopy)
Advanced Suite (single- and multi-tier), which includes: HYPERMAX OS, Priority Controls, OR/DM, Unisphere for VMAX, FAST, SL Provisioning, Workload Planner, Database Storage Analyzer, Unisphere for File. Allows you to: Perform tasks available in the Base Suite and Unisphere Suite; create time windo...

Page 193: ...
  - Maximum Devices to Move
  - Maximum Simultaneous Devices
  - Workload Period
  - Minimum Performance Period
- Add data pool tiers to FAST policies
- Set the following FAST VP-specific parameters:
  - Thin Data Move Mode
  - Thin Relocation Rate
  - Pool Reservation Capacity
- Set the following FAST parameters:
  - Workload Period
  - Minimum Performance Period
- Perform SL-based provisioning (symconfigure, symsg, symc...)
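A rough sketch of the SL-based provisioning task named in the last item, assuming a storage group called App_SG, an SRP named SRP_1, and the Diamond service level. All names are placeholders and the option spellings (in particular -slo and -srp on symsg create) are assumptions to be checked against the Solutions Enabler CLI reference for your release.

    # Create four thin (TDEV) devices of 100 GB each.
    symconfigure -sid 0123 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV;" commit

    # Create a storage group associated with a service level and SRP,
    # then add one of the new devices to it.
    symsg -sid 0123 create App_SG -srp SRP_1 -slo Diamond
    symsg -sid 0123 -sg App_SG add dev 001A0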

Page 194: ...
- Create snap pools
- Create SAVE devices (symconfigure)
- Perform SnapVX Establish operations
- Perform SnapVX snapshot Link operations (symsnapvx)
Remote Replication Suite (single and multi-tier), which includes: SRDF, SRDF/Asynchronous, SRDF/Synchronous, SRDF/CE, SRDF/Star, Replication for File, Compatible Peer. Allows you to:
- Create new SRDF groups
- Create dynamic SRDF pairs in Adaptive Copy mode (symrdf)
- Create SRDF d...
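To illustrate the SnapVX Establish and Link operations listed above, a minimal symsnapvx sequence; the storage group and snapshot names are placeholders, and option spelling should be confirmed in the TimeFinder SnapVX documentation for your Solutions Enabler release.

    # Take a targetless point-in-time snapshot of the source storage group.
    symsnapvx -sid 0123 -sg ProdSG establish -name DailySnap

    # Link the snapshot to another storage group to present the data
    # to a test or repurposing host.
    symsnapvx -sid 0123 -sg ProdSG -snapshot_name DailySnap link -lnsg RepurposeSG

    # Verify snapshot and link state.
    symsnapvx -sid 0123 -sg ProdSG list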

Page 195: ...s into Asynchronous mode (symrdf)
- Add SRDF mirrors to devices in Asynchronous mode
- Create RDFA_DSE pools
- Set any of the following SRDF/A attributes on an SRDF group:
  - Minimum Cycle Time
  - Transmit Idle
  - DSE attributes, including associating an RDFA DSE pool with an SRDF group, DSE Threshold, and DSE Autostart
  - Write Pacing attributes, including Write Pacing Threshold and Write Pacing Autostart (symconfigure)...

Page 196: ...OR/DM, Unisphere for VMAX, Unisphere for File. Allows you to: Manage arrays running HYPERMAX OS. With the command: N/A. (a. As part of the Total Productivity Pack.)
Individual licenses: These items are available for arrays running HYPERMAX OS and are not included in any of the license suites. Table 50 Individual licenses for open systems environment (columns: License, Allows you to, With the command). D@RE: Encrypt data and protect it against unautho...

Page 197: ...recover data from a point in time using journaling technology.
SRDF/Metro:
- Place new SRDF device pairs into an SRDF/Metro configuration
- Synchronize device pairs
Unisphere 360: View and monitor all arrays running HYPERMAX OS at a single site. For more information, refer to Unisphere 360 on page 49. The Unisphere 360 license is array-based.
Ecosystem licenses: These licenses do not apply to arrays. Table...

Page 198: ...level to meet compliance requirements
- Integrate with third-party anti-virus checking, quota management, and auditing applications
Mainframe licenses: This section details the licenses available in a mainframe environment.
License packs: The following table lists the license packs available for arrays running HYPERMAX OS in the mainframe environment. Table 52 License suites for mainframe environment. Li...

Page 199: ...mainframe environment (columns: License pack, Entitlements in license file, Included features).
- TimeFinder Clone
Individual license: The following feature has an individual license:
- Data Protector for z Systems

