All measurements were completed on a POWER5 570+ 4-Way (2.2 GHz). Each system is
configured as an LPAR, and each virtual SCSI test was performed between two partitions on the
same system with one CPU for each partition. IBM i operating system 5.4 was used on the
virtual SCSI server and AIX 5.3 was used on the client partitions.
The primitive disk workload used to evaluate the performance of virtual SCSI is an in-house,
multi-process application that performs all types of synchronous or asynchronous I/O
(read/write/sequential/random) to a target device. The program runs on an AIX or Linux client,
reporting CPU consumption and gathering disk statistics. Remote statistics are gathered
via a socket-based application that collects CPU, hosted disk, and physical disk statistics
from the IBM i operating system server.
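The workload tool itself is not published, so as an illustration only, a minimal Python sketch of the sequential-read side of such a benchmark might look like the following. The block sizes and the small stand-in file are hypothetical; the actual tests operated on much larger files on a target device.

```python
import os
import tempfile
import time

def sequential_read_bandwidth(path, block_size):
    """Read the file front to back in block_size chunks; return MB/s."""
    total = 0
    start = time.perf_counter()
    # buffering=0 asks for unbuffered binary I/O so each read maps to
    # an actual I/O request rather than being served from a cache layer.
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Small stand-in file; the real workload targeted a dedicated device.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))  # 4 MB of data
    path = tmp.name

results = {bs: sequential_read_bandwidth(path, bs)
           for bs in (4 * 1024, 256 * 1024)}
for bs, mbps in results.items():
    print(f"{bs // 1024:>4} KB blocks: {mbps:.1f} MB/s")
os.remove(path)
```

A real measurement would also pin the file on the device under test and bypass the client's file cache, which this sketch does not attempt.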
The purpose of this document is to help virtual SCSI users better understand the performance
of their virtual SCSI system. A customer should be able to estimate the expected performance of
their application from this document.
Note:
You will see different terms in this publication that refer to the various components involved with virtual
SCSI. Depending on the context, these terms may vary. With SCSI, usually the terms server and client are used, so
you may see terms such as virtual SCSI client and virtual SCSI server. On the Hardware Management Console, the
terms virtual SCSI server adapter and virtual SCSI client adapter are used. They refer to the same thing. When
describing the client/server relationship between the partitions involved in virtual SCSI, the terms hosting partition
(meaning the IBM i operating system Server) and hosted partition (meaning the client partition) are used.
14.6.2 Virtual SCSI Performance Examples
The following sections compare virtual to native I/O performance on bandwidth tests. In these
tests, a single thread operates sequentially on a fixed 6 GB file, with a dedicated
IBM i operating system Server partition. More I/O operations are issued when reading or writing
to the file using a small block size than with a larger block size. Because of the larger number of
operations and the fact that each operation has a fixed amount of overhead regardless of transfer
length, the bandwidth measured with small block sizes is much lower than with large block
sizes.
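The effect of fixed per-operation overhead can be shown with a simple model. The overhead and transfer-rate figures below are hypothetical, not measured values from these tests; they only illustrate why small block sizes yield lower bandwidth.

```python
# Hypothetical illustration: a fixed cost per I/O operation plus a
# streaming transfer rate once data is actually moving.
PER_OP_OVERHEAD_S = 100e-6   # 100 microseconds per operation (assumed)
TRANSFER_RATE_BPS = 200e6    # 200 MB/s raw transfer rate (assumed)
FILE_SIZE = 6 * 1024**3      # 6 GB, as in the tests above

def modeled_bandwidth(block_size):
    """Model bandwidth as file size over total time for all operations."""
    ops = FILE_SIZE / block_size
    seconds = ops * (PER_OP_OVERHEAD_S + block_size / TRANSFER_RATE_BPS)
    return FILE_SIZE / seconds / 1e6  # MB/s

for bs in (4 * 1024, 32 * 1024, 256 * 1024):
    print(f"{bs // 1024:>4} KB blocks: {modeled_bandwidth(bs):6.1f} MB/s")
```

With 4 KB blocks the fixed overhead dominates each operation, while with 256 KB blocks it is amortized over far more data per operation, so the modeled bandwidth rises steadily with block size.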
For tests with multiple Network Storage Spaces (NWSS), one thread per network storage
space operates sequentially on a fixed 6 GB file, again with a dedicated IBM i
operating system Server partition. The following sections compare native vs. virtual, multiple
network storage spaces, multiple network storage descriptions, and disk scaling.
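The one-thread-per-storage-space pattern can be sketched as follows. The temporary files here are hypothetical stand-ins for the backing files of the network storage spaces, and the sizes are scaled down from the 6 GB used in the tests.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_sequentially(path, block_size=256 * 1024):
    """One worker per storage space: read its backing file front to back."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    return total

# Hypothetical stand-ins for the backing files of three storage spaces.
paths = []
for _ in range(3):
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(1024 * 1024))  # 1 MB each (tests used 6 GB)
        paths.append(tmp.name)

# One thread per storage space, all reading sequentially in parallel.
with ThreadPoolExecutor(max_workers=len(paths)) as pool:
    totals = list(pool.map(read_sequentially, paths))

print(totals)
for p in paths:
    os.remove(p)
```

Running the streams concurrently is what exposes how the hosting partition scales as more storage spaces are driven at once.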
IBM i 6.1 Performance Capabilities Reference - January/April/October 2008
© Copyright IBM Corp. 2008
Chapter 14 DASD Performance