The eServer i5 Domino Edition builds on the tradition of the Dedicated Server for Domino (DSD) and the iSeries for Domino offerings, providing excellent price/performance for Lotus software on System i5 and i5/OS. Please visit the following sites for the latest information on Domino Edition solutions:
•  http://www.ibm.com/servers/eserver/iseries/domino/
•  http://www.ibm.com/servers/eserver/iseries/domino/edition.html
11.7 Performance Tips / Techniques
1. Refer to the redbooks listed at the beginning of this chapter, which provide tips and techniques for tuning and analyzing Domino environments on System i servers.
2. Our mail tests show approximately a 10% reduction in CPU utilization when the system value QPRCMLTTSK (processor multitasking) is set to 1 on the pre-POWER4 models. This allows the system to keep two sets of task data ready to run on each physical processor: when one task takes a cache miss, the processor can switch to the second task while the miss for the first task is serviced. With QPRCMLTTSK set to 0, the processor is essentially idle during a cache miss. This parameter does not apply to the POWER4-based i825, i870, and i890 servers.
NOTE: It is recommended to always set QPRCMLTTSK to “1” on the POWER5 models for Domino processing, as the CPU impact there is even greater than the 10% described above.
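The system value above can be displayed and changed from a 5250 command line with standard CL commands; a minimal sketch:

```
DSPSYSVAL SYSVAL(QPRCMLTTSK)              /* Display the current setting   */
CHGSYSVAL SYSVAL(QPRCMLTTSK) VALUE('1')   /* Enable processor multitasking */
```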
3. Customer environments have shown that keeping the machine pool faulting rate below 5 faults per second is optimal for response time performance.
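The machine pool fault rate can be observed with WRKSYSSTS, and the machine pool size can be raised through the QMCHPOOL system value if faulting runs high. A sketch; the size shown is only a placeholder, not a recommendation:

```
WRKSYSSTS                                   /* Pool 1 is the machine pool; watch its fault rate  */
CHGSYSVAL SYSVAL(QMCHPOOL) VALUE(1048576)   /* Machine pool size in KB -- placeholder value only */
```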
4. iSeries notes.ini / server document settings:
•  Mail.box setting
   Setting the number of mail boxes to more than 1 may reduce contention and reduce CPU utilization. Setting this to 2, 3, or 4 should be sufficient for most environments. This is in the Server Configuration document for R5.
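If edited in notes.ini directly rather than through the Server Configuration document, the corresponding line is the standard Notes setting MAIL_NUMBER_OF_BOXES; the value shown is only an example:

```
MAIL_NUMBER_OF_BOXES=2
```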
•  Mail Delivery and Transfer Threads
   You can configure the following in the Server Configuration document:
   •  Maximum delivery threads. These pull mail out of mail.box and place it in the user's mail file. These threads tended to use more resources than the transfer threads, so we needed to configure twice as many of them to keep up.
   •  Maximum transfer threads. These move mail from one server's mail.box to another server's mail.box. In the peer-to-peer topology, at least 3 were needed. In the hub-and-spoke topology, only 1 was needed in each spoke, since mail was transferred to only one location (the hub); twenty-five were configured for the hubs (one for each spoke).
   •  Maximum concurrent transfer threads. This is the number of transfer threads from server ‘A’ to server ‘B’. We set this to 1, which was sufficient in all our testing.
•  NSF_Buffer_Pool_Size_MB
   This controls the size of the memory area used for buffering I/Os to and from disk storage. If it is made too small and more storage is needed, Domino will begin using its own memory management code, which adds unnecessary overhead since OS/400 already manages the virtual storage. If it is made too large, Domino will use the space inefficiently, overrun the main storage pool, and cause high faulting. The general rule of thumb is
IBM i 6.1 Performance Capabilities Reference - January/April/October 2008
© Copyright IBM Corp. 2008
Chapter 11 - Domino
164