First off, if you have not already begun using a compiled configuration, start doing so. This will noticeably reduce the startup time of PMDF processing jobs (and, on OpenVMS, the startup time of PMDF MAIL), as well as the time your users spend waiting for a response the first time they use a PMDF-handled address in their user agent, such as when sending a message from Pine (or, on OpenVMS, sending to an IN% address in VMS MAIL). See Section 8.1 for instructions on how to generate a compiled configuration.
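For instance, on OpenVMS a compiled configuration is typically regenerated and reinstalled with commands along the following lines (Section 8.1 has the authoritative procedure; on UNIX the corresponding utility is pmdf cnbuild):

  $ PMDF CNBUILD                        ! recompile the configuration
  $ INSTALL REPLACE PMDF_CONFIG_DATA    ! reinstall the compiled configuration image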
Consider establishing processing queues (OpenVMS) or Job Controller queues (UNIX and NT) for specific channels that you want to ensure always have processing slots available. For instance, set up a separate queue for your pager channel so that delivery jobs for urgent pages are not held up waiting in the MAIL$BATCH queue (OpenVMS) or DEFAULT queue (UNIX and NT). Then use the queue keyword to direct particular channels to run in particular queues; see Section 2.3.4.18 for more details on the queue keyword, and Section 33.4 for a further discussion of directing channels to run in specific queues.
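As a sketch, on OpenVMS a dedicated batch queue for a pager channel might be created with DCL and then referenced from the channel definition; the queue name and channel block below are purely illustrative, not taken from a real configuration:

  $ INITIALIZE /QUEUE /BATCH /START PMDF_PAGER_BATCH

and then in the pager channel's block in the PMDF configuration file:

  pager_local queue PMDF_PAGER_BATCH
  pager-daemon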
Busy channels (channels which usually have immediate jobs processing in the queues) are likely to achieve greater overall throughput, in exchange for slightly increased latency, through use of the after channel keyword with a (typically small) delta time value. By specifying a delta time value that, while not introducing too much of a delay for new messages, allows PMDF to "collect" multiple messages to be handled by one channel job, the overhead of image activation or of expensive protocol connections can be reduced. For a busy channel, this can lead to a substantial increase in overall throughput. Channels such as multithreaded TCP/IP channels, or, on OpenVMS, the L channel or MR channels, are often candidates. Multithreaded TCP/IP channels, for instance, sort messages to different hosts into different threads; when given multiple messages to deliver to a single host, those messages can then be delivered during a single SMTP connection session. See Section 2.3.4.18 for more details on the after keyword.
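As an illustration, a five minute delta time on an outgoing TCP/IP channel might look as follows; keywords other than after will vary with your existing configuration, and the exact delta time syntax is given with the after keyword in Section 2.3.4.18:

  tcp_local single_sys smtp mx after 00:05:00
  TCP-DAEMON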
If your system has the memory to spare, increasing the size of the messages that processing jobs can buffer internally can reduce the use of temporary buffer files on disk when receiving or processing large messages. See the discussion of the MAX_INTERNAL_BLOCKS PMDF option in Section 7.3.5. On OpenVMS, make sure that the account under which PMDF jobs operate (normally the SYSTEM account) has sufficient memory quotas; the PGFLQUOTA, WSDEF, WSQUO, and WSEXTENT quotas are particularly relevant.
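For reference, MAX_INTERNAL_BLOCKS is set in the PMDF option file (e.g., PMDF_TABLE:option.dat on OpenVMS); the value is a count of PMDF blocks (normally 1024 bytes each), and the figure shown below is purely illustrative:

  ! Buffer messages of up to 200 blocks entirely in memory
  MAX_INTERNAL_BLOCKS=200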
The PMDF Dispatcher controls the creation and use of multithreaded SMTP server processes. If, as is typical, incoming SMTP over TCP/IP messages are a major component of e-mail traffic at your site, monitor how many simultaneous incoming SMTP connections you tend to have, and the pacing at which such connections come in. Tuning of Dispatcher configuration options controlling the number of SMTP server processes, the number of connections each can handle, the threshold at which new server processes are created, etc., can be beneficial if your site's incoming SMTP over TCP/IP traffic is unusually high or low.
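Such tuning is done in the Dispatcher configuration file's SMTP service definition, via options such as MIN_PROCS, MAX_PROCS, MIN_CONNS, and MAX_CONNS; the values below are illustrative only, not recommendations for any particular site:

  [SERVICE=SMTP]
  PORT=25
  IMAGE=PMDF_EXE:tcp_smtp_server.exe
  MIN_PROCS=2
  MAX_PROCS=8
  MIN_CONNS=5
  MAX_CONNS=20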
For typical SMTP over TCP/IP channels, used to send to multiple different remote systems, the PMDF multithreaded TCP/IP channel's default behavior of sorting messages to different destinations into different threads, and then handling all messages to a single host in a single thread, is desirable for performance. However, for a daemon TCP/IP channel, one dedicated to sending to a specific system, if the receiving system supports multiple simultaneous connections it can be preferable to force PMDF to split the outgoing messages into separate threads, by using a combination of the threaddepth keyword, set to some appropriate value, and the MAX_CLIENT_THREADS channel option; see Section 2.3.4.29 and Section 23.1.2.2.
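For instance, a dedicated channel to a gateway system might force a new thread for every ten queued messages; the channel name and host name below are hypothetical, and the precise keyword usage is described in the sections cited above:

  tcp_gateway smtp threaddepth 10 daemon gateway.example.com
  TCP-GATEWAY

with a matching thread limit in that channel's option file (e.g., PMDF_TABLE:tcp_gateway_option. on OpenVMS):

  MAX_CLIENT_THREADS=10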
To make better use of your CPU resources, consider making MAIL$BATCH a generic queue feeding specific queues across your cluster. If you do this, keep in mind that those channels which use software available on only a few systems must do their processing on those systems. For instance, if you only run Jnet on one system, then you must process your bit_ channels on that system. Use the queue channel keyword to designate which queues a channel should use for its processing. By default, channels will use MAIL$BATCH; thus, you need only specify this keyword on those channels which should use a separate queue. See Section 2.3.4.18 for information on the queue keyword.
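A minimal sketch of such a setup, assuming a two node cluster with hypothetical node names NODE1 and NODE2:

  $ INITIALIZE /QUEUE /BATCH /ON=NODE1:: /START NODE1_BATCH
  $ INITIALIZE /QUEUE /BATCH /ON=NODE2:: /START NODE2_BATCH
  $ INITIALIZE /QUEUE /GENERIC=(NODE1_BATCH,NODE2_BATCH) /START MAIL$BATCH

A bit_ channel that must run where Jnet is available would then carry, for example, queue NODE1_BATCH in its channel definition.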
On OpenVMS, if you are not already using the Process Symbiont, then consider using it. By default, MAIL$BATCH is a batch queue. Thus, PMDF's processing jobs are by default batch jobs: each processing job must be created and go through the LOGINOUT procedure. The Process Symbiont reduces this overhead by instead using a pool of detached processes for PMDF processing. When PMDF needs to launch a processing job, one of the idle detached processes is used. This avoids the overhead of creating and logging in a new process for each processing task. Use of Process Symbiont queues also reduces the creation of unnecessary log files. The Process Symbiont is a multithreaded server symbiont. You can control how many detached processes it runs and how long they are allowed to remain idle before being deleted. As with batch jobs, you can create a generic Process Symbiont queue which feeds specific queues spread across your cluster. See Section 9.1 for instructions on how to configure the Process Symbiont.
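A Process Symbiont queue is a server queue rather than a batch queue, so it is initialized with the DCL /PROCESSOR qualifier naming PMDF's symbiont rather than with /BATCH. As a rough sketch only (the symbiont name shown is an assumption; Section 9.1 gives the actual symbiont image name and the full procedure):

  $ INITIALIZE /QUEUE /PROCESSOR=PMDF_PROCESS_SMB /START MAIL$BATCH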
On OpenVMS, installing the channel programs for frequently used channels as known images will reduce processing job overhead. For local delivery, you do not need to do anything: the requisite images, SYS$SYSTEM:mail.exe and PMDF_SHARE_LIBRARY, are already installed. Sites with large volumes of outgoing SMTP messages should consider installing the PMDF_EXE:tcp_smtp_client.exe image (using the DCL INSTALL utility's /OPEN/HEADER/SHARED qualifiers).
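For instance, the image might be installed with a command along these lines (normally also added to a system startup procedure so that the installation persists across reboots):

  $ INSTALL ADD PMDF_EXE:TCP_SMTP_CLIENT.EXE /OPEN /HEADER /SHARED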