
IRIX Advanced Site and Server Administration Guide


Appendix A
IRIX Kernel Tunable Parameters

This appendix describes the tunable parameters that define kernel structures. These structures keep track of processes, files, and system activity. Many of the parameter values are specified in the files found in /var/sysgen/mtune.

If the system does not respond favorably to your tuning changes, you may want to return to your original configuration or continue making changes to related parameters. An unfavorable response to your tuning changes can be as minor as the system not gaining the hoped-for speed or capacity, or as major as the system becoming unable to boot or run. This generally occurs when parameters are changed to a great degree. Simply maximizing a particular parameter without regard for related parameters can upset the balance of the system to the point of inoperability. For complete information on the proper procedure for tuning your operating system, read Chapter 5, "Tuning System Performance."


Format of This Appendix

The rest of this appendix describes the more important tunable parameters according to function. Related parameters are grouped into sections. These sections include:

Each section begins with a short description of the activities controlled by the parameters in that section, and each listed parameter has a description that may include:

Note that the tunable parameters are subject to change with each release of the system.


General Tunable Parameters

The following group of tunable parameters specifies the size of various system structures. These are the parameters you will most likely change when you tune a system.

nbuf

Description of nbuf

The nbuf parameter specifies the number of buffer headers in the file system buffer cache. The actual memory associated with each buffer header is dynamically allocated as needed and can be of varying size, currently 1 to 128 blocks (512 bytes to 64 KB).

The system uses the file system buffer cache to optimize file system I/O requests. The buffer memory caches blocks from the disk, and the blocks that are used frequently stay in the cache. This helps avoid excess disk activity.

Buffers are used only as transaction headers. When the input or output operation has finished, the buffer is detached from the memory it mapped and the buffer header becomes available for other uses. Because of this, a small number of buffer headers is sufficient for most systems. If nbuf is set to 0, the system automatically configures nbuf for average systems. There is little overhead in making it larger for non-average systems.

The nbuf parameter is defined in /var/sysgen/mtune.

Value of nbuf

Default:

0 (Automatically configured if set to 0)

Formula:

100 + (total number of pages of memory/40)

Range:

up to 6000

When to Change nbuf

The automatic configuration is adequate for average systems. If you see dropping "cache hit" rates in sar(1M) and osview(1M) output, increase this parameter. Also, if you have directories with a great number of files (over 1000), you may wish to raise this parameter.

callout_himark

Description of callout_himark

The callout_himark parameter specifies the maximum number of callout table entries system-wide. The callout table is used by device drivers to provide a timeout to make sure that the system does not hang when a device does not respond to commands.

This parameter is defined in /var/sysgen/mtune and has the following formula:

nproc + 32

where:

nproc is the maximum number of processes, system-wide.

Value of callout_himark

Default:

0 (Automatically configured if set to 0)

Formula:

nproc + 32

Range:

42 - 1100

When to Change callout_himark

Increase this parameter if you see console error messages indicating that no more callouts are available.

ncallout

Description of ncallout

The ncallout parameter specifies the number of callout table entries at boot time. The system will automatically allocate one new callout entry if it runs out of entries in the callout table. However, the maximum number of entries in the callout table is defined by the callout_himark parameter.

Value of ncallout

Default:

40

Range:

20-1000

When to Change ncallout

Change ncallout if you are running an unusually high number of device drivers on your system. Note that the system automatically allocates additional entries when needed, and releases them when they are no longer needed.

reserve_ncallout

Description of reserve_ncallout

The reserve_ncallout parameter specifies the number of reserved callout table entries. These reserved table entries exist for kernel interrupt routines when the system has run out of the normal callout table entries and cannot allocate additional entries.

Value of reserve_ncallout

Default:

5

Range:

0-30

ncsize

Description of ncsize

This parameter controls the size of the name cache. The name cache is used to allow the system to bypass reading directory names out of the main buffer cache. A name cache entry contains a directory name, a pointer to the directory's in-core inode and version numbers, and a similar pointer to the directory's parent directory in-core inode and version number.

Value of ncsize

Default:

0 (Automatically configured if set to 0)

Range:

268-6200

ndquot

Description of ndquot

The ndquot parameter controls the number of disk quota structures available to the system.

Value of ndquot

Default:

0 (Automatically configured if set to 0)

Range:

268-6200

nhbuf

Description of nhbuf

The nhbuf parameter specifies the number of entries in the buffer hash table. Each entry in the table is the head of a queue of buffer headers. When a block is requested, a hashing algorithm searches the buffer hash table for the requested buffer. A hash table that is too small reduces the algorithm's efficiency; one that is too large wastes space.

nhbuf is defined in /var/sysgen/mtune and has the following formula:

nbuf/16 (with the result rounded down to the nearest power of 2)

Value of nhbuf

Default:

0 (Automatically configured if set to 0)

Formula:

nbuf/4 (result rounded down to the nearest power of 2)

When to Change nhbuf

You should not have to change this parameter.

nproc

Description of nproc

The nproc parameter specifies the number of entries in the system process (proc) table. Each running process requires an in-core proc structure. Thus nproc is the maximum number of processes that can exist in the system at any given time.

The default value of nproc is based on the amount of memory on your system. To find the currently auto-configured value of nproc, use the systune(1M) command.

The nproc parameter is defined in /var/sysgen/mtune.

Value of nproc

Default:

0 (Automatically configured if set to 0)

Range:

30-10000

When to Change nproc

Increase this parameter if you see overflows in the ov column of the proc-sz field of sar -v output, or if you receive the operating system message:

no more processes

This means that the total number of processes in the system has reached the current setting. If processes are prevented from forking (being created), increase this parameter. A related parameter is maxup.

Notes on nproc

If a process can't fork, make sure that this is system-wide, and not just a user ID problem (see the maxup parameter).

If nproc is too small, processes that try to fork receive the operating system error:

EAGAIN: No more processes 

The shell also returns a message:

fork failed: too many processes 

If a system daemon such as sched, vhand, init, or bdflush can't allocate a process table entry, the system halts and displays:

No process slots 

maxpmem

Description of maxpmem

The maxpmem parameter specifies the amount of physical memory (in pages) that the system can recognize. If set to zero (0), the system will use all available pages in memory. A value other than zero defines the physical memory size (in pages) that the system will recognize.

This parameter is defined in /var/sysgen/mtune.

Value of maxpmem

Default:

0 (Automatically configured if set to 0)

Range:

1024 pages - total amount of memory

When to Change maxpmem

You don't need to change this parameter, except when benchmarks require a specific system memory size less than the physical memory size of the system. This is primarily useful to kernel developers. You can also boot the system with the command:

maxpmem = memory_size

added on the boot command line to achieve the same effect.

syssegsz

Description of syssegsz

This is the maximum number of pages of dynamic system memory.

Value of syssegsz

Default:

0 (Autoconfigured if set to 0)

Range:

0x2000 - 0x10000

When to Change syssegsz

Increase this parameter correspondingly when maxdmasz is increased, or when you install a kernel driver that performs a lot of dynamic memory allocation.

maxdmasz

Description of maxdmasz

The maximum DMA transfer expressed in pages of memory. This amount must be less than the value of syssegsz and maxpmem.

Value of maxdmasz

Default:

1024 pages

Range:

1 - syssegsz

When to Change maxdmasz

Change this parameter when you need to be able to use very large read or write system calls, or other system calls that perform large scale DMA. This situation typically arises when using optical scanners, film recorders or some printers.


Spinlocks Tunable Parameters

R3000-based multiprocessor machines have a limited number of MPBUS hardware locks. To reduce the possibility of spinlock depletion, groups of spinlocks use shared pools of locks. The following parameters are included:

sema_pool_size

Description of sema_pool_size

This parameter specifies the number of spinlocks pooled for all semaphores.

Value of sema_pool_size

Default:

8192

Range:

1024-16384

When to Change sema_pool_size

This parameter is used exclusively for R3000-based multiprocessor machines. The only time this parameter might need to be changed is if IRIX panics and delivers the error message:

out of spinlocks

This can generally happen only if a driver requiring a large number of spinlocks is added to the system. User spinlock requirements do not affect this parameter. In the case of such a panic and error message, reduce this parameter to the next smaller power of two. (For example, if the value is 8192, reduce it to 4096.)

vnode_pool_size

Description of vnode_pool_size

This parameter specifies the number of spinlocks pooled for vnodes.

Value of vnode_pool_size

Default:

1024

Range:

512-2048

When to Change vnode_pool_size

This parameter is only used for R3000-based multiprocessor machines. The only time this parameter might need to be changed is if IRIX panics and delivers the error message "out of spinlocks." This can generally only happen if a driver requiring a large number of spinlocks is added to the system. User spinlock requirements do not affect this parameter. In the case of such a panic and error message, reduce this parameter to the next smaller power of two. (For example, if the value is 1024, reduce it to 512.)

file_pool_size

Description of file_pool_size

This parameter specifies the number of spinlocks pooled for file structures.

Value of file_pool_size

Default:

1024

Range:

512-2048

When to Change file_pool_size

This parameter is used exclusively for R3000-based multiprocessor machines. The only time this parameter might need to be changed is if IRIX panics and delivers the error message "out of spinlocks." This can generally happen only if a driver requiring a large number of spinlocks is added to the system. User spinlock requirements do not affect this parameter. In the case of such a panic and error message, reduce this parameter to the next smaller power of two. (For example, if the value is 1024, reduce it to 512.)


System Limits Tunable Parameters

IRIX has configurable parameters for certain system limits. For example, you can set maximum values for each process (its core or file size), the number of groups per user, the number of resident pages, and so forth. These parameters are listed below. All parameters are set and defined in /var/sysgen/mtune.

maxup

Description of maxup

The maxup parameter defines the number of processes allowed per user login. This number should always be at least 5 processes smaller than nproc.

Value of maxup

Default:

150 processes

Range:

15-10000 (but never larger than nproc - 5)

When to Change maxup

Increase this parameter to allow more processes per user. In a heavily loaded time-sharing environment, you may want to decrease the value to reduce the number of processes per user.

ngroups_max

Description of ngroups_max

The ngroups_max parameter specifies the maximum number of multiple groups to which a user may simultaneously belong.

The value of ngroups_max is bounded by the constants NGROUPS_UMIN and NGROUPS_UMAX (NGROUPS_UMIN <= ngroups_max <= NGROUPS_UMAX), both defined in /usr/include/sys/param.h. NGROUPS_UMIN is the minimum number of multiple groups that can be selected at lboot time. NGROUPS_UMAX is the maximum number of multiple groups that can be selected at lboot time, and is the number of group-ID slots for which space is set aside at compile time. NGROUPS, which is present for compatibility with networking code (also defined in /usr/include/sys/param.h), must not be larger than ngroups_max.

Value of ngroups_max

Default:

16

Range:

0-32

When to Change ngroups_max

The default value is adequate for most systems. Increase this parameter if your system has users who need simultaneous access to more than 16 groups.

maxwatchpoints

Description of maxwatchpoints

maxwatchpoints sets the maximum number of watchpoints per process. Watchpoints are set and used via the proc(4) file system. This parameter specifies the maximum number of virtual address segments to be watched in the traced process. This is typically used by debuggers.

Value of maxwatchpoints

Default:

100

Range:

1-1000

When to Change maxwatchpoints

Raise maxwatchpoints if your debugger is running out of watchpoints.

nprofile

Description of nprofile

nprofile is the maximum number of disjoint text spaces that can be profiled using the sprofil(2) system call. This is useful if you need to profile programs using shared libraries or profile an address space using different granularities for different sections of text.

Value of nprofile

Default:

100

Range:

100-200

When to Change nprofile

Change nprofile if you need to profile more text spaces than are currently configured.

maxsymlinks

Description of maxsymlinks

This parameter defines the maximum number of symbolic links that will be followed during filename lookups (for example, during the open(2) or stat(2) system calls) before ceasing the lookup. This limit is required to prevent loops where a chain of symbolic links points back to the original file name.

Value of maxsymlinks

Default:

30

Range:

0-50

When to Change maxsymlinks

Change this parameter if you have pathnames with more than 30 symbolic links.


Resource Limits Tunable Parameters

You can set numerous limits on a per-process basis by using getrlimit(2), setrlimit(2), and the shells' built-in limit commands. These limits are inherited, and the original values are set in /var/sysgen/mtune. These limits differ from the system limits listed above in that they apply only to the current process and any child processes it spawns. From the command line, you can achieve similar effects with the limit built-in command of the C shell (/bin/csh) or the ulimit built-in of the Bourne and Korn shells (/bin/sh and /bin/ksh).

Each limit has a default and a maximum. Only the superuser can change the maximum. Setting a resource to the special value RLIM_INFINITY turns off all checking for that resource. The default values are adequate for most systems.

The following parameters are associated with system resource limits:

ncargs

Description of ncargs

The ncargs parameter specifies the maximum size of arguments in bytes that may be passed during an exec(2) system call.

This parameter is specified in /var/sysgen/mtune.

Value of ncargs

Default:

20480

Range:

5120-262144

When to Change ncargs

The default value is adequate for most systems. Increase this parameter if you get the following message from exec(2), shell(1), or make(1):

E2BIG arg list too long 

Note on ncargs

Setting this parameter too large wastes memory (although this memory is pageable) and may cause some programs to function incorrectly. Also note that some shells may have independent limits smaller than ncargs.

rlimit_core_cur

Description of rlimit_core_cur

The current limit to the size of core image files for the given process.

Value of rlimit_core_cur

Default:

0x7fffffff

Range:

0-0x7fffffff

When to change rlimit_core_cur

Change this parameter when you want to place a cap on core file size.

rlimit_core_max

Description of rlimit_core_max

The maximum limit to the size of core image files.

Value of rlimit_core_max

Default:

0x7fffffff

Range:

0-0x7fffffff

When to change rlimit_core_max

Change this parameter when you want to place a maximum restriction on core file size. rlimit_core_cur cannot be larger than this value.

rlimit_cpu_cur

Description of rlimit_cpu_cur

The current limit to the amount of cpu time, in seconds, that may be used in executing the process.

Value of rlimit_cpu_cur

Default:

0x7fffffff (2147483647 seconds)

Range:

0-0x7fffffff

When to change rlimit_cpu_cur

Change this parameter when you want to place a cap on cpu usage.

rlimit_cpu_max

Description of rlimit_cpu_max

The maximum limit to the amount of cpu time that may be used in executing a process.

Value of rlimit_cpu_max

Default:

0x7fffffff

Range:

0-0x7fffffff

When to change rlimit_cpu_max

Change this parameter when you want to place a maximum restriction on general cpu usage.

rlimit_data_cur

Description of rlimit_data_cur

The current limit to the data size of the process.

Value of rlimit_data_cur

Default:

rlimit_vmem_cur * NBPP (0x20000000)

Range:

0-2 GB (0x7fffffff)

When to change rlimit_data_cur

Change this parameter when you want to place a cap on data segment size.

rlimit_data_max

Description of rlimit_data_max

The maximum limit to the size of data that may be used in executing a process.

Value of rlimit_data_max

Default:

rlimit_vmem_cur * NBPP (0x20000000)

Range:

0-2 GB (0x7fffffff)

When to change rlimit_data_max

Change this parameter when you want to place a maximum restriction on the size of the data segment of any process.

rlimit_fsize_cur

Description of rlimit_fsize_cur

The current limit to file size on the system for the process.

Value of rlimit_fsize_cur

Default:

2 GB (0x7fffffff)

Range:

0-0x7fffffff bytes

When to change rlimit_fsize_cur

Change this parameter when you want to place a limit on file size.

rlimit_fsize_max

Description of rlimit_fsize_max

The maximum limit to file size on the system.

Value of rlimit_fsize_max

Default:

2 GB (0x7fffffff)

Range:

0-0x7fffffff bytes

When to change rlimit_fsize_max

Change this parameter when you want to place a maximum size on all files.

rlimit_nofile_cur

Description of rlimit_nofile_cur

The current limit to the number of file descriptors that may be used in executing the process.

Value of rlimit_nofile_cur

Default:

200

Range:

20-0x7fffffff (2 GB)

When to change rlimit_nofile_cur

Change this parameter when you want to place a cap on the number of file descriptors.

rlimit_nofile_max

Description of rlimit_nofile_max

The maximum limit to the number of file descriptors that may be used in executing a process.

Value of rlimit_nofile_max

Default:

2500

Range:

0-0x7fffffff

When to change rlimit_nofile_max

Change this parameter when you want to place a maximum restriction on the number of file descriptors.

rlimit_rss_cur

Description of rlimit_rss_cur

The current limit to the resident set size (the number of pages of memory in use at any given time) that may be used in executing the process. This limit is the larger of the results of the following two formulae:

physical_memory_size - 4 MB

or

physical_memory_size * 9/10

Value of rlimit_rss_cur

Default:

0 (Automatically configured if set to 0)

Range:

0-(rlimit_vmem_cur * NBPP) (0x7fffffff)

When to change rlimit_rss_cur

Change this parameter when you want to place a cap on the resident set size of a process.

rlimit_rss_max

Description of rlimit_rss_max

The maximum limit to the resident set size that may be used in executing a process.

Value of rlimit_rss_max

Default:

(rlimit_vmem_cur * NBPP) (0x20000000)

Range:

0-(rlimit_vmem_cur * NBPP) (0x7fffffff)

When to change rlimit_rss_max

Change this parameter when you want to place a maximum restriction on resident set size.

rlimit_stack_cur

Description of rlimit_stack_cur

The current limit to the amount of stack space that may be used in executing the process.

Value of rlimit_stack_cur

Default:

64 MB (0x04000000)

Range:

0-2 GB (0x7fffffff)

When to change rlimit_stack_cur

Change this parameter when you want to place a limit on stack space usage.

rlimit_stack_max

Description of rlimit_stack_max

The maximum limit to the amount of stack space that may be used in executing a process.

Value of rlimit_stack_max

Default:

rlimit_vmem_cur * NBPP (0x20000000)

Range:

0-2 GB (0x7fffffff)

When to change rlimit_stack_max

Change this parameter when you want to place a maximum restriction on stack space usage.

rlimit_vmem_cur

Description of rlimit_vmem_cur

The current limit to the amount of virtual memory that may be used in executing the process.

Value of rlimit_vmem_cur

Default:

0x20000000 (512 MB)

Range:

0-2 GB (0x7fffffff)

When to change rlimit_vmem_cur

Change this parameter when you want to place a cap on virtual memory usage.

rlimit_vmem_max

Description of rlimit_vmem_max

The maximum limit to the amount of virtual memory that may be used in executing a process.

Value of rlimit_vmem_max

Default:

rlimit_vmem_cur * NBPP (0x20000000)

Range:

0-2 GB (0x7fffffff)

When to change rlimit_vmem_max

Change this parameter when you want to place a maximum restriction on virtual memory usage.

rsshogfrac

Description of rsshogfrac

The number of physical memory pages occupied by a process at any given time is called its resident set size (RSS). The limit to the RSS of a process is determined by its allowable memory-use resource limit. rsshogfrac is designed to guarantee that even if one or more processes are exceeding their RSS limit, some percentage of memory is always kept free so that good interactive response is maintained.

Processes are permitted to exceed their RSS limit until either:

In either of these cases, the paging daemon runs and trims pages from all RSS processes exceeding the RSS limit.

The rsshogfrac parameter is expressed as a fraction of the total physical memory of the system. The default value is 75 percent.

This parameter is specified in /var/sysgen/mtune. For more information, see the gpgshi, gpgslo, and rsshogslop resource limits.

Value of rsshogfrac

Default:

75% of total memory

Range:

0-100% of total memory

When to Change rsshogfrac

The default value is adequate for most systems.

rsshogslop

Description of rsshogslop

To avoid thrashing (a condition in which the computer devotes 100% of its CPU cycles to swapping and paging), a process can use up to rsshogslop more pages than its resident set maximum (see "Resource Limits Tunable Parameters").

This parameter is specified in /var/sysgen/mtune. For more information, see the rsshogfrac resource limit.

Value of rsshogslop

Default:

20

When to Change rsshogslop

The default value is adequate for most systems.

shlbmax

Description of shlbmax

The shlbmax parameter specifies the maximum number of shared libraries with which a process can link.

This parameter is specified in /var/sysgen/mtune.

Value of shlbmax

Default:

8

Range:

3-32

When to Change shlbmax

The default value is adequate for most systems. Increase this parameter if you see the following message from exec(2):

ELIBMAX cannot link 


Paging Tunable Parameters

The paging daemon, vhand, frees up memory as the need arises. This daemon uses a "least recently used" algorithm to approximate process working sets, and writes out to disk those pages that have not been touched during a specified period of time. The page size is 4 KB. When memory gets exceptionally tight, vhand may swap out entire processes.

vhand reclaims memory by:

The following tunable parameters determine how often vhand runs and under what conditions. Note that the default values should be adequate for most applications.

The following parameters are included:

bdflushr

Description of bdflushr

The bdflushr parameter specifies how often, in seconds, the bdflush daemon is executed; bdflush performs periodic flushes of dirty file system buffers.

This parameter is specified in /var/sysgen/mtune. For more information, see the autoup kernel parameter.

Value of bdflushr

Default:

5

Range:

1-31536000

When to Change bdflushr

This value is adequate for most systems. Increasing this parameter increases the chance that more data could be lost if the system crashes. Decreasing this parameter increases system overhead.

gpgsmsk

Description of gpgsmsk

The gpgsmsk parameter specifies the mask used to determine if a given page may be swapped. Whenever the pager (vhand) is run, it decrements software reference bits for every active page. When a process subsequently references a page, the counter is reset to the limit (NDREF, as defined in /usr/include/sys/immu.h). When the pager is looking for pages to steal back (if memory is in short supply), it takes only pages whose reference counter has fallen to gpgsmsk or below.

This parameter is specified in /var/sysgen/mtune.

Also see /usr/include/sys/immu.h and /usr/include/sys/tuneable.h and the gpgshi and gpgslo kernel parameters for more information.

Value of gpgsmsk

Default:

2

Range:

0-7

When to Change gpgsmsk

This value is adequate for most systems.

Notes on gpgsmsk

If the value is greater than 4, pages are written to the swap area earlier than they would be with the default value of gpgsmsk. Thus swapping/paging may occur before it should, unnecessarily using system resources.

gpgshi

Description of gpgshi

When the vhand daemon (page handler) is stealing pages, it stops stealing when the amount of free pages is greater than gpgshi.

In other words, vhand starts stealing pages when there are fewer than gpgslo free pages in the system. Once vhand starts stealing pages, it continues until there are gpgshi pages.

If, at boot time, gpgslo and gpgshi are 0, the system sets gpgshi to 8% of the number of pages of memory in the system, and sets gpgslo to one half of gpgshi.

This parameter is specified in /var/sysgen/mtune. For more information, see the kernel parameters gpgsmsk and gpgslo.

Value of gpgshi

Default:

0 (automatically configured to 8% of memory if set to 0)

Range:

30 pages - 1/2 of memory

When to Change gpgshi

This value is adequate for most systems.

Notes on gpgshi

If this parameter is too small, vhand cannot supply enough free memory for system-wide demand.

gpgslo

Description of gpgslo

When the vhand daemon (page handler) executes, it won't start stealing back pages unless there are fewer than gpgslo free pages in the system. Once vhand starts stealing pages, it continues until there are gpgshi pages.

This parameter is specified in /var/sysgen/mtune. For more information, see the gpgshi and gpgsmsk kernel parameters.

Value of gpgslo

Default:

0 (automatically configured to half of gpgshi if set to 0)

Range:

10 pages - 1/2 of memory

When to Change gpgslo

This value is adequate for most systems.

Notes on gpgslo

If this parameter is too small, vhand does not start swapping pages; thus entire processes must be swapped. If this parameter is too large, vhand swaps pages unnecessarily.

maxlkmem

Description of maxlkmem

The maxlkmem parameter specifies the maximum number of physical pages that can be locked in memory (by mpin(2) or plock(2)) per non-superuser process.

This parameter is specified in /var/sysgen/mtune.

Value of maxlkmem

Default:

2000

Range:

0 pages - 3/4 of physical memory

When to Change maxlkmem

Increase this parameter only if a particular application has a real need to lock more pages in memory.

On multi-user servers, you may want to decrease this parameter and also decrease rlimit_vmem_cur.

Notes on maxlkmem

When pages are locked in memory, the system can't reclaim those pages, and therefore can't maintain the most efficient paging.

maxfc

Description of maxfc

The maxfc parameter specifies the maximum number of pages that may be freed by the vhand daemon in a single operation. When the paging daemon (vhand) starts stealing pages, it collects pages that can be freed to the general page pool. It collects, at most, maxfc pages at a time before freeing them. Do not confuse this parameter with gpgshi, which sets the total number of pages that must be free before vhand stops stealing pages.

This parameter is specified in /var/sysgen/mtune.

Value of maxfc

Default:

100

Range:

50­100

When to Change maxfc

This value is adequate for most systems.

maxsc

Description of maxsc

The maxsc parameter specifies the maximum number of pages that may be swapped by the vhand daemon in a single operation. When the paging daemon starts tossing pages, it collects pages that must be written out to swap space before they are actually swapped and then freed into the general page pool. It collects at most maxsc pages at a time before swapping them out.

This parameter is specified in /var/sysgen/mtune.

Value of maxsc

Default:

100

Range:

8­100

When to Change maxsc

You may want to decrease this parameter to improve performance on systems that swap over NFS (Network File System), which is always the case for diskless systems.

maxdc

Description of maxdc

The maxdc parameter specifies the maximum number of pages that can be saved up and written to the disk at one time.

Value of maxdc

Default:

100 pages

Range:

1-100

When to Change maxdc

If the system is low on memory and consistently paging out user memory to remote swap space (for example, mounted via NFS), decrease this parameter by not more than 10 pages at a time. However, this parameter's setting does not usually make any measurable difference in system performance.

minarmem

Description of minarmem

This parameter represents the minimum available resident memory that must be maintained in order to avoid deadlock.

Value of minarmem

Default:

0 (Autoconfigured if set to 0)

When to Change minarmem

The automatically configured value of this parameter should always be correct for each system. You should not have to change this parameter.

minasmem

Description of minasmem

This parameter represents the minimum available swappable memory that must be maintained in order to avoid deadlock.

Value of minasmem

Default:

0 (Autoconfigured if set to 0)

When to Change minasmem

The automatically configured value of this parameter should always be correct for each system. You should not have to change this parameter.

tlbdrop

Description of tlbdrop

This parameter specifies the number of clock ticks before a process's wired TLB entries are flushed.

Value of tlbdrop

Default:

100

When to Change tlbdrop

If sar(1) indicates a great deal of translation lookaside buffer (utlbmiss) overhead in a very large application, you may need to increase this parameter. In general, the more the application changes the memory frame of reference in which it is executing, the more likely increasing tlbdrop will help performance. You may have to experiment somewhat to find the optimum value for your specific application.


IPC Tunable Parameters

The IPC tunable parameters set interprocess communication (IPC) structures. These structures include IPC messages, specified in /var/sysgen/mtune/msg; IPC semaphores, specified in /var/sysgen/mtune/sem; and IPC shared memory, specified in /var/sysgen/mtune/shm.

If IPC (interprocess communication) structures are incorrectly set, certain system calls will fail and return errors.

Before increasing the size of an IPC parameter, investigate the problem by using ipcs(1) to see whether the IPC resources are being removed when no longer needed. For example, shmget returns the error ENOSPC if applications create shared memory segments without removing them; semget and msgget fail similarly when unused semaphores or message queues accumulate.

Note that IPC objects differ from most IRIX objects in that they are not automatically freed when all active references to them are gone. In particular, they are not deallocated when the program that created them exits.

Table A-1 lists error messages, system calls that cause the error, and parameters to adjust. Subsequent paragraphs explain the details you need to know before you increase the parameters listed in this table.

Table A-1 : System Call Errors and IPC Parameters to Adjust

Message   System Call   Parameter
EAGAIN    msgsnd()      see below
EINVAL    msgsnd()      msgmax
EINVAL    shmget()      shmmax
EMFILE    shmat()       sshmseg
ENOSPC    msgget()      msgmni
ENOSPC    semget()      semmni, semmns
ENOSPC    shmget()      shmmni



EAGAIN

If IPC_NOWAIT is set, msgsnd can return EAGAIN for a number of reasons: the queue already holds msgmnb bytes, all msgseg message segments are in use, or msgtql message headers are already outstanding.

EINVAL

shmget (which gets a new shared memory segment identifier) will fail with EINVAL if the given size is not within shmmin and shmmax. Since shmmin is set to the lowest possible value (1), and shmmax is very large, it should not be necessary to change these values.

EMFILE

shmat will return EMFILE if a process attempts to attach more than sshmseg shared memory segments. sshmseg is the maximum number of attached shared memory segments per process.

ENOSPC

shmget will return ENOSPC if the system-wide limit on shared memory identifiers (shmmni) has been reached.

semget will return ENOSPC if the limit on semaphore identifiers (semmni) or on total semaphores (semmns) has been reached.

msgget will return ENOSPC if the system-wide limit on message queues (msgmni) has been reached.


IPC Messages Tunable Parameters

If no one on the system uses or plans to use IPC messages, you may want to consider excluding this module. The following tunable parameters are associated with interprocess communication messages (see the msgctl(2) reference page):

msgmax

Description of msgmax

The msgmax parameter specifies the maximum size of a message.

This parameter is specified in /var/sysgen/mtune/msg.

Value of msgmax

Default:

16 * 1024 (0x4000)

Range:

512-0x8000

When to Change msgmax

Increase this parameter if the maximum size of a message needs to be larger. Decrease the value to limit the size of messages.

msgmnb

Description of msgmnb

The msgmnb parameter specifies the maximum length of a message queue.

This parameter is specified in /var/sysgen/mtune/msg.

Value of msgmnb

Default:

32 * 1024 (0x8000)

Range:

msgmax to 1/2 of physical memory

When to Change msgmnb

Increase this parameter if the maximum number of bytes on a message queue needs to be larger. Decrease the value to limit the number of bytes per message queue.

msgmni

Description of msgmni

The msgmni parameter specifies the maximum number of message queues system-wide.

This parameter is specified in /var/sysgen/mtune/msg.

Value of msgmni

Default:

50

Range:

10-1000

When to Change msgmni

Increase this parameter if you want more message queues on the system. Decrease the value to limit the message queues.

Notes on msgmni

If there are not enough message queues, a msgget(2) system call that attempts to create a new message queue returns the error:

ENOSPC: No space left on device

msgseg

Description of msgseg

The msgseg parameter specifies the maximum number of message segments system-wide. A message on a message queue consists of one or more of these segments. The size of each segment is set by the msgssz parameter.

This parameter is specified in /var/sysgen/mtune/msg.

Value of msgseg

Default:

1536

When to Change msgseg

Modify this parameter to reserve the appropriate amount of memory for messages. Increase this parameter if you need more memory for message segments on the system. Decrease the value to limit the amount of memory used for message segments.

Notes on msgseg

If this parameter is too large, memory may be wasted (saved for messages but never used). If this parameter is too small, some messages that are sent will not fit into the reserved message buffer space. In this case, a msgsnd(2) system call waits until space becomes available.

msgssz

Description of msgssz

The msgssz parameter specifies the size, in bytes, of a message segment. Messages consist of a contiguous set of message segments large enough to accommodate the text of the message. Using segments helps to eliminate fragmentation and speed the allocation of the message buffers.

This parameter is specified in /var/sysgen/mtune/msg.

Value of msgssz

Default:

8

When to Change msgssz

This parameter is set to minimize wasted message buffer space. Change this parameter only if most messages do not fit into one segment with a minimum of wasted space.

If you modify this parameter, you may also need to change the msgseg parameter.

Notes on msgssz

If this parameter is too large, message buffer space may be wasted by fragmentation, which in turn may cause processes that are sending messages to sleep while waiting for message buffer space.

msgtql

Description of msgtql

The msgtql parameter specifies the maximum number of message headers system-wide, and thus the number of outstanding (unread) messages. One header is required for each outstanding message.

This parameter is specified in /var/sysgen/mtune/msg.

Value of msgtql

Default:

40

Range:

10-1000

When to Change msgtql

Increase this parameter if you require more outstanding messages. Decrease the value to limit the number of outstanding messages.

Notes on msgtql

If this parameter is too small, a msgsnd(2) system call attempting to send a message that would put msgtql over the limit waits until messages are received (read) from the queues.


IPC Semaphores Tunable Parameters

If no one on the system uses or plans to use IPC semaphores, you may want to consider excluding this module.

The following tunable parameters are associated with interprocess communication semaphores (see the semctl(2) reference page):

semmni

Description of semmni

The semmni parameter specifies the maximum number of semaphore identifiers in the kernel. This is the number of unique semaphore sets that can be active at any given time. Semaphores are created in sets; there may be more than one semaphore per set.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semmni

Default:

10

When to Change semmni

Increase this parameter if processes require more semaphore sets. Increasing this parameter to a large value requires more memory to keep track of semaphore sets. If you modify this parameter, you may need to modify other related parameters.

semmns

Description of semmns

The semmns parameter specifies the number of IPC semaphores system-wide.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semmns

Default:

60

When to Change semmns

Increase this parameter if processes require more than the default number of semaphores.

Notes on semmns

If you set this parameter to a large value, more memory is required to keep track of semaphores.

semmnu

Description of semmnu

The semmnu parameter specifies the number of ''undo'' structures system-wide. An undo structure, which is set up on a per-process basis, keeps track of process operations on semaphores so that the operations may be ''undone'' if the process terminates abnormally. This helps to ensure that an abnormally terminated process does not cause other processes to wait indefinitely for a change to a semaphore.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semmnu

Default:

30

When to Change semmnu

Change this parameter when you want to increase/decrease the number of undo structures permitted on the system. semmnu limits the number of processes that can specify the UNDO flag in the semop(2) system call to undo their operations on termination.

semmsl

Description of semmsl

The semmsl parameter specifies the maximum number of semaphores per semaphore identifier.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semmsl

Default:

25

When to Change semmsl

Increase this parameter if processes require more semaphores per semaphore identifier.

semopm

Description of semopm

The semopm parameter specifies the maximum number of semaphore operations that can be executed per semop() system call. This parameter permits the system to check or modify the value of more than one semaphore in a set with each semop() system call.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semopm

Default:

10

When to Change semopm

Change this parameter to increase/decrease the number of operations permitted per semop() system call. You may need to increase this parameter if you increase semmsl (the maximum number of semaphores per set), so that a process can check/modify all the semaphores in a set with one system call.

semume

Description of semume

The semume parameter specifies the maximum number of ''undo'' entries per undo structure. An undo structure, which is set up on a per-process basis, keeps track of process operations on semaphores so that the operations may be ''undone'' if the process terminates abnormally. Each undo entry represents a semaphore that has been modified with the UNDO flag specified in the semop(2) system call.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semume

Default:

10

When to Change semume

Change this parameter to increase/decrease the number of undo entries per structure. This parameter is related to the semopm parameter (the number of operations per semop(2) system call).

semvmx

Description of semvmx

The semvmx parameter specifies the maximum value that a semaphore can have.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semvmx

Default:

32767 (maximum value)

When to Change semvmx

Decrease this parameter if you want to limit the maximum value for a semaphore.

semaem

Description of semaem

The semaem parameter specifies the maximum adjust-on-exit value, also known as semadj. A semadj value is maintained for each semaphore that a process modifies with the UNDO flag specified in the semop(2) system call, so that the operations can be undone when the process exits; semaem limits the magnitude of that adjustment.

This parameter is specified in /var/sysgen/mtune/sem.

Value of semaem

Default:

16384 (maximum value)

When to Change semaem

Change this parameter to decrease the maximum value for the adjustment on exit value.


IPC Shared Memory Tunable Parameters

The following tunable parameters are associated with interprocess communication shared memory:

shmall

Description of shmall

The shmall parameter specifies the maximum number of pages of shared memory that can be allocated at any given time to all processes on the system combined.

This parameter is specified in /var/sysgen/mtune/shm.

Value of shmall

Default:

512

Range:

256-2048

When to Change shmall

Keep this parameter as small as possible so that the use of shared memory does not cause unnecessary swapping.

Decrease this parameter to limit the amount of memory that can be used for shared memory segments. You may do this if swapping is occurring because large amounts of memory are being used for shared memory segments. But be aware that if an application requires a larger shared memory segment, that application will not run if this limit is set too low.

shmmax

Description of shmmax

The shmmax parameter specifies the maximum size of an individual shared memory segment.

This parameter is specified in /var/sysgen/mtune/shm.

Value of shmmax

Default:

(rlimit_vmem_cur * NBPP) (0x20000000)

When to Change shmmax

Keep this parameter small if it is necessary that a single shared memory segment not use too much memory.

shmmin

Description of shmmin

The shmmin parameter specifies the minimum shared memory segment size.

This parameter is specified in /var/sysgen/mtune/shm.

Value of shmmin

Default:

1 byte

When to Change shmmin

Increase this parameter if you want an error message to be generated when a process requests a shared memory segment that is too small.

shmmni

Description of shmmni

The shmmni parameter specifies the maximum number of shared memory identifiers system-wide.

This parameter is specified in /var/sysgen/mtune/shm.

Value of shmmni

Default:

100

Range:

10-1000

When to Change shmmni

Increase this parameter if additional shared memory segments are required, or if applications that use many shared memory segments reach the shmmni limit.

Decrease this parameter if you need to reduce the maximum number of shared memory segments on the system at any one time. Also decrease it to reduce the amount of kernel space taken for shared memory segments.

sshmseg

Description of sshmseg

The sshmseg parameter specifies the maximum number of attached shared memory segments per process. A process must attach a shared memory segment before the data can be accessed.

This parameter is specified in /var/sysgen/mtune/shm.

Value of sshmseg

Default:

100

Range:

1-SHMMNI

When to Change sshmseg

Keep this parameter as small as possible to limit the amount of memory required to track the attached segments.

Increase this parameter if processes need to attach more than the default number of shared memory segments at one time.


Streams Tunable Parameters

The following parameters are associated with STREAMS processing.

nstrpush

Description of nstrpush

nstrpush defines the maximum number of STREAMS modules that can be pushed on a single stream.

Value of nstrpush

Default:

9

Range:

9-10

When to Change nstrpush

Change nstrpush from 9 to 10 modules when you need an extra module.

strctlsz

Description of strctlsz

strctlsz is the maximum size of the ctl buffer of a STREAMS message. See the getmsg(2) or putmsg(2) reference pages for a discussion of the ctl and data parts of a STREAMS message.

Value of strctlsz

Default:

1024

When to Change strctlsz

Change strctlsz when you need a larger buffer for the ctl portion of a STREAMS message.

strmsgsz

Description of strmsgsz

strmsgsz defines the maximum STREAMS message size. This is the maximum allowable size of the ctl part plus the data part of a message. Use this parameter in conjunction with the strctlsz parameter described above to set the size of the data buffer of a STREAMS message. See the getmsg(2) or putmsg(2) reference pages for a discussion of the ctl and data parts of a STREAMS message.

Value of strmsgsz

Default:

0x8000

When to Change strmsgsz

Change this parameter in conjunction with the strctlsz parameter to adjust the sizes of the STREAMS message as a whole and the data portion of the message.


Signal Parameters

The following signal parameters control the operation of interprocess signals within the kernel:

maxsigq

Description of maxsigq

The maximum number of signals that can be queued. Normally, multiple instances of the same signal result in only one signal being delivered. With the use of the SA_SIGINFO flag, outstanding signals of the same type are queued instead of being dropped.

Value of maxsigq

Default

64

Range

32-32767

When to Change maxsigq

Raise maxsigq when a process expects to receive a great number of signals and 64 queue places may be insufficient to avoid losing signals before they can be processed. Change maxsigq to a value appropriate to your needs.


Dispatch Parameters

One of the most important functions of the kernel is ``dispatching'' processes. When a user issues a command and a process is created, the kernel endows the process with certain characteristics. For example, the kernel gives the process a priority for receiving CPU time. This priority can be changed by the user who requested the process or by the Superuser. Also, the length of time (slice-size) that a process receives in the CPU is adjustable by a dispatch parameter. The Periodic Deadline Scheduler (PDS) is also part of the dispatch group. The deadline scheduler is invoked via the schedctl(2) system call in a user program and requires the inclusion of <sys/schedctl.h>. The following parameters are included in the dispatch group:

ndpri_hilim

Description of ndpri_hilim

The ndpri_hilim parameter sets the highest non-degrading priority a user process may have.

Note that the higher the numeric value of ndpri_hilim, the lower the priority of the process.

Value of ndpri_hilim

Default

128

Range

30-255

When to Change ndpri_hilim

Change this parameter when you want to limit user process priority in a busy system with many users or when you want to enable a high priority user process, for example, a real-time graphics application.

ndpri_lolim

Description of ndpri_lolim

ndpri_lolim sets the lowest possible non-degrading priority for a user process. Note that lower priority values give a process higher scheduling priority.

Value of ndpri_lolim

Default:

39

Range:

30-255

When to Change ndpri_lolim

The default value is adequate for most systems.

runq_dl_maxuse

Description of runq_dl_maxuse

This parameter sets an absolute limit on the amount of the reference frame (set by the runq_dl_refframe parameter) that can be allocated under any circumstances.

Value of runq_dl_maxuse

Default:

700

Range:

0-100000

When to Change runq_dl_maxuse

If your deadline-scheduled processes require more scheduled CPU time, increase the value of runq_dl_maxuse and runq_dl_nonpriv.

runq_dl_nonpriv

Description of runq_dl_nonpriv

This parameter controls the amount of the reference frame (set by the runq_dl_refframe parameter) that can be allocated by non-privileged user processes.

Value of runq_dl_nonpriv

Default:

200

Range:

0-100000

When to change runq_dl_nonpriv

If your non-privileged deadline processes require more CPU time, increase this value.

runq_dl_refframe

Description of runq_dl_refframe

This parameter specifies the longest interval that a deadline process may request.

Value of runq_dl_refframe

Default:

1000

Range:

0-100000

When to Change runq_dl_refframe

If you change the values of runq_dl_nonpriv and runq_dl_maxuse, you may need to change this parameter as well, to expand the reference frame in which runq_dl_nonpriv and runq_dl_maxuse act.

slice-size

Description of slice-size

slice-size is the default process time slice, expressed as a number of ticks of the system clock. The frequency of the system clock is expressed by the constant Hz, which has a value of 100. Thus each unit of slice-size corresponds to 10 milliseconds. When a process is given control of the CPU, the kernel lets it run for slice-size ticks. When the time slice expires or when the process voluntarily gives up the CPU (for example, by calling pause(2) or by doing some other system call that causes the process to wait), the kernel examines the run queue and selects the process with the highest priority that is eligible to run on that CPU.

The slice-size parameter is defined in /var/sysgen/mtune/disp by the following formula:

    #define slice-size Hz / 30
    int slice_size = slice-size;

Since slice_size is an integer, the integer division yields 3. This means that the default process time slice is 3 ticks, or 30 milliseconds.

Value of slice-size

Default:

3

Range:

1-100

When to Change slice-size

If you use the system primarily for compute-intensive jobs and interactive response is not an important consideration, you can increase slice-size. For example, setting slice-size to 10 gives greater efficiency to the compute jobs, since each time they get control of the CPU, they are able to run for 100 milliseconds before getting switched out.

In situations where the system runs both compute jobs and interactive jobs, interactive response time will suffer as you increase the value of slice-size.


EFS Parameters

The IRIX Extent File System works closely with the operating system kernel, and the following parameters adjust the kernel's interface with the file system.

The following parameters, defined in the efs and kernel parameter groups, are used for file system tuning. They determine how many pages of memory are clustered together into a single disk write operation. Adjusting these parameters can greatly increase file system performance. Available parameters include:

efs_bmmax

Description of efs_bmmax

This parameter represents the number of efs bitmap buffers to keep privately cached at any given time.

Value of efs_bmmax

Default:

10 buffers

When to Change efs_bmmax

It is not generally useful to change this parameter, but you may want to increase it over the default for systems with 10 gigabytes of disk space or more and a great deal of file system activity.

dwcluster

Description of dwcluster

This parameter sets the maximum number of delayed-write pages to cluster in each push.

Value of dwcluster

Default:

64

When to Change dwcluster

It should not be necessary to change this parameter. The automatically configured value is sufficient.

autoup

Description of autoup

The autoup parameter specifies the age, in seconds, that a buffer marked for delayed write must be before the bdflush daemon writes it to disk. This parameter is specified in /var/sysgen/mtune. For more information, see the entry for the bdflushr kernel parameter.

Value of autoup

Default:

10

Range:

1-30

When to Change autoup

This value is adequate for most systems.


Loadable Drivers Parameters

IRIX 5.0 allows you to load and run device drivers while the system remains up and running. Occasionally, you may have to make adjustments to the running kernel to allow for the extra resources these loadable drivers require. The following parameters allow you to make the necessary adjustments:

bdevsw_extra

Description of bdevsw_extra

This parameter specifies an extra number of entries in the block device switch. This parameter is for use by loadable drivers only. If you configured a block device into the system at lboot(1M) time, you will not need to add extra entries to bdevsw.

Value of bdevsw_extra

Default:

21

Range:

1-254

When to Change bdevsw_extra

Change this parameter when you have more than 21 block devices to load dynamically into the system. IRIX provides 21 spaces in the bdevsw by default.

cdevsw_extra

Description of cdevsw_extra

This parameter specifies an extra number of entries in the character device switch. This parameter is for use by loadable drivers only. If you configured a character device into the system at lboot(1M) time, you will not need to add extra entries to cdevsw.

Value of cdevsw_extra

Default:

23

Range:

3-254

When to Change cdevsw_extra

Change this parameter when you have more than 23 character devices to load dynamically into the system. IRIX provides 23 spaces in the cdevsw by default.

fmodsw_extra

Description of fmodsw_extra

This parameter specifies an extra number of entries in the streams module switch. This parameter is for use by loadable drivers only. If you configured a streams module into the system at lboot(1M) time, you will not need to add extra entries to fmodsw.

Value of fmodsw_extra

Default:

20

Range:

0-254

When to Change fmodsw_extra

Change this parameter when you have more than 20 streams modules to load dynamically into the system. IRIX provides 20 spaces in the fmodsw by default.

vfssw_extra

Description of vfssw_extra

This parameter specifies an extra number of entries in the vnode file system module switch. This parameter is for use by loadable drivers only. If you configured a vfs module into the system at lboot(1M) time, you will not need to add extra entries to vfssw.

Value of vfssw_extra

Default:

5

Range:

0-254

When to Change vfssw_extra

Change this parameter when you have more than 5 virtual file system modules to load dynamically into the system. IRIX provides 5 spaces in the vfssw by default.


CPU Actions Parameters

CPU actions parameters are used in multi-processor systems to allow the user to select the processor or processors that will be used to perform a given task.

The following parameters are defined:

nactions

Description of nactions

The nactions parameter controls the number of action blocks. An action block lets you queue a process to be run on a specific CPU. The value of the nactions parameter is found by the formula:

maxcpu + 60

Value of nactions

Default:

0 (Auto-configured if set to 0)

Range:

60-200

When to Change nactions

Increase the value of nactions if you see the kernel error message:

PANIC: Ran out of action blocks


Switch Parameters

The following parameters are simple on/off switches within the kernel that allow or disallow certain features, such as whether shells that set the user ID to the superuser are allowed:

Note that all the parameters listed in this section are enforced system-wide. It is not possible to select different values on a per-process basis.

svr3pipe

Description of svr3pipe

When set to 1, this parameter selects SVR3.2 style pipes, which are unidirectional. When set to 0, it selects SVR4 style pipes, which are bidirectional.

Value of svr3pipe

Default:

1 (SVR3.2 style pipes)

Range:

0 or 1

When to Change svr3pipe

Change this parameter if you wish to take advantage of SVR4 style pipes.

nosuidshells

Description of nosuidshells

Some programs are written so that they perform actions that require superuser privilege. In order to perform these actions, they create a shell in which the user has superuser privilege. Such shells pose a certain manageable risk to system security, but application developers are generally careful to limit the actions taken by the application in these shells. The nosuidshells switch, when set to 0, allows these applications to create superuser-privileged shells. When set to any value other than 0, such shells are not permitted.

Value of nosuidshells

Default:

1 (setuid shells not permitted)

When to Change nosuidshells

Change this switch to allow setuid shells.

posix_tty_default

Description of posix_tty_default

IRIX uses a default system of line disciplines and settings for serial lines. These default settings are different from those specified by POSIX. If the value of this switch is 0, the default Silicon Graphics line disciplines are used. If the value is set to 1, POSIX line disciplines and settings are used.

Value of posix_tty_ default

Default:

0

Range:

0 or 1

When to Change posix_tty_default

Change this switch if you need to use POSIX line disciplines.

resettable_clocal

Description of resettable_clocal

In the standard configuration, the CLOCAL flag on a tty is read-only, but under POSIX, CLOCAL can be reset. This switch allows you to use either the default behavior or POSIX compliant behavior.

Value of resettable_clocal

Default:

0

Range:

0 or 1

When to Change resettable_clocal

Change this switch if you need POSIX compliance in your tty handling.

restricted_chown

Description of restricted_chown

This switch allows you to decide whether you want to use a BSD UNIX style chown(2) system call or the System V style. Under the BSD version, only the Superuser can use the chown system call to ''give away'' a file, that is, to change the ownership to another user. Under the System V version, any user can give away a file or directory. If the value of the switch is 0, System V chown is enabled. If the value is not zero, BSD chown is enabled.

Value of restricted_chown

Default:

0

Range:

0 or 1

When to Change restricted_chown

Change this switch to choose which behavior you prefer for the chown(2) system call.

force_old_dump

Description of force_old_dump

When set to 1, this parameter forces the system to use old-style core dump formats, rather than the new IRIX 5 format.

Value of force_old_dump

Default:

0

Range:

0 or 1

When to Change force_old_dump

This parameter is for use when the new compressed dump format is inadequate.

use_old_serialnum

Description of use_old_serialnum

When set to 1, this parameter forces the kernel to use the old method (before IRIX Version 5) of calculating a 32-bit serial number for sysinfo -s. This variable affects only Onyx and Challenge L or XL systems.

Value of use_old_serialnum

Default:

0

Range:

0 or 1

When to Change use_old_serialnum

Change this parameter on your Challenge or Onyx system if you need to use some older software that requires a 32-bit serial number.


Timer parameters

Timer parameters control the functioning of system clocks and timing facilities. The following parameters are defined:

fasthz

Description of fasthz

This parameter is used to set the profiling/fast itimer clock speed.

Value of fasthz

Default:

1000

Range:

500-2500

When to Change fasthz

Change this parameter to give a finer or coarser grain for such system calls as gettimeofday(3B), getitimer(2), and setitimer(2).

itimer_on_clkcpu

Description of itimer_on_clkcpu

This parameter is set to either 1 or 0, to determine whether itimer requests are queued on the clock processor or on the running processor, respectively.

Value of itimer_on_clkcpu

Default:

0

Range:

0 or 1

When to Change itimer_on_clkcpu

If a process uses the gettimeofday(2) call to compare the accuracy of the itimer delivery, then you should set this parameter to 1, to take advantage of the clock processor. If the itimer request is for the purpose of implementing a user frequency-based scheduler, then set this parameter to 0 to queue the requests on the current running processor.

timetrim

Description of timetrim

The system clock is adjusted every second by the signed number of nanoseconds specified by this parameter. This adjustment is limited to 3 milliseconds or 0.3%. timed(1M) and timeslave(1M) periodically place suggested values in /var/adm/SYSLOG.

Value of timetrim

Default:

0

Range:

0-0.3% of a second (3 milliseconds)

When to Change timetrim

Change this parameter as suggested by timed and timeslave.


NFS Parameters

The following parameters control the kernel-level functions of the Network File System (NFS). Reducing these values is likely to cause significant performance decreases in your system:

nfs_portmon

Description of nfs_portmon

This parameter determines whether or not a client must use a ``privileged'' port for NFS requests. Only processes with superuser privilege may bind to a privileged port. The nfs_portmon parameter is binary. If it is set to 0, clients may use any available port. If it is set to 1, clients must use only privileged ports.

Value of nfs_portmon

Default:

0

Range:

0 or 1

When to Change nfs_portmon

You should change this parameter only if it is absolutely necessary to maintain root privilege on your NFS mounted file systems and you have checked each NFS client to be sure that it requests a privileged port. If there are any clients requesting non-privileged ports, they will be unable to mount the file systems.

Additionally, changing the value of nfs_portmon to 1 can give a false sense of security. A process must have root privilege in order to bind to a privileged port, but a single ``insecure'' machine compromises the security of this privilege check.

first_timeout

Description of first_timeout

This parameter determines the length of time before a portmapper request is retransmitted.

Value of first_timeout

Default:

1

Range:

1 or 2

When to Change first_timeout

Change this value to 2 if your portmapper requests are consistently being timed out. Decreasing this value can seriously impede system performance.

normal_timeout

Description of normal_timeout

This parameter determines the time before a ping request times out and is sent again.

Value of normal_timeout

Default:

5

Range:

1-5

When to Change normal_timeout

Increase this value if your portmapper requests are consistently being timed out. Decreasing this value can seriously impede system performance.

working_timeout

Description of working_timeout

This parameter sets the time allowed before a lockd request times out and is sent again.

Value of working_timeout

Default:

5

Range:

1-5

When to Change working_timeout

Increase this value if your portmapper requests are consistently being timed out. Decreasing this value can seriously impede system performance.

svc_maxdupregs

Description of svc_maxdupregs

This parameter sets the number of cached NFS requests.

Value of svc_maxdupregs

Default:

1024

Range:

400-4096

When to Change svc_maxdupregs

This parameter should be adjusted to the service load so that there is likely to be a response entry when the first retransmission comes in.


UDS Parameters

Under UNIX domain sockets, there is a pair of buffers associated with each socket in use: one on the receiving side of the socket and one on the sending side. The size of these buffers represents the maximum amount of data that can be queued. The behavior of these buffers differs depending on whether the socket is a streams socket or a data-gram socket.

On a streams socket, when the sender sends data, the data is queued in the receive buffer of the receiving process. If the receive buffer fills up, the data begins queueing in the sendspace buffer. If both buffers fill up, the socket blocks any further data from being sent.

Under data-gram sockets, when the receive buffer fills up, all further data-grams sent are discarded and the error EWOULDBLOCK is generated. Because of this behavior, the default receive buffer size for data-gram sockets is twice that of the send buffer.

The following parameters control UNIX domain sockets (UDS):

unpst_sendspace

Description of unpst_sendspace

This parameter controls the default size of the send buffer on streams sockets.

Value of unpst_sendspace

Default:

0x4000 (16 Kbytes)

When to Change unpst_sendspace

It is generally recommended that you change the size of socket buffers individually, since changing this parameter changes the send buffer size on all streams sockets, which can consume a tremendous amount of kernel memory. Also, increasing this parameter increases the time spent waiting for socket responses, since all sockets will have more buffer space to read.

See the setsockopt(2) reference page for more information on setting specific socket options.

unpst_recvspace

Description of unpst_recvspace

This parameter controls the default size of the receive buffer of streams sockets.

Value of unpst_recvspace

Default:

0x4000 (16 Kbytes)

When to Change unpst_recvspace

It is generally recommended that you change the size of socket buffers on an individual basis, since changing this parameter changes the receive buffer size on all streams sockets, which can consume a tremendous amount of kernel memory. Also, increasing this parameter increases the time spent waiting for socket responses, since all sockets will have more buffer space to read.

See the setsockopt(2) reference page for more information on setting specific individual socket options.

unpdg_sendspace

Description of unpdg_sendspace

This parameter controls the maximum size of a data-gram that can be sent over a socket.

Value of unpdg_sendspace

Default:

0x2000 (8 Kbytes)

When to Change unpdg_sendspace

Data-gram sockets operate slightly differently from streams sockets. On a streams socket, when the receive buffer fills, further data queues in the send buffer, and when the send buffer also fills, further sends are blocked. Data-gram sockets allow data-grams to fill the receive buffer, and when the receive buffer is full, all future data-grams are discarded and the error EWOULDBLOCK is generated. Therefore, the unpdg_sendspace parameter serves only to limit the size of a data-gram to not more than can be received by the receive buffer.

Note that the default data-gram socket receive buffers are twice the size of the default data-gram send buffers, making data-gram behavior appear similar to that of streams sockets.

It is generally recommended that you not change this parameter without also changing the default receive buffer size for data-gram sockets. If you raise the value of this parameter (unpdg_sendspace) without raising the receive buffer size (unpdg_recvspace), you allow the sending half of the socket to send more data than the receiving half can accept. Also, it is generally recommended that socket buffer sizes be set individually via the setsockopt(2) system call. See the setsockopt(2) reference page for more information on setting specific individual socket options.

unpdg_recvspace

Description of unpdg_recvspace

This parameter controls the default size of data-gram socket receive buffers.

Value of unpdg_recvspace

Default:

0x4000 (16 Kbytes)

When to Change unpdg_recvspace

It is generally recommended that you change the size of socket buffers individually, since changing this parameter changes the receive buffer size on all data-gram sockets, which can consume a tremendous amount of kernel memory. Also, increasing this parameter increases the time spent waiting for socket responses, since all sockets will have more buffer space to read.

See the setsockopt(2) reference page for more information on setting specific individual socket options.



Copyright © 1997, Silicon Graphics, Inc. All Rights Reserved. Trademark Information