IRIX Advanced Site and Server Administration Guide
This appendix describes the tunable parameters that define kernel structures. These structures keep track of processes, files, and system activity. Many of the parameter values are specified in the files found in /var/sysgen/mtune.
If the system does not respond favorably to your tuning changes, you may want to return to your original configuration or continue making changes to related parameters. An unfavorable response to your tuning changes can be as minor as the system not gaining the hoped-for speed or capacity, or as major as the system becoming unable to boot or run. This generally occurs when parameters are changed to a great degree. Simply maximizing a particular parameter without regard for related parameters can upset the balance of the system to the point of inoperability. For complete information on the proper procedure for tuning your operating system, read Chapter 5, "Tuning System Performance."
The rest of this appendix describes the more important tunable parameters according to function. Related parameters are grouped into sections. These sections include:
General tunable parameters. See "General Tunable Parameters".
Spinlock tunable parameters. See "Spinlocks Tunable Parameters".
System limits tunable parameters. See "System Limits Tunable Parameters".
Resource limits tunable parameters. See "Resource Limits Tunable Parameters".
Paging tunable parameters. See "Paging Tunable Parameters".
IPC tunable parameters, including interprocess communication messages, semaphores, and shared memory. See "IPC Tunable Parameters", "IPC Messages Tunable Parameters", "IPC Semaphores Tunable Parameters", and "IPC Shared Memory Tunable Parameters".
Streams tunable parameters. See "Streams Tunable Parameters".
Signal parameters. See "Signal Parameters".
Dispatch parameters. See "Dispatch Parameters".
Extent File System (EFS) parameters. See "EFS Parameters".
Loadable driver parameters. See "Loadable Drivers Parameters".
CPU actions parameters. See "CPU Actions Parameters".
Switch parameters. See "Switch Parameters".
Timer parameters. See "Timer Parameters".
Network File System (NFS) parameters. See "NFS Parameters".
UNIX Domain Sockets (UDS) parameters. See "UDS Parameters".
Each section begins with a short description of the activities controlled by the parameters in that section, and each listed parameter has a description that may include:
Name            the name of the parameter.
Description     a description of the parameter, including the file in which the parameter is specified, and the formula, if applicable.
Value           the default setting and, if applicable, a range. Note that the value given for each parameter is usually appropriate for a single-user graphics workstation.
When to Change  the conditions under which it is appropriate to change the parameter.
Notes           other pertinent information, such as error messages.
Note that the tunable parameters are subject to change with each release of the system.
The following group of tunable parameters specifies the size of various system structures. These are the parameters you will most likely change when you tune a system.
nbuf              specifies the number of buffer headers in the file system buffer cache.
callout_himark    specifies the high water mark for callouts.
ncallout          specifies the initial number of callouts.
reserve_ncallout  specifies the number of reserved callouts.
ncsize            specifies the name cache size.
ndquot            used by the disk quota system.
nhbuf             specifies the number of buffer hash buckets in the disk buffer cache.
nproc             specifies the number of user processes allowed at any given time.
maxpmem           specifies the maximum physical memory address.
syssegsz          specifies the maximum number of pages of dynamic system memory.
maxdmasz          specifies the maximum DMA transfer in pages.
The nbuf parameter specifies the number of buffer headers in the file system buffer cache. The actual memory associated with each buffer header is dynamically allocated as needed and can be of varying size, currently 1 to 128 blocks (512 to 64KB).
The system uses the file system buffer cache to optimize file system I/O requests. The buffer memory caches blocks from the disk, and the blocks that are used frequently stay in the cache. This helps avoid excess disk activity.
Buffers are used only as transaction headers. When the input or output operation has finished, the buffer is detached from the memory it mapped and the buffer header becomes available for other uses. Because of this, a small number of buffer headers is sufficient for most systems. If nbuf is set to 0, the system automatically configures nbuf for average systems. There is little overhead in making it larger for non-average systems.
The nbuf parameter is defined in /var/sysgen/mtune.
The automatic configuration is adequate for average systems. If you see dropping ``cache hit'' rates in sar(1M) and osview(1M) output, increase this parameter. Also, if you have directories with a great number of files (over 1000), you may wish to raise this parameter.
The callout_himark parameter specifies the maximum number of callout table entries system-wide. The callout table is used by device drivers to provide a timeout to make sure that the system does not hang when a device does not respond to commands.
This parameter is defined in /var/sysgen/mtune and has the following formula:
nproc + 32
where:
nproc is the maximum number of processes, system-wide.
Increase this parameter if you see console error messages indicating that no more callouts are available.
The ncallout parameter specifies the number of callout table entries at boot time. The system will automatically allocate one new callout entry if it runs out of entries in the callout table. However, the maximum number of entries in the callout table is defined by the callout_himark parameter.
Change ncallout if you are running an unusually high number of device drivers on your system. Note that the system automatically allocates additional entries when needed, and releases them when they are no longer needed.
The reserve_ncallout parameter specifies the number of reserved callout table entries. These reserved table entries exist for kernel interrupt routines when the system has run out of the normal callout table entries and cannot allocate additional entries.
This parameter controls the size of the name cache. The name cache is used to allow the system to bypass reading directory names out of the main buffer cache. A name cache entry contains a directory name, a pointer to the directory's in-core inode and version numbers, and a similar pointer to the directory's parent directory in-core inode and version number.
The ndquot parameter controls disk quotas.
The nhbuf parameter specifies the number of entries in the buffer hash table. Each table's entry is the head of a queue of buffer headers. When a block is requested, a hashing algorithm searches the buffer hash table for the requested buffer. A small hash table reduces the algorithm's efficiency; a large table wastes space.
nhbuf is defined in /var/sysgen/mtune and has the following formula:
nbuf/16 (with the result rounded down to the nearest power of 2)
You should not have to change this parameter.
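The sizing rule above can be sketched as a small helper function (a hypothetical illustration of the formula, not actual kernel code):

```c
/* Illustration of the nhbuf sizing rule: nbuf/16, rounded down
   to the nearest power of two. */
unsigned long nhbuf_for(unsigned long nbuf)
{
    unsigned long n = nbuf / 16;   /* one hash bucket per 16 buffers */
    unsigned long p = 1;
    if (n == 0)
        return 1;                  /* smallest sensible table */
    while (p * 2 <= n)             /* round down to a power of two */
        p *= 2;
    return p;
}
```

For example, with nbuf set to 2048 this yields 128 hash buckets; with 1000 buffers, 1000/16 is 62, which rounds down to 32.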
The nproc parameter specifies the number of entries in the system process (proc) table. Each running process requires an in-core proc structure. Thus nproc is the maximum number of processes that can exist in the system at any given time.
The default value of nproc is based on the amount of memory on your system. To find the currently auto-configured value of nproc, use the systune(1M) command.
The nproc parameter is defined in /var/sysgen/mtune.
Increase this parameter if you see an overflow in the sar -v output for the proc-sz ov column or you receive the operating system message:
no more processes
This means that the total number of processes in the system has reached the current setting. If processes are prevented from forking (being created), increase this parameter. A related parameter is maxup.
If a process can't fork, make sure that this is system-wide, and not just a user ID problem (see the maxup parameter).
If nproc is too small, processes that try to fork receive the operating system error:
EAGAIN: No more processes
The shell also returns a message:
fork failed: too many processes
If a system daemon such as sched, vhand, init, or bdflush can't allocate a process table entry, the system halts and displays:
No process slots
The maxpmem parameter specifies the amount of physical memory (in pages) that the system can recognize. If it is set to zero (0), the system uses all available pages of physical memory; any other value defines the physical memory size (in pages) that the system recognizes.
This parameter is defined in /var/sysgen/mtune.
You don't need to change this parameter, except when benchmarks require a specific system memory size less than the physical memory size of the system. This is primarily useful to kernel developers. You can also boot the system with the command:
maxpmem = memory_size
added on the boot command line to achieve the same effect.
This is the maximum number of pages of dynamic system memory.
Increase this parameter correspondingly when maxdmasz is increased, or when you install a kernel driver that performs a lot of dynamic memory allocation.
The maximum DMA transfer expressed in pages of memory. This amount must be less than the value of syssegsz and maxpmem.
Change this parameter when you need to be able to use very large read or write system calls, or other system calls that perform large scale DMA. This situation typically arises when using optical scanners, film recorders or some printers.
R3000-based multiprocessor machines have a limited number of MPBUS hardware locks. To reduce the possibility of spinlock depletion, groups of spinlocks use shared pools of locks. The following parameters are included:
sema_pool_size the number of spinlocks pooled for all semaphores.
vnode_pool_size the number of spinlocks pooled for vnodes.
file_pool_size the number of spinlocks pooled for file structures.
This parameter specifies the number of spinlocks pooled for all semaphores.
This parameter is used exclusively for R3000-based multiprocessor machines. The only time this parameter might need to be changed is if IRIX panics and delivers the error message:
out of spinlocks
This can generally happen only if a driver requiring a large number of spinlocks is added to the system. User spinlock requirements do not affect this parameter. In the case of such a panic and error message, reduce this parameter to the next smaller power of two. (For example, if the value is 8192, reduce it to 4096.)
This parameter specifies the number of spinlocks pooled for vnodes.
This parameter is only used for R3000-based multiprocessor machines. The only time this parameter might need to be changed is if IRIX panics and delivers the error message ``out of spinlocks.'' This can generally only happen if a driver requiring a large number of spinlocks is added to the system. User spinlock requirements do not affect this parameter. In the case of such a panic and error message, reduce this parameter to the next smaller power of two. (For example, if the value is 1024, reduce it to 512.)
This parameter specifies the number of spinlocks pooled for file structures.
This parameter is used exclusively for R3000-based multiprocessor machines. The only time this parameter might need to be changed is if IRIX panics and delivers the error message ``out of spinlocks.'' This can generally happen only if a driver requiring a large number of spinlocks is added to the system. User spinlock requirements do not affect this parameter. In the case of such a panic and error message, reduce this parameter to the next smaller power of two. (For example, if the value is 1024, reduce it to 512.)
IRIX has configurable parameters for certain system limits. For example, you can set maximum values for each process (its core or file size), the number of groups per user, the number of resident pages, and so forth. These parameters are listed below. All parameters are set and defined in /var/sysgen/mtune.
maxup            the number of processes allowed per user.
ngroups_max      the number of groups to which a user may belong.
maxwatchpoints   the maximum number of ``watchpoints'' per process.
nprofile         the amount of disjoint text space that can be profiled.
maxsymlinks      the maximum number of symlinks expanded in a pathname.
The maxup parameter defines the number of processes allowed per user login. This number should always be at least 5 processes smaller than nproc.
Increase this parameter to allow more processes per user. In a heavily loaded time-sharing environment, you may want to decrease the value to reduce the number of processes per user.
The ngroups_max parameter specifies the maximum number of multiple groups to which a user may simultaneously belong.
The constants NGROUPS_UMIN <= ngroups_max <= NGROUPS_UMAX are defined in </usr/include/sys/param.h>. NGROUPS_UMIN is the minimum number of multiple groups that can be selected at lboot time. NGROUPS_UMAX is the maximum number of multiple groups that can be selected at lboot time and is the number of group-id slots for which space is set aside at compile time. NGROUPS, which is present for compatibility with networking code (defined in </usr/include/sys/param.h>), must not be larger than ngroups_max.
The default value is adequate for most systems. Increase this parameter if your system has users who need simultaneous access to more than 16 groups.
maxwatchpoints sets the maximum number of watchpoints per process. Watchpoints are set and used via the proc(4) file system. This parameter specifies the maximum number of virtual address segments to be watched in the traced process. This is typically used by debuggers.
Raise maxwatchpoints if your debugger is running out of watchpoints.
nprofile is the maximum number of disjoint text spaces that can be profiled using the sprofil(2) system call. This is useful if you need to profile programs using shared libraries or profile an address space using different granularities for different sections of text.
Change nprofile if you need to profile more text spaces than are currently configured.
This parameter defines the maximum number of symbolic links that will be followed during filename lookups (for example, during the open(2) or stat(2) system calls) before ceasing the lookup. This limit is required to prevent loops where a chain of symbolic links points back to the original file name.
Change this parameter if you have pathnames with more than 30 symbolic links.
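The loop that this limit guards against is easy to demonstrate. The sketch below (the file names under /tmp are illustrative only) creates two symbolic links that point at each other; open(2) gives up after following the lookup limit and fails with ELOOP:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Demonstrate the symbolic-link loop that maxsymlinks guards
   against: two links that point at each other. The kernel gives
   up after the lookup limit and open() fails with errno ELOOP. */
int open_fails_with_eloop(void)
{
    symlink("/tmp/loop_b", "/tmp/loop_a");   /* a -> b */
    symlink("/tmp/loop_a", "/tmp/loop_b");   /* b -> a */
    int fd = open("/tmp/loop_a", O_RDONLY);  /* lookup chases the loop */
    int looped = (fd == -1 && errno == ELOOP);
    unlink("/tmp/loop_a");                   /* clean up the links */
    unlink("/tmp/loop_b");
    return looped;
}
```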
You can set numerous limits on a per-process basis by using getrlimit(2), setrlimit(2), and the shells' built-in limit commands. These limits are inherited by child processes, and the original values are set in /var/sysgen/mtune. Unlike the system limits listed above, they apply only to the current process and any child processes it may spawn. From the command line, use the limit built-in in the C shell (/bin/csh) or the ulimit built-in in the Bourne and Korn shells (/bin/sh and /bin/ksh).
Each limit has a default and a maximum. Only the superuser can change the maximum. Each resource can have a value, RLIM_INFINITY, that turns off any checking. The default values are adequate for most systems.
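For example, a process can read and lower one of these limits programmatically with getrlimit(2) and setrlimit(2). The helper below is a sketch, not part of IRIX; it caps the soft (current) core-file limit while respecting the hard maximum, which only the superuser may raise:

```c
#include <sys/resource.h>

/* Lower the soft limit on core file size. The soft (cur) value
   may never exceed the hard (max) value, so clamp it first. */
int cap_core_files(rlim_t new_cur)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0)
        return -1;
    if (new_cur > rl.rlim_max)    /* soft limit may not exceed hard */
        new_cur = rl.rlim_max;
    rl.rlim_cur = new_cur;
    return setrlimit(RLIMIT_CORE, &rl);
}
```

Calling cap_core_files(0) disables core dumps for the process and its children, the same effect as the rlimit_core_cur tunable but scoped to one process tree.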
The following parameters are associated with system resource limits:
ncargs             the number of bytes of arguments that may be passed during an exec(2) call.
rlimit_core_cur    the maximum size of a core file.
rlimit_core_max    the maximum value rlimit_core_cur may hold.
rlimit_cpu_cur     the limit for maximum cpu time available to a process.
rlimit_cpu_max     the maximum value rlimit_cpu_cur may hold.
rlimit_data_cur    the maximum amount of data space available to a process.
rlimit_data_max    the maximum value rlimit_data_cur may hold.
rlimit_fsize_cur   the maximum file size available to a process.
rlimit_fsize_max   the maximum value rlimit_fsize_cur may hold.
rlimit_nofile_cur  the maximum number of file descriptors available to a process.
rlimit_nofile_max  the maximum value rlimit_nofile_cur may hold.
rlimit_rss_cur     the maximum resident set size available to a process.
rlimit_rss_max     the maximum value rlimit_rss_cur may hold.
rlimit_stack_cur   the maximum stack size for a process.
rlimit_stack_max   the maximum value rlimit_stack_cur may hold.
rlimit_vmem_cur    the maximum amount of virtual memory for a process.
rlimit_vmem_max    the maximum value rlimit_vmem_cur may hold.
rsshogfrac         the percentage of memory allotted for resident pages.
rsshogslop         the number of pages above the resident set maximum that a process may use.
shlbmax            the maximum number of shared libraries with which a process can link.
The ncargs parameter specifies the maximum size of arguments in bytes that may be passed during an exec(2) system call.
This parameter is specified in /var/sysgen/mtune.
The default value is adequate for most systems. Increase this parameter if you get the following message from exec(2), shell(1), or make(1):
E2BIG arg list too long
Setting this parameter too large wastes memory (although this memory is pageable) and may cause some programs to function incorrectly. Also note that some shells may have independent limits smaller than ncargs.
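A program can query the argument-size limit at run time with sysconf(3C), which should reflect the ncargs setting (a portable sketch; the mapping to the tunable is an assumption):

```c
#include <unistd.h>

/* Ask the running system how many bytes of arguments and
   environment exec() will accept. */
long exec_arg_limit(void)
{
    return sysconf(_SC_ARG_MAX);
}
```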
The current limit to the size of core image files for the given process.
Change this parameter when you want to place a cap on core file size.
The maximum limit to the size of core image files.
Change this parameter when you want to place a maximum restriction on core file size. rlimit_core_cur cannot be larger than this value.
The current limit to the amount of cpu time, in seconds, that may be used in executing the process.
Change this parameter when you want to place a cap on cpu usage.
The maximum limit to the amount of cpu time that may be used in executing a process.
Change this parameter when you want to place a maximum restriction on general cpu usage.
The current limit to the data size of the process.
Change this parameter when you want to place a cap on data segment size.
The maximum limit to the size of data that may be used in executing a process.
Change this parameter when you want to place a maximum restriction on the size of the data segment of any process.
The current limit to file size on the system for the process.
Change this parameter when you want to place a limit on file size.
The maximum limit to file size on the system.
Change this parameter when you want to place a maximum size on all files.
The current limit to the number of file descriptors that may be used in executing the process.
Change this parameter when you want to place a cap on the number of file descriptors.
The maximum limit to the number of file descriptors that may be used in executing a process.
Change this parameter when you want to place a maximum restriction on the number of file descriptors.
The current limit to the resident set size (the number of pages of memory in use at any given time) that may be used in executing the process. This limit is the larger of the results of the following two formulae:
physical_memory_size - 4 MB
or
physical_memory_size * 9/10
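The default rule quoted above can be expressed as a small function (a hypothetical sketch with sizes in bytes; the kernel's exact arithmetic may differ):

```c
/* Default RSS limit: the larger of (physical memory - 4 MB)
   and (9/10 of physical memory). Sizes are in bytes. */
unsigned long long default_rss_limit(unsigned long long physmem)
{
    unsigned long long four_mb = 4ULL * 1024 * 1024;
    unsigned long long a = physmem > four_mb ? physmem - four_mb : 0;
    unsigned long long b = physmem / 10 * 9;
    return a > b ? a : b;          /* take the larger of the two */
}
```

On a 64 MB machine the first formula dominates (60 MB); on very small memory configurations the 9/10 formula wins instead.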
Change this parameter when you want to place a cap on the resident set size of a process.
The maximum limit to the resident set size that may be used in executing a process.
Change this parameter when you want to place a maximum restriction on resident set size.
The current limit to the amount of stack space that may be used in executing the process.
Change this parameter when you want to place a limit on stack space usage.
The maximum limit to the amount of stack space that may be used in executing a process.
Change this parameter when you want to place a maximum restriction on stack space usage.
The current limit to the amount of virtual memory that may be used in executing the process.
Change this parameter when you want to place a cap on virtual memory usage.
The maximum limit to the amount of virtual memory that may be used in executing a process.
Change this parameter when you want to place a maximum restriction on virtual memory usage.
The number of physical memory pages occupied by a process at any given time is called its resident set size (RSS). The limit to the RSS of a process is determined by its allowable memory-use resource limit. rsshogfrac is designed to guarantee that even if one or more processes are exceeding their RSS limit, some percentage of memory is always kept free so that good interactive response is maintained.
Processes are permitted to exceed their RSS limit until either:
one or more processes exceed their default RSS limit (thereby becoming an ``RSS hog'') and the amount of free memory drops below rsshogfrac of the total amount of physical memory; or
the amount of free memory drops below gpgshi.
In either of these cases, the paging daemon runs and trims pages from all processes that are exceeding their RSS limit.
The rsshogfrac parameter is expressed as a fraction of the total physical memory of the system. The default value is 75 percent.
This parameter is specified in /var/sysgen/mtune. For more information, see the gpgshi, gpgslo, and rsshogslop resource limits.
The default value is adequate for most systems.
To avoid thrashing (a condition in which the computer devotes nearly all of its CPU cycles to swapping and paging), a process can use up to rsshogslop more pages than its resident set maximum (see "Resource Limits Tunable Parameters").
This parameter is specified in /var/sysgen/mtune. For more information, see the rsshogfrac resource limit.
The default value is adequate for most systems.
The shlbmax parameter specifies the maximum number of shared libraries with which a process can link.
This parameter is specified in /var/sysgen/mtune.
The default value is adequate for most systems. Increase this parameter if you see the following message from exec(2):
ELIBMAX cannot link
The paging daemon, vhand, frees up memory as the need arises. This daemon uses a ``least recently used'' algorithm to approximate process working sets, and writes out to disk those pages that have not been touched during a specified period of time. The page size is 4K. When memory gets exceptionally tight, vhand may swap out entire processes.
vhand reclaims memory by:
stealing memory from processes that have exceeded their permissible resident set size maximum
forcing delayed-write data buffers out to disk (with bdflush) so that the underlying pages can be reused
calling down to system resource allocators to trim back dynamically sized data structures
stealing pages from processes in order of lowest-priority process first, and the least-recently-used page first within that process
swapping out the entire process
The following tunable parameters determine how often vhand runs and under what conditions. Note that the default values should be adequate for most applications.
The following parameters are included:
bdflushr     specifies how often, in seconds, the bdflush daemon is executed; bdflush performs periodic flushes of dirty file system buffers.
gpgsmsk      specifies the mask used to determine if a given page may be swapped.
gpgshi       the number of free pages above which vhand stops stealing pages.
gpgslo       the number of free pages below which vhand starts stealing pages.
maxlkmem     the maximum number of physical pages that can be locked in memory (by mpin(2) or plock(2)) by a non-superuser process.
maxsc        the maximum number of pages that may be swapped by the vhand daemon in a single operation.
maxfc        the maximum number of pages that will be freed at once.
maxdc        the maximum number of pages that will be written to disk at once.
minarmem     the minimum available resident pages of memory.
minasmem     the minimum available swappable pages of memory.
tlbdrop      the number of clock ticks before a process' wired entries are flushed.
The bdflushr parameter specifies how often, in seconds, the bdflush daemon is executed; bdflush performs periodic flushes of dirty file system buffers.
This parameter is specified in /var/sysgen/mtune. For more information, see the autoup kernel parameter.
This value is adequate for most systems. Increasing this parameter increases the chance that more data could be lost if the system crashes. Decreasing this parameter increases system overhead.
The gpgsmsk parameter specifies the mask used to determine if a given page may be swapped. Whenever the pager (vhand) is run, it decrements software reference bits for every active page. When a process subsequently references a page, the counter is reset to the limit (NDREF, as defined in /usr/include/sys/immu.h). When the pager is looking for pages to steal back (if memory is in short supply), it takes only pages whose reference counter has fallen to gpgsmsk or below.
This parameter is specified in /var/sysgen/mtune.
Also see /usr/include/sys/immu.h and /usr/include/sys/tuneable.h and the gpgshi and gpgslo kernel parameters for more information.
This value is adequate for most systems.
If the value is greater than 4, pages are written to the swap area earlier than they would be with the default value of gpgsmsk. Thus swapping/paging may occur before it should, unnecessarily using system resources.
When the vhand daemon (page handler) is stealing pages, it stops stealing when the number of free pages is greater than gpgshi.
In other words, vhand starts stealing pages when there are fewer than gpgslo free pages in the system. Once vhand starts stealing pages, it continues until there are gpgshi free pages.
If, at boot time, gpgslo and gpgshi are 0, the system sets gpgshi to 8% of the number of pages of memory in the system, and sets gpgslo to one half of gpgshi.
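The boot-time defaults described above amount to the following arithmetic (a hypothetical sketch; the kernel's exact rounding is an assumption):

```c
/* Boot-time defaults when gpgshi and gpgslo are both 0:
   gpgshi is 8% of the pages of physical memory, and gpgslo
   is half of gpgshi. */
unsigned long default_gpgshi(unsigned long total_pages)
{
    return total_pages * 8 / 100;            /* stop stealing above this */
}

unsigned long default_gpgslo(unsigned long total_pages)
{
    return default_gpgshi(total_pages) / 2;  /* start stealing below this */
}
```

For a machine with 100,000 pages of memory this gives gpgshi = 8000 and gpgslo = 4000.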
This parameter is specified in /var/sysgen/mtune. For more information, see the kernel parameters gpgsmsk and gpgslo.
This value is adequate for most systems.
If this parameter is too small, vhand cannot supply enough free memory for system-wide demand.
When the vhand daemon (page handler) executes, it does not start stealing back pages unless there are fewer than gpgslo free pages in the system. Once vhand starts stealing pages, it continues until there are gpgshi free pages.
This parameter is specified in /var/sysgen/mtune. For more information, see the gpgshi and gpgsmsk kernel parameters.
This value is adequate for most systems.
If this parameter is too small, vhand does not start stealing pages soon enough, and entire processes must be swapped instead. If this parameter is too large, vhand steals pages unnecessarily.
The maxlkmem parameter specifies the maximum number of physical pages that can be locked in memory (by mpin(2) or plock(2)) per non-superuser process.
This parameter is specified in /var/sysgen/mtune.
Increase this parameter only if a particular application has a real need to lock more pages in memory.
On multi-user servers, you may want to decrease this parameter and also decrease rlimit_vmem_cur.
When pages are locked in memory, the system can't reclaim those pages, and therefore can't maintain the most efficient paging.
The maxfc parameter specifies the maximum number of pages that may be freed by the vhand daemon in a single operation. When the paging daemon (vhand) starts stealing pages, it collects pages that can be freed to the general page pool. It collects, at most, maxfc pages at a time before freeing them. Do not confuse this parameter with gpgshi, which sets the total number of pages that must be free before vhand stops stealing pages.
This parameter is specified in /var/sysgen/mtune.
This value is adequate for most systems.
The maxsc parameter specifies the maximum number of pages that may be swapped by the vhand daemon in a single operation. When the paging daemon starts stealing pages, it collects pages that must be written out to swap space before they can be freed into the general page pool. It collects at most maxsc pages at a time before swapping them out.
This parameter is specified in /var/sysgen/mtune.
You may want to decrease this parameter to improve performance on systems that swap over NFS (Network File System), which is always the case for diskless systems.
maxdc is the maximum number of pages that can be saved up and written to the disk at one time.
If the system is low on memory and consistently paging out user memory to remote swap space (for example, swap space mounted via NFS), decrease this parameter by no more than 10 pages at a time. Note, however, that this parameter's setting does not usually make any measurable difference in system performance.
This parameter represents the minimum available resident memory that must be maintained in order to avoid deadlock.
The automatically configured value of this parameter should always be correct for each system. You should not have to change this parameter.
This parameter represents the minimum available swappable memory that must be maintained in order to avoid deadlock.
The automatically configured value of this parameter should always be correct for each system. You should not have to change this parameter.
This parameter specifies the number of clock ticks before a process' wired entries are flushed.
If sar(1) indicates a great deal of translation lookaside buffer (utlbmiss) overhead in a very large application, you may need to increase this parameter. In general, the more the application changes the memory frame of reference in which it is executing, the more likely increasing tlbdrop will help performance. You may have to experiment somewhat to find the optimum value for your specific application.
The IPC tunable parameters set interprocess communication (IPC) structures. These structures include IPC messages, specified in /var/sysgen/mtune/msg; IPC semaphores, specified in /var/sysgen/mtune/sem; and IPC shared memory, specified in /var/sysgen/mtune/shm.
If IPC (interprocess communication) structures are incorrectly set, certain system calls will fail and return errors.
Before increasing the size of an IPC parameter, investigate the problem by using ipcs(1) to see whether the IPC resources are being removed when no longer needed. For example, shmget returns the error ENOSPC if applications create shared memory segments but do not remove them; semget and msgget behave similarly for semaphores and message queues.
Note that IPC objects differ from most IRIX objects in that they are not automatically freed when all active references to them are gone. In particular, they are not deallocated when the program that created them exits.
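A minimal sketch of this point: the System V message queue created below would outlive its creating process unless it is removed explicitly with IPC_RMID (the programmatic equivalent of removing it with ipcrm(1)):

```c
#include <sys/ipc.h>
#include <sys/msg.h>

/* Create a private message queue and remove it explicitly.
   Without the IPC_RMID call, the queue would persist after
   the process exits, consuming one of the msgmni slots until
   someone removes it or the system is rebooted. */
int create_and_remove_queue(void)
{
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1)
        return -1;
    return msgctl(qid, IPC_RMID, (struct msqid_ds *)0);
}
```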
Table A-1 lists error messages, system calls that cause the error, and parameters to adjust. Subsequent paragraphs explain the details you need to know before you increase the parameters listed in this table.
Message | System Call                  | Parameter
--------|------------------------------|--------------------------------
EAGAIN  | msgsnd()                     | see below
EINVAL  | msgsnd(), shmget()           | msgmax, shmmax
EMFILE  | shmat()                      | sshmseg
ENOSPC  | msgget(), semget(), shmget() | msgmni, semmni, semmns, shmmni
If IPC_NOWAIT is set, msgsnd can return EAGAIN for a number of reasons:
The total number of bytes on the message queue exceeds msgmnb.
The total number of bytes used by messages in the system exceeds msgseg * msgssz.
The total number of message headers system-wide exceeds msgtql.
shmget (which gets a new shared memory segment identifier) fails with EINVAL if the given size is not between shmmin and shmmax. Since shmmin is set to the lowest possible value (1), and shmmax is very large, it should not be necessary to change these values.
shmat will return EMFILE if a process attaches more than sshmseg shared memory segments. sshmseg is the maximum number of shared memory segments per process.
shmget will return ENOSPC if:
shmmni (the system-wide number of shared memory segments) is too small. However, applications may be creating shared memory segments and forgetting to remove them. So, before making a parameter change, use ipcs(1) to get a listing of currently active shared memory segments.
semget will return ENOSPC if:
semmni is too small, indicating that the total number of semaphore identifiers has been exceeded.
semmns (the system-wide number of semaphores) is exceeded. Use ipcs to see if semaphores are being removed as they should be.
msgget will return ENOSPC if:
msgmni is too small. Use ipcs to see if message queues are being removed as they should be.
If no one on the system uses or plans to use IPC messages, you may want to consider excluding this module. The following tunable parameters are associated with interprocess communication messages (see the msgctl(2) reference page):
msgmax specifies the maximum size of a message.
msgmnb specifies the maximum length of a message queue.
msgmni specifies the maximum number of message queues system-wide.
msgseg specifies the maximum number of message segments system-wide.
msgssz specifies the size, in bytes, of a message segment.
msgtql specifies the maximum number of message headers system-wide.
The msgmax parameter specifies the maximum size of a message.
This parameter is specified in /var/sysgen/mtune/msg.
Increase this parameter if the maximum size of a message needs to be larger. Decrease the value to limit the size of messages.
The msgmnb parameter specifies the maximum length of a message queue.
This parameter is specified in /var/sysgen/mtune/msg.
Increase this parameter if the maximum number of bytes on a message queue needs to be longer. Decrease the value to limit the number of bytes per message queue.
The msgmni parameter specifies the maximum number of message queues system-wide.
This parameter is specified in /var/sysgen/mtune/msg.
Increase this parameter if you want more message queues on the system. Decrease the value to limit the message queues.
If there are not enough message queues, a msgget(2) system call that attempts to create a new message queue returns the error:
ENOSPC: No space left on device
The msgseg parameter specifies the maximum number of message segments system-wide. A message on a message queue consists of one or more of these segments. The size of each segment is set by the msgssz parameter.
This parameter is specified in /var/sysgen/mtune/msg.
Modify this parameter to reserve the appropriate amount of memory for messages. Increase this parameter if you need more memory for message segments on the system. Decrease the value to limit the amount of memory used for message segments.
If this parameter is too large, memory may be wasted (saved for messages but never used). If this parameter is too small, some messages that are sent will not fit into the reserved message buffer space. In this case, a msgsnd(2) system call waits until space becomes available.
The msgssz parameter specifies the size, in bytes, of a message segment. Messages consist of a contiguous set of message segments large enough to accommodate the text of the message. Using segments helps to eliminate fragmentation and speed the allocation of the message buffers.
This parameter is specified in /var/sysgen/mtune/msg.
This parameter is set to minimize wasted message buffer space. Change this parameter only if most messages do not fit into one segment with a minimum of wasted space.
If you modify this parameter, you may also need to change the msgseg parameter.
If this parameter is too large, message buffer space may be wasted by fragmentation, which in turn may cause processes that are sending messages to sleep while waiting for message buffer space.
The msgtql parameter specifies the maximum number of message headers system-wide, and thus the number of outstanding (unread) messages. One header is required for each outstanding message.
This parameter is specified in /var/sysgen/mtune/msg.
Increase this parameter if you require more outstanding messages. Decrease the value to limit the number of outstanding messages.
If this parameter is too small, a msgsnd(2) system call attempting to send a message that would put msgtql over the limit waits until messages are received (read) from the queues.
If no one on the system uses or plans to use IPC semaphores, you may want to consider excluding this module.
The following tunable parameters are associated with interprocess communication semaphores (see the semctl(2) reference page):
semmni specifies the maximum number of semaphore identifiers in the kernel.
semmns specifies the number of IPC semaphores system-wide.
semmnu specifies the number of ''undo'' structures system-wide.
semmsl specifies the maximum number of semaphores per semaphore identifier.
semopm specifies the maximum number of semaphore operations that can be executed per semop(2) system call.
semume specifies the maximum number of ''undo'' entries per undo structure.
semvmx specifies the maximum value that a semaphore can have.
semaem specifies the adjustment on exit for maximum value.
The semmni parameter specifies the maximum number of semaphore identifiers in the kernel. This is the number of unique semaphore sets that can be active at any given time. Semaphores are created in sets; there may be more than one semaphore per set.
This parameter is specified in /var/sysgen/mtune/sem.
Increase this parameter if processes require more semaphore sets. Increasing this parameter to a large value requires more memory to keep track of semaphore sets. If you modify this parameter, you may need to modify other related parameters.
The semmns parameter specifies the number of ipc semaphores system-wide. This parameter is specified in /var/sysgen/mtune/sem.
Increase this parameter if processes require more than the default number of semaphores.
If you set this parameter to a large value, more memory is required to keep track of semaphores.
The semmnu parameter specifies the number of ''undo'' structures system-wide. An undo structure, which is set up on a per-process basis, keeps track of process operations on semaphores so that the operations may be ''undone'' if the process terminates abnormally. This helps to ensure that an abnormally terminated process does not cause other processes to wait indefinitely for a change to a semaphore.
This parameter is specified in /var/sysgen/mtune/sem.
Change this parameter when you want to increase/decrease the number of undo structures permitted on the system. semmnu limits the number of processes that can specify the UNDO flag in the semop(2) system call to undo their operations on termination.
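The UNDO flag described above is spelled SEM_UNDO in <sys/sem.h>. The following hedged sketch (standard System V interfaces, not IRIX-specific; the function name is illustrative) performs one semaphore operation with SEM_UNDO, which is the kind of operation that consumes the undo structures and entries bounded by semmnu and semume:

```c
#include <assert.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Caller-defined union required by semctl() on many systems. */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

/* Sketch: decrement one semaphore with SEM_UNDO (the UNDO flag
 * described above), so the kernel records an undo entry and can
 * release the semaphore if the process terminates abnormally. */
int sem_undo_demo(void)
{
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    struct sembuf op = { 0, -1, SEM_UNDO };
    union semun arg;
    int rc;

    if (semid < 0)
        return -1;
    arg.val = 1;                        /* start the semaphore at 1 */
    if (semctl(semid, 0, SETVAL, arg) < 0)
        return -1;
    rc = semop(semid, &op, 1);          /* nsops = 1; semopm bounds nsops */
    semctl(semid, 0, IPC_RMID);         /* remove the semaphore set */
    return rc;
}
```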
The semmsl parameter specifies the maximum number of semaphores per semaphore identifier.
This parameter is specified in /var/sysgen/mtune/sem.
Increase this parameter if processes require more semaphores per semaphore identifier.
The semopm parameter specifies the maximum number of semaphore operations that can be executed per semop() system call. This parameter permits the system to check or modify the value of more than one semaphore in a set with each semop() system call.
This parameter is specified in /var/sysgen/mtune/sem.
Change this parameter to increase/decrease the number of operations permitted per semop() system call. You may need to increase this parameter if you increase semmsl (the maximum number of semaphores per semaphore identifier), so that a process can check/modify all the semaphores in a set with one system call.
The semume parameter specifies the maximum number of ''undo'' entries per undo structure. An undo structure, which is set up on a per-process basis, keeps track of process operations on semaphores so that the operations may be ''undone'' if the process terminates abnormally. Each undo entry represents a semaphore that has been modified with the UNDO flag specified in the semop(2) system call.
This parameter is specified in /var/sysgen/mtune/sem.
Change this parameter to increase/decrease the number of undo entries per structure. This parameter is related to the semopm parameter (the number of operations per semop(2) system call).
The semvmx parameter specifies the maximum value that a semaphore can have.
This parameter is specified in /var/sysgen/mtune/sem.
Decrease this parameter if you want to limit the maximum value for a semaphore.
The semaem parameter specifies the adjustment on exit for maximum value, also known as the semadj value. This value is used when a semaphore value becomes greater than or equal to the absolute value of semop(2), unless the program has set its own value.
This parameter is specified in /var/sysgen/mtune/sem.
Change this parameter to decrease the maximum value for the adjustment on exit value.
The following tunable parameters are associated with interprocess communication shared memory:
shmall specifies the maximum number of pages of shared memory that can be allocated at any given time to all processes combined.
shmmax specifies the maximum size of an individual shared memory segment.
shmmin specifies the minimum shared memory segment size.
shmmni specifies the maximum number of shared memory identifiers system-wide.
sshmseg specifies the maximum number of attached shared memory segments per process.
The shmall parameter specifies the maximum number of pages of shared memory that can be allocated at any given time to all processes on the system combined.
This parameter is specified in /var/sysgen/mtune/shm.
Keep this parameter as small as possible so that the use of shared memory does not cause unnecessary swapping.
Decrease this parameter to limit the amount of memory that can be used for shared memory segments. You may do this if swapping is occurring because large amounts of memory are being used for shared memory segments. But be aware that if an application requires a larger shared memory segment, that application will not run if this limit is set too low.
The shmmax parameter specifies the maximum size of an individual shared memory segment.
This parameter is specified in /var/sysgen/mtune/shm.
Keep this parameter small if it is necessary that a single shared memory segment not use too much memory.
The shmmin parameter specifies the minimum shared memory segment size.
This parameter is specified in /var/sysgen/mtune/shm.
Increase this parameter if you want an error message to be generated when a process requests a shared memory segment that is too small.
The shmmni parameter specifies the maximum number of shared memory identifiers system-wide.
This parameter is specified in /var/sysgen/mtune/shm.
Increase this parameter by one (1) for each additional shared memory segment that is required, and also if processes that use many shared memory segments reach the shmmni limit.
Decrease this parameter if you need to reduce the maximum number of shared memory segments of the system at any one time. Also decrease it to reduce the amount of kernel space taken for shared memory segments.
The sshmseg parameter specifies the maximum number of attached shared memory segments per process. A process must attach a shared memory segment before the data can be accessed.
This parameter is specified in /var/sysgen/mtune/shm.
Keep this parameter as small as possible to limit the amount of memory required to track the attached segments.
Increase this parameter if processes need to attach more than the default number of shared memory segments at one time.
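Each successful shmat(2) counts against the per-process sshmseg limit, and shmat fails with EMFILE once that limit is reached. The following sketch (standard System V calls; the function name is illustrative) attaches one segment at two addresses and shows that both mappings see the same data:

```c
#include <assert.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Sketch: attach one shared memory segment twice.  Both attaches
 * count toward sshmseg, and both mappings view the same bytes. */
int shmat_demo(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    char *a, *b;
    int ok;

    if (shmid < 0)
        return -1;
    a = (char *)shmat(shmid, NULL, 0);
    b = (char *)shmat(shmid, NULL, 0);   /* second attach, same segment */
    if (a == (char *)-1 || b == (char *)-1)
        return -1;
    strcpy(a, "shared");
    ok = (strcmp(b, "shared") == 0);     /* visible through both attaches */
    shmdt(a);
    shmdt(b);
    shmctl(shmid, IPC_RMID, NULL);       /* remove the segment */
    return ok ? 0 : -1;
}
```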
The following parameters are associated with STREAMS processing.
nstrpush maximum number of modules that can be pushed on a stream.
strctlsz maximum size of the ctl part of a message.
strmsgsz maximum stream message size.
nstrpush defines the maximum number of STREAMS modules that can be pushed on a single stream.
Change nstrpush from 9 to 10 modules when you need an extra module.
strctlsz is the maximum size of the ctl buffer of a STREAMS message. See the getmsg(2) or putmsg(2) reference pages for a discussion of the ctl and data parts of a STREAMS message.
Change strctlsz when you need a larger buffer for the ctl portion of a STREAMS message.
strmsgsz defines the maximum STREAMS message size. This is the maximum allowable size of the ctl part plus the data part of a message. Use this parameter in conjunction with the strctlsz parameter described above to set the size of the data buffer of a STREAMS message. See the getmsg(2) or putmsg(2) reference pages for a discussion of the ctl and data parts of a STREAMS message.
Change this parameter in conjunction with the strctlsz parameter to adjust the sizes of the STREAMS message as a whole and the data portion of the message.
The following signal parameters control the operation of interprocess signals within the kernel:
maxsigq specifies the maximum number of signals that can be queued.
The maxsigq parameter specifies the maximum number of signals that can be queued. Normally, multiple instances of the same signal result in only one signal being delivered. With the use of the SA_SIGINFO flag, outstanding signals of the same type are queued instead of being dropped.
Raise maxsigq when a process expects to receive a great number of signals and 64 queue places may be insufficient to avoid losing signals before they can be processed. Change maxsigq to a value appropriate to your needs.
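Queued delivery can be demonstrated with the POSIX realtime-signal interfaces (sigqueue and SA_SIGINFO are POSIX, not IRIX-specific; the function name is illustrative). Three instances of the same signal are queued while the signal is blocked, and all three are delivered when it is unblocked, instead of collapsing into one:

```c
#define _POSIX_C_SOURCE 199309L
#include <assert.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t count;

static void handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)info; (void)ctx;
    count++;
}

/* Sketch: with SA_SIGINFO, multiple outstanding signals of the same
 * type are queued rather than merged.  The queue depth is what a
 * parameter such as maxsigq bounds. */
int queued_signal_demo(void)
{
    struct sigaction sa;
    sigset_t set;
    int i;

    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;             /* request queued delivery */
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGRTMIN, &sa, NULL) < 0)
        return -1;

    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);   /* hold deliveries */
    for (i = 0; i < 3; i++)
        sigqueue(getpid(), SIGRTMIN, (union sigval){ .sival_int = i });
    sigprocmask(SIG_UNBLOCK, &set, NULL); /* deliver all three */
    return (int)count;
}
```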
One of the most important functions of the kernel is ``dispatching'' processes. When a user issues a command and a process is created, the kernel endows the process with certain characteristics. For example, the kernel gives the process a priority for receiving CPU time. This priority can be changed by the user who requested the process or by the Superuser. Also, the length of time (slice-size) that a process receives in the CPU is adjustable by a dispatch parameter. The Periodic Deadline Scheduler (PDS) is also part of the dispatch group. The deadline scheduler is invoked via the schedctl(2) system call in a user program and requires the inclusion of <sys/schedctl.h>. The following parameters are included in the dispatch group:
ndpri_hilim sets the highest non-degrading priority a user process may have.
ndpri_lolim sets the lowest non-degrading priority a user process may have.
runq_dl_refframe sets a limit on the amount of the reference frame that can be allocated.
runq_dl_nonpriv controls the amount of the reference frame that can be allocated by non-privileged user processes.
runq_dl_use specifies the longest interval that a deadline process may request.
slice-size specifies the amount of time a process receives at the CPU.
The ndpri_hilim parameter sets the highest non-degrading priority a user process may have.
Note that the higher the numeric value of ndpri_hilim, the lower the priority of the process.
Change this parameter when you want to limit user process priority in a busy system with many users or when you want to enable a high priority user process, for example, a real-time graphics application.
ndpri_lolim sets the lowest possible non-degrading priority for a user process. Note that lower priority values give a process higher scheduling priority.
The default value is adequate for most systems.
This parameter sets an absolute limit on the amount of the reference frame (set by the runq_dl_refframe parameter) that can be allocated under any circumstances.
If your deadline-scheduled processes require more scheduled CPU time, increase the value of runq_dl_maxuse and runq_dl_nonpriv.
This parameter controls the amount of the reference frame (set by the runq_dl_refframe parameter) that can be allocated by non-privileged user processes.
If your non-privileged deadline processes require more CPU time, increase this value.
This parameter specifies the longest interval that a deadline process may request.
If you change the values of runq_dl_nonpriv and runq_dl_use, you may need to change this parameter as well, to expand the reference frame in which runq_dl_nonpriv and runq_dl_use act.
slice-size is the default process time slice, expressed as a number of ticks of the system clock. The frequency of the system clock is expressed by the constant Hz, which has a value of 100. Thus each unit of slice-size corresponds to 10 milliseconds. When a process is given control of the CPU, the kernel lets it run for slice-size ticks. When the time slice expires or when the process voluntarily gives up the CPU (for example, by calling pause(2) or by doing some other system call that causes the process to wait), the kernel examines the run queue and selects the process with the highest priority that is eligible to run on that CPU.
The slice-size parameter is defined in /var/sysgen/mtune/disp by the following formula:
#define slice-size Hz / 30
int slice_size = slice-size;
Since slice_size is an integer, the resulting value is 3. This means that the default process time slice is 30 milliseconds.
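The integer arithmetic can be checked in a few lines (the helper name is illustrative):

```c
#include <assert.h>

/* The kernel clock runs at Hz = 100 ticks per second, so each tick
 * is 10 ms.  slice-size defaults to Hz / 30 in integer arithmetic. */
int default_slice_ms(void)
{
    const int Hz = 100;
    int slice_size = Hz / 30;   /* integer division: 3 ticks */
    return slice_size * 10;     /* 3 ticks * 10 ms/tick = 30 ms */
}
```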
If you use the system primarily for compute-intensive jobs and interactive response is not an important consideration, you can increase slice-size. For example, setting slice-size to 10 gives greater efficiency to the compute jobs, since each time they get control of the CPU, they are able to run for 100 milliseconds before getting switched out.
In situations where the system runs both compute jobs and interactive jobs, interactive response time will suffer as you increase the value of slice-size.
The IRIX Extent File System works closely with the operating system kernel, and the following parameters adjust the kernel's interface with the file system.
The following parameters are defined in the efs group:
efs_bmmax The number of efs bitmap buffers to keep cached.
The following parameters are set in the kernel parameter group, and are used for file system tuning. They determine how many pages of memory are clustered together into a single disk write operation. Adjusting these parameters can greatly increase file system performance. Available parameters include:
dwcluster number of delayed-write pages to cluster in each push.
autoup specifies the age, in seconds, that a buffer marked for delayed write must be before the bdflush daemon writes it to disk.
This parameter represents the number of efs bitmap buffers to keep privately cached at any given time.
It is not generally useful to change this parameter, but you may want to increase it over the default for systems with 10 gigabytes of disk space or more and a great deal of file system activity.
This parameter sets the maximum number of delayed-write pages to cluster in each push.
It should not be necessary to change this parameter. The automatically configured value is sufficient.
The autoup parameter specifies the age, in seconds, that a buffer marked for delayed write must be before the bdflush daemon writes it to disk. This parameter is specified in /var/sysgen/mtune. For more information, see the entry for the bdflushr kernel parameter.
This value is adequate for most systems.
IRIX 5.0 allows you to load and run device drivers while the system remains up and running. Occasionally, you may have to make adjustments to the running kernel to allow for the extra resources these loadable drivers require. The following parameters allow you to make the necessary adjustments:
bdevsw_extra specifies an extra number of entries in the block device switch.
cdevsw_extra specifies an extra number of entries in the character device switch.
fmodsw_extra specifies an extra number of entries in the streams module switch.
vfssw_extra specifies an extra number of entries in the virtual file system module switch.
This parameter specifies an extra number of entries in the block device switch. This parameter is for use by loadable drivers only. If you configured a block device into the system at lboot(1M) time, you will not need to add extra entries to bdevsw.
Change this parameter when you have more than 21 block devices to load dynamically into the system. IRIX provides 21 spaces in the bdevsw by default.
This parameter specifies an extra number of entries in the character device switch. This parameter is for use by loadable drivers only. If you configured a character device into the system at lboot(1M) time, you will not need to add extra entries to cdevsw.
Change this parameter when you have more than 23 character devices to load dynamically into the system. IRIX provides 23 spaces in the cdevsw by default.
This parameter specifies an extra number of entries in the streams module switch. This parameter is for use by loadable drivers only. If you configured a streams module into the system at lboot(1M) time, you will not need to add extra entries to fmodsw.
Change this parameter when you have more than 20 streams modules to load dynamically into the system. IRIX provides 20 spaces in the fmodsw by default.
This parameter specifies an extra number of entries in the vnode file system module switch. This parameter is for use by loadable drivers only. If you configured a vfs module into the system at lboot(1M) time, you will not need to add extra entries to vfssw.
Change this parameter when you have more than 5 virtual file system modules to load dynamically into the system. IRIX provides 5 spaces in the vfssw by default.
CPU actions parameters are used in multi-processor systems to allow the user to select the processor or processors that will be used to perform a given task.
The following parameters are defined:
nactions specifies the number of action blocks.
The nactions parameter controls the number of action blocks. An action block lets you queue a process to be run on a specific CPU. The value of the nactions parameter is found by the formula:
maxcpu + 60
Increase the value of nactions if you see the kernel error message:
PANIC: Ran out of action blocks
The following parameters are simple on/off switches within the kernel that allow or disallow certain features, such as whether shells that set the user ID to the superuser are allowed:
svr3pipe controls whether SVR3.2 or SVR4 pipes are used.
nosuidshells when set to 0, allows applications to create superuser-privileged shells. When set to any value other than 0, such shells are not permitted.
posix_tty_default if the value of this switch is 0, the default Silicon Graphics line disciplines are used. If the value is set to 1, POSIX line disciplines and settings are used.
resettable_clocal allows you to use either the default behavior or POSIX compliant behavior.
restricted_chown allows you to decide whether you want to use the BSD UNIX style chown(2) system call or the System V style.
force_old_dump When set to 1, forces the system to use old-style core dump formats, rather than the new IRIX 5 format.
use_old_serialnum When set to 1, forces the kernel to use the old method of calculating a 32-bit serial number for sysinfo -s. This variable affects only Onyx and Challenge L or XL systems.
Note that all the above listed parameters are enforced system-wide. It is not possible to select different values on a per-process basis.
This parameter, when set to 1, specifies SVR3.2 style pipes, which are unidirectional. When set to 0, SVR4 style pipes are specified, which are bidirectional.
Change this parameter if you wish to take advantage of SVR4 style pipes.
Some programs are written so that they perform actions that require superuser privilege. In order to perform these actions, they create a shell in which the user has superuser privilege. Such shells pose a certain manageable risk to system security, but application developers are generally careful to limit the actions taken by the application in these shells. The nosuidshells switch, when set to 0, allows these applications to create superuser-privileged shells. When set to any value other than 0, such shells are not permitted.
Change this switch to allow setuid shells.
IRIX uses a default system of line disciplines and settings for serial lines. These default settings are different from those specified by POSIX. If the value of this switch is 0, the default Silicon Graphics line disciplines are used. If the value is set to 1, POSIX line disciplines and settings are used.
Change this switch if you need to use POSIX line disciplines.
In the standard configuration, the CLOCAL environment variable on a tty is read-only, but under POSIX, CLOCAL can be reset. This switch allows you to use either the default behavior or POSIX compliant behavior.
Change this switch if you need POSIX compliance in your tty handling.
This switch allows you to decide whether you want to use a BSD UNIX style chown(2) system call or the System V style. Under the BSD version, only the Superuser can use the chown system call to ''give away'' a file to change the ownership to another user. Under the System V version, any user can give away a file or directory. If the value of the switch is 0, System V chown is enabled. If the value is not zero, BSD chown is enabled.
Change this switch to choose which behavior you prefer for the chown(2) system call.
When set to 1, this parameter forces the system to use old-style core dump formats, rather than the new IRIX 5 format.
This parameter is for use when the new form of compressed dump is inadequate.
When set to 1, this parameter forces the kernel to use the old method (before IRIX Version 5) of calculating a 32-bit serial number for sysinfo -s. This variable affects only Onyx and Challenge L or XL systems.
Change this parameter on your Challenge or Onyx system if you need to use some older software that requires a 32-bit serial number.
Timer parameters control the functioning of system clocks and timing facilities. The following parameters are defined:
fasthz sets the profiling/fast itimer clock speed.
itimer_on_clkcpu determines whether itimer requests are queued on the clock processor (1) or on the processor where the requesting process runs (0).
timetrim the system clock is adjusted every second by the signed number of nanoseconds specified by this parameter.
This parameter is used to set the profiling/fast itimer clock speed.
Change this parameter to give a finer or coarser grain for such system calls as gettimeofday(3B), getitimer(2), and setitimer(2).
This parameter is set to either 0 or 1. When it is set to 1, itimer requests are queued on the clock processor; when it is set to 0, they are queued on the processor on which the requesting process is running.
If a process uses the gettimeofday(2) call to compare the accuracy of the itimer delivery, then you should set this parameter to 1, to take advantage of the clock processor. If the itimer request is for the purpose of implementing a user frequency-based scheduler, then set this parameter to 0 to queue the requests on the current running processor.
The system clock is adjusted every second by the signed number of nanoseconds specified by this parameter. This adjustment is limited to 3 milliseconds or 0.3%. timed(1M) and timeslave(1M) periodically place suggested values in /var/adm/SYSLOG.
Change this parameter as suggested by timed and timeslave.
The following parameters control the kernel-level functions of the Network File System (NFS). Reducing these values is likely to cause significant performance decreases in your system:
nfs_portmon when set to 0, clients may use any available port; when set to 1, clients must use only privileged ports.
first_timeout sets the portmapper query timeout.
normal_timeout sets the lockd ping timeout.
working_timeout sets the lockd requests timeout.
svc_maxdupregs sets the number of cached NFS requests.
This parameter determines whether or not a client must use a ``privileged'' port for NFS requests. Only processes with superuser privilege may bind to a privileged port. The nfs_portmon parameter is binary. If it is set to 0, clients may use any available port. If it is set to 1, clients must use only privileged ports.
You should change this parameter only if it is absolutely necessary to maintain root privilege on your NFS mounted file systems and you have checked each NFS client to be sure that it requests a privileged port. If there are any clients requesting non-privileged ports, they will be unable to mount the file systems.
Additionally, changing the value of nfs_portmon to 1 can give a false sense of security. A process must have root privilege in order to bind to a privileged port, but a single ``insecure'' machine compromises the security of this privilege check.
This parameter determines the length of time before a portmapper request is retransmitted.
Change this value to 2 if your portmapper requests are consistently being timed out. Decreasing this value can seriously impede system performance.
This parameter determines the time before a ping request times out and is sent again.
Increase this value if your lockd ping requests are consistently being timed out. Decreasing this value can seriously impede system performance.
This parameter sets the time allowed before a lockd request times out and is sent again.
Increase this value if your lockd requests are consistently being timed out. Decreasing this value can seriously impede system performance.
This parameter sets the number of cached NFS requests.
This parameter should be adjusted to the service load so that there is likely to be a response entry when the first retransmission comes in.
Under UNIX domain sockets, there is a pair of buffers associated with each socket in use. There is a buffer on the receiving side of the socket, and on the sending side. The size of these buffers represent the maximum amount of data that can be queued. The behavior of these buffers differs depending on whether the socket is a streams socket or a data-gram socket.
On a streams socket, when the sender sends data, the data is queued in the receive buffer of the receiving process. If the receive buffer fills up, the data begins queueing in the sendspace buffer. If both buffers fill up, the socket blocks any further data from being sent.
Under data-gram sockets, when the receive buffer fills up, all further data-grams sent are discarded and the error EWOULDBLOCK is generated. Because of this behavior, the default receive buffer size for data-gram sockets is twice that of the send buffer.
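This behavior can be observed with a non-blocking sender on a socket pair (modern BSD-socket interfaces, not IRIX-specific; the function name is illustrative). Datagrams queue in the receive buffer until it fills, after which the next send fails with EWOULDBLOCK/EAGAIN rather than spilling into the send buffer:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: fill the receive buffer of a UNIX domain datagram socket.
 * The sender is non-blocking so the full buffer surfaces as
 * EWOULDBLOCK/EAGAIN.  Returns the number of datagrams queued. */
int dgram_fill_demo(void)
{
    int fds[2];
    char buf[1024] = { 0 };
    int sent = 0, err;

    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0)
        return -1;
    fcntl(fds[0], F_SETFL, O_NONBLOCK);      /* do not block on a full buffer */
    while (send(fds[0], buf, sizeof buf, 0) == (long)sizeof buf)
        sent++;                              /* queue until the buffer is full */
    err = errno;                             /* save errno before close() */
    close(fds[0]);
    close(fds[1]);
    return (err == EWOULDBLOCK || err == EAGAIN) ? sent : -1;
}
```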
The following parameters control UNIX domain sockets (UDS):
unpst_sendspace UNIX domain socket stream send buffer size.
unpst_recvspace UNIX domain socket stream receive buffer size.
unpdg_sendspace UNIX domain socket data-gram send buffer size.
unpdg_recvspace UNIX domain socket data-gram receive buffer size.
This parameter controls the default size of the send buffer on streams sockets.
It is generally recommended that you change the size of socket buffers individually, since changing this parameter changes the send buffer size on all streams sockets, using a tremendous amount of kernel memory. Also, increasing this parameter increases the amount of time necessary to wait for socket response, since all sockets will have more buffer space to read.
See the setsockopt(2) reference page for more information on setting specific socket options.
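A hedged sketch of the recommended per-socket approach (the function name is illustrative; note that some kernels round the granted buffer size up, so the value read back may exceed the request):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: enlarge the send buffer of one stream socket with
 * setsockopt() instead of raising unpst_sendspace for every
 * socket on the system. */
int set_sndbuf_demo(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int want = 65536, got = 0;
    socklen_t len = sizeof got;

    if (fd < 0)
        return -1;
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof want) < 0)
        return -1;
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &got, &len);
    close(fd);
    return got >= want ? 0 : -1;   /* granted at least what was asked */
}
```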
This parameter controls the default size of the receive buffer of streams sockets.
It is generally recommended that you change the size of socket buffers on an individual basis, since changing this parameter changes the receive buffer size on all streams sockets, using a tremendous amount of kernel memory. Also, increasing this parameter will increase the amount of time necessary to wait for socket response, since all sockets will have more buffer space to read.
See the setsockopt(2) reference page for more information on setting specific individual socket options.
This parameter controls the size of a data-gram that can be sent over a socket.
Data-gram sockets operate slightly differently from streams sockets. When a streams socket fills the receive buffer, data begins queueing in the send buffer, and when the send buffer also fills, further sends block. Data-gram sockets allow data-grams to fill the receive buffer, and when the receive buffer is full, all future data-grams are discarded and the error EWOULDBLOCK is generated. Therefore, the unpdg_sendspace parameter serves only to limit the size of a data-gram to not more than can be received by the receive buffer.
Note that the default data-gram socket receive buffers are twice the size of the default data-gram send buffers, thus allowing data-gram sockets to behave much like streams sockets.
It is generally recommended that you not change the size of this parameter without also changing the default receive buffer size for data-gram sockets. If you raise the value of this parameter (unpdg_sendspace) without raising the receive buffer size (unpdg_recvspace), you will allow the sending half of the socket to send more data than can be received by the receiving half. Also it is generally recommended that socket buffer sizes be set individually via the setsockopt(2) system call. See the setsockopt(2) reference page for more information on setting specific individual socket options.
This parameter controls the default size of data-gram socket receive buffers.
It is generally recommended that you change the size of socket buffers individually, since changing this parameter changes the receive buffer size on all data-gram sockets, using a tremendous amount of kernel memory. Also, increasing this parameter increases the amount of time necessary to wait for socket response, since all sockets will have more buffer space to read.
See the setsockopt(2) reference page for more information on setting specific individual socket options.
Copyright © 1997, Silicon Graphics, Inc. All Rights Reserved. Trademark Information