configuration information, the nodes to be managed, information about
how those nodes are grouped into partitions, and various scheduling
parameters associated with those partitions. This file should be
consistent across all nodes in the cluster.
.LP
The file location can be modified at execution time by setting the SLURM_CONF
environment variable. The Slurm daemons also allow you to override
both the built\-in and environment\-provided location using the "\-f"
option on the command line.
.LP
The contents of the file are case insensitive except for the names of nodes
and partitions. Any text following a "#" in the configuration file is treated
as a comment through the end of that line.
Changes to the configuration file take effect upon restart of
Slurm daemons, daemon receipt of the SIGHUP signal, or execution
of the command "scontrol reconfigure" unless otherwise noted.
Changes to TCP listening settings will require a daemon restart.
.LP
If a line begins with the word "Include" followed by whitespace
and then a file name, that file will be included inline with the current
configuration file. For large or complex systems, multiple configuration files
may prove easier to manage and enable reuse of some files (See INCLUDE
MODIFIERS for more details).
.LP
Note on file permissions:
.LP
The \fIslurm.conf\fR file must be readable by all users of Slurm, since it
is used by many of the Slurm commands. Other files that are defined
in the \fIslurm.conf\fR file, such as log files and job accounting files,
may need to be created/owned by the user "SlurmUser" to be successfully
accessed. Use the "chown" and "chmod" commands to set the ownership
and permissions appropriately.
See the section \fBFILE AND DIRECTORY PERMISSIONS\fR for information
about the various files and directories used by Slurm.

.SH "PARAMETERS"
.LP
The overall configuration parameters available include:

.TP
\fBAccountingStorageBackupHost\fR
The name of the backup machine hosting the accounting storage database.
If used with the accounting_storage/slurmdbd plugin, this is where the backup
slurmdbd would be running.
Only used with systems using SlurmDBD, ignored otherwise.
.IP

.TP
\fBAccountingStorageEnforce\fR
This controls what level of association\-based enforcement to impose
on job submissions. Valid options are any comma\-separated combination of the
following, many of which will implicitly include other options:
.IP
.RS
.TP
\fBlimits\fR
Users can be limited by association to whatever job size or run time limits are
defined. Implies \fBassociations\fR.
.IP

.TP
\fBnojobs\fR
Slurm will not account for any jobs or steps on the system.
Implies \fBnosteps\fR.
.IP

.TP
\fBnosteps\fR
Slurm will not account for any steps that have run.
.IP

.TP
\fBqos\fR
Jobs will not be scheduled unless a valid qos is specified.
Implies \fBassociations\fR.
.IP

.TP
\fBsafe\fR
A job will only be launched against an association or qos that has a
TRES\-minutes limit set if the job will be able to run to completion. Without
this option set, jobs will be launched as long as their usage hasn't reached
the TRES\-minutes limit. This can lead to jobs being launched but then killed
when the limit is reached. With this option, a job won't be killed due to limits,
even if the limits are changed after the job was started and the association or
qos violates the updated limits. Implies \fBlimits\fR and \fBassociations\fR.
.IP

.TP
\fBwckeys\fR
Jobs will not be scheduled unless a valid workload characterization key is
specified. Implies \fBassociations\fR and \fBTrackWCKey\fR (a separate
configuration option).
.RE
.IP
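
For example, a configuration requiring associations with enforced limits and
a valid QOS on every job submission could combine the options above:
.nf
.ft B
AccountingStorageEnforce=associations,limits,qos
.ft
.fi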

.TP
\fBAccountingStorageExternalHost\fR
A comma\-separated list of external slurmdbds (<host/ip>[:port][,...]) to
register with. If no port is given, the \fBAccountingStoragePort\fR will be
used.

This allows clusters registered with the external slurmdbd to communicate with
each other using the \fI\-\-cluster/\-M\fR client command options.

The cluster will add itself to the external slurmdbd if it doesn't exist.
.IP

.TP
\fBAccountingStorageParameters\fR
Comma\-separated list of key\-value pair parameters. Supported options
include:
.IP
.RS
.TP 2
\fBmax_step_records\fR=\#
The number of steps that are recorded in the database for each job -- excluding
batch, extern, and interactive steps.
.IP

.RE
.IP
The following comma\-separated list of key\-value options is used to establish
a secure connection to the database:
.IP
.RS
.TP 2
\fBSSL_CERT\fR
The path name of the client public key certificate file.
.IP

.TP
\fBSSL_CA\fR
The path name of the Certificate Authority (CA) certificate file.
.IP

.TP
\fBSSL_CAPATH\fR
The path name of the directory that contains trusted SSL CA certificate files.
.IP

.TP
\fBSSL_KEY\fR
The path name of the client private key file.
.IP

.TP
\fBSSL_CIPHER\fR
The list of permissible ciphers for SSL encryption.
.RE
.IP
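
For example, to enable a secure database connection (assuming these keys are
set through \fBAccountingStorageParameters\fR; the paths are illustrative,
not defaults):
.nf
.ft B
AccountingStorageParameters=SSL_CERT=/etc/slurm/client\-cert.pem,SSL_KEY=/etc/slurm/client\-key.pem,SSL_CA=/etc/slurm/ca\-cert.pem
.ft
.fi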

.TP
\fBAccountingStoragePass\fR
The password used to gain access to the database to store the
accounting data. Only used for database type storage plugins, ignored
otherwise. In the case of SlurmDBD (Database Daemon) with MUNGE
authentication this can be configured to use a MUNGE daemon
specifically configured to provide authentication between clusters
while the default MUNGE daemon provides authentication within a
cluster. In that case, \fBAccountingStoragePass\fR should specify the
named port to be used for communications with the alternate MUNGE
daemon (e.g. "/var/run/munge/global.socket.2"). The default value is
NULL.
.IP

.TP
\fBAccountingStorageTRES\fR
Comma\-separated list of resources you wish to track on the cluster.
These are the resources requested by the sbatch/srun job when it
is submitted. Currently this consists of any GRES, BB (burst buffer) or
license along with CPU, Memory, Node, Energy, FS/[Disk|Lustre], IC/OFED, Pages,
and VMem. By default Billing, CPU, Energy, Memory, Node, FS/Disk, Pages and VMem
are tracked. These default TRES cannot be disabled, but only appended to.
AccountingStorageTRES=gres/craynetwork,license/iop1
will track billing, cpu, energy, memory, nodes, fs/disk, pages and vmem along
with a gres called craynetwork as well as a license called iop1. Whenever these
resources are used on the cluster they are recorded. The TRES are automatically
set up in the database on the start of the slurmctld.

If multiple GRES of different types are tracked (e.g. GPUs of different types),
then job requests with matching type specifications will be recorded.
Given a configuration of
"AccountingStorageTRES=gres/gpu,gres/gpu:tesla,gres/gpu:volta"
Then "gres/gpu:tesla" and "gres/gpu:volta" will track only jobs that explicitly
request those two GPU types, while "gres/gpu" will track allocated GPUs of any
type ("tesla", "volta" or any other GPU type).

Given a configuration of
"AccountingStorageTRES=gres/gpu:tesla,gres/gpu:volta"
Then "gres/gpu:tesla" and "gres/gpu:volta" will track jobs that explicitly
request those GPU types.
If a job requests GPUs, but does not explicitly specify the GPU type, then
its resource allocation will be accounted for as either "gres/gpu:tesla" or
"gres/gpu:volta", although the accounting may not match the actual GPU type
allocated to the job and the GPUs allocated to the job could be heterogeneous.
In an environment containing various GPU types, use of a job_submit plugin
may be desired in order to force jobs to explicitly specify some GPU type.

\fBNOTE\fR: Setting gres/gpu will also set gres/gpumem and gres/gpuutil.
gres/gpumem and gres/gpuutil can be set individually when gres/gpu is not set.
.IP

.TP
\fBAccountingStorageType\fR
The accounting storage mechanism type. Unset by default, which indicates
that accounting records are not maintained.

Current options are:
.IP
.RS
.TP
\fBaccounting_storage/slurmdbd\fR
The accounting records will be written to the SlurmDBD, which manages an
underlying MySQL database. See "man slurmdbd" for more information.
.RE
.IP

.TP
\fBAccountingStoreFlags\fR
Comma\-separated list used to modify which fields the slurmctld sends to the
accounting storage database. Current options are:
.IP
.RS
.TP
\fBjob_env\fR
Include a batch job's environment variables used at job submission in the job
start message sent to the Accounting Storage database.
.IP

.TP
\fBjob_extra\fR
Include the job's extra field in the job complete message sent to the Accounting
Storage database.
.IP

.TP
\fBjob_script\fR
Include the job's batch script in the job start message sent to the Accounting Storage database.
.IP

.TP
\fBno_stdio\fR
Exclude the stdio paths when recording data into the database on a job or
step start. StdOut, StdErr and StdIn db fields for jobs and steps will be empty.
.RE
.IP

.TP
\fBAcctGatherNodeFreq\fR
The AcctGather plugins' sampling interval for node accounting.
For AcctGather plugin values of none, this parameter is ignored.
For all other values this parameter is the number
of seconds between node accounting samples. For the
acct_gather_energy/rapl plugin, set a value less
than 300 because the counters may overflow beyond this rate.
The default value is zero, which disables accounting sampling
for nodes. Note: The accounting sampling interval for jobs is
determined by the value of \fBJobAcctGatherFrequency\fR.
.IP

.TP
\fBAcctGatherEnergyType\fR
Identifies the plugin to be used for energy consumption accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
energy consumption data for jobs and nodes. The collection of energy
consumption data takes place on the node level, hence only in case of exclusive
job allocation the energy consumption measurements will reflect the job's
real consumption. In case of node sharing between jobs the reported consumed
energy per job (through sstat or sacct) will not reflect the real energy
consumed by the jobs. By default, no energy data is collected.

Configurable values at present are:
.IP
.RS
.TP 20
\fBacct_gather_energy/pm_counters\fR
Energy consumption data is collected from the Baseboard Management
Controller (BMC) for HPE Cray systems.
.IP

.TP
\fBacct_gather_energy/rapl\fR
Energy consumption data is collected from hardware sensors using the Running
Average Power Limit (RAPL) mechanism. Note that enabling RAPL may require the
execution of the command "sudo modprobe msr".
.IP

.TP
\fBacct_gather_energy/xcc\fR
Energy consumption data is collected from the Lenovo SD650 XClarity Controller
(XCC) using IPMI OEM raw commands.
.RE
.IP

.TP
\fBAcctGatherInterconnectType\fR
Identifies the plugin to be used for interconnect network traffic accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
network traffic data for jobs and nodes.
The collection of network traffic data takes place on the node level,
hence only in case of exclusive job allocation the collected values will
reflect the job's real traffic. In case of node sharing between jobs the reported
network traffic per job (through sstat or sacct) will not reflect the real
network traffic by the jobs.

Configurable values at present are:
.IP
.RS
.TP 20
\fBacct_gather_interconnect/ofed\fR
Infiniband network traffic data are collected from the hardware monitoring
counters of Infiniband devices through the OFED library.
In order to account for per job network traffic, add the "ic/ofed" TRES to
\fIAccountingStorageTRES\fR.
.IP

.TP
\fBacct_gather_interconnect/sysfs\fR
Network traffic statistics are collected from the Linux sysfs
pseudo\-filesystem for specific interfaces defined in
\fBacct_gather.conf\fR(5).
In order to account for per job network traffic, add the "ic/sysfs" TRES to
\fIAccountingStorageTRES\fR.
.RE
.IP

.TP
\fBAcctGatherFilesystemType\fR
Identifies the plugin to be used for filesystem traffic accounting.

Configurable values at present are:
.IP
.RS
.TP 20
\fBacct_gather_filesystem/lustre\fR
Lustre filesystem traffic data are collected from the counters found in
/proc/fs/lustre/.
In order to account for per job lustre traffic, add the "fs/lustre" TRES to
\fIAccountingStorageTRES\fR.
.RE
.IP

.TP
\fBAcctGatherProfileType\fR
Identifies the plugin to be used for detailed job profiling.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
detailed data such as I/O counts, memory usage, or energy consumption for jobs
and nodes. There are interfaces in this plugin to collect data as step start
and completion, task start and completion, and at the account gather
frequency. The data collected at the node level is related to jobs only in
case of exclusive job allocation.

Configurable values at present are:
.IP
.RS
.TP 20
\fBacct_gather_profile/hdf5\fR
This enables the HDF5 plugin. The directory where the profile files
are stored and which values are collected are configured in the
acct_gather.conf file.
.IP

.TP
\fBacct_gather_profile/influxdb\fR
This enables the influxdb plugin. The influxdb instance host, port, database,
retention policy and which values are collected are configured in the
acct_gather.conf file.
.RE
.IP

.TP
\fBAllowSpecResourcesUsage\fR
If set to "YES", Slurm allows individual jobs to override node's configured
CoreSpecCount value. For a job to take advantage of this feature,
a command line option of \-\-core\-spec must be specified. The default
value for this option is "YES" for Cray systems and "NO" for other system types.
.IP

.TP
\fBAuthAltTypes\fR
Comma\-separated list of alternative authentication plugins that the slurmctld
will permit for communication. Acceptable values at present include
\fBauth/jwt\fR.

\fBNOTE\fR: If \fBAuthAltParameters\fR is not used to specify a path to the
required jwt_hs256.key then slurmctld will default to looking for it in the
\fBStateSaveLocation\fR.
.IP

.TP
\fBAuthAltParameters\fR
Used to define alternative authentication plugins options. Multiple options
may be specified in a comma\-delimited list. Acceptable values at present
include:
.IP
.RS
.TP 15
\fBdisable_token_creation\fR
Disable "scontrol token" use by non\-SlurmUser accounts.
.TP
\fBmax_token_lifespan\fR=<seconds>
Set max lifespan (in seconds) for any token generated for user accounts. Limit
applies to all users except SlurmUser. Sites wishing to have per user limits
should generate tokens using JWT\-compatible tools, and/or an authenticating
proxy, instead of using \fIscontrol token\fR.
.IP

.TP
\fBjwks\fR=
Absolute path to JWKS file. Key should be owned by SlurmUser or root, must be
readable by SlurmUser, with suggested permissions of 0400. It must not be
writable by 'other'.
Only RS256 keys are supported, although other key types may be listed in the
file. If set, no HS256 key will be loaded by default (and token generation is
disabled), although the jwt_key setting may be used to explicitly re\-enable
HS256 key use (and token generation).
.IP

.TP
\fBjwt_key\fR=
Absolute path to JWT key file. Key must be HS256. Key should be owned by
SlurmUser or root, must be readable by SlurmUser, with suggested permissions of
0400. It must not be accessible by 'other'.
If not set, the default key file is jwt_hs256.key in \fIStateSaveLocation\fR.
.IP

.TP
\fBuserclaimfield\fR=
Use an alternative claim field in place of the Slurm UserName \fBsun\fR field.
This option is designed to allow compatibility with tokens generated outside
of Slurm. (This field may also be known as a grant.)
.br
Default: (disabled)
.RE
.IP
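
A minimal JWT setup sketch (the key path and lifespan shown are illustrative
site choices, not defaults):
.nf
.ft B
AuthAltTypes=auth/jwt
AuthAltParameters=jwt_key=/var/spool/slurmctld/jwt_hs256.key,max_token_lifespan=86400
.ft
.fi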

.TP
\fBAuthInfo\fR
Additional information to be used for authentication of communications
between the Slurm daemons (slurmctld and slurmd) and the Slurm
clients. The interpretation of this option is specific to the
configured \fBAuthType\fR.
Multiple options may be specified in a comma\-delimited list.
If not specified, the default authentication information will be used.
.IP
.RS
.TP 14
\fBcred_expire\fR
Default job step credential lifetime, in seconds (e.g. "cred_expire=1200").
The default value is 120 seconds.
.TP
\fBttl\fR
Credential lifetime, in seconds (e.g. "ttl=300").
The default value is dependent on the \fBAuthType\fR used.
For \fBauth/munge\fR, the default value is dependent upon the MUNGE
installation, but is typically 300 seconds. For \fBauth/slurm\fR, the default
value is 60 seconds. For \fBauth/jwt\fR, the default value is 1800 seconds.
.IP

.TP
\fBuse_client_ids\fR
Allow the \fBauth/slurm\fR plugin to authenticate users without relying on
the user information from LDAP or the operating system. When coupled with
nss_slurm, the user information can be managed on the compute nodes by
slurmstepd. This would allow the cluster to operate in an environment where
only the login nodes have access to LDAP/OS user information.
See <https://slurm.schedmd.com/nss_slurm.html> for more information.
.RE
.IP
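
For example, to shorten the credential lifetime under MUNGE (value
illustrative):
.nf
.ft B
AuthType=auth/munge
AuthInfo=ttl=120
.ft
.fi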

.TP
\fBAuthType\fR
The authentication method for communications between Slurm
components.
All Slurm daemons and commands must be terminated prior to changing
the value of \fBAuthType\fR and later restarted.
Changes to this value will interrupt outstanding job steps and prevent them
from completing.
Acceptable values at present:
.RS
.TP
\fBauth/munge\fR
Indicates that MUNGE is to be used (default).
(See "https://dun.github.io/munge/" for more information).
.IP

.TP
\fBauth/slurm\fR
Use Slurm's internal authentication plugin.
.RE
.IP

.TP
\fBBatchStartTimeout\fR
The maximum time (in seconds) that a batch job is permitted for
launching before being considered missing and releasing the
allocation. The default value is 10 (seconds). Larger values may be
required if more time is required to execute the \fBProlog\fR, load
user environment variables, or if the slurmd daemon gets paged from memory.
.IP

.TP
\fBBcastExclude\fR
Comma\-separated list of absolute directory paths to be excluded when
autodetecting and broadcasting executable shared object dependencies through
\fBsbcast\fR or \fBsrun \-\-bcast\fR. The keyword "\fInone\fR" can be used to
indicate that no directory paths should be excluded. The default value is
"\fI/lib,/usr/lib,/lib64,/usr/lib64\fR". This option can be overridden by
\fBsbcast \-\-exclude\fR and \fBsrun \-\-bcast\-exclude\fR.
.IP

.TP
\fBBcastParameters\fR
Controls sbcast and srun \-\-bcast behavior. Multiple options can be specified
in a comma separated list.
Supported values include:
.IP
.RS
.TP 15
\fBDestDir\fR=
Destination directory for file being broadcast to allocated compute nodes.
Default value is current working directory, or \-\-chdir for srun if set.
.IP

.TP
\fBCompression\fR=
Specify default file compression library to be used.
Supported values are "lz4" and "none".
The default value with the sbcast \-\-compress option is "lz4" and "none" otherwise.
Some compression libraries may be unavailable on some systems.
.IP

.TP
\fBsend_libs\fR
If set, attempt to autodetect and broadcast the executable's shared object
dependencies to allocated compute nodes. The files are placed in a directory
alongside the executable. For \fBsrun\fR only, the \fBLD_LIBRARY_PATH\fR is
automatically updated to include this cache directory as well.
This can be overridden with either \fBsbcast\fR or \fBsrun\fR
\fB\-\-send\-libs\fR option. By default this is disabled.
.RE
.IP
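
For example, to broadcast executables with their shared libraries and default
lz4 compression (destination directory illustrative):
.nf
.ft B
BcastParameters=DestDir=/tmp,Compression=lz4,send_libs
.ft
.fi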

.TP
\fBBurstBufferType\fR
The plugin used to manage burst buffers. Unset by default.
Acceptable values at present are:
.IP
.RS
.TP
\fBburst_buffer/datawarp\fR
Use Cray DataWarp API to provide burst buffer functionality.
.IP

.TP
\fBburst_buffer/lua\fR
This plugin provides hooks to an API that is defined by a Lua script. This
plugin was developed to provide system administrators with a way to do any
task at different points in a job's life cycle.
.RE
.IP

.TP
\fBCertgenParameters\fR
Used to define parameters for the certgen plugin.
.IP
.RS
.TP
\fBcertgen_script=\fR
Absolute path to executable script used to generate a self\-signed
certificate from the private key provided on stdin (see \fBkeygen_script\fR);
only the certificate PEM file should be printed to stdout. Must return 0 on
success, and non-zero on error.
.IP

.TP
\fBkeygen_script=\fR
Absolute path to executable script to generate private key used later to
generate a self-signed certificate. Only the private key PEM file should be
printed to stdout, which will be later sent as stdin to \fBcertgen_script\fR.
Must return 0 on success, and non-zero on error.
.RE
.IP

.TP
\fBCertgenType\fR
Specify the certgen plugin that will be used.
Acceptable values at present:
.IP
.RS
.TP
\fBcertgen/script\fR
Use built-in/configured scripts to generate certificate key pair.
.RE
.IP

.TP
\fBCertmgrParameters\fR
Used to define parameters for certmgr plugin.
.IP
.RS
.TP
\fBcertificate_renewal_period=\fR
slurmd/sackd will request a new signed certificate from slurmctld at this
specified interval (in minutes).

Default is 1440 minutes (once per day).
.IP

.TP
\fBgenerate_csr_script=\fR
Path to script used to generate certificate signing requests. The nodename is
passed in as an argument to the script. The script must print only the
certificate signing request PEM file to stdout, and return 0 on success. Must
return non-zero on error.

Required with certmgr/script. Only run by daemons requesting certificates.
.IP

.TP
\fBget_node_cert_key_script=\fR
Path to script used to get node's private key which was used to generate the
CSR returned by \fBgenerate_csr_script\fR. The nodename is passed in as an
argument to the script. The script must print only the private key PEM file
to stdout, and return 0 on success. Must return non-zero on error.

Required with certmgr/script. Only run by daemons requesting certificates.
.IP

.TP
\fBsign_csr_script=\fR
Path to script used to sign incoming certificate signing requests.
This script will only be called if \fBvalidate_node_script=\fR was
already called on the accompanying unique node token and returned with a
zero exit code (i.e. the token was validated).
The certificate signing request (as given by \fBgenerate_csr_script=\fR) is
passed as an argument to this script.
The script must print the new signed certificate to stdout, and return 0 on
success. Must return non-zero on error.

Required with certmgr/script. Only run by slurmctld.
.IP

.TP
\fBsingle_use_tokens\fR
Unique node tokens that are dynamically set (e.g. set via scontrol) will be
consumed upon successful certificate signing.
.IP

.TP
\fBvalidate_node_script=\fR
Path to script used to validate a unique node token.
The unique node token is passed as an argument to this script.
If the script finds the node token to be valid, return 0.
Otherwise, if the node token is invalid, return non-zero.

Required with certmgr/script. Only run by slurmctld.
.IP
.RE
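
A sketch of a script\-based certificate management configuration (all script
paths are site\-specific examples, not defaults):
.nf
.ft B
CertmgrType=certmgr/script
CertmgrParameters=generate_csr_script=/etc/slurm/tls/gen_csr.sh,get_node_cert_key_script=/etc/slurm/tls/get_key.sh,sign_csr_script=/etc/slurm/tls/sign_csr.sh,validate_node_script=/etc/slurm/tls/validate.sh
.ft
.fi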

.TP
\fBCertmgrType\fR
Plugin used to dynamically renew TLS certificates for slurmd/sackd.
.RS
.TP
\fBcertmgr/script\fR
Use script hooks to implement certificate management. See
\fBCertmgrParameters\fR for details on how to setup these scripts.
.IP
.RE

.TP
\fBCliFilterParameters\fR
Extra parameters for cli_filter plugins. Multiple options may be
comma\-separated.
.IP

.TP
\fBCliFilterPlugins\fR
A comma\-delimited list of command line interface option filter/modification
plugins. The specified plugins will be executed in the order listed.
No cli_filter plugins are used by default. Acceptable values at present are:
.IP
.RS
.TP
\fBcli_filter/lua\fR
This plugin allows you to write your own implementation of a cli_filter
using lua.
.IP

.TP
\fBcli_filter/syslog\fR
This plugin enables logging of job submission activities performed. All the
salloc/sbatch/srun options are logged to syslog together with environment
variables in JSON format. If the plugin is not the last one in the list it may
log values different than what was actually sent to slurmctld.
.IP

.TP
\fBcli_filter/user_defaults\fR
This plugin looks for the file $HOME/.slurm/defaults and reads every line of it
as a \fIkey\fR=\fIvalue\fR pair, where \fIkey\fR is any of the job submission
options available to salloc/sbatch/srun and \fIvalue\fR is a default value
defined by the user. For instance:
.nf
time=1:30
mem=2048
.fi
The above will result in a user defined default for each of their jobs of
"\-t 1:30" and "\-\-mem=2048".
.RE
.IP

.TP
\fBClusterName\fR
The name by which this Slurm managed cluster is known in the
accounting database. This is needed to distinguish accounting records
when multiple clusters report to the same database. Because of limitations
in some databases, any upper case letters in the name will be silently mapped
to lower case. In order to avoid confusion, it is recommended that the name
be lower case. The cluster name must be 40 characters or less in order to
comply with the limit on the maximum length for table names in MySQL/MariaDB.
.IP

.TP
\fBCommunicationParameters\fR
Comma\-separated options identifying communication options.
.IP
.RS
.TP 15
\fBblock_null_hash\fR
Require all Slurm authentication tokens to include a newer (20.11.9 and
21.08.8) payload that provides an additional layer of security against
credential manipulation.
.IP

.TP
\fBhost_unreach_retry_count\fR=\#
If a connection attempt fails with EHOSTUNREACH, this
is the number of times that Slurm will retry making that connection. Slurm will
wait for 500 milliseconds in between each try. The default for this parameter
is zero (Slurm will not retry if EHOSTUNREACH is returned).
.IP

.TP
\fBDisableIPv4\fR
Disable IPv4 only operation for all slurm daemons (except slurmdbd). This
should also be set in your \fBslurmdbd.conf\fR file.
.IP

.TP
\fBEnableIPv6\fR
Enable using IPv6 addresses for all slurm daemons (except slurmdbd). When
using both IPv4 and IPv6, address family preferences will be based on your
/etc/gai.conf file. This should also be set in your \fBslurmdbd.conf\fR file.
.IP

.TP
\fBgetnameinfo_cache_timeout\fR
When munge is used as AuthType slurmctld makes use of getnameinfo to obtain
the hostname from IP address stored in munge credential. This parameter controls
the number of seconds slurmctld should keep the IP to hostname resolution. When
set to 0 cache is disabled. The default value is 60.
.IP

.TP
\fBkeepaliveinterval\fR=\#
Specifies the interval, in seconds, between keepalive probes on idle
connections.
This affects connections between srun and its slurmstepd process as well as all
connections to the slurmdbd.
The default is to use the system default settings.
.IP

.TP
\fBkeepaliveprobes\fR=\#
Specifies the number of unacknowledged keepalive probes sent before considering
the connection broken.
This affects connections between srun and its slurmstepd process as well as all
connections to the slurmdbd.
The default is to use the system default settings.
.IP

.TP
\fBkeepalivetime\fR=\#
Specifies how long, in seconds,  before a connection is marked as needing a
keepalive probe as well as how long to delay closing a connection to process
messages still in the queue.
This affects connections between srun and its slurmstepd process as well as all
connections to the slurmdbd.
Longer values can be used to improve reliability of communications in the event
of network failures.
.IP

.TP
\fBNoInAddrAny\fR
Directly bind to the address that the node name resolves to, instead
of binding messages to any address on the node, which is the default.
This option is for all daemons/clients except for the slurmctld.
.RE
.IP
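
For example, to enable IPv6 alongside IPv4 and tighten keepalive probing
(values illustrative):
.nf
.ft B
CommunicationParameters=EnableIPv6,keepaliveinterval=30,keepalivetime=60
.ft
.fi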

.TP
\fBCompleteWait\fR
The time to wait, in seconds, when any job is in the COMPLETING state
before any additional jobs are scheduled. This is to attempt to keep jobs on
nodes that were recently in use, with the goal of preventing fragmentation.
If set to zero, pending jobs will be started as soon as possible.
Since a COMPLETING job's resources are released for use by other
jobs as soon as the \fBEpilog\fR completes on each individual node,
this can result in very fragmented resource allocations.
To provide jobs with the minimum response time, a value of zero is
recommended (no waiting).
To minimize fragmentation of resources, a value equal to \fBKillWait\fR
plus two is recommended.
In that case, setting \fBKillWait\fR to a small value may be beneficial.
The default value of \fBCompleteWait\fR is zero seconds.
The value may not exceed 65533.

\fBNOTE\fR: Setting \fBreduce_completing_frag\fR affects the behavior
of \fBCompleteWait\fR.
.IP

.TP
\fBCpuFreqDef\fR
Default CPU governor to use when running a job step if it has not been
explicitly set with the \-\-cpu\-freq option. Acceptable values at present
include one of the following governors:
.IP
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.IP

.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor
.IP

.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.IP

.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.IP
.RE
.IP

.TP
\fBCpuFreqGovernors\fR
List of CPU frequency governors allowed to be set with the salloc, sbatch,
or srun option \-\-cpu\-freq. Acceptable values at present include:
.IP
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.IP

.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (a default value)
.IP

.TP
\fBPerformance\fR
attempts to use the Performance CPU governor (a default value)
.IP

.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.IP

.TP
\fBSchedUtil\fR
attempts to use the SchedUtil CPU governor
.IP

.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor (a default value)
.IP
Default: OnDemand, Performance and UserSpace.
.RE
.IP
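
For example, to default job steps to the Performance governor:
.nf
.ft B
CpuFreqDef=Performance
.ft
.fi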

.TP
\fBCredType\fR
The cryptographic signature tool to be used in the creation of
job step credentials.
Acceptable values at present are:
.RS
.TP
\fBcred/munge\fR
Indicates that Munge is to be used (default).
.IP

.TP
\fBcred/slurm\fR
Use Slurm's internal credential format.
.RE
.IP

.TP
\fBDataParserParameters\fR=<\fIdata_parser\fR>
Apply default value for data_parser plugin parameters. See \fI\-\-json\fR or
\fI\-\-yaml\fR arguments in \fBsacct\fR(1), \fBscontrol\fR(1), \fBsinfo\fR(1),
and related commands for available values.
.IP

.TP
\fBDebugFlags\fR
Defines specific subsystems which should provide more detailed event logging.
Multiple subsystems can be specified with comma separators.
The flags may also be set with the \fBSLURM_DEBUG_FLAGS\fR
environment variable defined with the desired flags when the process (client
command, daemon, etc.) is started.
The environment variable takes precedence over the setting in the slurm.conf.

Valid subsystems available include:
.IP
.RS
.TP 17
\fBAccrue\fR
Accrue counters accounting details
.IP

.TP
\fBAgent\fR
RPC agents (outgoing RPCs from Slurm daemons)
.IP

.TP
\fBAuditRPCs\fR
For all inbound RPCs to slurmctld, print the originating address, authenticated
user, and RPC type before the connection is processed.
.IP

.TP
\fBAuditTLS\fR
Print TLS certificates being used
.IP

.TP
\fBBackfill\fR
Backfill scheduler details
.IP

.TP
\fBBackfillMap\fR
Backfill scheduler to log a very verbose map of reserved resources through
time. Combine with \fBBackfill\fR for a verbose and complete view of the
backfill scheduler's work.
.IP

.TP
\fBBurstBuffer\fR
Burst Buffer plugin
.IP

.TP
\fBCgroup\fR
Cgroup details
.IP

.TP
\fBConMgr\fR
Connection manager details.
.IP

.TP
\fBData\fR
Generic data structure details.
.IP

.TP
\fBDBD_Agent\fR
RPC agent (outgoing RPCs to the DBD)
.IP

.TP
\fBDependency\fR
Job dependency debug info
.IP

.TP
\fBElasticsearch\fR
Elasticsearch debug info (deprecated). Alias of \fBJobComp\fR.
.IP

.TP
\fBEnergy\fR
AcctGatherEnergy debug info
.IP

.TP
\fBFederation\fR
Federation scheduling debug info
.IP

.TP
\fBGres\fR
Generic resource details
.IP

.TP
\fBHetjob\fR
Heterogeneous job details
.IP

.TP
\fBGang\fR
Gang scheduling details
.IP

.TP
\fBGLOB_SILENCE\fR
Do not display the error message triggered by glob "*" symbols in conf files.
.IP

.TP
\fBJobAccountGather\fR
Common job account gathering details (not plugin specific).
.IP

.TP
\fBNamespace\fR
Namespace plugin details
.IP

.TP
\fBMetrics\fR
Metrics plugin details
.IP

.TP
\fBNetwork\fR
Network details. \fBWarning\fR: activating this flag may cause logging of
passwords, tokens or other authentication credentials.
.IP

.TP
\fBNetworkRaw\fR
Dump raw hex values of key Network communications. \fBWarning\fR: This flag
will cause very verbose logs and may cause logging of passwords, tokens or
other authentication credentials.
.IP

.TP
\fBNodeFeatures\fR
Node Features plugin debug info
.IP

.TP
\fBNO_CONF_HASH\fR
Do not log when the slurm.conf files differ between Slurm daemons
.IP

.TP
\fBPower\fR
Power management plugin and power save (suspend/resume programs) details
.IP

.TP
\fBPriority\fR
Job prioritization
.IP

.TP
\fBProfile\fR
AcctGatherProfile plugins details
.IP

.TP
\fBProtocol\fR
Communication protocol details
.IP

.TP
\fBSelectType\fR
Resource selection plugin
.IP

.TP
\fBSteps\fR
Slurmctld resource allocation for job steps
.IP

.TP
\fBSwitch\fR
Switch plugin
.IP

.TP
\fBTLS\fR
TLS plugin
.IP

.TP
\fBTraceJobs\fR
Trace jobs in slurmctld. It will print detailed job information
including state, job ids and allocated node counts.
.IP

.TP
\fBTriggers\fR
Slurmctld triggers
.RE
.IP
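
For example, to get a complete view of the backfill scheduler's work:
.nf
.ft B
DebugFlags=Backfill,BackfillMap
.ft
.fi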

.TP
\fBDefCpuPerGPU\fR
Default count of CPUs allocated per allocated GPU. This value is used only if
the job didn't specify \-\-cpus\-per\-task and \-\-cpus\-per\-gpu.
.IP

.TP
\fBDefMemPerCPU\fR
Default real memory size available per usable allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_tres\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerGPU\fR, \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are
mutually exclusive.


\fBNOTE\fR: This applies to \fBusable\fR allocated CPUs in a job allocation.
In this first example, all threads on each allocated core are usable, so the
allocated memory per cpu includes all threads:

.nf
.ft B
$ salloc \-n3 \-\-mem\-per\-cpu=100
salloc: Granted job allocation 17199
$ sacct \-j $SLURM_JOB_ID \-X \-o jobid%7,reqtres%35,alloctres%35
  JobID                             ReqTRES                           AllocTRES
\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
  17199     billing=3,cpu=3,mem=300M,node=1     billing=4,cpu=4,mem=400M,node=1
.ft
.fi

In this second example, because of \-\-threads\-per\-core=1, each
task is allocated an entire core but is only able to use one
thread per core. Allocated CPUs includes all threads on each
core. However, allocated memory per cpu includes only the
usable thread in each core.

.nf
.ft B
$ salloc \-n3 \-\-mem\-per\-cpu=100 \-\-threads\-per\-core=1
salloc: Granted job allocation 17200
$ sacct \-j $SLURM_JOB_ID \-X \-o jobid%7,reqtres%35,alloctres%35
  JobID                             ReqTRES                           AllocTRES
\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
  17200     billing=3,cpu=3,mem=300M,node=1     billing=6,cpu=6,mem=300M,node=1
.ft
.fi
.IP

.TP
\fBDefMemPerGPU\fR
Default real memory size available per allocated GPU in megabytes.
The default value is 0 (unlimited).
Please note a best effort attempt is made to predict which GPUs on the system
will be used, but this could change between job submission and start time,
causing \fBMaxMemPerNode\fR to potentially not work as expected for
heterogeneous jobs.
Also see \fBDefMemPerCPU\fR and \fBDefMemPerNode\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are
mutually exclusive.
.IP

.TP
\fBDefMemPerNode\fR
Default real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
The default value is 0 (unlimited).
.IP

.TP
\fBDependencyParameters\fR
Multiple options may be comma separated.
.IP
.RS
.TP
\fBdisable_remote_singleton\fR
By default, each cluster in the
federation must clear the singleton dependency before the job's singleton
dependency is considered satisfied. Enabling this option means that only the
origin cluster must clear the singleton dependency. This option must be set
in every cluster in the federation.
.IP

.TP
\fBkill_invalid_depend\fR
If a job has an invalid dependency and can never run, terminate it
and set its state to JOB_CANCELLED. By default the job stays pending
with reason DependencyNeverSatisfied.
.IP

.TP
\fBmax_depend_depth\fR=\#
Maximum number of jobs to test for a circular job dependency. Stop testing
after this number of job dependencies have been tested. The default value is
10 jobs.
.RE
.IP

.TP
\fBDisableRootJobs\fR
If set to "YES" then user root will be prevented from running any jobs.
The default value is "NO", meaning user root will be able to execute jobs.
\fBDisableRootJobs\fR may also be set by partition.
.IP

.TP
\fBEioTimeout\fR
The number of seconds srun waits for slurmstepd to close the TCP/IP
connection used to relay data between the user application and srun
when the user application terminates. The default value is 60 seconds.
May not exceed 65533.
.IP

.TP
\fBEnforcePartLimits\fR
Controls whether partition limits are enforced when a job is submitted to the
cluster. The partition limits being considered by this option are its
configured MaxMemPerCPU, MaxMemPerNode, MinNodes, MaxNodes, MaxTime, AllocNodes,
AllowAccounts, AllowGroups, AllowQOS, and QOS usage threshold. It also considers
if the job requests more nodes than exist in the partition. If set, then a
job and job QOS cannot be submitted that exceed partition limits.
.IP
.RS
.TP
\fBALL\fR
Jobs which exceed the number of nodes in a partition and/or any of its
configured limits will be rejected at submission time. If the job is submitted
to multiple partitions, the job must satisfy the limits on all the requested
partitions.
.IP

.TP
\fBNO\fR
Jobs exceeding partition limits will be accepted at submission time and
will remain queued until the
partition limits are altered. This is the default.
.RE
.IP

.TP
\fBEpilog\fR
Pathname of a script to execute as user root on every node when a user's job
completes (e.g. "/usr/local/slurm/epilog"). If it is not an absolute path name
(i.e. it does not start with a slash), it will be searched for in the same
directory as the slurm.conf file. A glob pattern (see \fBglob\fR(7)) may also
be used to run more than one epilog script (e.g. "/etc/slurm/epilog.d/*").
When more than one epilog script is configured, they are executed in reverse
alphabetical order (z-a -> Z-A -> 9-0). The Epilog script(s) may be used
to purge files, disable user login, etc.
By default there is no epilog.
See \fBProlog and Epilog Scripts\fR for more information.

\fBNOTE\fR: It is possible to configure multiple epilog scripts by including
this option on multiple lines.
.IP

.TP
\fBEpilogMsgTime\fR
The number of microseconds that the slurmctld daemon requires to process
an epilog completion message from the slurmd daemons. This parameter can
be used to prevent a burst of epilog completion messages from being sent
at the same time which should help prevent lost messages and improve
throughput for large jobs.
The default value is 2000 microseconds.
For a 1000 node job, this spreads the epilog completion messages out over
two seconds.
.IP

.TP
\fBEpilogSlurmctld\fR
Fully qualified pathname of a program for the slurmctld to execute
upon termination of a job allocation (e.g.
"/usr/local/slurm/epilog_controller").
The program executes as SlurmUser, which gives it permission to drain
nodes and requeue the job if a failure occurs (See scontrol(1)).
Exactly what the program does and how it accomplishes this is completely at
the discretion of the system administrator.
Information about the job being initiated, its allocated nodes, etc. are
passed to the program using environment variables.
See \fBProlog and Epilog Scripts\fR for more information.

\fBNOTE\fR: It is possible to configure multiple epilog scripts by including
this option on multiple lines.
.IP

.TP
\fBFairShareDampeningFactor\fR
Dampen the effect of exceeding a user or group's fair share of allocated
resources. Higher values provide greater ability to differentiate between
different degrees of overconsumption (e.g. a value of 1 results
in almost no difference between overconsumption by a factor of 10 and 100,
while a value of 5 will result in a significant difference in priority).
The default value is 1.
.IP

.TP
\fBFederationParameters\fR
Used to define federation options. Multiple options may be comma separated.
.IP
.RS
.TP
\fBfed_display\fR
If set, then the client status commands (e.g. squeue, sinfo, sprio, etc.) will
display information in a federated view by default. This option is functionally
equivalent to using the \-\-federation options on each command. Use the client's
\-\-local option to override the federated view and get a local view of the
given cluster.

Allow client commands to use the \-\-cluster option even when the \fBslurmdbd\fR
is down by retrieving cluster records from \fBslurmctld\fR instead.
.RE
.IP

.TP
\fBFirstJobId\fR
The job id to be used for the first job submitted to Slurm.
Job id values generated will be incremented by 1 for each subsequent job.
Value must be larger than 0. The default value is 1.
Also see \fBMaxJobId\fR.
.IP

.TP
\fBGresTypes\fR
A comma\-delimited list of generic resources to be managed (e.g.
\fIGresTypes=gpu,mps\fR).
These resources may have an associated GRES plugin of the same name providing
additional functionality.
No generic resources are managed by default.
Ensure this parameter is consistent across all nodes in the cluster for
proper operation.
.IP

.TP
\fBGroupUpdateForce\fR
If set to a non\-zero value, then information about which users are members
of groups allowed to use a partition will be updated periodically, even when
there have been no changes to the /etc/group file.
If set to zero, group member information will be updated only after the
/etc/group file is updated.
The default value is 1.
Also see the \fBGroupUpdateTime\fR parameter.
.IP

.TP
\fBGpuFreqDef\fR
Default GPU frequency to use when running a job step if it
has not been explicitly set using the \-\-gpu\-freq option.
This option can be used to independently configure the GPU and its memory
frequencies.
There is no default value. If unset, no attempt to change the GPU frequency
is made if the \-\-gpu\-freq option has not been set.
After the job is completed, the frequencies of all affected GPUs will be reset
to the highest possible values.
In some cases, system power caps may override the requested values.
The field \fItype\fR can be "memory".
If \fItype\fR is not specified, the GPU frequency is implied.
The \fIvalue\fR field can either be "low", "medium", "high", "highm1" or
a numeric value in megahertz (MHz).
If the specified numeric value is not possible, a value as close as
possible will be used.
See below for definition of the values.
Examples of use include "GpuFreqDef=medium,memory=high" and "GpuFreqDef=450".

Supported \fIvalue\fR definitions:
.IP
.RS
.TP 10
\fBlow\fR
the lowest available frequency.
.IP

.TP
\fBmedium\fR
attempts to set a frequency in the middle of the available range.
.IP

.TP
\fBhigh\fR
the highest available frequency.
.IP

.TP
\fBhighm1\fR
(high minus one) will select the next highest available frequency.
.RE
.IP

.TP
\fBHashPlugin\fR
Identifies the type of hash plugin to use for network communication.
Acceptable values include:

.IP
.RS
.TP 15
\fBhash/k12\fR
Hashes are generated by the KangarooTwelve cryptographic hash function.
This is the default.
.RE
.IP

.TP
\fBHealthCheckInterval\fR
The interval, in seconds, between executions of \fBHealthCheckProgram\fR.
The default value is zero, which disables execution.
.IP

.TP
\fBHealthCheckNodeState\fR
Identify what node states should execute the \fBHealthCheckProgram\fR.
Multiple state values may be specified with a comma separator.
The default value is ANY to execute on nodes in any state.
.IP
.RS
.TP 12
\fBALLOC\fR
Run on nodes in the ALLOC state (all CPUs allocated).
.IP

.TP
\fBANY\fR
Run on nodes in any state.
.IP

.TP
\fBCYCLE\fR
Rather than running the health check program on all nodes at the same time,
cycle through running on all compute nodes through the course of the
\fBHealthCheckInterval\fR. May be combined with the various node state
options.
.IP

.TP
\fBIDLE\fR
Run on nodes in the IDLE state.
.IP

.TP
\fBNONDRAINED_IDLE\fR
Run on nodes that are in the IDLE state and not DRAINED.
.IP

.TP
\fBMIXED\fR
Run on nodes in the MIXED state (some CPUs idle and other CPUs allocated).
.IP

.TP
\fBSTART_ONLY\fR
Run only at slurmd startup.
.RE
.IP
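
For example, to spread a periodic check across nodes in any state (the
program path is a site\-specific example, not a default):
.nf
.ft B
HealthCheckProgram=/usr/sbin/nhc
HealthCheckInterval=300
HealthCheckNodeState=ANY,CYCLE
.ft
.fi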

.TP
\fBHealthCheckProgram\fR
Fully qualified pathname of a script to execute as user root periodically
on all compute nodes, and when the slurmd daemon is first started before it
registers with the slurmctld daemon. If \fBHealthCheckNodeState\fR is
\fBSTART_ONLY\fR, it will be executed only when the slurmd daemon is first
started.
By default, no program will be executed.
.IP

.TP
\fBHttpParserType\fR
Specify the http_parser implementation that will be used. Default is
\fIhttp_parser/libhttp_parser\fR.
Acceptable values at present:
.IP
.RS
.TP
\fBhttp_parser/libhttp_parser\fR
Use the libhttp_parser based plugin.
.RE
.IP

.TP
\fBInactiveLimit\fR
The interval, in seconds, after which a non\-responsive job allocation
command (e.g. \fBsrun\fR or \fBsalloc\fR) will result in the job being
terminated. If the node on which the command is executed fails or the
command abnormally terminates, this will terminate its job allocation.
This option has no effect upon batch jobs.
When setting a value, take into consideration that a debugger using \fBsrun\fR
to launch an application may leave the \fBsrun\fR command in a stopped state
for extended periods of time.
This limit is ignored for jobs running in partitions with the
\fBRootOnly\fR flag set (the scheduler running as root will be
responsible for the job).
The default value is unlimited (zero) and may not exceed 65533 seconds.
.IP

.TP
\fBInteractiveStepOptions\fR
When LaunchParameters=use_interactive_step is enabled, launching salloc will
automatically start an srun process with InteractiveStepOptions to launch
a terminal on a node in the job allocation.
The default value is "\-\-interactive \-\-preserve\-env \-\-pty $SHELL".
The "\-\-interactive" option is intentionally not documented in the srun man
page. It is meant only to be used in \fBInteractiveStepOptions\fR in order to
create an "interactive step" that will not consume resources so that other
steps may run in parallel with the interactive step.
.IP

.TP
\fBJobAcctGatherType\fR
The JobAcctGather plugin collects memory, cpu, io, interconnect, energy and gpu
usage information at the task level, depending on which plugins are configured
in Slurm. This parameter will control how some of these metrics will be
collected. Acceptable values at present include:
.IP
.RS
.TP
\fBjobacct_gather/linux\fR
Collect cpu and memory statistics by reading procfs. The plugin will take all
the pids of the task and for each of them will read /proc/<pid>/stat. If UsePss
is set it will also read /proc/<pid>/smaps, and if NoShared is set it will also
read /proc/<pid>/statm (see \fBJobAcctGatherParams\fR for more information).

This plugin carries a performance penalty on jobs with a large number of spawned
processes since it needs to iterate over all the task pids and aggregate the
stats into one single metric for the ppid, and then these values need to be
aggregated to the task stats.
.RE
.IP

\fBNOTE\fR: Changing the plugin type when jobs are running in the cluster is
possible. The already running steps will keep using the previous plugin
mechanism, while new steps will use the new mechanism.
.IP

.TP
\fBJobAcctGatherFrequency\fR
The job accounting and profiling sampling intervals, specified for each data
type. Multiple comma\-separated \fB<datatype>=<interval>\fR intervals may be
specified. If an interval is provided without a datatype, it will be assigned
to the \fBtask\fR datatype. Supported datatypes are as follows:
.IP
.RS
.TP 12
Affects accounting and profiling:
.IP
.RS

.TP
\fBtask\fR=<\fIinterval\fR>
sampling interval in seconds for task usage by the jobacct_gather plugins and
for task profiling by the acct_gather_profile plugin.
Defaults to 30.
.br
.br
If this interval is 0 (disabled), accounting information is collected only at
job termination, which reduces Slurm
interference with the job, but also means that the statistics about a job
are only derived from a single sample and don't reflect the average or maximum
of several samples throughout the life of the job.
.IP
.RE

.TP
Affects profiling only:
.IP
.RS

.TP
\fBfilesystem\fR=<\fIinterval\fR>
sampling interval in seconds for filesystem profiling using the
acct_gather_filesystem plugin. Defaults to 0 (disabled).
.IP
.RE
.RE
.IP
Smaller (non\-zero) values have a greater impact upon job performance,
but a value of 30 seconds is not likely to be noticeable for
applications having fewer than 10,000 tasks.
.br
.br
Users can independently override each interval on a per job basis using the
\fB\-\-acctg\-freq\fR option when submitting the job.
.br
This value should be lower than or equal to \fBEnergyIPMIFreq\fR when using
the \fIacct_gather_energy/ipmi\fR or \fIxcc\fR plugins, as otherwise it will
unnecessarily
get repeated values on successive polls.
.IP
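
For example, to sample task usage every 30 seconds and filesystem usage every
60 seconds:
.nf
.ft B
JobAcctGatherFrequency=task=30,filesystem=60
.ft
.fi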

.TP
\fBJobAcctGatherParams\fR
Arbitrary parameters for the job account gather plugin.
Acceptable values at present include:
.IP
.RS
.TP 20
\fBDisableGPUAcct\fR
Do not do accounting of GPU usage and skip any gpu driver library call. This
parameter can help to improve performance if the GPU driver response is slow.
.IP

.TP
\fBno_file_cache\fR
Filesystem-backed memory (active_file and inactive_file) will be subtracted
from the reported memory. This disables the use of the memory.peak interface,
which can result in MaxRSS failing to record short memory spikes.
Only compatible with \fBcgroup/v2\fR plugin.
.IP

.TP
\fBNoShared\fR
Exclude shared memory from RSS. This option cannot be used with UsePss.
Only compatible with \fBjobacct_gather/linux\fR plugin.
.IP

.TP
\fBOverMemoryKill\fR
Kill processes that are detected to be using more memory than their step
requested, each time accounting information is gathered by the JobAcctGather
plugin.
This parameter should be used with caution because a job exceeding its memory
allocation may affect other processes and/or machine health.
.IP

.TP
\fBUsePss\fR
Use PSS value instead of RSS to calculate real usage of memory. The PSS value
will be saved as RSS. This option cannot be used with NoShared. Only compatible
with \fBjobacct_gather/linux\fR plugin.
.RE
.IP
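
For example, to account for shared pages proportionally while skipping GPU
driver calls:
.nf
.ft B
JobAcctGatherParams=UsePss,DisableGPUAcct
.ft
.fi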

.TP
\fBJobCompHost\fR
The name of the machine hosting the job completion database.
Only used for database type storage plugins, ignored otherwise.
.IP

.TP
\fBJobCompLoc\fR
This option sets a string which has different meanings depending on
\fBJobCompType\fR:
.IP
.RS
.TP
If \fBjobcomp/elasticsearch\fR:
Instructs this plugin to send the finished job records information to the
Elasticsearch server URL endpoint (including the port number and the target
index) configured in this option. This string should typically take the form
of \fI<host>:<port>/<target>/_doc\fR. There is no default value for
JobCompLoc when this plugin is enabled.

\fBNOTE\fR: Refer to <https://slurm.schedmd.com/elasticsearch.html> for more
information.
.IP

.TP
If \fBjobcomp/filetxt\fR:
Instructs this plugin to send the finished job records information to a file
configured in this option. This string should represent an absolute path to
a file. The default value for this plugin is \fI/var/log/slurm_jobcomp.log\fR.
.IP

.TP
If \fBjobcomp/kafka\fR:
When this plugin is configured, finished (and optionally start running) job
records information is sent to a Kafka server. The plugin makes use of
\fBlibrdkafka\fR. This string represents an absolute path to a file containing
key=value pairs configuring the library behavior. For the plugin to work
properly, this file needs to exist and at least the \fIbootstrap.servers\fR
\fBlibrdkafka\fR property needs to be configured in it. There is no default
value for JobCompLoc when this plugin is enabled.

\fBNOTE\fR: For a full list of \fBlibrdkafka\fR properties, please refer to
the library documentation. You can also view the jobcomp_kafka page for more
information.
.IP

.TP
If \fBjobcomp/mysql\fR:
Instructs this plugin to send the finished job records information to a database
name configured in this option. This string should represent a database name.
The default value for this plugin is \fIslurm_jobcomp_db\fR.
.IP

.TP
If \fBjobcomp/script\fR:
The finished job record information is made available via environment variables
and processed by a script with name configured by this option. This string
should represent a path to a script. There is no default value for JobCompLoc
when this plugin is enabled. It needs to be explicitly configured or the
plugin will fail to initialize.
.RE
.IP
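
For example, a minimal Elasticsearch job completion setup (endpoint
illustrative):
.nf
.ft B
JobCompType=jobcomp/elasticsearch
JobCompLoc=localhost:9200/slurm/_doc
.ft
.fi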

.TP
\fBJobCompParams\fR
Pass arbitrary text string to job completion plugin.
Also see \fBJobCompType\fR.
.RS
.IP

.TP
Optional comma-separated list for \fBjobcomp/elasticsearch\fR:
.RS
.IP

.TP
\fBsend_script\fR
Sends the job script as part of jobcomp messages.
.IP

.RE
.IP

.TP
Optional comma-separated list for \fBjobcomp/kafka\fR:
.RS
.IP

.TP
\fBenable_job_start\fR
Instruct the \fBjobcomp/kafka\fR plugin to send a subset of the job record
fields to the \fBtopic_job_start\fR Kafka topic when a job first starts running.

\fBNOTE\fR: Writing records when the job finishes (the historical purpose of
the plugin) is always enabled and can't be disabled.

\fBNOTE\fR: The subset of fields for job start events is slightly smaller than
those sent when the job finishes.
.IP

.TP
\fBpoll_interval\fR=<seconds>
Seconds between calls to \fBlibrdkafka\fR API poll function, which polls the
provided Kafka handle for events. The plugin spawns a separate thread to perform
this call at the configured interval.
Accepted values are [0,4294967295].
Defaults to 2 (seconds).
.IP

.TP
\fBrequeue_on_msg_timeout\fR
Instruct the delivery report callback to requeue messages that failed delivery
because their time waiting for successful delivery reached the \fBlibrdkafka\fR
property \fBmessage.timeout.ms\fR.
Defaults to not set (don't requeue and thus discard these messages).
.IP

.TP
\fBsend_script\fR
Sends the job script as part of jobcomp messages.
.IP

.TP
\fBtopic\fR=<string>
Target Kafka topic to send messages to when a job finishes.
Defaults to \fBClusterName\fR.
.IP

.TP
\fBtopic_job_start\fR=<string>
Target Kafka topic to send messages to when a job starts running.
Defaults to \fB<ClusterName>-job-start\fR.

\fBNOTE\fR: It is advisable that job start running event records be sent to a
different Kafka topic than the topic configured for job finish event records.
.RE
.IP

.TP
Optional comma-separated list for \fBjobcomp/mysql\fR:
.RS
.IP

.TP
\fBtoken_duration\fR
Duration in seconds to cache generated database passwords before requesting a
new one from the \fBJobCompPassScript\fR. Typically the token should refresh prior
to actual expiration; upon token generation failure the cached token will
continue to be used to avoid transient generation failures from causing
connection failures.
.IP
.RE
.RE
.IP

.TP
\fBJobCompPassScript\fR
Absolute path to an executable script that generates ephemeral authentication
tokens for database connections which are used instead of \fBJobCompPass\fR.
The script must output the password/token to stdout and exit with status 0 on
success. This allows dynamic password generation, instead of storing static
credentials in configuration files.
The script must be owned and executable by SlurmUser.
.IP
Environment variables provided to the script:
.RS
.TP
\fBSLURM_STORAGE_HOSTNAME\fR
Database hostname
.TP
\fBSLURM_STORAGE_PORT\fR
Database port number
.TP
\fBSLURM_STORAGE_USER\fR
Database username
.RE
.IP
Expected output format:
.br
\fBTOKEN=\fR\fI<authentication_token>\fR
.IP
The script must exit with status 0 on success, non-zero on failure.
Any output to stderr will be logged as an error. If there is a backup
host specified, the script will still be provided the main hostname and
the same token is used for both hosts.
.IP
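
A minimal sketch of such a script, assuming a hypothetical
\fIsite\-secrets\fR command that prints an ephemeral database token:
.nf
.ft B
#!/bin/sh
# Hypothetical example: fetch an ephemeral token from a site secrets
# store and emit it in the format expected by slurmctld.
tok=$(site\-secrets fetch slurm/jobcomp) || exit 1
echo "TOKEN=${tok}"
.ft
.fi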

.TP
\fBJobCompPort\fR
The listening port of the job completion database server.
Only used for database type storage plugins, ignored otherwise.
.IP

.TP
\fBJobCompType\fR
The job completion logging mechanism type. Unset by default.
Acceptable values at present include:
.IP
.RS
.TP
\fBjobcomp/elasticsearch\fR
Upon job completion, a record of the job should be written to an
Elasticsearch server, specified by the \fBJobCompLoc\fR parameter.
.br
\fBNOTE\fR: More information is available at the Slurm web site
.IP

.TP
\fBjobcomp/lua\fR
Upon job completion, a record of the job should be processed by the
\fIjobcomp.lua\fR script, located in the default script directory
(typically the subdirectory \fIetc\fR of the installation directory).
.IP

.TP
\fBjobcomp/mysql\fR
Upon job completion, a record of the job should be written to a MySQL
or MariaDB database, specified by the \fBJobCompLoc\fR parameter.
.IP

.TP
\fBjobcomp/script\fR
Upon job completion, a script specified by the \fBJobCompLoc\fR parameter is
to be executed with environment variables providing the job information.
.RE
.IP

.TP
\fBJobCompUser\fR
The user account for accessing the job completion database.
Only used for database type storage plugins, ignored otherwise.
.IP

.TP
\fBJobFileAppend\fR
This option controls what to do if a job's output or error file
exists when the job is started.
If \fBJobFileAppend\fR is set to a value of 1, then append to
the existing file.
By default, any existing file is truncated.
.IP

.TP
\fBJobRequeue\fR
This option controls the default ability for batch jobs to be requeued.
Jobs may be requeued explicitly by a system administrator, after node
failure, or upon preemption by a higher priority job.
If \fBJobRequeue\fR is set to a value of 1, then batch jobs may be requeued
unless explicitly disabled by the user.
If \fBJobRequeue\fR is set to a value of 0, then batch jobs will not be requeued
unless explicitly enabled by the user.
Use the \fBsbatch\fR \fI\-\-no\-requeue\fR or \fI\-\-requeue\fR
option to change the default behavior for individual jobs.
The default value is 1.
.IP

.TP
\fBJobSubmitPlugins\fR
A comma\-delimited list of job submission plugins to be used.
No job submission plugins are used by default.
Acceptable values at present include:
.IP
.RS
.TP 24
\fBall_partitions\fR
Set default partition to all partitions on the cluster.
.IP

.TP
\fBdefaults\fR
Set default values for job submission or modify requests.
.IP

.TP
\fBlogging\fR
Log select job submission and modification parameters.
.IP

.TP
\fBlua\fR
Execute a Lua script implementing a site's own job_submit logic. Only one Lua
script will be executed. It must be named "job_submit.lua" and must be located
in the default configuration directory (typically the subdirectory "etc" of the
installation directory). Sample Lua scripts can be found with the Slurm
distribution, in the directory contribs/lua. Slurmctld will fatal on startup if
the configured lua script is invalid. Slurm will try to load the script for each
job submission. If the script is broken or removed while slurmctld is running,
Slurm will fallback to the previous working version of the script.
\fBWarning\fR: slurmctld runs this script while holding internal locks, and
only a single copy of this script can run at a time. This blocks most
concurrency in slurmctld. Therefore, this script should run to completion as
quickly as possible.
.IP

.TP
\fBpartition\fR
Set a job's default partition based upon job submission parameters and
available partitions.
.IP

.TP
\fBpbs\fR
Translate PBS job submission options to Slurm equivalent (if possible).
.IP

.TP
\fBrequire_timelimit\fR
Force job submissions to specify a timelimit.
.RE
.IP

\fBNOTE\fR: For examples of use see the Slurm code in "src/plugins/job_submit"
and "contribs/lua/job_submit*.lua" then modify the code to satisfy your needs.
.IP
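
For example, to run a site's Lua filter and require explicit time limits:
.nf
.ft B
JobSubmitPlugins=lua,require_timelimit
.ft
.fi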

.TP
\fBKillWait\fR
The interval, in seconds, given to a job's processes between the SIGTERM and
SIGKILL signals upon reaching its time limit.
If the job fails to terminate gracefully in the interval specified,
it will be forcibly terminated.
The default value is 30 seconds.
The value may not exceed 65533.
.IP

.TP
\fBMaxBatchRequeue\fR
Maximum number of times a batch job may be automatically requeued before
being marked as JobHeldAdmin. (Mainly useful when the \fBSchedulerParameters\fR
option \fBnohold_on_prolog_fail\fR is enabled.)
The default value is 5.
.IP

.TP
\fBNodeFeaturesPlugins\fR
Identifies the plugins to be used for support of node features which can
change through time. For example, a node might be booted with various
BIOS settings. This is supported through the use of a node's active_features
and available_features information.
Acceptable values at present include:
.IP
.RS
.TP
\fBnode_features/helpers\fR
Used to report and modify features on nodes using arbitrary scripts or
programs.
See helpers.conf man page for more information:
https://slurm.schedmd.com/helpers.conf.html
.RE
.IP

.TP
\fBLaunchParameters\fR
Identifies options to the job launch plugin.
Acceptable values include:
.IP
.RS
.TP 24
\fBbatch_step_set_cpu_freq\fR
Set the CPU frequency for the batch step from the given \-\-cpu\-freq option,
or the slurm.conf CpuFreqDef value. By default only steps started with srun
will utilize the cpu frequency setting options.

\fBNOTE\fR: If you are using srun to launch your steps inside a batch script
(advised), this option will create a situation where you may have multiple
agents setting the cpu_freq, as the batch step usually runs on the same
resources as one or more steps that the sruns in the script will create.
.IP

.TP 24
\fBenable_nss_slurm\fR
Permits passwd and group resolution for a job to be serviced by slurmstepd rather
than requiring a lookup from a network based service. See
https://slurm.schedmd.com/nss_slurm.html for more information.
.IP

.TP 24
\fBlustre_no_flush\fR
If set on a Cray XC cluster, then do not flush the Lustre cache on job step
completion. This setting will only take effect after reconfiguring, and will
only take effect for newly launched jobs.
.IP

.TP 24
\fBmem_sort\fR
Sort NUMA memory at step start. Users can override this default with the
SLURM_MEM_BIND environment variable or the \-\-mem\-bind=nosort command line
option.
.IP

.TP
\fBmpir_use_nodeaddr\fR
When launching tasks, Slurm creates entries in MPIR_proctable that are used by
parallel debuggers, profilers, and related tools to attach to running
processes.
By default the MPIR_proctable entries contain MPIR_procdesc structures where
the host_name is set to NodeName by default. If this option is specified,
NodeAddr will be used in this context instead.
.IP

.TP
\fBdisable_send_gids\fR
By default, the slurmctld will look up and send the user_name and extended gids
for a job, rather than having each node look them up independently as part of
each task launch.
This helps mitigate issues around name service scalability when launching jobs
involving many nodes. Using this option will disable this functionality. This
option is ignored if enable_nss_slurm is specified.
.IP

.TP 24
\fBslurmstepd_memlock\fR
Lock the slurmstepd process's current memory in RAM.
.IP

.TP
\fBslurmstepd_memlock_all\fR
Lock the slurmstepd process's current and future memory in RAM.
.IP

.TP
\fBtest_exec\fR
Have srun verify existence of the executable program along with user
execute permission on the node where srun was called before attempting to
launch it on nodes in the step.
.IP

.TP
\fBulimit_pam_adopt\fR
When enabled, in processes adopted by the external step,
RLIMIT_RSS is set, as is done for tasks running in regular steps.
.RE
.IP
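For example, to sort NUMA memory at step start and verify executables before
launch (an illustrative combination of the options above):
.nf
LaunchParameters=mem_sort,test_exec
.fi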

.TP
\fBLicenses\fR
Specification of licenses (or other resources available on all
nodes of the cluster) which can be allocated to jobs.
License names can optionally be followed by a colon
and count with a default count of one.
Multiple license names should be comma separated (e.g.
"Licenses=foo:4,bar").
Note that Slurm prevents jobs from being scheduled if their
required license specification is not available.
Slurm does not prevent jobs from using licenses that are
not explicitly listed in the job submission specification.
.IP

.TP
\fBLogTimeFormat\fR
Format of the timestamp in slurmctld and slurmd log files. Accepted
format values include "iso8601", "iso8601_ms", "rfc5424", "rfc5424_ms",
"rfc3339", "clock", "short" and "thread_id". The values ending in "_ms" differ
from the ones without in that fractional seconds with millisecond precision are
printed. The default value is "iso8601_ms". The "rfc5424" formats are the same
as the "iso8601" formats except that the timezone value is also shown.
The "clock" format shows a timestamp in microseconds retrieved
with the C standard clock() function. The "short" format is a short
date and time format. The "thread_id" format shows the timestamp
in the C standard ctime() function form without the year but
including the microseconds, the daemon's process ID and the current thread name
and ID.
.IP

.TP
\fBMailDomain\fR
Domain name to qualify usernames if email address is not explicitly given
with the "\-\-mail\-user" option. If unset, the local MTA will need to qualify
local address itself. Changes to MailDomain will only affect new jobs.
.IP

.TP
\fBMailProg\fR
Fully qualified pathname to the program used to send email per user request.
The default value is "/bin/mail" (or "/usr/bin/mail" if "/bin/mail" does not
exist but "/usr/bin/mail" does exist).
The program is called with arguments suitable for the default mail command,
however additional information about the job is passed in the form of
environment variables.

Additional variables are the same as those passed to \fIPrologSlurmctld\fR and
\fIEpilogSlurmctld\fR, with additional variables in the following contexts:
.IP
.RS
.TP
\fBALL\fR
.IP
.RS
.TP
\fBSLURM_JOB_MAIL_TYPE\fR
The mail type triggering the mail.
.RE
.RE
.IP
.RS
.TP
\fBBEGIN\fR
.IP
.RS
.TP
\fBSLURM_JOB_QUEUED_TIME\fR
The amount of time the job was queued.
.RE
.RE
.IP
.RS
.TP
\fBEND, FAIL, REQUEUE, TIME_LIMIT_*\fR
.IP
.RS
.TP
\fBSLURM_JOB_RUN_TIME\fR
The amount of time the job ran for.
.RE
.RE
.IP
.RS
.TP
\fBEND, FAIL\fR
.IP
.RS
.TP
\fBSLURM_JOB_EXIT_CODE_MAX\fR
Job's exit code or highest exit code for an array job.
.RE
.IP
.RS
.TP
\fBSLURM_JOB_EXIT_CODE_MIN\fR
Job's minimum exit code for an array job.
.RE
.IP
.RS
.TP
\fBSLURM_JOB_TERM_SIGNAL_MAX\fR
Job's highest signal for an array job.
.RE
.RE
.IP
.RS
.TP
\fBSTAGE_OUT\fR
.IP
.RS
.TP
\fBSLURM_JOB_STAGE_OUT_TIME\fR
The amount of time the job's stage out took.
.RE
.RE
.IP

.TP
\fBMaxArraySize\fR
The maximum job array size. The maximum job array task index value will be one
less than \fBMaxArraySize\fR to allow for an index value of zero.
The value may not exceed 4000001.
The value of \fBMaxJobCount\fR should be much larger than \fBMaxArraySize\fR,
since each job array task still counts as a separate job (see \fBMaxJobCount\fR
for further details).
The default value is 1001.
See also \fBmax_array_tasks\fR in SchedulerParameters.
.IP

.TP
\fBMaxDBDMsgs\fR
When communication to the SlurmDBD is not possible, the slurmctld will queue
messages meant to be processed when the SlurmDBD is available again.
In order to avoid running out of memory, the slurmctld will only queue so many
messages. The default value is 10000, or \fBMaxJobCount\fR * 2 + Node Count
* 4, whichever is greater. The value cannot be less than 10000.
.IP

.TP
\fBMaxJobCount\fR
The maximum number of jobs slurmctld can have in memory at one time.
Combine with \fBMinJobAge\fR to ensure the slurmctld daemon does not exhaust
its memory or other resources. Once this limit is reached, requests to submit
additional jobs will fail. The default value is 10000 jobs.
\fBNOTE\fR: Each task of a job array counts as one job even though they will not
occupy separate job records until modified or initiated.
Performance can suffer with more than a few hundred thousand jobs.
Setting MaxSubmitJobs per user is generally valuable to prevent a single
user from filling the system with jobs.
This is accomplished using Slurm's database and configuring enforcement of
resource limits.
.IP

.TP
\fBMaxJobId\fR
The maximum job id to be used for jobs submitted to Slurm without a specific
requested value. Job ids are unsigned 32bit integers with the first 26 bits
reserved for local job ids and the remaining 6 bits reserved for a cluster id
to identify a federated job's origin. The maximum allowed local job id is
67,108,863 (0x3FFFFFF). The default value is 67,043,328 (0x03ff0000).
\fBMaxJobId\fR only applies to the local job id and not the federated job id.
Job id values generated will be incremented by 1 for each subsequent job. Once
\fBMaxJobId\fR is reached, the next job will be assigned \fBFirstJobId\fR.
Federated jobs will always have a job ID of 67,108,865 or higher.
Also see \fBFirstJobId\fR.
.IP

.TP
\fBMaxMemPerCPU\fR
Maximum real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_tres\fR).
.IP

.TP
\fBMaxMemPerNode\fR
Maximum real memory size available per allocated node in a job allocation in
megabytes. Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
The default value is 0 (unlimited).
Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive.
.IP
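For example, to limit jobs to 4 GB of real memory per allocated CPU
(an illustrative value):
.nf
MaxMemPerCPU=4096
.fi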

.TP
\fBMaxNodeCount\fR
Maximum count of nodes which may exist in the controller. By default MaxNodeCount
will be set to the number of nodes found in the slurm.conf. MaxNodeCount will
be ignored if less than the number of nodes found in the
slurm.conf. The total number of nodes in a system cannot exceed 65536. Increase
MaxNodeCount to accommodate dynamically created nodes with dynamic node
registrations and nodes created with scontrol.
.IP

.TP
\fBMaxStepCount\fR
The maximum number of steps that any job can initiate. This parameter
is intended to limit the effect of bad batch scripts.
The default value is 40000 steps.
.IP

.TP
\fBMaxTasksPerNode\fR
Maximum number of tasks Slurm will allow a job step to spawn
on a single node. The default \fBMaxTasksPerNode\fR is 512.
May not exceed 65533.
.IP

.TP
\fBMCSParameters\fR
MCS = Multi\-Category Security
MCS Plugin Parameters.
The supported parameters are specific to the \fBMCSPlugin\fR.
Changes to this value take effect when the Slurm daemons are reconfigured.
More information about MCS is available at
<https://slurm.schedmd.com/mcs.html>.
.IP

.TP
\fBPreemptMode\fR
Mechanism used to preempt jobs or enable gang scheduling. When the
\fBPreemptType\fR parameter is set to enable preemption, the
\fBPreemptMode\fR selects the default mechanism used to preempt the eligible
jobs for the cluster.
.br
\fBPreemptMode\fR may be specified on a per partition basis to override this
default value if \fBPreemptType=preempt/partition_prio\fR. Alternatively, it
can be specified on a per QOS basis if \fBPreemptType=preempt/qos\fR. In either
case, a valid default \fBPreemptMode\fR value must be specified for the
cluster as a whole when preemption is enabled.
.br
The \fBGANG\fR option is used to enable gang scheduling independent of
whether preemption is enabled (i.e. independent of the \fBPreemptType\fR
setting). It can be specified in addition to a \fBPreemptMode\fR setting with
the two options comma separated (e.g. \fBPreemptMode=SUSPEND,GANG\fR).
.br
See <https://slurm.schedmd.com/preempt.html> for more details.
.IP

.TP
\fBSlurmctldParameters\fR
Multiple options may be comma separated.
Acceptable values include:
.IP
.RS
.TP
\fBconmgr_use_poll\fR
Use \fIpoll\fR(2) instead of \fIepoll\fR(7) for monitoring file descriptors.
.IP

.TP
\fBconmgr_connect_timeout\fR=\fI<seconds>\fR
Wait \fI<seconds>\fR before considering an outbound connection attempt to be
timed out. Defaults to the value of \fBMessageTimeout\fR.
.IP

.TP
\fBconmgr_read_timeout\fR=\fI<seconds>\fR
Wait \fI<seconds>\fR before considering a read from a file descriptor to be
timed out. Defaults to the value of \fBMessageTimeout\fR.
.IP

.TP
\fBconmgr_quiesce_timeout\fR=\fI<seconds>\fR
Wait \fI<seconds>\fR before considering quiesce to be timed out. Upon timeout,
all (non-listening) active connections will be closed to allow the quiesce to
start. Defaults to two times the value of \fBMessageTimeout\fR.
.IP

.TP
\fBconmgr_wait_write_delay\fR=\fI<seconds>\fR
When waiting for the kernel to flush an outgoing buffer, poll the kernel every
\fI<seconds>\fR for changes. Defaults to the value of \fBMessageTimeout\fR.
.IP

.TP
\fBconmgr_write_timeout\fR=\fI<seconds>\fR
Wait \fI<seconds>\fR before considering a write to a file descriptor to be
timed out. Defaults to the value of \fBMessageTimeout\fR.
.IP

.TP
\fBenable_async_reply\fR
Enable \fBslurmctld\fR to reply to incoming (supported) RPCs asynchronously
without blocking a thread in the conmgr thread pool.
.IP

.TP
\fBenable_configless\fR
Permit "configless" operation by the slurmd, slurmstepd, and user commands.
When enabled the slurmd will be permitted to retrieve config files and
\fBProlog\fR, \fBEpilog\fR, \fBTaskProlog\fR, and \fBTaskEpilog\fR scripts from
the slurmctld, and on any 'scontrol reconfigure' command new configs and scripts
will be automatically pushed out and applied to nodes that are running in this
"configless" mode. See https://slurm.schedmd.com/configless_slurm.html for more
details.

\fBNOTE\fR: Included files with the \fBInclude\fR directive will only be pushed
if the filename has no path separators and is located adjacent to slurm.conf.

\fBNOTE\fR: \fBProlog\fR and \fBEpilog\fR scripts will only be pushed if the
filenames have no path separators and are located adjacent to slurm.conf.
Glob patterns (see \fBglob\fR(7)) are not supported.
.IP

.TP
\fBenable_expedited_requeue\fR
Allow jobs to request an expedited requeue on certain events. An expedited
requeue ensures that the job is immediately eligible to run and gets placed at
the top of the queue.
.IP

.TP
\fBidle_on_node_suspend\fR
Mark nodes as idle, regardless of current state, when suspending nodes with
\fBSuspendProgram\fR so that nodes will be eligible to be resumed at a later
time.
.IP

.TP
\fBnode_reg_mem_percent\fR=\#
Percentage of memory a node is allowed to register with without being marked as
invalid with low memory. Default is 100. For State=CLOUD nodes, the default is
90. To disable this for cloud nodes set it to 100. \fIconfig_overrides\fR takes
precedence over this option.

It's recommended that \fItask/cgroup\fR with \fIConstrainRamSpace\fR be
configured. A memory cgroup limit won't be set higher than the actual memory on
the node. If needed, configure \fIAllowedRamSpace\fR in the cgroup.conf to add
a buffer.
.IP

.TP
\fBpower_save_min_interval\fR=\#
How often the power_save thread, at a minimum, looks to resume and suspend
nodes. Default is 0.
.IP

.TP
\fBmax_powered_nodes\fR
The max number of powered up nodes across the cluster. Once this is reached,
jobs requesting additional nodes will not start, and "scontrol power up"
requests that would exceed this count will fail.
.RE
.IP

.TP
\fBSlurmdParameters\fR
Parameters specific to the slurmd daemon.
Multiple options may be comma separated.
Acceptable values include:
.IP
.RS
.TP
\fBnuma_node_as_socket\fR
Use NUMA nodes as sockets instead of physical sockets, so that the
number of NUMA nodes per socket may be configured.
Mutually exclusive with l3cache_as_socket.
Requires hwloc v2.
.IP

.TP
\fBshutdown_on_reboot\fR
If set, the Slurmd will shut itself down when a reboot request is received.
.IP

.TP
\fBcontain_spank\fR
If set and a job_container plugin is specified, the spank_user(),
spank_task_post_fork() and spank_task_exit() calls will be run inside the job
container.
.RE
.IP

.TP
\fBSlurmdPidFile\fR
Fully qualified pathname of a file into which the \fBslurmd\fR daemon may write
its process id. This may be used for automated signal processing.
The first "%h" within the name is replaced with the hostname on which the
\fBslurmd\fR is running.
The first "%n" within the name is replaced with the Slurm node name on which the
\fBslurmd\fR is running.
The default value is "/var/run/slurmd.pid".
.IP

.TP
\fBSlurmdPort\fR
The port number that the Slurm compute node daemon, \fBslurmd\fR, listens
to for work. The default value is SLURMD_PORT as established at system
build time. If none is explicitly specified, its value will be 6818.
\fBNOTE\fR: Either the slurmctld and slurmd daemons must not execute on the
same nodes, or the values of \fBSlurmctldPort\fR and \fBSlurmdPort\fR
must be different.

\fBNOTE\fR: On Cray systems, Realm\-Specific IP Addressing (RSIP) will
automatically try to interact with anything opened on ports 8192\-60000.
Configure SlurmdPort to use a port outside of the configured SrunPortRange
and RSIP's port range.
.IP

.TP
\fBSlurmdSpoolDir\fR
Fully qualified pathname of a directory into which the \fBslurmd\fR
daemon's state information and batch job script information are written. This
must be a common pathname for all nodes, but should represent a directory which
is local to each node (reference a local file system). The default value
is "/var/spool/slurmd".
The first "%h" within the name is replaced with the hostname on which the
.IP
.RS
.TP 10
\fBquiet\fR
Log nothing
.IP

.TP
\fBfatal\fR
Log only fatal errors
.IP

.TP
\fBerror\fR
Log only errors
.IP

.TP
\fBinfo\fR
Log errors and general informational messages
.IP

.TP
\fBverbose\fR
Log errors and verbose informational messages
.IP

.TP
\fBdebug\fR
Log errors and verbose informational messages and debugging messages
.IP

.TP
\fBdebug2\fR
Log errors and verbose informational messages and more debugging messages
.IP

.TP
\fBdebug3\fR
Log errors and verbose informational messages and even more debugging messages
.IP

.TP
\fBdebug4\fR
Log errors and verbose informational messages and even more debugging messages
.IP

.TP
\fBdebug5\fR
Log errors and verbose informational messages and even more debugging messages
.RE
.IP

.TP
\fBSlurmdTimeout\fR
The interval, in seconds, that the Slurm controller waits for \fBslurmd\fR
to respond before setting that node's state to DOWN. The \fBslurmctld\fR
daemon in control of the cluster
will take responsibility for monitoring the state of each compute node
and its \fBslurmd\fR daemon.
Slurm's hierarchical communication mechanism is used to ping the \fBslurmd\fR
daemons in order to minimize system noise and overhead.
The default value is 300 seconds.
The value may not exceed 65533 seconds.
.IP

.TP
\fBSlurmdUser\fR
The name of the user that the \fBslurmd\fR daemon executes as.
This user must exist on all nodes of the cluster for authentication
of communications between Slurm components.
The default value is "root", which should be kept in almost all cases so that
slurmd can run jobs as the user that submitted them.
.IP

.TP
\fBSlurmSchedLogFile\fR
Fully qualified pathname of the scheduling event logging file.
The syntax of this parameter is the same as for \fBSlurmctldLogFile\fR.
In order to configure scheduler logging, set both the \fBSlurmSchedLogFile\fR
and \fBSlurmSchedLogLevel\fR parameters.
.IP

.TP
\fBSlurmSchedLogLevel\fR
The initial level of scheduling event logging, similar to the
\fBSlurmctldDebug\fR parameter used to control the initial level of
\fBslurmctld\fR logging.
Valid values for \fBSlurmSchedLogLevel\fR are "0" (scheduler logging
disabled) and "1" (scheduler logging enabled).
If this parameter is omitted, the value defaults to "0" (disabled).
In order to configure scheduler logging, set both the \fBSlurmSchedLogFile\fR
and \fBSlurmSchedLogLevel\fR parameters.
The scheduler logging level can be changed dynamically using \fBscontrol\fR.
.IP

.TP
\fBSlurmUser\fR
The name of the user that the \fBslurmctld\fR daemon executes as.
For security purposes, a user other than "root" is recommended.
This user must exist on all nodes of the cluster for authentication
of communications between Slurm components.
The default value is "root".
.IP

.TP
\fBSrunEpilog\fR
Fully qualified pathname of an executable to be run by srun following
the completion of a job step. The command line arguments for the
executable will be the command and arguments of the job step. This
configuration parameter may be overridden by srun's \fB\-\-epilog\fR
parameter. Note that while the other "Epilog" executables (e.g.,
TaskEpilog) are run by slurmd on the compute nodes where the tasks are
executed, the \fBSrunEpilog\fR runs on the node where the "srun" is
executing.
.IP

.TP
\fBSrunPortRange\fR
The \fBsrun\fR command creates a set of listening ports to communicate with
the controller and the slurmd daemons. By default these ports are assigned by
the kernel from the ephemeral port range. \fBSrunPortRange\fR restricts them
to the configured range, which can be useful for sites whose firewalls
allow only a certain port range on their network.

\fBNOTE\fR: On Cray systems, Realm\-Specific IP Addressing (RSIP) will
automatically try to interact with anything opened on ports 8192\-60000.
Configure SrunPortRange to use a range of ports above those used by RSIP,
ideally 1000 or more ports, for example "SrunPortRange=60001\-63000".

\fBNOTE\fR: \fBSrunPortRange\fR must be large enough to cover the expected
number of srun ports created. A single srun opens 4 listening ports plus 2
more for every 48 hosts beyond the first 48. Use of the \fB\-\-pty\fR option
will result in an additional port being used.

Example:
.nf
srun \-N 1        will use 4 listening ports.
srun \-\-pty \-N 1  will use 5 listening ports.
srun \-N 48       will use 4 listening ports.
srun \-N 50       will use 6 listening ports.
srun \-N 200      will use 12 listening ports.
.fi
.IP

.TP
\fBSrunProlog\fR
Fully qualified pathname of an executable to be run by srun prior to
the launch of a job step. The command line arguments for the
executable will be the command and arguments of the job step. This
configuration parameter may be overridden by srun's \fB\-\-prolog\fR
parameter. Note that while the other "Prolog" executables (e.g.,
TaskProlog) are run by slurmd on the compute nodes where the tasks are
executed, the \fBSrunProlog\fR runs on the node where the "srun" is
executing.
.IP

.TP
\fBStateSaveLocation\fR
Fully qualified pathname of a directory into which the Slurm controller,
\fBslurmctld\fR, saves its state (e.g. "/usr/local/slurm/checkpoint").
Slurm state will be saved here to recover from system failures.
\fBSlurmUser\fR must be able to create files in this directory.
If you have a secondary \fBSlurmctldHost\fR configured, this location should be
readable and writable by both systems.
Since all running and pending job information is stored here, the use of
a reliable file system (e.g. RAID) is recommended.
The default value is "/var/spool".
If any slurm daemons terminate abnormally, their core files will also be written
into this directory.
.IP

.TP
\fBSuspendExcNodes\fR
Specifies the nodes which are to not be placed in power save mode, even
if the node remains idle for an extended period of time.
By default no nodes are excluded.
This value may be updated with scontrol.
See \fBReconfigFlags=KeepPowerSaveSettings\fR for setting persistence.
.IP

.TP
\fBSuspendExcParts\fR
Specifies the partitions whose nodes are to not be placed in power save
mode, even if the node remains idle for an extended period of time.
Multiple partitions can be identified and separated by commas.
By default no nodes are excluded.
This value may be updated with scontrol.
See \fBReconfigFlags=KeepPowerSaveSettings\fR for setting persistence.
.IP

.TP
\fBSuspendExcStates\fR
Specifies node states that are not to be powered down automatically.
Valid states include CLOUD, DOWN, DRAIN, DYNAMIC_FUTURE, DYNAMIC_NORM, FAIL,
INVALID_REG, MAINTENANCE, NOT_RESPONDING, PERFCTRS, PLANNED, and RESERVED.
By default, any of these states, if idle for \fBSuspendTime\fR, would be
powered down.
This value may be updated with scontrol.
See \fBReconfigFlags=KeepPowerSaveSettings\fR for setting persistence.
.IP

.TP
\fBSuspendProgram\fR
\fBSuspendProgram\fR is the program that will be executed when a node
remains idle for an extended period of time.
This program is expected to place the node into some power save mode.
This can be used to reduce the frequency and voltage of a node or
completely power the node off.
The program executes as \fBSlurmUser\fR.
The argument to the program will be the names of nodes to
be placed into power savings mode (using Slurm's hostlist
expression format).
By default, no program is run.
Programs will be killed if they run longer than the largest configured, global
or partition, \fBResumeTimeout\fR or \fBSuspendTimeout\fR.
.IP

.TP
\fBSuspendRate\fR
The rate at which nodes are placed into power save mode by \fBSuspendProgram\fR.
The value is number of nodes per minute and it can be used to prevent
a large drop in power consumption (e.g. after a large job completes).
A value of zero results in no limits being imposed.
The default value is 60 nodes per minute.
.IP

.TP
\fBSuspendTime\fR
Nodes which remain idle or down for this number of seconds will be placed into
power save mode by \fBSuspendProgram\fR.
Setting \fBSuspendTime\fR to anything but INFINITE (or \-1) will enable power
save mode. INFINITE is the default.
.IP
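As an illustration, a minimal power saving setup might look like the
following (the program path and values are assumptions):
.nf
SuspendProgram=/usr/local/sbin/slurm_suspend.sh
SuspendTime=600
SuspendRate=20
SuspendExcNodes=login[0\-1]
.fi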

.TP
\fBSwitchParameters\fR
Optional parameters for the switch plugin.
On systems configured with \fBSwitchType=switch/hpe_slingshot\fR,
the following parameters are supported
(separate multiple parameters with a comma):
.IP

.RS
.TP
\fBvnis\fR=<\fImin\fR>-<\fImax\fR>
Range of VNIs to allocate for jobs and applications.
The default value is 1024-65535.
.IP

.TP
\fBdestroy_retries\fR=<\fIretry attempts\fR>
Configure the number of times destroying CXI services is retried at the end of
the step. There is a one second pause between each retry.
The default value is 5.
.IP

.TP
\fBtcs\fR=<\fIclass1\fR>[:<\fIclass2\fR>]...
Set of traffic classes to configure for applications.
Supported traffic classes are DEDICATED_ACCESS, LOW_LATENCY, BULK_DATA, and
BEST_EFFORT. The traffic classes may also be specified as TC_DEDICATED_ACCESS,
TC_LOW_LATENCY, TC_BULK_DATA, and TC_BEST_EFFORT.
.IP

.TP
\fBsingle_node_vni\fR=<\fIall\fR|\fIuser\fR|\fInone\fR>
If set to 'all', allocate a VNI for all job steps (by default, no VNI will be
allocated for single-node job steps).
If set to 'user', allocate a VNI for single-node job steps using the \fBsrun\fR
\fB\-\-network=single_node_vni\fR option or \fBSLURM_NETWORK=single_node_vni\fR
environment variable.
If set to 'none' (or if \fBsingle_node_vni\fR is not set), do not allocate any
VNI for single-node job steps.
For backwards compatibility, setting \fBsingle_node_vni\fR with no argument is
equivalent to 'all'.
.IP

.TP
\fBjob_vni\fR=<\fIall\fR|\fIuser\fR|\fInone\fR>
If set to 'all', allocate an additional VNI for jobs, shared among all job steps.
If set to 'user', allocate an additional VNI for any job using the \fBsrun\fR
\fB\-\-network=job_vni\fR option or \fBSLURM_NETWORK=job_vni\fR environment
variable.
If set to 'none' (or if \fBjob_vni\fR is not set), do not allocate any
additional VNI for jobs. For backwards compatibility, setting \fBjob_vni\fR with
no argument is equivalent to 'all'.
.IP

.TP
\fBadjust_limits\fR
absolutely needs a certain amount of resources to function, this option
will ensure that.
.IP

.TP
\fBhwcoll_addrs_per_job\fR
The number of Slingshot hardware collectives multicast addresses to allocate
per job, for jobs spanning at least \fBhwcoll_num_nodes\fR nodes.
.IP

.TP
\fBhwcoll_num_nodes\fR
The minimum number of nodes for a job to be allocated Slingshot hardware
collectives, since the hardware collective engine is not expected to offer a
meaningful performance boost for jobs spanning a small number of nodes.
.IP

.TP
\fBfm_url\fR
If set, slurm will use the configured URL to interface with the fabric
manager to enable Slingshot hardware collectives.
Note \fBenable_stepmgr\fR needs to be set for hardware collectives to run.
.IP

.TP
\fBfm_auth\fR
HPE fabric manager REST API authentication type
(BASIC or OAUTH, default OAUTH).
.IP

.TP
\fBfm_authdir\fR
Directory containing authentication info files (default /etc/fmsim
for BASIC authentication, /etc/wlm-client-auth for OAUTH authentication).
.IP

.TP
\fBfm_mtls_url\fR
This sets an alternative URL to \fBfm_url\fR that slurm daemons will use to
interface with the fabric manager to enable Slingshot hardware collectives when
mTLS authentication is enabled. If this is not set, \fBfm_url\fR will be used
instead. To enable mTLS authentication see \fBfm_mtls_ca\fR, \fBfm_mtls_cert\fR,
and \fBfm_mtls_key\fR.

\fBNote\fR: Setting \fBfm_url\fR and \fBenable_stepmgr\fR are required to enable
Slingshot hardware collectives.
.IP

.TP
\fBfm_mtls_ca\fR
Path to Certificate Authority (CA) bundle file or directory containing a file
signed by the fabric manager certificate. If set, the identity of the fabric
manager will be verified.
See also \fBfm_mtls_cert\fR and \fBfm_mtls_key\fR.
.IP

.TP
\fBfm_mtls_cert\fR
Path to client certificate. This is required to enable mTLS authentication to
the fabric manager when Slingshot hardware collectives are enabled.
See also \fBfm_mtls_ca\fR and \fBfm_mtls_key\fR.
.IP

.TP
\fBfm_mtls_key\fR
Path to client private key. This is required to enable mTLS authentication to
the fabric manager when Slingshot hardware collectives are enabled.
See also \fBfm_mtls_ca\fR and \fBfm_mtls_cert\fR.
.IP

.TP
\fBnic_distribution_count\fR=<\fIval\fR>
The default number of NICs users will evenly distribute their tasks over.
Users can override this value by using
\fB\-\-network=nic_distribution_count\fR=<\fIval\fR> option or the
\fBSLURM_NETWORK=nic_distribution_count\fR=<\fIval\fR> environment variable.
Defaults to the number of NICs on each node.
.IP

.TP
\fBdef_<rsrc>\fR=<\fIval\fR>
Per-CPU reserved allocation for this resource.
.IP

.TP
\fBres_<rsrc>\fR=<\fIval\fR>
Per-node reserved allocation for this resource.
If set, overrides the per-CPU allocation.
.IP

.TP
\fBmax_<rsrc>\fR=<\fIval\fR>
Maximum per-node allocation for this resource.
.IP
.RE

The resources that may be configured are:
.IP

.RS
.TP
\fBtxqs\fR
Transmit command queues. The default is 2 per-CPU, maximum 1024 per-node.
.IP

.TP
\fBtgqs\fR
Target command queues. The default is 1 per-CPU, maximum 512 per-node.
.IP

.TP
\fBeqs\fR
Event queues. The default is 2 per-CPU, maximum 2047 per-node.
.IP

.TP
\fBles\fR
List entries. The default is 16 per-CPU, maximum 16384 per-node.
.IP

.TP
\fBacs\fR
Addressing contexts. The default is 2 per-CPU, maximum 1022 per-node.
.IP
.RE
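
As an illustration, several of the above options can be combined
(the values shown are assumptions):
.nf
SwitchType=switch/hpe_slingshot
SwitchParameters=vnis=2048\-40000,job_vni=user,destroy_retries=3
.fi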

On systems configured with \fBSwitchType=switch/nvidia_imex\fR, the following
parameters are supported:
.RS
.TP
\fBimex_channel_count\fR
Number of channels that can be configured. Channels allow nodes to create a
secure method of sharing memory. The default value is 2048.

By default, each job is allocated one IMEX channel that is accessible by the
batch, interactive, and normal job steps on all nodes within the job. If using
\fB\-\-network=unique-channel-per-segment\fR on job submission and
\fBtopology/block\fR is configured, then each segment will be allocated one
IMEX channel that is accessible by the batch, interactive, and normal job steps
on all nodes within that particular segment.
.RE
.IP

.TP
\fBSwitchType\fR
Identifies the type of switch or interconnect used for application
communications.
By default no special plugin is used, i.e. nothing requiring special
processing for job launch or termination (suitable for Ethernet and
InfiniBand networks).
All Slurm daemons, commands and running jobs must be restarted or reconfigured
for a change in \fBSwitchType\fR to take effect.
If running jobs exist at the time \fBslurmctld\fR is restarted with a new
value of \fBSwitchType\fR, records of all jobs in any state may be lost.
Acceptable values include:
.IP
.RS
.TP 15
\fBswitch/hpe_slingshot\fR
For HPE Slingshot systems.
.IP

.TP
\fBswitch/nvidia_imex\fR
For allocating unique channels within an NVIDIA IMEX domain.
.RE
.IP

.TP
\fBTaskPlugin\fR
Identifies the task launch plugin, typically used to provide resource
management within a node (e.g. pinning tasks to specific
processors). More than one task plugin can be specified in a comma\-separated
list. The prefix of "task/" is optional. Unset by default.
Acceptable values include:
.IP
.RS
.TP 15
\fBtask/affinity\fR
binds processes to specified resources using sched_setaffinity().
This enables the \-\-cpu\-bind and/or \-\-mem\-bind srun options.
.IP

.TP
\fBtask/cgroup\fR
enables process containment to specified resources using Cgroups cpuset
interface. This enables the \-\-cpu\-bind and/or \-\-mem\-bind srun options.
\fBNOTE\fR: see "man cgroup.conf" for configuration details.
.RE
.IP

.RS
\fBNOTE\fR: It is recommended to stack \fBtask/cgroup,task/affinity\fR together
when configuring TaskPlugin, and setting \fBConstrainCores=yes\fR in
\fBcgroup.conf\fR. This setup uses the task/affinity plugin for setting the
cpu mask for tasks and uses the task/cgroup plugin to fence tasks into the
allocated cpus.
.RE
.IP
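An illustrative configuration for the recommended stacked setup:
.nf
# slurm.conf
TaskPlugin=task/cgroup,task/affinity
# cgroup.conf
ConstrainCores=yes
.fi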

.TP
\fBTaskPluginParam\fR
Optional parameters for the task plugin.
Multiple options should be comma separated.
\fBNone\fR, \fBSockets\fR, \fBCores\fR and \fBThreads\fR are mutually
exclusive and treated as a last possible source of \-\-cpu\-bind default. See also
Node and Partition CpuBind options.
.IP
.RS
.TP
\fBCores\fR
Bind tasks to cores by default.
Overrides automatic binding.
.IP

.TP
\fBNone\fR
Perform no task binding by default.
Overrides automatic binding.
.IP

.TP
\fBSockets\fR
Bind to sockets by default.
Overrides automatic binding.
.IP

.TP
\fBThreads\fR
Bind to threads by default.
Overrides automatic binding.
.IP

.TP
\fBSlurmdOffSpec\fR
If specialized cores or CPUs are identified for the node (i.e. the
\fBCoreSpecCount\fR or \fBCpuSpecList\fR are configured for the node),
then Slurm daemons running on the compute node (i.e. slurmd and slurmstepd)
should run outside of those resources (i.e. specialized resources are
completely unavailable to Slurm daemons and jobs spawned by Slurm).
When using cgroup/v1, the slurmd and slurmstepd daemons will not be able to
use any of these resources; in normal behavior, cgroup/v1 constrains the
daemons to CpuSpecList and MemSpecLimit.
.IP

.TP
\fBOOMKillStep\fR
Set this parameter to kill the whole step in all the nodes in case an OOM event
is triggered in any task of the step.

This applies to entire allocations but does not apply to the external step.
It can be overridden by the user.

\fBNOTE\fR: This parameter requires the \fBtask/cgroup\fR plugin, Cgroups v2,
and a kernel newer than 4.19.
.IP

.TP
\fBVerbose\fR
Verbosely report binding before tasks run by default.
.IP

.TP
\fBAutobind\fR
Set a default binding in the event that "auto binding" doesn't find a match.
Set to Threads, Cores or Sockets (E.g. TaskPluginParam=autobind=threads).
.RE
.IP

.TP
\fBTaskProlog\fR
Fully qualified pathname of a program to be executed as the slurm job's user
prior to initiation of each task. Will run inside of the job's container if
configured. Should not be used for policy enforcement.
Besides the normal environment variables, this has SLURM_TASK_PID
available to identify the process ID of the task being started.
Standard output from this program can be used to control the environment
variables and output for the user program.
.IP
.RS
.TP 20
\fBexport NAME=value\fR
Will set environment variables for the task being spawned.
.IP

.TP
\fBprint ...\fR
Will cause that line (without the leading "print ") to be printed to the
job's standard output.
.IP

.TP
The order of task prolog/epilog execution is as follows:
.IP

.TP
\fB1. pre_launch_priv()\fR
Function in TaskPlugin
.IP

.TP
\fB2. pre_launch()\fR
Function in TaskPlugin
.IP

.TP
\fB3. TaskProlog\fR
System\-wide per task program defined in slurm.conf
.IP

.TP
\fB4. User prolog\fR
Job\-step\-specific task program defined using
\fBsrun\fR's \fB\-\-task\-prolog\fR option or \fBSLURM_TASK_PROLOG\fR
environment variable
.IP

.TP
\fB5. Task\fR
Execute the job step's task
.IP

.TP
\fB6. User epilog\fR
Job\-step\-specific task program defined using
\fBsrun\fR's \fB\-\-task\-epilog\fR option or \fBSLURM_TASK_EPILOG\fR
environment variable
.IP

.TP
\fB7. TaskEpilog\fR
System\-wide per task program defined in slurm.conf
.IP

.TP
\fB8. post_term()\fR
Function in TaskPlugin
.RE
.IP
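As an illustration, a minimal TaskProlog script (its contents are an
assumption) could use this standard output protocol as follows:
.nf
#!/bin/sh
# Printed "export NAME=value" lines set variables for the task:
echo "export MY_SCRATCH=/tmp/job_${SLURM_JOB_ID}"
# Printed "print ..." lines go to the job's standard output:
echo "print task prolog ran for task ${SLURM_TASK_PID}"
.fi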

.TP
\fBTLSParameters\fR
Optional parameters for the TLS plugin.
Acceptable values include:
.IP
.RS
.TP
\fBca_cert_file=\fR
Path of the trusted certificate authority (CA) certificate, owned by
SlurmUser/root.

Default path is "ca_cert.pem" in the Slurm configuration directory
.IP

.TP
\fBctld_cert_file=\fR
Path of certificate used by slurmctld. Must chain to \fBca_cert_file\fR. Should
only exist on host running slurmctld. File permissions must be 600, and owned
by SlurmUser.

Default path is "ctld_cert.pem" in the Slurm configuration directory
.IP

.TP
\fBctld_cert_key_file=\fR
Path of private key that accompanies \fBctld_cert_file\fR. Should only exist on
host running slurmctld. File permissions must be 600, and owned by SlurmUser.

Default path is "ctld_cert_key.pem" in the Slurm configuration directory
.IP

.TP
\fBrestd_cert_file=\fR
Path of certificate used by slurmrestd. Must chain to \fBca_cert_file\fR. Should
only exist on host running slurmrestd. File permissions must be 600, and owned
by the user that runs slurmrestd.

Default path is "restd_cert.pem" in the Slurm configuration directory
.IP

.TP
\fBrestd_cert_key_file=\fR
Path of private key that accompanies \fBrestd_cert_file\fR. Should only exist
on host running slurmrestd. File permissions must be 600, and owned by the user
that runs slurmrestd.

Default path is "restd_cert_key.pem" in the Slurm configuration directory
.IP

.TP
\fBsackd_cert_file=\fR
Path of certificate used by sackd. Must chain to \fBca_cert_file\fR. Should
only exist on host running sackd. File permissions must be 600, and owned
by SlurmUser.

Default path is "sackd_cert.pem" in the Slurm configuration directory

NOTE: If not using the certmgr plugin, this file needs to exist.
.IP

.TP
\fBslurmd_cert_file=\fR
Path of certificate used by slurmd. Must chain to \fBca_cert_file\fR. Should
only exist on host running slurmd. File permissions must be 600, and owned by
SlurmUser.

Default path is "slurmd_cert.pem" in the Slurm configuration directory

NOTE: If not using the certmgr plugin, this file needs to exist.
.IP

.TP
\fBslurmd_cert_key_file=\fR
Path of private key that accompanies \fBslurmd_cert_file\fR. Should only exist on
host running slurmd. File permissions must be 600, and owned by SlurmUser.

Default path is "slurmd_cert_key.pem" in the Slurm configuration directory

NOTE: If not using the certmgr plugin, this file needs to exist.
.IP

.TP
\fBload_system_certificates\fR
Load certificates found in default system locations (e.g. /etc/ssl) into the
trust store.

Default is to not load system certificates, and to rely solely on
\fBca_cert_file\fR to establish trust.
.IP

.TP
\fBsecurity_policy_version=\fR
Security policy version used by s2n. See s2n documentation for more details.
Default security policy is "20230317", which is FIPS compliant and includes TLS 1.3.
.RE
.IP

.TP
\fBTLSType\fR
Specify the TLS implementation that will be used. Unset by default.
Acceptable values at present:
.IP
.RS
.TP
\fBtls/s2n\fR
Use the s2n TLS plugin. Requires additional configuration and causes significant
processing overhead, but allows all Slurm communication to be encrypted. Refer
to the TLS guide for more details: <https://slurm.schedmd.com/tls.html>
.RE
.IP
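As an illustration, assuming the parameter names shown above and an assumed
certificate path, a minimal s2n setup could be:
.nf
TLSType=tls/s2n
TLSParameters=ca_cert_file=/etc/slurm/ca_cert.pem
.fi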

.TP
\fBTmpFS\fR
Fully qualified pathname of the file system available to user jobs for
temporary storage. This parameter is used in establishing a node's \fBTmpDisk\fR
space.
The default value is "/tmp".
.IP

.TP
\fBTopologyParam\fR
Comma\-separated options identifying network topology requirements.
Acceptable values include:
.IP
.RS
.TP
\fBRoutePart\fR
Instead of using the plugin's default route calculation, use partition node
lists to route communications from the controller. Once on the compute node,
communications will be routed using the requested plugin's normal algorithm,
following TreeWidth if applicable. If a node is in multiple partitions,
the first partition seen will be used. The controller will communicate directly
with any nodes that aren't in a partition.
.IP

.TP
\fBBlockAsNodeRank\fR
Assign the same node rank to all nodes under one base block.
This can be useful if the naming convention for the nodes does not match the
network topology.
Valid when topology/block is a cluster default topology.
.IP

.TP
\fBSwitchAsNodeRank\fR
Assign the same node rank to all nodes under one leaf switch.
This can be useful if the naming convention for the nodes does not match the
network topology.
Valid when topology/tree is a cluster default topology.
.IP

.TP
\fBRouteTree\fR
Use the switch hierarchy defined in a \fItopology.conf\fR file for routing
instead of just scheduling.
Valid when TopologyPlugin=topology/tree.
Incompatible with dynamic nodes.
.IP

.TP
\fBTopoMaxSizeUnroll\fR=\#
Maximum number of individual job sizes automatically unrolled
from min-max nodes job specification.
Default: -1 (option disabled).
Valid when TopologyPlugin=topology/block.
.IP

.TP
\fBTopoOptional\fR
Only optimize allocation for network topology if the job includes a switch
option. Since optimizing resource allocation for topology involves much higher
system overhead, this option can be used to impose the extra overhead only on
jobs which can take advantage of it. If most job allocations are not optimized
for network topology, they may fragment resources to the point that topology
optimization for other jobs will be difficult to achieve.
\fBNOTE\fR: Jobs may span across nodes without common parent switches with
this enabled.
.RE
.IP

.TP
\fBTopologyPlugin\fR
Identifies the plugin to be used for determining the network topology and
optimizing job allocations to minimize network contention.
Acceptable values include:
.IP
.RS
.TP
\fBtopology/block\fR
used for a block network topology, as described in the \fBtopology.conf\fR(5)
man page
.IP

.TP
\fBtopology/flat\fR
best\-fit logic over one\-dimensional topology. This is the default.
.IP

.TP
\fBtopology/tree\fR
used for a hierarchical network with the select/cons_tres plugin,
as described in the \fBtopology.conf\fR(5)
man page
.RE
\fBNOTE\fR: This option is ignored if topology.yaml exists.
.IP

.TP
\fBTrackWCKey\fR
Boolean yes or no. Used to enable the display and tracking of the Workload
Characterization Key. Must be set for wckey usage to be tracked correctly.
\fBNOTE\fR: You must also set TrackWCKey in your slurmdbd.conf file to create
historical usage reports.
.IP

.TP
\fBTreeWidth\fR
\fBSlurmd\fR daemons use a virtual tree network for communications.
\fBTreeWidth\fR specifies the width of the tree (i.e. the fanout).
The default value is 16, meaning each slurmd daemon can
communicate with up to 16 other slurmd daemons. This value balances offloading
slurmctld (max 16 threads running), time of communication, and node fault
tolerance (4368 nodes can be contacted with three message hops). The default
value will work well for most clusters however on bigger systems this value can
be increased to avoid long timeouts and retransmissions in case of unresponsive
nodes. The value may not exceed 65533.
.IP

.TP
\fBUnkillableStepProgram\fR
If the processes in a job step are determined to be unkillable for a period
of time specified by the \fBUnkillableStepTimeout\fR variable, the program
specified by \fBUnkillableStepProgram\fR will be executed.
By default no program is run.

See section \fBUNKILLABLE STEP PROGRAM SCRIPT\fR for more information.
.IP

.TP
\fBUnkillableStepTimeout\fR
The length of time, in seconds, that Slurm will wait before deciding that
processes in a job step are unkillable (after they have been signaled with
SIGKILL).
The default timeout value is 60 seconds or five times the value of
\fBMessageTimeout\fR, whichever is greater.
.IP

.TP
\fBURLParserType\fR
Identifies the plugin used for parsing URLs.
Acceptable values at present:
.IP
.RS
.TP
\fBurl_parser/libhttp_parser\fR
Use the libhttp_parser based plugin.
.RE
.IP

.TP
\fBUsePAM\fR
If set to 1, PAM (Pluggable Authentication Modules for Linux) will be enabled.
PAM is used to establish the upper bounds for resource limits. With PAM support
enabled, local system administrators can dynamically configure system resource
limits. Changing the upper bound of a resource limit will not alter the limits
of running jobs, only jobs started after a change has been made will pick up
the new limits.
The default value is 0 (not to enable PAM support).
Remember that PAM also needs to be configured to support Slurm as a service.
For sites using PAM's directory based configuration option, a configuration
file named \fBslurm\fR should be created. The module\-type, control\-flags, and
module\-path names that should be included in the file are:
.br
auth        required      pam_localuser.so
.br
auth        required      pam_shells.so
.br
account     required      pam_unix.so
.br
account     required      pam_access.so
.br
session     required      pam_unix.so
.br
For sites configuring PAM with a general configuration file, the appropriate
lines (see above), where \fBslurm\fR is the service\-name, should be added.
See <https://slurm.schedmd.com/pam_slurm_adopt.html> for more details.

\fBNOTE\fR: The UsePAM option has nothing to do with the
\fBcontribs/pam/pam_slurm\fR and/or \fBcontribs/pam_slurm_adopt\fR modules, so
these two modules can work independently of the value set for UsePAM.
.IP

.TP
\fBVSizeFactor\fR
Memory specifications in job requests apply to real memory size (also known
as resident set size). It is possible to enforce virtual memory limits for
both jobs and job steps by limiting their virtual memory to some percentage
of their real memory allocation. The \fBVSizeFactor\fR parameter specifies
the job's or job step's virtual memory limit as a percentage of its real
memory limit. For example, if a job's real memory limit is 500MB and
VSizeFactor is set to 101 then the job will be killed if its real memory
exceeds 500MB or its virtual memory exceeds 505MB (101 percent of the
real memory limit).
The default value is 0, which disables enforcement of virtual memory limits.
.IP

.TP
\fBWaitTime\fR
Specifies how many seconds the srun command should, by default, wait after
the first task terminates before terminating all remaining tasks. The
"\-\-wait" option on the srun command line overrides this value.
The default value is 0, which disables this feature.
May not exceed 65533 seconds.
.IP

.TP
\fBX11Parameters\fR
For use with Slurm's built\-in X11 forwarding implementation.
.IP
.RS
.TP 8
\fBhome_xauthority\fR
If set, xauth data on the compute node will be placed in \fB~/.Xauthority\fR
rather than in a temporary file under \fBTmpFS\fR.
.RE
.IP

.SH "NODE CONFIGURATION"
The configuration of nodes (or machines) to be managed by Slurm is
also specified in \fB/etc/slurm.conf\fR.
Changes in node configuration (e.g. adding nodes, changing their
processor count, etc.) require restarting or reconfiguring all slurmctld
and slurmd daemons.
All slurmd daemons must know each node in the system to forward
messages in support of hierarchical communications.
Only the NodeName must be supplied in the configuration file.
All other node configuration information is optional.
It is advisable to establish baseline node configurations,
especially if the cluster is heterogeneous.
Nodes which register to the system with less than the configured resources
(e.g. too little memory) will be placed in the "DOWN" state to
avoid scheduling jobs on them.
Establishing baseline configurations will also speed Slurm's
scheduling process by permitting it to compare job requirements
against these (relatively few) configuration parameters and
possibly avoid having to check job requirements
against every individual node's configuration.
The resources checked at node registration time are: CPUs,
RealMemory and TmpDisk.
.LP
Default values can be specified with a record in which
\fBNodeName\fR is "DEFAULT".
The default entry values will apply only to lines following it in the
configuration file and the default values can be reset multiple times
in the configuration file with multiple entries where "NodeName=DEFAULT".
Each line where \fBNodeName\fR is "DEFAULT" will replace or add to previous
default values and will not reinitialize the default values.
The "NodeName=" specification must be placed on every line
describing the configuration of nodes.
A single node name can not appear as a NodeName value in more than one line
(duplicate node name records will be ignored).
Nodes configured out of their natural order may cause
Slurm to arbitrarily order a job step's tasks.
.LP
Multiple node names may be comma separated (e.g. "alpha,beta,gamma")
and/or a simple node range expression may optionally be used to
specify numeric ranges of nodes to avoid building a configuration
file with large numbers of entries.
The node range expression can contain one pair of square brackets
with a sequence of comma\-separated numbers and/or ranges of numbers
separated by a "\-" (e.g. "linux[0\-64,128]", or "lx[15,18,32\-33]").
Note that the numeric ranges can include one or more leading
zeros to indicate the numeric portion has a fixed number of digits
(e.g. "linux[0000\-1023]").
Multiple numeric ranges can be included in the expression
(e.g. "rack[0\-63]_blade[0\-41]").
If one or more numeric expressions are included, one of them
must be at the end of the name (e.g. "unit[0\-31]rack" is invalid),
but arbitrary names can always be used in a comma\-separated list.
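As an illustration, node definitions using default values and range
expressions (the hardware values are assumptions):
.nf
NodeName=DEFAULT Sockets=2 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=64000 State=UNKNOWN
NodeName=linux[0000\-0127]
NodeName=bigmem[01\-04] RealMemory=512000
.fi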
.LP
The node configuration specifies the following information:

.TP
\fBNodeName\fR
Name that Slurm uses to refer to a node.
Typically this would be the string that "/bin/hostname \-s" returns.
It may also be the fully qualified domain name as returned by "/bin/hostname \-f"
(e.g. "foo1.bar.com"), or any valid domain name associated with the host
through the host database (/etc/hosts) or DNS, depending on the resolver
settings. Note that if the short form of the hostname is not used, it
may prevent use of hostlist expressions (the numeric portion in brackets
must be at the end of the string).
It may also be an arbitrary string if \fBNodeHostname\fR is specified.
If the \fBNodeName\fR is "DEFAULT", the values specified
with that record will apply to subsequent node specifications
unless explicitly set to other values in that node record or
replaced with a different set of default values.
Each line where \fBNodeName\fR is "DEFAULT" will replace or add to previous
default values and not reinitialize the default values.
For architectures in which the node order is significant,
nodes will be considered consecutive in the order defined.
For example, if the configuration for "NodeName=charlie" immediately
follows the configuration for "NodeName=baker" they will be
considered adjacent in the computer.
\fBNOTE\fR: If the \fBNodeName\fR is "ALL" the process parsing the configuration
will exit immediately as it is an internally reserved word.
.IP

.TP
\fBNodeHostname\fR
Typically this would be the string that "/bin/hostname \-s" returns.
It may also be the fully qualified domain name as returned by "/bin/hostname \-f"
(e.g. "foo1.bar.com"), or any valid domain name associated with the host
through the host database (/etc/hosts) or DNS, depending on the resolver
settings.
By default, the \fBNodeHostname\fR will be identical in value to
\fBNodeName\fR.
.IP

.TP
\fBNodeAddr\fR
Name that a node should be referred to in establishing
a communications path.
This name will be used as an
argument to the getaddrinfo() function for identification.
If a node range expression is used to designate multiple nodes,
they must exactly match the entries in the \fBNodeName\fR
(e.g. "NodeName=lx[0\-7] NodeAddr=elx[0\-7]").
\fBNodeAddr\fR may also contain IP addresses.
By default, the \fBNodeAddr\fR will be identical in value to
\fBNodeHostname\fR.
.IP

.TP
\fBBcastAddr\fR
Alternate network path to be used for sbcast network traffic to a given node.
This name will be used as an argument to the getaddrinfo() function.
If a node range expression is used to designate multiple nodes,
they must exactly match the entries in the \fBNodeName\fR
(e.g. "NodeName=lx[0\-7] BcastAddr=elx[0\-7]").
\fBBcastAddr\fR may also contain IP addresses.
By default, the \fBBcastAddr\fR is unset, and sbcast traffic will be routed
to the \fBNodeAddr\fR for a given node.
Note: cannot be used with CommunicationParameters=NoInAddrAny.
.IP

.TP
\fBBoards\fR
Number of Baseboards in nodes with a baseboard controller.
Note that when Boards is specified, SocketsPerBoard,
CoresPerSocket, and ThreadsPerCore should be specified.
The default value is 1.
.IP

.TP
\fBCoreSpecCount\fR
Number of cores reserved for system use.
Depending upon the \fBTaskPluginParam\fR option of \fBSlurmdOffSpec\fR,
the Slurm daemon slurmd may either be confined to these
resources (the default) or prevented from using these resources.
If cgroup/v1 is used, the same applies to the slurmstepd processes.
Isolation of slurmd from user jobs may improve application performance.
A job can use these cores if AllowSpecResourcesUsage=yes and the user
explicitly requests less than the configured CoreSpecCount.
If this option and \fBCpuSpecList\fR are both designated for a
node, an error is generated. For information on the algorithm used by Slurm
to select the cores refer to the core specialization documentation
( https://slurm.schedmd.com/core_spec.html ).
.IP

.TP
\fBCoresPerSocket\fR
Number of cores in a single physical processor socket (e.g. "2").
The CoresPerSocket value describes physical cores, not the
logical number of processors per socket.
The default value is 1.
.IP

.TP
\fBCpuBind\fR
If a job step request does not specify an option to control how tasks are
bound to allocated CPUs (\-\-cpu\-bind) and all nodes allocated to the job
have the same \fBCpuBind\fR option, the node's \fBCpuBind\fR option will
control how tasks are bound to allocated resources. Supported values for
CpuBind are \fBnone\fR, \fBsocket\fR, \fBldom\fR (NUMA), \fBcore\fR and
\fBthread\fR.
.IP

.TP
\fBCPUs\fR
Number of logical processors on the node (e.g. "2").
It can be set to the total
number of sockets (supported only by select/linear), cores or threads.
This can be useful when you want to schedule only the cores on a hyper\-threaded
node. If \fBCPUs\fR is omitted, its default will be set equal to the product of
\fBBoards\fR, \fBSockets\fR, \fBCoresPerSocket\fR, and \fBThreadsPerCore\fR.
.IP

.TP
\fBCpuSpecList\fR
A comma\-delimited list of Slurm abstract CPU IDs reserved for system use.
The list will be expanded to include all other CPUs, if any, on the same cores.
Depending upon the \fBTaskPluginParam\fR option of \fBSlurmdOffSpec\fR,
the Slurm daemon slurmd may either be confined to these
resources (the default) or prevented from using these resources.
If cgroup/v1 is used, the same applies to the slurmstepd processes.
Isolation of slurmd from user jobs may improve application performance.
A job can use these cores if AllowSpecResourcesUsage=yes and the user
explicitly requests less than the number of CPUs in this list.
If this option and \fBCoreSpecCount\fR are both designated for a node,
an error is generated.
This option has no effect unless cgroup job confinement is also configured
(i.e. the \fItask/cgroup\fR \fBTaskPlugin\fR is enabled and
\fBConstrainCores=yes\fR is set in cgroup.conf).
.IP

.TP
\fBFeatures\fR
A comma\-delimited list of arbitrary strings indicative of some
characteristic associated with the node.
There is no value or count associated with a feature at this time; a node
either has a feature or it does not.
A desired feature may contain a numeric component indicating,
for example, processor speed but this numeric component will be considered to
be part of the feature string. Features are intended to be used to filter nodes
eligible to run jobs via the \fB\-\-constraint\fR argument.
By default a node has no features.
Also see \fBGres\fR for being able to have more control such as types and
count. Using features is faster than scheduling against GRES but is limited to
Boolean operations.

\fBNOTE\fR: The hostlist function \fBfeature{myfeature}\fR expands to all nodes
with the specified feature. This may be used in place of or alongside regular
hostlist expressions in commands or configuration files that interact with the
slurmctld.
For example: \fBscontrol update node=feature{myfeature} state=resume\fR.
.IP

.TP
\fBGres\fR
A comma\-delimited list of generic resources specifications for a node.
The format is "<name>[:<type>][:no_consume]:<number>[K|M|G]".
When "no_consume" is specified, the
generic resource does not have a finite number of that resource that gets
consumed as it is requested. The no_consume field is a GRES specific setting
and applies to the GRES, regardless of the type specified.
It should not be used with GRES that has a dedicated plugin, if you're looking
for a way to overcommit GPUs to multiple processes at the time you may be
interested in using "shard" GRES instead.
The final field must specify a generic resources count.
A suffix of "K", "M", "G", "T" or "P" may be used to multiply the number by
1024, 1048576, 1073741824, etc. respectively.
(e.g."Gres=gpu:tesla:1,gpu:kepler:1,bandwidth:lustre:no_consume:4G").
By default a node has no generic resources and its maximum count is
that of an unsigned 64bit integer.
Also see \fBFeatures\fR for Boolean flags to filter nodes using job constraints.
.IP

.TP
\fBMemSpecLimit\fR
Amount of \fBRealMemory\fR, in megabytes, reserved for system use and not
available for user allocations. Must be less than the amount defined for
\fBRealMemory\fR.
If the task/cgroup plugin is configured and that plugin constrains memory
allocations (i.e. the \fItask/cgroup\fR \fBTaskPlugin\fR is enabled and
\fBConstrainRAMSpace=yes\fR is set in cgroup.conf), then the slurmd will be
allocated the specified memory limit. If cgroup/v1 is used the slurmstepd will
also be allocated the specified memory limit. If cgroup/v2 is used, the
slurmstepd's consumption is completely dependent on the topology of the job.
Note that having the Memory set in \fBSelectTypeParameters\fR as any of the
options that has it as a consumable resource is needed for this option to work.
The daemons will not be killed if they exhaust the memory allocation
(i.e. the Out\-Of\-Memory Killer is disabled for the daemon's memory cgroup).
If the task/cgroup plugin is not configured, the specified memory will only be
unavailable for user allocations.
.IP

.TP
\fBParameters\fR
Allows for node\-specific additions to the global \fBSlurmdParameters\fR.
Options are appended, and cannot override global options.
.IP

.TP
\fBPort\fR
The port number that the Slurm compute node daemon, \fBslurmd\fR, listens
to for work on this particular node. By default there is a single port number
for all \fBslurmd\fR daemons on all compute nodes as defined by the
\fBSlurmdPort\fR configuration parameter. Use of this option is not generally
recommended except for development or testing purposes. If multiple
\fBslurmd\fR daemons execute on a node this can specify a range of ports.

\fBNOTE\fR: On Cray systems, Realm\-Specific IP Addressing (RSIP) will
automatically try to interact with anything opened on ports 8192\-60000.
Configure Port to use a port outside of the configured SrunPortRange and
RSIP's port range.
.IP

.TP
\fBRealMemory\fR
Size of real memory on the node in megabytes (e.g. "2048").
The default value is 1.
Lowering RealMemory with the goal of setting aside some amount for the OS and
not available for job allocations will not work as intended if Memory is not
set as a consumable
resource in \fBSelectTypeParameters\fR. So one of the *_Memory
options need to be enabled for that goal to be accomplished.
Also see \fBMemSpecLimit\fR.
.IP

.TP
\fBReason\fR
Identifies the reason for a node being in state "DOWN", "DRAINED",
"DRAINING", "FAIL" or "FAILING".
Use quotes to enclose a reason having more than one word.
.IP

.TP
\fBRestrictedCoresPerGPU\fR
Number of cores per GPU restricted for only GPU use. If a job does not request a
GPU it will not have access to these cores. The node's GPUs must either be
autodetected or have valid cores configured in \fBgres.conf\fR(5).

\fBNOTE\fR: Configuring multiple GPU types on overlapping sockets can result in
erroneous GPU type and restricted core pairings in allocations requesting gpus
without specifying a type.
\fBNOTE\fR: Shared gpu gres (shards or mps) will have access to these cores, but
there is no guarantee that reserved cores are used in proportion to the shared
gres allocation.
.IP

.TP
\fBSockets\fR
Number of physical processor sockets/chips on the node (e.g. "2").
If Sockets is omitted, it will be inferred from
\fBCPUs\fR, \fBCoresPerSocket\fR, and \fBThreadsPerCore\fR.
\fBNOTE\fR: If you have multi\-core processors, you will likely
need to specify these parameters.
Sockets and SocketsPerBoard are mutually exclusive.
If Sockets is specified when Boards is also used,
Sockets is interpreted as SocketsPerBoard rather than total sockets.
The default value is 1.
.IP

.TP
\fBSocketsPerBoard\fR
Number of physical processor sockets/chips on a baseboard.
Sockets and SocketsPerBoard are mutually exclusive.
The default value is 1.
.IP
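For example (all counts hypothetical), a dual\-socket node with 16 cores per
socket and two threads per core could be described as:
.nf
NodeName=tux3 Sockets=2 CoresPerSocket=16 ThreadsPerCore=2
.fi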

.TP
\fBState\fR
State of the node with respect to the initiation of user jobs.
Acceptable values are \fICLOUD\fR, \fIDOWN\fR, \fIDRAIN\fR, \fIFAIL\fR,
\fIFAILING\fR, \fIFUTURE\fR and \fIUNKNOWN\fR.
Node states of \fIBUSY\fR and \fIIDLE\fR should not be specified in the node
configuration; set the node state to \fIUNKNOWN\fR instead, and the
appropriate state will be determined automatically when the node registers.
.IP

.TP
\fBDOWN\fP
Indicates the node failed and is unavailable to be allocated work.
.IP

.TP
\fBDRAIN\fP
Indicates the node is unavailable to be allocated work.
.IP

.TP
\fBFAIL\fP
Indicates the node is expected to fail soon, has
no jobs allocated to it, and will not be allocated
to any new jobs.
.IP

.TP
\fBFAILING\fP
Indicates the node is expected to fail soon, has
one or more jobs allocated to it, but will not be allocated
to any new jobs.
.IP

.TP
\fBFUTURE\fP
Indicates the node is defined for future use and need not
exist when the Slurm daemons are started. These nodes can be made available
for use simply by updating the node state using the scontrol command rather
than restarting the slurmctld daemon. After these nodes are made available,
change their \fBState\fR in the slurm.conf file. Until these nodes are made
available, they will not be seen by any Slurm commands, nor will
any attempt be made to contact them. FUTURE nodes retain their non\-FUTURE
state across restarts; use scontrol to put a node back into the FUTURE state.

.IP
.RS
.TP
\fBDynamic Future Nodes\fR
A \fBslurmd\fR started with \-F[<feature>] will be associated with a FUTURE
node that matches the same configuration (sockets, cores, threads) as reported
by \fBslurmd\fR \-C. The node's NodeAddr and NodeHostname will automatically be
retrieved from the \fBslurmd\fR and will be cleared when set back to the FUTURE
state.
.RE
.IP
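As a sketch of the workflow, assuming \fIRESUME\fR as the target state when
bringing a node into service (node names and counts hypothetical):
.nf
NodeName=dyn[01\-04] CPUs=16 RealMemory=65536 State=FUTURE

$ scontrol update NodeName=dyn01 State=RESUME
.fi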

.TP
\fBUNKNOWN\fP
Indicates the node's state is undefined but will be established when the
\fBslurmd\fR daemon on that node registers.
.IP

.TP
\fBThreadsPerCore\fR
Number of logical threads in a single physical core (e.g. "2").
A job can execute one task per thread from within one job step or
execute a distinct job step on each of the threads.
Note also if you are running with more than 1 thread per core and running
the select/cons_tres plugin then you will want to set
the SelectTypeParameters
variable to something other than CR_CPU to avoid unexpected results.
The default value is 1.
.IP

.TP
\fBTmpDisk\fR
Total size of temporary disk storage in \fBTmpFS\fR in megabytes
(e.g. "16384"). \fBTmpFS\fR (for "Temporary File System")
identifies the location which jobs should use for temporary storage.
Note this does not indicate the amount of free
space available to the user on the node, only the total file
system size. The system administrator should ensure this file
system is purged as needed so that user jobs have access to
most of this space.
The Prolog and/or Epilog programs (specified in the configuration file)
might be used to ensure the file system is kept clean.
The default value is 0.
.IP
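For example (path and size hypothetical), with a global \fBTmpFS\fR setting,
a node advertising 100 GB of temporary disk:
.nf
TmpFS=/local
NodeName=tux4 TmpDisk=102400
.fi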

.TP
\fBTopology\fR
Comma\-separated list of pairs in the format
\fI<topology_name>\fR\fB:\fR\fI<topology_unit>\fR,
where <\fItopology_unit\fR> is the block name or the name of a leaf switch.
Intermediate switch names (':' delimited) can be provided and will be
created if needed (e.g. Topology=topo-tree:sw_root:s1:s2).
This setting overrides the node topology affiliation specified
in \fBtopology.conf\fR(5) and \fBtopology.yaml\fR(5).

.IP
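For example (switch and node names follow the example above), attaching a set
of nodes to a leaf switch beneath intermediate switches:
.nf
NodeName=tux[1\-8] Topology=topo-tree:sw_root:s1:s2
.fi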

.TP
\fBWeight\fR
The priority of the node for scheduling purposes.
All things being equal, jobs will be allocated the nodes with
the lowest weight which satisfies their requirements.
For example, a heterogeneous collection of nodes might
be placed into a single partition for greater system
utilization, responsiveness and capability. It would be
preferable to allocate smaller memory nodes rather than larger
memory nodes if either will satisfy a job's requirements.
The units of weight are arbitrary, but larger weights
should be assigned to nodes with more processors, memory,
disk space, higher processor speed, etc.
Note that if a job allocation request can not be satisfied
using the nodes with the lowest weight, the set of nodes
with the next lowest weight is added to the set of nodes
under consideration for use.
The default value is 1.
.IP
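As an illustrative sketch (node names, sizes and weights hypothetical),
giving smaller\-memory nodes the lower weight so they are preferred:
.nf
NodeName=small[1\-32] RealMemory=65536 Weight=10
NodeName=big[1\-4] RealMemory=524288 Weight=100
.fi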
.SH "DOWN NODE CONFIGURATION"
The \fBDownNodes=\fR parameter permits you to mark certain nodes as in a
\fIDOWN\fR, \fIDRAIN\fR, \fIFAIL\fR, \fIFAILING\fR or \fIFUTURE\fR state
without altering the permanent configuration information listed under a
\fBNodeName=\fR specification.

.TP
\fBDownNodes\fR
Any node name, or list of node names, from the \fBNodeName=\fR specifications.
.IP

.TP
\fBReason\fR
Identifies the reason for a node being in state \fIDOWN\fR, \fIDRAIN\fR,
\fIFAIL\fR, \fIFAILING\fR or \fIFUTURE\fR.
Use quotes to enclose a reason having more than one word.
.IP

.TP
\fBState\fR
State of the node with respect to the initiation of user jobs.
Acceptable values are \fIDOWN\fR, \fIDRAIN\fR, \fIFAIL\fR, \fIFAILING\fR
and \fIFUTURE\fR.
For more information about these states see the descriptions under \fBState\fR
in the \fBNodeName=\fR section above.
The default value is \fIDOWN\fR.
.IP
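A minimal sketch (node names and reason hypothetical):
.nf
DownNodes=tux[3\-5] State=DRAIN Reason="cooling failure"
.fi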

.SH "NODESET CONFIGURATION"
The nodeset configuration allows you to define a name for a specific set of
nodes which can be used to simplify the partition configuration section,
especially for heterogeneous or condo\-style systems. Each nodeset may be defined
by an explicit list of nodes, and/or by filtering the nodes by a particular
configured feature. If both \fBFeature=\fR and \fBNodes=\fR are used the
nodeset shall be the union of the two subsets.
Note that the nodesets are only used to simplify the partition definitions
at present, and are not usable outside of the partition configuration.

.TP
\fBFeature\fR
All nodes with this feature will be included as part of this nodeset. Only a
single feature is allowed.
.IP

.TP
\fBNodes\fR
List of nodes in this set.
.IP

.TP
\fBNodeSet\fR
Unique name for a set of nodes. Must not overlap with any NodeName definitions.
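.IP

As an illustrative sketch (nodeset, feature and node names hypothetical), a
nodeset may be built from a feature, an explicit node list, or both, and then
referenced in a partition definition:
.nf
NodeSet=gpuns Feature=gpu
NodeSet=cpuns Nodes=tux[1\-16]
PartitionName=gpu Nodes=gpuns
.fi
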
.SH "PARTITION CONFIGURATION"
The partition configuration permits you to establish different job
limits or access controls for various groups (or partitions) of nodes.
Nodes may be in more than one partition, making partitions serve
as general purpose queues.
Jobs are allocated resources within a single partition.
Default values can be specified with a record in which
\fBPartitionName\fR is "DEFAULT".
The default entry values will apply only to lines following it in the
configuration file and the default values can be reset multiple times
in the configuration file with multiple entries where "PartitionName=DEFAULT".
The "PartitionName=" specification must be placed on every line
describing the configuration of partitions.
Each line where \fBPartitionName\fR is "DEFAULT" will replace or add to previous
default values and not reinitialize the default values.
A single partition name can not appear as a PartitionName value in more than
one line (duplicate partition name records will be ignored).
If a partition that is in use is deleted from the configuration and Slurm
is restarted or reconfigured (scontrol reconfigure), jobs using the partition
are canceled.
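As an illustrative sketch (partition and node names hypothetical), DEFAULT
records set values for the partition lines that follow them:
.nf
PartitionName=DEFAULT MaxTime=12:00:00 State=UP
PartitionName=batch Nodes=tux[1\-64] Default=YES
PartitionName=debug Nodes=tux[1\-4] MaxTime=30
.fi
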
\fBNOTE\fR: Put all parameters for each partition on a single line.
Each line of partition configuration information should
represent a different partition.
The partition configuration file contains the following information:

.TP
\fBAllocNodes\fR
Comma\-separated list of nodes from which users can submit jobs in the
partition.
Node names may be specified using the node range expression syntax
described above.
The default value is "ALL".
.IP

.TP
\fBAllowAccounts\fR
Comma\-separated list of accounts which may execute jobs in the partition.
The default value is "ALL". This list is hierarchical, meaning subaccounts
are included automatically.
\fBNOTE\fR: If AllowAccounts is used then DenyAccounts will not be enforced.
Also refer to DenyAccounts.
.IP

.TP
\fBAllowGroups\fR
Comma\-separated list of group names which may execute jobs in this
partition.
A user will be permitted to submit a job to this partition if
AllowGroups has \fBat least one\fR group associated with the user.
Jobs executed as user root or as user SlurmUser will be allowed to
use any partition, regardless of the value of AllowGroups. In addition, a Slurm
Admin or Operator will be able to view any partition, regardless of the value
of AllowGroups.
If user root attempts to execute a job as another user (e.g. using
srun's \-\-uid option), then the job will be subject to AllowGroups as if it
were submitted by that user.
By default, AllowGroups is unset, meaning all groups are allowed to use this
partition. The special value 'ALL' is equivalent to this.
Users who are not members of the specified groups will not see information
about this partition by default. However, this should not be treated as a
security mechanism, since job information will be returned if a user requests
details of a specific job.
.IP

.TP
\fBAllowQos\fR
Comma\-separated list of Qos which may execute jobs in the partition.
Jobs executed as user root can use any partition without regard to
the value of AllowQos.
The default value is "ALL".
\fBNOTE\fR: If AllowQos is used then DenyQos will not be enforced.
Also refer to DenyQos.
.IP

.TP
\fBAlternate\fR
Partition name of an alternate partition to be used if the state of this
partition is "DRAIN" or "INACTIVE".
.IP

.TP
\fBCpuBind\fR
If a job step request does not specify an option to control how tasks are bound
to allocated CPUs (by using \-\-cpu\-bind) and all nodes allocated to the job
do not have the same \fBCpuBind\fR option for the node, then the partition's
\fBCpuBind\fR option will control how tasks are bound to allocated resources.
The \fBTaskPluginParam\fR will be used as a last resort, with the default being
no binding. Supported values for CpuBind are \fBnone\fR, \fBsocket\fR,
\fBldom\fR (NUMA), \fBcore\fR and \fBthread\fR.
.IP

.TP
\fBDefault\fR
If this keyword is set, jobs submitted without a partition
specification will utilize this partition.
Possible values are "YES" and "NO".
The default value is "NO".
.IP

.TP
\fBDefaultTime\fR
Run time limit used for jobs that don't specify a value. If not set
then MaxTime will be used.
Format is the same as for MaxTime.
.IP

.TP
\fBDefCpuPerGPU\fR
Default count of CPUs allocated per allocated GPU. This value is used only
if the job specified neither \-\-cpus\-per\-task nor \-\-cpus\-per\-gpu.
.IP

.TP
\fBDefMemPerCPU\fR
Default real memory size available per allocated CPU in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerCPU\fR would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_tres\fR).
Also see \fBDefMemPerGPU\fR, \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are mutually
exclusive.
.IP

.TP
\fBDefMemPerNode\fR
Default real memory size available per allocated node in megabytes.
Used to avoid over\-subscribing memory and causing paging.
\fBDefMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
If not set, the \fBDefMemPerNode\fR value for the entire cluster will be used.
Also see \fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBMaxMemPerCPU\fR.
\fBDefMemPerCPU\fR, \fBDefMemPerGPU\fR and \fBDefMemPerNode\fR are mutually
exclusive.
.IP
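A hedged example (partition, nodes and values hypothetical): set a per\-CPU
memory default while capping it at a hard limit:
.nf
PartitionName=batch Nodes=tux[1\-64] DefMemPerCPU=2048 MaxMemPerCPU=4096
.fi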

.TP
\fBDenyAccounts\fR
Comma\-separated list of accounts which may not execute jobs in the partition.
By default, no accounts are denied access. This list is hierarchical,
meaning subaccounts are included automatically.
\fBNOTE\fR: If AllowAccounts is used then DenyAccounts will not be enforced.
Also refer to AllowAccounts.
.IP

.TP
\fBDenyQos\fR
Comma\-separated list of Qos which may not execute jobs in the partition.
By default, no QOS are denied access.
\fBNOTE\fR: If AllowQos is used then DenyQos will not be enforced.
Also refer to AllowQos.
.IP

.TP
\fBDisableRootJobs\fR
If set to "YES" then user root will be prevented from running any jobs
on this partition.
The default value will be the value of \fBDisableRootJobs\fR set
outside of a partition specification (which is "NO", allowing user
root to execute jobs).
.IP

.TP
\fBExclusiveTopo\fR
If set to "YES," then only one job may be run on a single topology segment.
This capability is also available on a per\-job basis by using the
\fB\-\-exclusive=topo\fR option.
.IP

.TP
\fBGraceTime\fR
Specifies, in units of seconds, the preemption grace time to be extended to
a job which has been selected for preemption.
The default value is zero, meaning no preemption grace time is allowed on
this partition.
Once a job has been selected for preemption, its end time is set to the current
time plus GraceTime. The job's tasks are immediately sent SIGCONT and SIGTERM
signals in order to provide notification of its imminent termination.
This is followed by the SIGCONT, SIGTERM and SIGKILL signal sequence upon
reaching its new end time. This second set of signals is sent to both the
tasks \fBand\fR the containing batch script, if applicable.
See also the global \fBKillWait\fR configuration parameter.
.br
\fBNOTE\fR: This parameter does not apply to \fBPreemptMode=SUSPEND\fR.
For setting the preemption grace time when using \fBPreemptMode=SUSPEND\fR,
see \fBPreemptParameters=suspend_grace_time\fR.
.IP

.TP
\fBHidden\fR
Specifies if the partition and its jobs are to be hidden by default.
Hidden partitions will by default not be reported by the Slurm APIs or commands.
Possible values are "YES" and "NO".
The default value is "NO".
Note that partitions that a user lacks access to by virtue of the
\fBAllowGroups\fR parameter will also be hidden by default.
.IP

.TP
\fBLLN\fR
Schedule resources to jobs on the least loaded nodes (based upon the number
of idle CPUs). This is generally only recommended for an environment with
serial jobs as idle resources will tend to be highly fragmented, resulting
in parallel jobs being distributed across many nodes.
Note that node \fBWeight\fR takes precedence over how many idle resources are
on each node.
Also see the \fBSelectTypeParameters\fR configuration parameter \fBCR_LLN\fR to
use the least loaded nodes in every partition.
.IP

.TP
\fBMaxCPUsPerNode\fR
Maximum number of CPUs on any node available to all jobs from this partition.
This can be especially useful to schedule GPUs. For example a node can be
associated with two Slurm partitions (e.g. "cpu" and "gpu") and the
partition/queue "cpu" could be limited to only a subset of the node's CPUs,
ensuring that one or more CPUs would be available to jobs in the "gpu"
partition/queue.
Also see \fBMaxCPUsPerSocket\fR.
.IP
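As an illustrative sketch of the use case above (names and counts
hypothetical), reserving 8 of each node's 64 CPUs for the "gpu" partition:
.nf
NodeName=tux[1\-16] CPUs=64 Gres=gpu:4
PartitionName=cpu Nodes=tux[1\-16] MaxCPUsPerNode=56
PartitionName=gpu Nodes=tux[1\-16]
.fi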

.TP
\fBMaxCPUsPerSocket\fR
Maximum number of CPUs on any socket available to all jobs from this
partition. This can be especially useful to schedule GPUs.
Also see \fBMaxCPUsPerNode\fR.
.IP
.TP
\fBMaxMemPerNode\fR
Maximum real memory size available per allocated node in a job allocation in
megabytes. Used to avoid over\-subscribing memory and causing paging.
\fBMaxMemPerNode\fR would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR) and
resources are over\-subscribed (\fBOverSubscribe=yes\fR or
\fBOverSubscribe=force\fR).
If not set, the \fBMaxMemPerNode\fR value for the entire cluster will be used.
Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR.
\fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive.
.IP

.TP
\fBMaxNodes\fR
Maximum count of nodes which may be allocated to any single job.
The default value is "UNLIMITED", which is represented internally as \-1.
.IP

.TP
\fBMaxTime\fR
Maximum run time limit for jobs.
Format is minutes, minutes:seconds, hours:minutes:seconds,
days\-hours, days\-hours:minutes, days\-hours:minutes:seconds or
"UNLIMITED".
Time resolution is one minute and second values are rounded up to
the next minute.
The job TimeLimit may be updated by root, SlurmUser or an Operator to a
value higher than the configured MaxTime after job submission.
.IP

.TP
\fBMinNodes\fR
Minimum count of nodes which may be allocated to any single job.
The default value is 0.
.IP

.TP
\fBNodes\fR
Comma\-separated list of nodes or nodesets which are associated with this
partition.
Node names may be specified using the node range expression syntax
described above. A blank list of nodes
(i.e. Nodes="") can be used if one wants a partition to exist,
but have no resources (possibly on a temporary basis).
A value of "ALL" is mapped to all nodes configured in the cluster.
.IP

.TP
\fBOverSubscribe\fR
Controls the ability of the partition to execute more than one job at a
time on each resource (node, socket or core depending upon the value
of \fBSelectTypeParameters\fR).
Possible values are \fBEXCLUSIVE\fR, \fBFORCE\fR, \fBYES\fR and \fBNO\fR;
the default value is \fBNO\fR.
For more information see the following web pages:
.na
\fIhttps://slurm.schedmd.com/cons_tres.html\fR
.br
\fIhttps://slurm.schedmd.com/cons_tres_share.html\fR
.br
\fIhttps://slurm.schedmd.com/gang_scheduling.html\fR
.br
\fIhttps://slurm.schedmd.com/preempt.html\fR
.ad
.IP
.RS
.TP 12
\fBEXCLUSIVE\fR
Allocates entire nodes to jobs even with \fBSelectType=select/cons_tres\fR
configured.
Jobs that run in partitions with \fBOverSubscribe=EXCLUSIVE\fR will have
exclusive access to all allocated nodes.
These jobs are allocated all CPUs and GRES on the nodes, but they are only
allocated as much memory as they ask for. This is by design to support gang
scheduling, because suspended jobs still reside in memory. To request all the
memory on a node, use \fB\-\-mem=0\fR at submit time.
.IP

.TP
\fBFORCE\fR
Makes all resources (except GRES) in the partition available for
oversubscription without any means for users to disable it.
May be followed with a colon and maximum number of jobs in
running or suspended state.
For example \fBOverSubscribe=FORCE:4\fR enables each node, socket or
core to oversubscribe each resource four ways.
Recommended only for systems using \fBPreemptMode=suspend,gang\fR.

\fBNOTE\fR: \fBOverSubscribe=FORCE:1\fR is a special case that is not exactly
equivalent to \fBOverSubscribe=NO\fR. \fBOverSubscribe=FORCE:1\fR disables
the regular oversubscription of resources in the same partition but it will
still allow oversubscription due to preemption or on overlapping partitions
with the same PriorityTier. Setting \fBOverSubscribe=NO\fR
will prevent oversubscription from happening in all cases.

\fBNOTE\fR: If using \fBPreemptType=preempt/qos\fR you can specify a value for
\fBFORCE\fR that is greater than 1. For example, \fBOverSubscribe=FORCE:2\fR
will permit two jobs per resource normally, but a third job can be started
only if done so through preemption based upon QOS.

\fBNOTE\fR: If \fBOverSubscribe\fR is configured to \fBFORCE\fR or \fBYES\fR
in your slurm.conf and the system is not configured to use preemption
(\fBPreemptMode=OFF\fR) accounting can easily grow to values greater than
the actual utilization. It may be common on such systems to get error messages
in the slurmdbd log stating: "We have more allocated time than is possible."
.IP

.TP
\fBYES\fR
Makes all resources (except GRES) in the partition available for sharing
upon request by the job.
Resources will only be over\-subscribed when explicitly requested by the
user using the "\-\-oversubscribe" option at job submit time.
.IP

.TP
\fBNO\fR
Selected resources are allocated to a single job. No resource will be
allocated to more than one job.

\fBNOTE\fR: Even if you are using \fBPreemptMode=suspend,gang\fR, setting
\fBOverSubscribe=NO\fR will disable preemption on that partition. Use
\fBOverSubscribe=FORCE:1\fR if you want to disable normal oversubscription
but still allow suspension due to preemption.
.RE
.IP
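A sketch combining gang scheduling with four\-way oversubscription (partition
and node names hypothetical; \fBPreemptMode\fR is assumed to be set globally):
.nf
PreemptMode=suspend,gang
PartitionName=shared Nodes=tux[1\-64] OverSubscribe=FORCE:4
.fi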

.TP
\fBOverTimeLimit\fR
Number of minutes by which a job can exceed its time limit before
being canceled.
Normally a job's time limit is treated as a \fIhard\fR limit and the job will be
killed upon reaching that limit.
Configuring \fBOverTimeLimit\fR will result in the job's time limit being
treated like a \fIsoft\fR limit.
Adding the \fBOverTimeLimit\fR value to the \fIsoft\fR time limit provides a
\fIhard\fR time limit, at which point the job is canceled.
This is particularly useful for backfill scheduling, which is based upon
each job's soft time limit.
If not set, the \fBOverTimeLimit\fR value for the entire cluster will be used.
May not exceed 65533 minutes.
A value of "UNLIMITED" is also supported.
.IP

.TP
\fBPartitionName\fR
Name by which the partition may be referenced (e.g. "Interactive").
This name can be specified by users when submitting jobs.
If the \fBPartitionName\fR is "DEFAULT", the values specified
with that record will apply to subsequent partition specifications
unless explicitly set to other values in that partition record or
replaced with a different set of default values.
Each line where \fBPartitionName\fR is "DEFAULT" will replace or add to previous
default values and not reinitialize the default values.
.IP

.TP
\fBPowerDownOnIdle\fR
If set to "YES" and power saving is enabled for the partition, then nodes
allocated from this partition will be requested to power down after being
allocated at least one job.
These nodes will not power down until they transition from COMPLETING to IDLE.
If set to "NO" then power saving will operate as configured for the partition.
The default value is "NO".
See <https://slurm.schedmd.com/power_save.html> for more information.
.IP

.SH "SEE ALSO"
.LP
\fBcgroup.conf\fR(5), \fBgetaddrinfo\fR(3),
\fBgetrlimit\fR(2), \fBgres.conf\fR(5), \fBgroup\fR(5), \fBhostname\fR(1),
\fBscontrol\fR(1), \fBslurmctld\fR(8), \fBslurmd\fR(8),
\fBslurmdbd\fR(8), \fBslurmdbd.conf\fR(5), \fBsrun\fR(1),
\fBspank\fR(8), \fBsyslog\fR(3), \fBtopology.conf\fR(5)