
Slurm

This module contains the TorchX Slurm scheduler, which can be used to run TorchX components on a Slurm cluster.

class torchx.schedulers.slurm_scheduler.SlurmScheduler(session_name: str)[source]

Bases: torchx.schedulers.api.Scheduler, torchx.workspace.dir_workspace.DirWorkspace

SlurmScheduler is a TorchX scheduling interface to slurm. TorchX expects that slurm CLI tools are locally installed and job accounting is enabled.

Each app def is scheduled using a heterogeneous job via sbatch. Each replica of each role has a unique shell script generated with its resource allocations and args, and then sbatch is used to launch all of them together.

Logs are available in combined form via torchx log and the programmatic API, as well as in the job launch directory as slurm-<jobid>-<role>-<replica_id>.out. If TorchX is running in a different directory than the one the job was created from, the logs cannot be found.

Some of the config options passed to it are added as SBATCH arguments to each replica. See https://slurm.schedmd.com/sbatch.html#SECTION_OPTIONS for info on the arguments.

Slurm jobs inherit the currently active conda or virtualenv and run in the current working directory. This matches the behavior of the local_cwd scheduler.

For more info see the Slurm documentation: https://slurm.schedmd.com/

Example:

$ torchx run --scheduler slurm utils.echo --msg hello
slurm://torchx_user/1234
$ torchx status slurm://torchx_user/1234
$ less slurm-1234.out
...
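
The CLI run above can also be performed programmatically. The following is a minimal sketch assuming the torchx.runner API; the app definition, image path, and printed handle are placeholders for illustration rather than values produced by the Slurm scheduler itself:

from torchx import specs
from torchx.runner import get_runner

# Placeholder app def: a single replica that echoes "hello".
app = specs.AppDef(
    name="echo",
    roles=[
        specs.Role(
            name="echo",
            image="/tmp",          # placeholder; how image is interpreted depends on the scheduler/workspace
            entrypoint="echo",
            args=["hello"],
            num_replicas=1,
        )
    ],
)

runner = get_runner()              # assumes the default runner configuration
app_handle = runner.run(app, scheduler="slurm")
print(app_handle)                  # e.g. slurm://torchx_user/1234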

Config Options

    usage:
        [partition=PARTITION],[time=TIME],[nomem=NOMEM],[comment=COMMENT],[constraint=CONSTRAINT],[mail-user=MAIL-USER],[mail-type=MAIL-TYPE],[job_dir=JOB_DIR]

    optional arguments:
        partition=PARTITION (str, None)
            The partition to run the job in.
        time=TIME (str, None)
            The maximum time the job is allowed to run for. Formats:
            "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours",
            "days-hours:minutes" or "days-hours:minutes:seconds"
        nomem=NOMEM (bool, False)
            disables memory request to workaround https://github.com/aws/aws-parallelcluster/issues/2198
        comment=COMMENT (str, None)
            Comment to set on the slurm job.
        constraint=CONSTRAINT (str, None)
            Constraint to use for the slurm job.
        mail-user=MAIL-USER (str, None)
            User to mail on job end.
        mail-type=MAIL-TYPE (str, None)
            What events to mail users on.
        job_dir=JOB_DIR (str, None)
            The directory to place the job code and outputs. The
            directory must not exist and will be created. To enable log
            iteration, jobs will be tracked in ``.torchxslurmjobdirs``.
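
With the programmatic API, these options are passed as an ordinary mapping via the cfg argument (for example to the runner's run or to submit_dryrun). A small sketch with placeholder values, using only keys documented above:

cfg = {
    "partition": "compute",        # placeholder partition name
    "time": "1:00:00",             # hours:minutes:seconds
    "comment": "example torchx run",
    "job_dir": "/tmp/torchx_job",  # must not already exist; it will be created
}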
            

Compatibility

Feature                  Scheduler Support
Fetch Logs               ✔️
Distributed Jobs         ✔️
Cancel Job               ✔️
Describe Job             Partial support. SlurmScheduler will return job and replica
                         status but does not provide the complete original AppSpec.
Workspaces / Patching    If ``job_dir`` is specified the DirWorkspace will create a
                         new isolated directory with a snapshot of the workspace.
Mounts

describe(app_id: str) → Optional[torchx.schedulers.api.DescribeAppResponse][source]

Describes the specified application.

Returns

AppDef description or None if the app does not exist.
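
A short sketch of checking job status by Slurm job id; the session name and job id are placeholders:

from torchx.schedulers.slurm_scheduler import create_scheduler

scheduler = create_scheduler(session_name="demo")
resp = scheduler.describe("1234")          # Slurm job id as a string
if resp is None:
    print("job not found")
else:
    print(resp.state)                      # e.g. AppState.RUNNING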

log_iter(app_id: str, role_name: str, k: int = 0, regex: Optional[str] = None, since: Optional[datetime.datetime] = None, until: Optional[datetime.datetime] = None, should_tail: bool = False, streams: Optional[torchx.schedulers.api.Stream] = None) → Iterable[str][source]

Returns an iterator over the log lines of the ``k``th replica of the ``role``. The iterator ends once all qualifying log lines have been read.

If the scheduler supports time-based cursors for fetching log lines over custom time ranges, then the since and until fields are honored; otherwise they are ignored. Not specifying since and until is equivalent to getting all available log lines. If until is empty, then the iterator behaves like tail -f, following the log output until the job reaches a terminal state.

The exact definition of what constitutes a log is scheduler specific. Some schedulers may consider stderr or stdout as the log, others may read the logs from a log file.

Behaviors and assumptions:

  1. Produces undefined behavior if called on an app that does not exist. The caller should check that the app exists using exists(app_id) prior to calling this method.

  2. Is not stateful; calling this method twice with the same parameters returns a new iterator. Prior iteration progress is lost.

  3. Does not always support log-tailing. Not all schedulers support live log iteration (e.g. tailing logs while the app is running). Refer to the specific scheduler's documentation for the iterator's behavior.

     3.1. If the scheduler supports log-tailing, it should be controlled by the should_tail parameter.

  4. Does not guarantee log retention. It is possible that by the time this method is called, the underlying scheduler may have purged the log records for this application. If so this method raises an arbitrary exception.

  5. If should_tail is True, the method only raises a StopIteration exception when the accessible log lines have been fully exhausted and the app has reached a final state. For instance, if the app gets stuck and does not produce any log lines, then the iterator blocks until the app eventually gets killed (either via timeout or manually), at which point it raises a StopIteration. If should_tail is False, the method raises StopIteration when there are no more logs.

  6. Need not be supported by all schedulers.

  7. Some schedulers may support line cursors by supporting __getitem__ (e.g. iter[50] seeks to the 50th log line).

  8. Whitespace is preserved; each new line should include \n. To support interactive progress bars, the returned lines don't need to include \n, but they should then be printed without a newline to correctly handle \r carriage returns.

Parameters

streams – The IO output streams to select. One of: combined, stdout, stderr. If the selected stream isn't supported by the scheduler it will raise a ValueError.

Returns

An Iterator over log lines of the specified role replica

Raises

NotImplementedError – if the scheduler does not support log iteration
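
A sketch of tailing the logs of replica 0 of a role; the session name, role name, and job id are placeholders:

from torchx.schedulers.slurm_scheduler import create_scheduler

scheduler = create_scheduler(session_name="demo")
app_id = "1234"                            # placeholder Slurm job id

if scheduler.exists(app_id):               # check existence first, per behavior 1 above
    for line in scheduler.log_iter(app_id, role_name="echo", k=0, should_tail=True):
        print(line, end="")                # lines already include their trailing "\n"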

run_opts() → torchx.specs.api.runopts[source]

Returns the run configuration options expected by the scheduler. Basically a --help for the run API.
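
A small sketch of querying these options programmatically; the session name is a placeholder:

from torchx.schedulers.slurm_scheduler import create_scheduler

scheduler = create_scheduler(session_name="demo")
opts = scheduler.run_opts()
print(opts)    # the runopts object describing the config options documented above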

schedule(dryrun_info: torchx.specs.api.AppDryRunInfo[torchx.schedulers.slurm_scheduler.SlurmBatchRequest]) → str[source]

Same as submit except that it takes an AppDryRunInfo. Implementors are encouraged to implement this method rather than directly implementing submit since submit can be trivially implemented by:

dryrun_info = self.submit_dryrun(app, cfg)
return self.schedule(dryrun_info)
torchx.schedulers.slurm_scheduler.create_scheduler(session_name: str, **kwargs: Any) → torchx.schedulers.slurm_scheduler.SlurmScheduler[source]

class torchx.schedulers.slurm_scheduler.SlurmBatchRequest(cmd: List[str], replicas: Dict[str, torchx.schedulers.slurm_scheduler.SlurmReplicaRequest], job_dir: Optional[str])[source]

Holds parameters used to launch a slurm job via sbatch.

materialize() → str[source]

materialize returns the contents of the script that can be passed to sbatch to run the job.
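
Putting these pieces together, the following is a hedged sketch of the dryrun flow: build an AppDef, generate the batch request via submit_dryrun, inspect the sbatch script, then submit it. The app definition and cfg values are placeholders; dryrun_info.request holding the SlurmBatchRequest follows from the type annotation on schedule above:

from torchx import specs
from torchx.schedulers.slurm_scheduler import create_scheduler

scheduler = create_scheduler(session_name="demo")

app = specs.AppDef(
    name="echo",
    roles=[
        specs.Role(
            name="echo",
            image="/tmp",              # placeholder
            entrypoint="echo",
            args=["hello"],
            num_replicas=1,
        )
    ],
)

dryrun_info = scheduler.submit_dryrun(app, cfg={"partition": "compute"})
print(dryrun_info.request.materialize())   # the script that would be passed to sbatch
app_id = scheduler.schedule(dryrun_info)    # actually submits the job
print(app_id)                               # Slurm job id, e.g. "1234"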

class torchx.schedulers.slurm_scheduler.SlurmReplicaRequest(name: str, entrypoint: str, args: List[str], srun_opts: Dict[str, str], sbatch_opts: Dict[str, str], env: Dict[str, str])[source]

Holds parameters for a single replica running on slurm and can be materialized down to a bash script.

classmethod from_role(name: str, role: torchx.specs.api.Role, cfg: Mapping[str, Optional[Union[str, int, float, bool, List[str]]]]) → torchx.schedulers.slurm_scheduler.SlurmReplicaRequest[source]

from_role creates a SlurmReplicaRequest for the specific role and name.

materialize() → Tuple[List[str], List[str]][source]

materialize returns the sbatch and srun groups for this role. They should be combined using ``:`` per Slurm heterogeneous job groups.
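
A sketch of building a single replica request directly; the role below is a placeholder and an empty cfg is assumed to be acceptable here:

from torchx import specs
from torchx.schedulers.slurm_scheduler import SlurmReplicaRequest

role = specs.Role(
    name="trainer",
    image="/tmp",                   # placeholder
    entrypoint="python",
    args=["-c", "print('hello')"],
    num_replicas=1,
    resource=specs.Resource(cpu=4, gpu=0, memMB=8192),
)

req = SlurmReplicaRequest.from_role("trainer-0", role, cfg={})
sbatch_group, srun_group = req.materialize()
print(sbatch_group)                 # SBATCH arguments for this replica
print(srun_group)                   # srun command line for this replica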
