LIP6's clusters


Convergence

Hardware

Convergence is composed of one frontend and ten compute nodes:

Computer    | Model                 | Memory | Processor                             | Cores                  | GPUs
front       | DELL PowerEdge R650xs | 125 GB | 2 x Intel Xeon Silver 4310 @ 2.10 GHz | 24 cores / 48 threads  | -
node01      | DELL PowerEdge XE8545 | 2 TB   | 2 x AMD EPYC 7543 @ 2.80 GHz          | 64 cores / 128 threads | 4 x NVIDIA A100 80 GB SXM
node[02-06] | DELL PowerEdge R750xa | 2 TB   | 2 x Intel Xeon Gold 6330 @ 2.00 GHz   | 56 cores / 112 threads | 4 x NVIDIA A100 80 GB PCIe
node[07-10] | DELL PowerEdge R750xa | 1 TB   | 2 x Intel Xeon Gold 6330 @ 2.00 GHz   | 56 cores / 112 threads | 4 x NVIDIA A100 80 GB PCIe

On each node, 4 cores (8 threads) and 4 GB of RAM are reserved for the system and Slurm.

By default, when you reserve a GPU, Slurm allocates 4 cores (8 threads) and 64 GB of RAM.
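These defaults can be raised with the usual Slurm options. A hypothetical interactive request (the figures are illustrative, not cluster policy):

    # Ask for one GPU with more CPU and memory than the per-GPU default
    srun --gres=gpu:1 --cpus-per-task=16 --mem=128G --pty bash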

MIG (Multi-Instance GPU) is used to partition the A100 80 GB GPUs into smaller GPU instances, so each compute node presents several GPU types.

In Slurm reservations, you therefore have to specify the type of GPU you want.
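The exact GRES type names depend on the cluster's MIG configuration, so treat the type name below as an assumption; sinfo shows the types actually defined:

    # List the GPU types (GRES) advertised by each node
    sinfo -o "%N %G"
    # Reserve one GPU of a given type (the type name 'a100' is an assumption)
    srun --gres=gpu:a100:1 --pty bash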

Storage

/home (300 TB) is hosted on front (DELL ME5084 disk array, 12 Gb/s SAS, 28 x 16 TB HDD) and exported to the compute nodes through NFS.

Each compute node has a local storage space mounted at /scratch (1.6 TB on NVMe).
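A minimal batch-job sketch that stages data through the node-local scratch space; the input and result paths are hypothetical placeholders:

    #!/bin/bash
    #SBATCH --job-name=scratch-demo
    #SBATCH --gres=gpu:1
    # Stage input data from NFS-backed /home to the local NVMe scratch
    SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
    mkdir -p "$SCRATCH_DIR"
    cp -r "$HOME/dataset" "$SCRATCH_DIR/"    # hypothetical input path
    # ... run your computation against $SCRATCH_DIR ...
    # Copy results back to /home and clean up the local disk
    cp -r "$SCRATCH_DIR/results" "$HOME/"    # hypothetical output path
    rm -rf "$SCRATCH_DIR"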

Network

Access to front is through a 10 Gb/s Ethernet link.

Compute nodes and front are interconnected by a 200 Gb/s InfiniBand network (Mellanox QM8700).

Access to the cluster

To access Convergence, you need to establish an SSH connection to the cluster's frontend (front.convergence.lip6.fr).
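For example (replace the username with your own):

    ssh mylogin@front.convergence.lip6.fr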

LIP6 members automatically get access to Convergence.

Users who do not belong to LIP6 can request an account by writing to convergence@lip6.fr.

Use of the cluster

You can access compute resources through the Slurm resource manager (see https://slurm.schedmd.com/).
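A typical Slurm workflow uses the standard commands below; the script name is a placeholder:

    sbatch myjob.sh        # submit a batch script
    squeue -u $USER        # check your jobs in the queue
    scancel <jobid>        # cancel a job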

Software

Contact

Send any requests about Convergence to convergence@lip6.fr.

To get news about Convergence, subscribe to the convergence-news@listes.lip6.fr mailing list. Non-LIP6 users are automatically added to this list when they get an account.