Student Cluster

This topic gives you a quick overview of the steps involved to run a job on the D-INFK student cluster.

Logging In

If your course or project uses Jupyter, go to https://student-jupyter.inf.ethz.ch and log in there. More information can be found here.

To run jobs directly, you can log in to

student-cluster.inf.ethz.ch

or to the actual login nodes student-cluster1.inf.ethz.ch and student-cluster2.inf.ethz.ch via secure shell (ssh). Both login nodes have the same host keys, with the following key fingerprints:

SHA256:uHitVtIWntg4nYTq7Rs83xhKl1x7XLPLagYJoRSppWs (ECDSA)
SHA256:HUDN67JaBd19Z67bobx4e1VDFG08KxhEzpFjINEdnZQ (ED25519)
SHA256:VSq8zIQ/Sg6UyibS0pDuEwyNDthrpihxpGfkf0JrxPY (RSA)
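
For example, the following standard OpenSSH commands (a sketch; replace the user name with your own ETH account) connect to the cluster and print the server's ED25519 key fingerprint so that you can compare it with the list above:

  # log in (placeholder user name)
  ssh your-eth-username@student-cluster.inf.ethz.ch

  # print the server's ED25519 key fingerprint and compare it with the list above
  ssh-keyscan -t ed25519 student-cluster.inf.ethz.ch | ssh-keygen -lf -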

When you log in you will be informed about the remaining time per course or project and how much free space you have in your home directory. Keep an eye on these numbers so that you do not run out of time or space before a deadline.

Running Jobs

Please read on here.

GPUs

The cluster currently contains the following NVIDIA GPUs:

Priority  Count  Type                     CUDA Cores  VRAM                       Compute Capability
1         32     RTX 5060 Ti              4608        16 GB                      12.0
2         32     RTX 2080 Ti              4352        11 GB                      7.5
3         192    GTX 1080 Ti              3584        11 GB                      6.1
4         6      GB10 (ASUS Ascent GX10)  6144        128 GB (shared with CPUs)  12.1

Jobs that do not request a specific GPU type are scheduled on nodes according to the priority above.
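
As an illustration only, if the scheduler follows standard Slurm syntax (see the Running Jobs documentation for the authoritative commands), the difference between the two cases might look as follows; the GPU type name below is a placeholder, not necessarily the identifier used on this cluster:

  # let the scheduler pick a GPU according to the priority above
  sbatch --gpus=1 job.sh

  # request one GPU of a specific type (type name is a placeholder)
  sbatch --gpus=rtx_2080_ti:1 job.sh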

Limits

The following general limits apply to the resources that you can use.

Home Directory

You have 20 GB of space in your home directory, independent of how many courses or projects you have.
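
To see how close you are to this limit, you can, for example, use standard Linux tools (a sketch):

  # total size of your home directory
  du -sh ~

  # sizes of the top-level directories inside it, largest last
  du -h --max-depth=1 ~ | sort -h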

Scratch Space

Your individual scratch space under /work/scratch/{your user name} has a hard limit of 100 GB and 100,000 files. Data stored there has a retention period that depends on how much space you use:

Used Space       Max Age
less than 10 GB  7 days
10 GB to 50 GB   2 days
more than 50 GB  1 day

The cleaning job that deletes data according to age starts at 23:00 every day. You are not allowed to keep data alive by automatically updating the modification time of files.
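
For example, you can check your current usage and spot files that are close to deletion with standard tools (a sketch; adjust the age to your retention period):

  # current size and number of files in your scratch space
  du -sh /work/scratch/$USER
  find /work/scratch/$USER -type f | wc -l

  # files not modified for more than 6 days
  find /work/scratch/$USER -type f -mtime +6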

Work Space

Some courses and projects provide additional work space for you or your team under /work/courses, /work/projects or /work/users, in which case your TAs or supervisors will inform you.

Jobs

Resources available for jobs have the following default limits:

                        GPU jobs  CPU jobs
Number of running jobs  1         1
GPUs per job            1         -
CPU cores per job       2         1, with time sharing
RAM per job             24 GB     4 to 8 GB, course or project specific
Space in /tmp           40 GB     4 GB
Queued jobs             2         -
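
As a rough sketch, assuming a Slurm-style batch script (the exact submission procedure is described on the Running Jobs page), a GPU job that stays within these defaults could be requested like this; train.py stands in for your own program:

  #!/bin/bash
  #SBATCH --gpus=1             # one GPU (default limit)
  #SBATCH --cpus-per-task=2    # two CPU cores (default limit for GPU jobs)
  #SBATCH --mem=24G            # 24 GB of RAM (default limit for GPU jobs)
  #SBATCH --time=01:00:00      # requested runtime, counted against your budget

  python train.py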

When you select a specific GPU for a job, the limits of the node type that contains this GPU apply:

                        RTX 5060 Ti  RTX 2080 Ti  GTX 1080 Ti  GB10
Number of running jobs  1            1            1            1
GPUs per job            1            1            1-4          1
CPU cores per job       3            4            2            20
RAM per job             24 GB        36 GB        24 GB        96 GB
Space in /tmp           40 GB        40 GB        40 GB        850 GB
Queued jobs             2            2            2            2
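
Multi-GPU jobs are only possible on the GTX 1080 Ti nodes. Under the same Slurm assumption as above, and again with a placeholder type name, such a request might look like this:

  # request four GPUs on a GTX 1080 Ti node (type name is a placeholder)
  sbatch --gpus=gtx_1080_ti:4 --cpus-per-task=2 --mem=24G job.sh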

Information about the specific nodes can be found here.

Time

The number of hours that you have, as well as the maximum runtime per job, depends on your courses and projects. Each course or project comes with its own budget, which is displayed when you log in to a login node as well as on the spawner page of JupyterHub.

For courses and projects with special needs, some of these limits may differ, in which case your TAs or supervisors will inform you.

Login Nodes

On the login nodes you also have the following restrictions:

  • 3 CPU cores
  • 24 GB of RAM

Expiration of Access

For courses, access to the cluster is disabled in the morning of the last Monday of the semester holidays. For BSc and MSc projects, access ends on the date requested by the supervisor.

All data in your home directory will also be deleted. Please copy away any data that you still need before this happens.
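
A simple way to do this is to copy the data to your own machine, for example with rsync run from your local computer (a sketch; replace the user name and target directory with your own):

  # copy your entire home directory from the cluster to a local backup folder
  rsync -avz your-eth-username@student-cluster.inf.ethz.ch:~/ ./student-cluster-backup/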
