Working on Digital Research Alliance of Canada Clusters

The Digital Research Alliance of Canada (the Alliance, formerly Compute Canada) is the organization that coordinates access to High Performance Computing (HPC) resources across Canada.

Before you can use the Alliance compute clusters and high capacity storage resources, you need to create a Digital Research Alliance of Canada account (see Create Digital Research Alliance of Canada Account below). You only need to do that once when you join the MOAD group, no matter how many different compute clusters you end up working on.

For each cluster that you work on, you need to do some initial setup. Our Alliance allocation, which gives us higher than default priority for compute as well as larger project and nearline storage allocations, is on the graham.alliancecan.ca cluster located in Waterloo. The instructions below are for setup on graham.

We also have default allocations available on:

  • beluga.alliancecan.ca, located in Montréal.

  • cedar.alliancecan.ca, located in Burnaby.

  • narval.alliancecan.ca, located in Montréal.

Our jobs are generally not large enough to qualify to run on the niagara.alliancecan.ca cluster located in Toronto.

Create Digital Research Alliance of Canada Account

Digital Research Alliance of Canada (the Alliance) is the national network of shared advanced research computing (ARC) and storage that we use for most of our ocean model calculations. The BC DRI Group is the regional organization that coordinates the British Columbia partnership with the Alliance.

To access Alliance compute clusters and storage you need to register an Alliance account at https://ccdb.alliancecan.ca/account_application. To do so you will need an eoas.ubc.ca email address, and Susan’s Alliance CCRI code.

Note

When prompted to select an institution, choose BC DRI Group: University of British Columbia.

There is detailed information about the account creation process, and step-by-step instructions (with screenshots) for completing it, at https://alliancecan.ca/en/services/advanced-research-computing/account-management/apply-account

Initial Setup on graham.alliancecan.ca

These are the setup steps that you need to do when you start using graham for the first time:

  1. Add an entry for graham to your $HOME/.ssh/config file. This will enable you to connect to graham by typing ssh graham instead of having to type ssh your-user-id@graham.alliancecan.ca.

    Create a $HOME/.ssh/config file on your laptop or a Waterhole machine containing the following (or append the following if $HOME/.ssh/config already exists):

    Host graham
      HostName  graham.alliancecan.ca
      User  userid
      ForwardAgent  yes
    

    where userid is your Alliance user id.

    The first two lines establish graham as a short alias for graham.alliancecan.ca so that you can just type ssh graham.

    The third line sets the user id to use on graham, which is convenient if it differs from your EOAS user id.

    The last line enables agent forwarding so that authentication requests received on the remote system are passed back to your laptop or Waterhole machine for handling. That means that connections to GitHub (for instance) in your session on graham will be authenticated by your laptop or Waterhole machine. So, after you type your ssh key passphrase into your laptop or Waterhole machine once, you should not have to type it again until you log off and log in again.
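
    You can verify that agent forwarding is working with ssh-add. This is a minimal check, and it assumes that your key is already loaded into the agent on your laptop or Waterhole machine:

    $ ssh graham
    $ ssh-add -l

    If forwarding is working, ssh-add -l on graham lists the key(s) from your local agent; if it is not, ssh-add reports that it cannot open a connection to your authentication agent.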

  2. Add an entry for the data transfer nodes on graham to your $HOME/.ssh/config file. The data transfer nodes are optimized for file transfers to and from the cluster.

    Host graham-dtn
      HostName  gra-dtn1.alliancecan.ca
      User  userid
      ForwardAgent  no
    

    where userid is your Alliance user id.
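
    As an example of using the data transfer nodes, here is a hypothetical rsync command (the directory names are placeholders) that downloads run results from graham to your local machine via the graham-dtn alias:

    $ rsync -av graham-dtn:/scratch/userid/run-results/ ./run-results/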

  3. Follow the Alliance docs to install your ssh public key into the CCDB system so that it will be available to give you access to all of the Alliance HPC clusters. On Mac or Linux your public key is stored in $HOME/.ssh/id_ed25519.pub and you can display it so that you can copy/paste it to CCDB with:

    cat $HOME/.ssh/id_ed25519.pub
    

    On Windows you can do that with:

    type %USERPROFILE%\.ssh\id_ed25519.pub
    

    Alternatively, you can open your id_ed25519.pub in VS Code and copy it from there to the CCDB page.

    Confirm that you can ssh into graham with

    $ ssh graham
    

    No userid, password, or lengthy host name required! :-)

  4. Create a PROJECT environment variable that points to our allocated storage on the /project/ file system. To ensure that PROJECT is set correctly every time you sign in to graham, use an editor to add the following line to your $HOME/.bash_profile file:

    export PROJECT=$HOME/projects/def-allen
    

    Exit your session on graham with exit, then ssh in again, and confirm that PROJECT is set correctly with:

    $ echo $PROJECT
    

    The output should be:

    /home/dlatorne/projects/def-allen
    

    except with your Alliance userid instead of Doug’s.

  5. Set the permissions in your $PROJECT/$USER/ directory so that other members of the def-allen group have access, and so that the setgid bit causes new files and directories created in the tree to inherit the def-allen group from the top-level directory:

    $ chmod g+rwxs $PROJECT/$USER
    $ chmod o+rx $PROJECT/$USER
    

    Check the results of those operations with ls -al $PROJECT/$USER. They should look like:

    $ ls -al $PROJECT/$USER
    total 90
    drwxrwsr-x  3 dlatorne def-allen 33280 Apr  9 15:04 ./
    drwxrws--- 16 allen    def-allen 33280 Apr  8 18:14 ../
    

    with your user id instead of Doug’s in the ./ line.

  6. Set the group and permissions in your $SCRATCH/ directory so that other members of the def-allen group have access, and so that the setgid bit causes new files and directories created in the tree to inherit the def-allen group from the top-level directory:

    $ chgrp def-allen $SCRATCH
    $ chmod g+rwxs $SCRATCH
    $ chmod o+rx $SCRATCH
    

    Check the results of those operations with ls -al $SCRATCH. They should look like:

    $ ls -al $SCRATCH
    total 3015
    drwxrwsr-x    26 dlatorne def-allen   41472 Apr 26 17:23 ./
    drwxr-xr-x 16366 root     root      2155008 Apr 29 15:31 ../
    

    with your user id instead of Doug’s in the ./ line.

  7. Follow the Git Configuration docs to create your $HOME/.gitconfig Git configuration file.
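
    For orientation, a minimal $HOME/.gitconfig looks something like the sketch below; the name and email values here are placeholders, and the Git Configuration docs are the authoritative reference:

    [user]
        name = Your Name
        email = you@eoas.ubc.ca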

  8. Alliance clusters use the module load command to load software components. On graham the module loads that are required to build and run NEMO are:

    module load StdEnv/2020
    module load netcdf-fortran-mpi/4.6.0
    module load perl/5.30.2
    

    You can manually load the modules each time you log in, or you can add the above lines to your $HOME/.bashrc file so that they are automatically loaded upon login.
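
    If you add the lines to $HOME/.bashrc, you can confirm that they take effect by logging out, logging in again, and running:

    $ module list

    The StdEnv, netcdf-fortran-mpi, and perl modules should appear among the currently loaded modules.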

  9. Follow the Create a Workspace and Clone the Repositories docs to set up your $PROJECT/$USER/MEOPAR/ workspace and clone the repositories required to build and run NEMO.

  10. Follow the Install the Command Processor Packages docs to install the SalishSeaCast NEMO Command Processor and its dependencies in a conda environment.

  11. Follow the MEOPAR on graham docs to build XIOS-2.

  12. Follow the Compile NEMO-3.6 docs to build NEMO-3.6.

  13. If you are using VS Code as your editor, consider setting up the Fortran Language Server (fortls).