diff --git a/docs/source/_static/noaacsp_cluster_1.png b/docs/source/_static/noaacsp_cluster_1.png deleted file mode 100644 index 3fdc0e68b8..0000000000 Binary files a/docs/source/_static/noaacsp_cluster_1.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_cluster_2.png b/docs/source/_static/noaacsp_cluster_2.png deleted file mode 100644 index 0fc3b2896d..0000000000 Binary files a/docs/source/_static/noaacsp_cluster_2.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_cluster_3.png b/docs/source/_static/noaacsp_cluster_3.png deleted file mode 100644 index bf3991b7ff..0000000000 Binary files a/docs/source/_static/noaacsp_cluster_3.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_cluster_4.png b/docs/source/_static/noaacsp_cluster_4.png deleted file mode 100644 index 9294d40bbe..0000000000 Binary files a/docs/source/_static/noaacsp_cluster_4.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_cluster_5.png b/docs/source/_static/noaacsp_cluster_5.png deleted file mode 100644 index 9fd7a96e40..0000000000 Binary files a/docs/source/_static/noaacsp_cluster_5.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_cluster_6.png b/docs/source/_static/noaacsp_cluster_6.png deleted file mode 100644 index 79287bc1e7..0000000000 Binary files a/docs/source/_static/noaacsp_cluster_6.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_instance_1.png b/docs/source/_static/noaacsp_instance_1.png deleted file mode 100644 index 0e06fe345b..0000000000 Binary files a/docs/source/_static/noaacsp_instance_1.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_instance_2.png b/docs/source/_static/noaacsp_instance_2.png deleted file mode 100644 index 7c74d32853..0000000000 Binary files a/docs/source/_static/noaacsp_instance_2.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_instance_3.png b/docs/source/_static/noaacsp_instance_3.png deleted file mode 100644 index f1031fb576..0000000000 Binary files a/docs/source/_static/noaacsp_instance_3.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_instance_4.png b/docs/source/_static/noaacsp_instance_4.png deleted file mode 100644 index f4aedb27d1..0000000000 Binary files a/docs/source/_static/noaacsp_instance_4.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_login.png b/docs/source/_static/noaacsp_login.png deleted file mode 100644 index fd2ea73144..0000000000 Binary files a/docs/source/_static/noaacsp_login.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_using_1.png b/docs/source/_static/noaacsp_using_1.png deleted file mode 100644 index 68550db050..0000000000 Binary files a/docs/source/_static/noaacsp_using_1.png and /dev/null differ diff --git a/docs/source/_static/noaacsp_using_2.png b/docs/source/_static/noaacsp_using_2.png deleted file mode 100644 index 4d1899f8f5..0000000000 Binary files a/docs/source/_static/noaacsp_using_2.png and /dev/null differ diff --git a/docs/source/conf.py b/docs/source/conf.py index 81f231f6b0..2c462a2675 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -25,6 +25,7 @@ # The full version, including alpha/beta/rc tags release = '0.1' +numfig = True # -- General configuration --------------------------------------------------- @@ -42,7 +43,7 @@ 'sphinx.ext.viewcode', 'sphinx.ext.githubpages', 'sphinx.ext.napoleon', - 'sphinxcontrib.bibtex' + 'sphinxcontrib.bibtex', ] bibtex_bibfiles = ['references.bib'] @@ -120,3 +121,4 @@ def setup(app): # Output file base name for 
HTML help builder. htmlhelp_basename = 'Global-Workflow' + diff --git a/docs/source/index.rst b/docs/source/index.rst index 637a4ef70a..e6513b743a 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -41,5 +41,6 @@ Table of Contents hpc.rst output.rst run.rst + wave.rst noaa_csp.rst errors_faq.rst diff --git a/docs/source/noaa_csp.rst b/docs/source/noaa_csp.rst index 66317efe92..98754335e2 100644 --- a/docs/source/noaa_csp.rst +++ b/docs/source/noaa_csp.rst @@ -4,12 +4,11 @@ Configuring NOAA Cloud Service Providers ######################################## -The NOAA Cloud Service Providers (CSP) support the forecast-only -configurations for the global workflow. Once a suitable CSP instance -and cluster is defined/created, the global workflow may be executed as -on the other platforms discussed in the previous sections. In order successfully execute the global-workflow, a suitable CSP cluster must -be created. Currently the global-workflow supports the following +The NOAA Cloud Service Providers (CSPs) support the forecast-only, +coupled, and GEFS configurations for global-workflow. +Once a suitable CSP instance and cluster is defined/created, +the global-workflow may be executed much as it is on the on-premises (on-prem) machines. +Currently, the global-workflow supports the following instance and storage types as a function of CSP and forecast resolution. @@ -24,177 +23,398 @@ resolution. - **Instance Type** - **Partition** - **File System** - * - Amazon Web Services Parallel Works - - C48 - - ``ATM`` - - ``c5.9xlarge (36 vCPUs, 72 GB Memory, amd64)`` + * - Amazon Web Services ParallelWorks + - C48, C96, C192, C384 + - ``ATM``, ``GEFS`` + - ``c5.18xlarge (72 vCPUs, 144 GB Memory, amd64)`` - ``compute`` - - ``/lustre`` + - ``/lustre``, ``/bucket`` + * - Azure ParallelWorks + - C48, C96, C192, C384 + - ``ATM``, ``GEFS`` + - ``Standard_F48s_v2 (48 vCPUs, 96 GB Memory, amd64)`` + - ``compute`` + - ``/lustre``, ``/bucket`` + * - GCP ParallelWorks + - C48, C96, C192, C384 + - ``ATM``, ``GEFS`` + - ``c3d-standard-60-lssd (60 vCPUs, 240 GB Memory, amd64)`` + - ``compute`` + - ``/lustre``, ``/bucket`` -Instructions regarding configuring the respective CSP instance and -cluster follows. +Instructions regarding configuring the respective CSP instances and +clusters follow. ********************* Login to the NOAA CSP ********************* -Log in to the `NOAA CSP `_ and into +Log in to the `NOAA cloud <https://noaa.parallel.works>`_, and into the resources configuration. The user should arrive at the following -screen. +screen as in :numref:`Figure %s <pw-home>`. +Click the blue box indicated by the red arrow to log in. + +.. _pw-home: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_login_1.png + :name: noaacsp_login_1 + :class: with-border + :align: center + + NOAA-PARALLELWORKS Home Page -Note that the ``Username or email`` query is case-sensitive. The user -will then be prompted for their respective RSA token key using the -same application use for the other RDHPCS machines (i.e., Hera, Jet, -etc.,). +As shown in :numref:`Figure %s <login2>`, fill in the ``Username / Email`` box with your username or NOAA email (usually in "FirstName.LastName" format). +Note that this field is case-sensitive. +Then enter the respective ``Pin + RSA`` combination using the same RSA token application used +for access to other RDHPCS machines (e.g., Hera, Gaea). -.. image:: _static/noaacsp_login.png +.. _login2: + +.. 
figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_login_2.png + :name: noaacsp_login_2 + :class: with-border + :align: center + + NOAA-SSO-PARALLELWORKS Login Page ******************************* Configure the NOAA CSP Instance ******************************* -Once logged into the NOAA CSP, navigate to the :red-text:`RESOURCES` section -and click the ``+ Add Resource`` button in the upper-right corner as -illustrated below. +Once logged into the NOAA CSP, navigate to the ``Marketplace`` section +in the left sidebar, as indicated by the red arrow in :numref:`Figure %s <pw-marketplace>`, and click it. +Scroll down to select "AWS EPIC Wei CentOS", circled in red. +Note that the current global-workflow still uses a CentOS-built spack-stack, +but it will be updated to Rocky 8 soon. + +.. _pw-marketplace: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_1.png + :name: noaacsp_instance_1 + :class: with-border + :align: center + + ParallelWorks Marketplace + +Next, click "Fork latest" as shown in the red circle in :numref:`Figure %s <fork-latest>`. + +.. _fork-latest: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_2.png + :name: noaacsp_instance_2 + :class: with-border + :align: center -.. image:: _static/noaacsp_instance_1.png + Fork Instance From Marketplace -Next, the mandatory attributes for the respective instance must be -defined as shown in the illustration below. +Please provide a unique name in the "New compute node" field for the instance +(see the box pointed to by the red arrow in :numref:`Figure %s <create-fork>`). +Best practices suggest one that is clear, concise, and relevant to the application. +Click ``Fork`` (in the red circle) to fork an instance. -.. image:: _static/noaacsp_instance_2.png +.. _create-fork: -The annotated attributes and their respective descriptions are as -follows. +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_3.png + :name: noaacsp_instance_3 + :class: with-border + :align: center + + Create the Fork -1. A unique name for the instance. Best practices suggest one that is - clear, concise, and relevant to the application. -2. A short description of the instance, i.e., ``This instance supports - this task.`` -3. Tag(s) describing and identifying the respective instance. These - allow for improved bookkeeping, especially when a user has multiple - or concurrent instance types. +Now that an instance is forked, it is time to configure the cluster. Follow these steps as shown in :numref:`Figure %s <config-cluster>`: -Next, the cluster is defined as shown in the following illustration. +#. Select a *Resource Account*; usually it is *NOAA AWS Commercial Vault*. +#. Select a *Group*, which will be something like ``ca-epic``, ``ca-sfs-emc``, etc. +#. Copy and paste your public key (e.g., ``.ssh/id_rsa.pub``, ``.ssh/id_dsa.pub`` from your laptop; see the sketch after the figure below). +#. Modify *User Bootstrap*. If you are not using the ``ca-epic`` group, please UNCOMMENT line 2. +#. Keep *Health Check* as it is. -.. image:: _static/noaacsp_instance_3.png +Click *Save Changes* at the top right, as shown in the red circle. -The NOAA Parallel Works initiative currently provides 2 CSPs for the -global-workflow; **AWS** (Amazon Web Services) and **Azure** -(Microsoft Azure). Existing clusters may also be modified. However -this is neither recommended or supported. +.. _config-cluster: -Finally, when satisfied with the CSP instance configure, click ``Add -Resource`` as illustrated below. +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_instance_4.png + :name: noaacsp_instance_4 + :class: with-border + :align: center -.. image:: _static/noaacsp_instance_4.png + Configure & Save the Instance
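+
+If you do not already have an SSH key pair on your laptop for step 3 above, the commands below are one way to
+create one and print the public key so it can be pasted into the form. This is a minimal sketch, not part of the
+PW documentation: the ``~/.ssh/id_rsa`` filename is only an assumption, and any existing key pair works as well.
+
+.. code-block:: console
+
+   # generate an RSA key pair (skip this if ~/.ssh/id_rsa.pub already exists)
+   ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
+
+   # print the public key so it can be copied into the public-key field of the cluster definition
+   cat ~/.ssh/id_rsa.pub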
+ +NOAA ParallelWorks (PW) currently provides 3 CSPs: +**AWS** (Amazon Web Services), **Azure** (Microsoft Azure), +and **GCP** (Google Cloud Platform). +Existing clusters may also be modified. +However, it is best practice to fork a Marketplace instance similar to your requirements +(rather than modifying an existing cluster). ****************************** -Configure the NOAA CSP Cluster +Add CSP Lustre Filesystem ****************************** -Navigate to the tab and locate the CSP instance configured in the -previous section and click on the link, `globalworkflowdemo` for this -example. +To run global-workflow on CSPs, we need to attach the ``/lustre`` filesystem as a run directory. +First, we need to add/define our ``/lustre`` filesystem. +To do so, navigate to the middle of the left side panel on the NOAA PW website and select *Lustre* +(see the red arrow in :numref:`Figure %s <select-lustre>`), and then click *Add Storage* +at the top right, as shown in the red circle. + +.. _select-lustre: -.. image:: _static/noaacsp_cluster_1.png +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_1.png + :name: noaacsp_lustre_1 + :class: with-border + :align: center + + Add Lustre Storage + +Select ``FSx`` for the AWS FSx ``/lustre`` filesystem, as shown in the red circle. -The respective CSP cluster maybe then be configured. The mandatory -configuration attributes are as follows. +Define ``/lustre`` with the steps in :numref:`Figure %s <define-lustre>`: -- Availability zone; -- Disk size and storage type(s); -- Available compute and resource partitions. +#. Provide a clear and meaningful *Resource name*, as shown by the first red arrow. +#. Provide a short sentence for *Description*, as shown by the second red arrow. +#. Choose **linux** for *Tag*, as shown by red arrow #3. -The following image describes the general settings for the respective -cluster. These attributes are specific to the user and the respective -user's group allocation. The right-most panel provides a breakdown of -the costs related to the requested compute and storage -resources. While there is a space to place an SSH key here, RDHPCS -recommends adding any SSH keys under the respective user's -``Account➡Authentication instead``. This will allow you to connect -from another machine instead of using the Parallel Works web terminal. +Click *Add Storage*, as shown in the red box at the top right corner. -.. image:: _static/noaacsp_cluster_2.png +This will create a ``/lustre`` filesystem template, as shown in :numref:`Figure %s <define-lustre>`. + +.. _define-lustre: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_2.png + :name: noaacsp_lustre_2 + :class: with-border + :align: center + + Define Lustre Attributes -The following image describes the controller settings for a cluster -created for a C48 atmosphere forecast-only configuration. Here the -user must define the instance type (see the table above), the number -of image disks and the image disk sizes. +After creating the template, we need to fill in information for this ``/lustre`` filesystem. 
+To do so, go to the NOAA PW website, and click *Lustre* on the left side panel, as +indicated by red arrow 1 in :numref:`Figure %s <check-lustre>`. Then select the filesystem defined by *Resource name* in :numref:`Figure %s <define-lustre>` above, +as shown in the red box. Here, the user can delete this resource if it is not needed by +clicking the trash can (indicated by red arrow 2 in :numref:`Figure %s <check-lustre>`). + +.. _check-lustre: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_3.png + :name: noaacsp_lustre_3 + :class: with-border + :align: center + + Show Lustre on the PW page -By clicking the filesystem in the red box of the image above, +users will be led to the ``/lustre`` definition page. -.. image:: _static/noaacsp_cluster_3.png +Then follow the steps illustrated in :numref:`Figure %s <config-lustre>` below: -Next the partitions for the cluster may be defined. A partition -configuration for the aforementioned C48 atmosphere forecast-only -application is illustrated in the figure below. Note that the instance -type beneath ``Controller Settings`` and ``Partitions`` must be -identical. Other configurations are not supported by the -global-workflow team. Once the partitions are configured, click the -``+ Add Partition`` button in the upper-right corner. +#. Choose a size in the *Storage Capacity (GB)* box, as indicated by red arrow 1. + There is a minimum of 1200 GB for AWS. For the C48 ATM/GEFS case this will be enough. + For the SFS-C96 case or the C768 ATM/S2S case, it should probably be increased to 12000 GB. +#. For *File System Deployment*, choose "SCRATCH_2" for now, as indicated by red arrow 2. + Do not use SCRATCH_1, as it is used for testing by PW. +#. Choose **NONE** for *File System Compression*, as indicated by red arrow 3. + Only choose LZ4 if you understand its implications. +#. Leave *S3 Import Path* and *S3 Export Path* blank for now. +#. Click **Save Changes** in the red circle to save the definition/changes made. -.. image:: _static/noaacsp_cluster_4.png +.. _config-lustre: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_lustre_4.png + :name: noaacsp_lustre_4 + :class: with-border + :align: center + + Defining the Lustre Filesystem Capacity + -For the storage do be allocated for the global-workflow application it -is suggested that the ``Mount Point`` be ``/lustre``. Once the storage -has been configured, click the ``+ Add Attached Storage`` button in -the upper-right corner. This is illustrated in the following image. +For the storage to be allocated for the global-workflow application, +it is suggested that the ``Mount Point`` be ``/lustre``. Once the storage +has been configured, follow the steps below to attach the ``/lustre`` filesystem. + +****************************** +Attach CSP Lustre Filesystem +****************************** + +Now we need to attach the defined filesystem to our cluster. +Go back to the NOAA PW website (https://noaa.parallel.works), and click *Clusters* +as shown in :numref:`Figure %s <select-cluster>` below, then select the cluster you made (e.g., the ``AWS EPIC Wei CentOS example`` cluster, as shown in the red box below). +Note that one can remove/delete this cluster if it is no longer needed by +clicking the trash can shown in the red circle at right. + +.. _select-cluster: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_1.png + :name: noaacsp_filesystem_1 + :class: with-border + :align: center -.. 
image:: _static/noaacsp_cluster_5.png + Add Attached Filesystems -Finally, the following illustrates a JSON version of the cluster -configuration created from the steps above. When opening issues -related to the NOAA CSP global-workflow applications please include -the JSON content. +When we get to the cluster page, click *Definition* in the top menu, as shown +in the red box in :numref:`Figure %s <add-filesystem>`. Then we can attach the defined filesystems. +When finished, remember to click *Save Changes* to save the changes. -.. image:: _static/noaacsp_cluster_6.png +.. _add-filesystem: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_2.png + :name: noaacsp_filesystem_2 + :class: with-border + :align: center + + Add Attached ``/lustre`` and/or ``/bucket`` Filesystems + +Scroll down to the bottom as shown in :numref:`Figure %s <click-add-fs>`, and click *Add Attached Filesystems* as in the red circle. + +.. _click-add-fs: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_3.png + :name: noaacsp_filesystem_3 + :class: with-border + :align: center + + Attach ``/lustre`` and/or ``/bucket`` Filesystems + +After clicking *Add Attached Filesystems*, go to *Attached Filesystems settings*, and follow the steps listed here, +which are also shown in :numref:`Figure %s <change-settings>`. + +#. In the *Storage* box, select the Lustre filesystem defined above, as indicated by red arrow 1. +#. In the *Mount Point* box, name it ``/lustre`` (the common and default choice), as indicated by red arrow 2. + If you choose a different name, make sure it matches the name used in the global-workflow setup step. + +If you have an S3 bucket, it can be attached as follows: + +#. In the *Storage* box, select the bucket you want to use, as indicated by red arrow 3. +#. In the *Mount Point* box, name it ``/bucket`` (the common and default choice), as indicated by red arrow 4. + +.. _change-settings: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_filesystem_4.png + :name: noaacsp_filesystem_4 + :class: with-border + :align: center + + Adjust Attached ``/lustre`` and/or ``/bucket`` Filesystem Settings + +Always remember to click *Save Changes* after making any changes to the cluster. ************************** Using the NOAA CSP Cluster ************************** -To activate the cluster, click the button circled in -:red-text:red. The cluster status is denoted by the color-coded button -on the right. The amount of time required to start the cluster is -variable and not immediate and may take several minutes for the -cluster to become. +To activate the cluster, click *Clusters* on the left panel of the NOAA PW website shown in :numref:`Figure %s <activate-cluster>`, +as indicated by the red arrow. Then click the *Sessions* button in the red square, and click the power +button in the red circle. The cluster status is denoted by the color-coded button +on the right: red means stopped; orange means requested; green means active. The amount of time required to start +the cluster varies and is not immediate; it may take several minutes (often 10-20) for the cluster to become active. + +.. _activate-cluster: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_1.png + :name: noaacsp_using_1 + :class: with-border + :align: center + + Activate the Cluster + +When the cluster is activated, users will see the following indicators of success, as seen in :numref:`Figure %s <cluster-on>`: -.. 
image:: _static/noaacsp_using_1.png + +#. A green dot on the left beside the AWS logo means that the cluster is active (indicated by red arrow 1). +#. A green dot on the right labeled "active" means that the cluster is active (indicated by red arrow 2). +#. A green power button means the cluster is active (indicated by red arrow 3). +#. Clicking the clipboard icon (blue square with an arrow inside), indicated by red arrow 4, will copy the cluster's IP address to the clipboard. Then, + you can open a laptop terminal window (such as xterm), and run ``ssh username@the-ip-address``. This will connect you + to the AWS cluster, and you can do your work there. +#. Alternatively, users can click directly on ``username@the-ip-address``, and a PW web terminal will appear at the + bottom of the website. Users can work through this terminal to use their AWS cluster. -For instances where a NOAA CSP cluster does not initialize, useful -output can be found beneath the ``Logs`` section beneath the -``Provision`` tab as illustrated below. Once again, when opening -issues related to the NOAA CSP cluster initialization please include -this information. +Please note that as soon as the cluster is activated, AWS/PW starts charging you for use of the cluster. +As this cluster is exclusively yours, AWS keeps charging you as long as the cluster is active. +For running global-workflow, one needs to keep the cluster active while any Rocoto jobs are running, +because Rocoto relies on ``crontab``, which requires the cluster to stay active; otherwise the cron jobs will be terminated. -.. image:: _static/noaacsp_using_2.png +.. _cluster-on: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_2.png + :name: noaacsp_using_2 + :class: with-border + :align: center + + Checking the Cluster Status + +After finishing your work on the AWS cluster, you should terminate/stop the cluster, unless you have reasons to keep it active. +To stop/terminate the cluster, go to the cluster session, and click the green power button as shown in :numref:`Figure %s <stop-cluster>`. +A window will pop up; click the red *Turn Off* button to switch off the cluster. + +.. _stop-cluster: + +.. figure:: https://raw.githubusercontent.com/wiki/NOAA-EMC/global-workflow/images/noaacsp_using_3.png + :name: noaacsp_using_3 + :class: with-border + :align: center + + Terminating the Cluster *************************** Running the Global Workflow *************************** -The global-workflow configuration currently requires that all initial -conditions, observations, and fixed-files, are staged in the -appropriate paths prior to running the global-workflow. As suggested -above, it is strongly recommended the the user configure their -respective experiments to use the ``/lustre`` file system for the -``EXPDIR`` and ``ROTDIR`` contents. The ``/contrib`` file system is -suitable for compiling and linking the workflow components required of -the global-workflow. +Assuming you have an AWS cluster running, after logging in to the cluster via ``ssh`` from your laptop terminal +or through the PW web terminal, you can start to clone, compile, and run global-workflow. -The software stack supporting the ``develop`` branch of the -global-workflow is provided for the user and is located beneath -``/contrib/emc_static/spack-stack``. The modules required for the -global-workflow execution may be loaded as follows.
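+
+Before starting, it may help to confirm that the cluster is reachable and that the filesystems described above are
+mounted. The commands below are a minimal sketch: ``<cluster-ip>`` stands for the IP address copied from the
+*Sessions* page, ``First.Last`` for your username, and the exact mount points depend on what was attached to the
+cluster.
+
+.. code-block:: console
+
+   # from your laptop terminal: connect to the active cluster
+   ssh First.Last@<cluster-ip>
+
+   # on the cluster: verify that the permanent and run filesystems are mounted
+   df -h /contrib /lustre
+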
+#. Clone global-workflow (assumes you have set up access to GitHub): -.. code-block:: bash + + .. code-block:: console - user@host:$ module unuse /opt/cray/craype/default/modulefiles - user@host:$ module unuse /opt/cray/modulefiles - user@host:$ module use /contrib/emc_static/spack-stack/miniconda/modulefiles/miniconda - user@host:$ module load py39_4.12.0 - user@host:$ module load rocoto/1.3.3 + cd /contrib/$USER # you should have a directory under /contrib, where permanent files are saved + git clone --recursive git@github.com:NOAA-EMC/global-workflow.git global-workflow + # or clone the development fork maintained by EPIC: + git clone --recursive git@github.com:NOAA-EPIC/global-workflow-cloud.git global-workflow-cloud -The execution of the global-workflow should now follow the same steps -as those for the RDHPCS on-premise hosts. +#. Compile global-workflow: + + .. code-block:: console + + cd /contrib/$USER/global-workflow # or cd /contrib/$USER/global-workflow-cloud, depending on which one you cloned + cd sorc + build_all.sh # or a similar command to compile for GEFS or other configurations + link_workflow.sh # run after build_all.sh finishes successfully +#. As users may define a very small cluster as the controller, one may use the script below to compile on a compute node instead. + Save this script in a file, say ``com.slurm``, and submit it with the command ``sbatch com.slurm``: + + .. code-block:: console + + #!/bin/bash + #SBATCH --job-name=compile + #SBATCH --account=$USER + #SBATCH --qos=batch + #SBATCH --partition=compute + #SBATCH -t 01:15:00 + #SBATCH --nodes=1 + #SBATCH -o compile.%J.log + #SBATCH --exclusive + + gwhome=/contrib/Wei.Huang/src/global-workflow-cloud # Change this to your own "global-workflow" source directory + cd ${gwhome}/sorc + source ${gwhome}/workflow/gw_setup.sh + #build_all.sh + build_all.sh -w + link_workflow.sh + +#. Run the global-workflow C48 ATM test case (assumes the user has the ``/lustre`` filesystem attached): + + .. code-block:: console + + cd /contrib/$USER/global-workflow + + HPC_ACCOUNT=${USER} pslot=c48atm RUNTESTS=/lustre/$USER/run \ + ./workflow/create_experiment.py \ + --yaml ci/cases/pr/C48_ATM.yaml + + cd /lustre/$USER/run/EXPDIR/c48atm + crontab c48atm.crontab + +EPIC has copied the C48 and C96 ATM, GEFS, and some other input data to AWS, and the current code is set up to use those data. +If users want to run their own case, they need to adjust the initial-condition (IC) paths and related settings accordingly. +The execution of the global-workflow should now follow the same steps +as those for the RDHPCS on-premises hosts. diff --git a/docs/source/references.bib b/docs/source/references.bib new file mode 100644 index 0000000000..e69de29bb2