This is your new Kedro project, which was generated using Kedro 0.18.12.
Take a look at the Kedro documentation to get started.
In order to get the best out of the template:
- Don't remove any lines from the .gitignore file we provide
- Make sure your results can be reproduced by following a data engineering convention
- Don't commit data to your repository
- Don't commit any credentials or your local configuration to your repository. Keep all your credentials and local configuration in conf/local/
- Remove default modules:
module --force purge
- Activate the StdEnv/2020 module, which is a prerequisite for gcc:
module load StdEnv/2020
- Activate the following modules:
module load gcc/9.3 python/3.11.5 cuda/11.8.0 arrow/12.0.1 nodejs rust/1.70.0 scipy-stack/2023b
before activating your environment. A module list should then look like this:
Currently Loaded Modules:
1) CCconfig 5) mii/1.1.2 9) python/3.8.10 (t) 13) gdrcopy/2.3
2) gentoo/2020 (S) 6) gcccore/.9.3.0 (H) 10) cudacore/.11.7.0 (H,t) 14) ucx/1.8.0
3) imkl/2020.1.217 (math) 7) gcc/9.3.0 (t) 11) cuda/11.7 (t) 15) libfabric/1.15.1
4) StdEnv/2020 (S) 8) libffi/3.3 12) arrow/9.0.0 (t) 16) openmpi/4.0.3 (m)
- Activate your environment:
source activate <env_name>
- Install the dependencies:
pip install xarray optuna wandb comet_ml lightning openpyxl loguru tqdm argparse hiplot plotly matplotlib umap-learn networkx ray[train,tune,data]
- For FAISS, I build it from source (see the sketch after this list).
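The exact FAISS build steps are not recorded here; as a rough sketch, a typical from-source GPU build with Python bindings (following the upstream FAISS install instructions) looks like the following, assuming the gcc and cuda modules above are loaded and your virtual environment is active:
# Fetch FAISS and configure a GPU-enabled build with Python bindings.
git clone https://github.com/facebookresearch/faiss.git
cd faiss
cmake -B build . -DFAISS_ENABLE_GPU=ON -DFAISS_ENABLE_PYTHON=ON -DCMAKE_BUILD_TYPE=Release
# Compile the C++ library and the SWIG Python wrapper, then install into the active environment.
make -C build -j faiss
make -C build -j swigfaiss
cd build/faiss/python && pip install .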
Next, if you want to run the Jupyter notebook, you can do so by requesting an interactive allocation. Here we run it using kedro jupyter lab to allow contextualization of the project and configuration:
salloc --time=02:28:80 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=8G --account=def-mtarailo kedro jupyter lab --ip $(hostname -f) --no-browser
salloc --time=1:0:0 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=4G --account=def-mtarailo srun $VIRTUAL_ENV/bin/jupyterlab.sh
To get allocation with GPU:
salloc --time=02:59:00 --nodes=1 --ntasks=1 --mem=32G --gres=gpu:v100:1 --constraint=cascade,v100 --account=def-mtarailo srun $VIRTUAL_ENV/bin/jupyterlab.sh
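The Jupyter server started on a compute node is not reachable directly from your laptop; a common approach on Compute Canada is an SSH tunnel from your local machine. The node name (gra123), port (8888), and username below are placeholders, so adjust them to match the URL printed by Jupyter:
# From your local machine: forward local port 8888 to the compute node running Jupyter.
ssh -N -f -L 8888:gra123:8888 username@graham.computecanada.ca
# Then open the http://localhost:8888/?token=... URL printed by the Jupyter server.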
The data preparation part currently relies on old modules and libraries. To run it, you need to activate the following modules:
module load StdEnv/2020
module load gcc/9.3.0 python/3.8.10 cuda/11.7 arrow/9.0.0
source ~/jupyter_py3/bin/activate
salloc --time=8:28:80 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=12G --account=def-mtarailo kedro jupyter lab --ip $(hostname -f) --no-browser
Create a virtual environment with Python 3.8.10.
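A minimal sketch of creating that environment on the cluster, assuming the standard Compute Canada virtualenv workflow and reusing the ~/jupyter_py3 path from above:
module load python/3.8.10
# --no-download makes virtualenv use the wheels shipped with the cluster's Python module.
virtualenv --no-download ~/jupyter_py3
source ~/jupyter_py3/bin/activate
pip install --no-index --upgrade pip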
Declare any dependencies in src/requirements.txt for pip installation and src/environment.yml for conda installation.
To install them, run:
pip install -r src/requirements.txt
I used the following commands to install PyTorch-related packages on Compute Canada's Graham cluster:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install torch_geometric pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.0.0+cu118.html
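As a quick sanity check that the CUDA build of PyTorch was picked up (run this on a GPU node; the one-liner is just a suggested way to verify, not part of the project):
# Should print the torch version and True when a GPU is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"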
You can run your Kedro project with:
kedro run
If you are on a Compute Canada cluster and want to use the SLURM runner, you can run the pipeline with the following command:
kedro run --runner="modspy_data.runner.SLURMRunner"
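For non-interactive runs, you can also wrap the same command in a batch script. The resource requests and wall time below are illustrative placeholders, not project requirements; only the account, module names, and activation step are taken from this README:
#!/bin/bash
#SBATCH --time=02:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=8G
#SBATCH --account=def-mtarailo

# Recreate the module environment described above, then activate the project environment.
module --force purge
module load StdEnv/2020
module load gcc/9.3 python/3.11.5 cuda/11.8.0 arrow/12.0.1 nodejs rust/1.70.0 scipy-stack/2023b
source activate <env_name>

kedro run --runner="modspy_data.runner.SLURMRunner"
Save it as, for example, run_pipeline.sh and submit it with sbatch run_pipeline.sh.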
Have a look at the file src/tests/test_run.py for instructions on how to write your tests. You can run your tests as follows:
kedro test
To configure the coverage threshold, go to the .coveragerc file.
To generate or update the dependency requirements for your project:
kedro build-reqs
This will pip-compile the contents of src/requirements.txt into a new file src/requirements.lock. You can see the output of the resolution by opening src/requirements.lock.
After this, if you'd like to update your project requirements, please update src/requirements.txt and re-run kedro build-reqs.
Further information about project dependencies can be found in the Kedro documentation.
Note: Using kedro jupyter or kedro ipython to run your notebook provides these variables in scope: catalog, context, pipelines and session.
Jupyter, JupyterLab, and IPython are already included in the project requirements by default, so once you have run pip install -r src/requirements.txt you will not need to take any extra steps before you use them.
To use Jupyter notebooks in your Kedro project, you need to install Jupyter:
pip install jupyter
After installing Jupyter, you can start a local notebook server:
kedro jupyter notebook
To use JupyterLab, you need to install it:
pip install jupyterlab
You can also start JupyterLab:
kedro jupyter lab
And if you want to run an IPython session:
kedro ipython
You can move notebook code over into a Kedro project structure using a mixture of cell tagging and Kedro CLI commands.
By adding the node tag to a cell and running the command below, the cell's source code will be copied over to a Python file within src/<package_name>/nodes/:
kedro jupyter convert <filepath_to_my_notebook>
Note: The name of the Python file matches the name of the original notebook.
Alternatively, you may want to transform all your notebooks in one go. Run the following command to convert all notebook files found in the project root directory and under any of its sub-folders:
kedro jupyter convert --all
To automatically strip out all output cell contents before committing to git, you can run kedro activate-nbstripout. This will add a hook in .git/config which will run nbstripout before anything is committed to git.
Note: Your output cells will be retained locally.
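If nbstripout is not already present in your environment (it is usually declared in src/requirements.txt, but that is an assumption about your setup), kedro activate-nbstripout will fail; in that case install it first and re-run the command:
pip install nbstripout
kedro activate-nbstripout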
Further information about building project documentation and packaging your project can be found in the Kedro documentation.