This Wiki 📖 is a crowdsourced base of knowledge accumulated by generations of current and former research assistants, students, faculty members, friends, and other personnel. It is the "whatever" annoying thing you no longer remember after two months, or the question someone keeps asking.
+The knowledge is expected to be contributed via Markdown files (.md
) and submitted as Pull Requests to TUNI-ITC/wiki. This guide describes this process step-by-step.
There are two options on how to contribute to Wiki:
+git
and don't want to discover its beauty, then, create an Issue in TUNI-ITC/wiki and share a .md
file with us. Here are the guide and an online Markdown editor.

What Can I Contribute?
+Anything! Style, spelling, new knowledge, Wiki's engine. Anything, ok? 🤗
A good practice is to have your fork even with (i.e., up to date with) the upstream repo (TUNI-ITC/wiki) before you open a pull request. Otherwise, you may propose an edit to old content that is no longer there.
Of course, the easiest way to sync a fork (and the last resort measure if something went wrong) is to delete the fork from your GitHub account and fork TUNI-ITC/wiki again. A less barbaric approach is to sync it. Here is how to do it.
If you have already made some new contributions, make sure to back them up to avoid issues. You may also want to remove the fork from your local machine and clone the fork again from your profile on GitHub.
+Synchronizing a fork is a simple procedure but requires you to run several lines in your terminal: +
# (this line is for the first time only) Add the upstream to your local git folder.
+git remote add upstream https://github.com/TUNI-ITC/wiki.git
+git fetch upstream
+git checkout main
+# WARNING: this erases all differences between upstream and your local version
+git reset --hard upstream/main
+# pushes the changes to your fork on GitHub
+git push origin main --force
+
The pipeline boils down to these several steps which should be familiar to anyone who worked with an open-source project on some sort of Hub 🤓 (GitHub, Bitbucket, GitLab, and such):
+This is completely optional and we would like to see an issue just to track the progress and maybe suggest where (which section) to put your knowledge or help you with something.
You can edit directly through the GitHub web page ("Add File" and "Edit" buttons) or do it locally on your computer. We recommend learning to do it the hard way by forking and cloning the repo manually, as you will benefit from this knowledge in your career; otherwise, skip this step.
Fork means that you make a personal copy of this wiki - note that anyone can do that and it does not mess up the main branch! To do that, go to TUNI-ITC/wiki and press the top-right button Fork - if you don't have a GitHub account yet, you need to make one.
+After that, you should have the forked repo somewhere in your account, e.g., github.com/<MY_ACCOUNT>/wiki/
.
To clone your fork locally, in terminal type: +
git clone https://github.com/<MY_ACCOUNT>/wiki/
+
You may freely edit an existing file or create a new one, e.g., how-to-select-a-coffee-in-a-finnish-supermarket.md
. Here are the guide and an online Markdown editor for you to play with.
We are using Material Theme for MkDocs. Hence, you may also propose to add more functionality to our wiki. Check out the manuals of both to see what else we can add.
+Sure thing! You will only need to install the mkdocs-material
python package:
+
pip install mkdocs-material
+
cd /path/to/wiki
+mkdocs serve
+
Then open localhost:8000 in your browser.
+See more in the original documentation.
+Commit your changes! You know how, right?
# print the modifications made since the last commit
+git status
+# this will `stage` your changes
+git add how-to-do-something.md
+# this will commit the staged changes
+# (it may ask you to configure git if you are doing it for the first time)
+git commit -m "added how-to-do-something.md"
+git push
+
Open the page with your fork on GitHub: github.com/<MY_ACCOUNT>/wiki/
. At this point, you should be able to find the changes you made in your fork. Somewhere at the top, you will be told that your branch is ahead of main by some commits and asked whether you want to make a Pull Request. Make the request: add a title and comments, check that you are proposing the files you expect, and submit it. Someone will review and accept it. That's it!
Note that once the PR is submitted, it cannot be deleted, even by the moderators.
Training a Deep Neural Network (DNN) is notoriously time-consuming, especially nowadays when networks keep getting bigger to get better. To reduce the training time, we usually train on multiple gpus, either within a single node or across different nodes. This tutorial focuses on the latter, where multiple nodes are utilised with PyTorch. Although many tutorials are available on the web, including one from PyTorch, they are not self-sufficient in explaining key issues such as how to run the code, how to save checkpoints, or how to create a batch script for this on the servers. This page provides a starter kit that addresses these issues and can help students of our university set up their first multi-gpu training on servers like CSC-Puhti or Narvi.
PyTorch mainly provides two wrappers, namely nn.DataParallel and nn.DistributedDataParallel, for using multiple gpus within a single node and across multiple nodes, respectively. However, PyTorch recommends nn.DistributedDataParallel even on a single node, since it trains faster than nn.DataParallel. For more details, I would recommend reading the PyTorch docs. This tutorial assumes that the reader is familiar with DNN training using PyTorch and with basic operations on the gpu servers of our university.
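For reference, the single-node nn.DataParallel wrapper mentioned above is essentially a one-line change. A minimal sketch (my illustration, not from the original text, using a toy linear model in place of a real network):

```python
import torch
import torch.nn as nn

model = nn.Linear(784, 128)           # any nn.Module works here
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)    # splits each input batch across all visible gpus
if torch.cuda.is_available():
    model = model.cuda()              # the training loop stays exactly the same afterwards
```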
I have considered a simple Auto-Encoder (AE) model for demonstration, where the inputs are images of digits from the MNIST dataset. Just to be clear, an AE takes an image as input, encodes it into a representation that is much smaller than the input, and then tries to reconstruct the image back from that smaller representation. It can be seen as a process of compression and decompression. We train the network to learn this smaller representation such that the reconstructed image is very close to the input. Let's begin by defining the network structure.
+import torch
+import torch.nn as nn
+import torchvision
+from argparse import ArgumentParser
+
+class AE(nn.Module):
+ def __init__(self, **kwargs):
+ super().__init__()
+
+ self.net = nn.Sequential(
+ nn.Linear(in_features=kwargs["input_shape"], out_features=128),
+ nn.ReLU(inplace=True),
+ # small dimension
+ nn.Linear(in_features=128, out_features=128),
+ nn.ReLU(inplace=True),
+ nn.Linear(in_features=128, out_features=128),
+ nn.ReLU(inplace=True),
+ # Reconstruction of the input
+ nn.Linear(in_features=128, out_features=kwargs["input_shape"]),
+ nn.ReLU(inplace=True)
+ )
+
+ def forward(self, features):
+ reconstructed = self.net(features)
+ return reconstructed
+
Next, we write the train() function, where we load the MNIST dataset; this can easily be done with the torchvision.datasets library as follows:
+def train(gpu, args):
+ transform = torchvision.transforms.Compose([
+ torchvision.transforms.ToTensor()
+ ])
+
+ train_dataset = torchvision.datasets.MNIST(
+ root="~/mnist_dataset", train=True, transform=transform, download=True
+ )
+
+ train_loader = torch.utils.data.DataLoader(
+ train_dataset, batch_size=128, shuffle=True, num_workers=4,
+ pin_memory=True
+ )
+
def train(gpu, args):
+ transform = torchvision.transforms.Compose([
+ torchvision.transforms.ToTensor()
+ ])
+
+ train_dataset = torchvision.datasets.MNIST(
+ root="./mnist_dataset", train=True, transform=transform, download=True
+ )
+
+ train_loader = torch.utils.data.DataLoader(
+ train_dataset, batch_size=128, shuffle=True, num_workers=4,
+ pin_memory=True
+ )
+
+ # load the model to the specified device, gpu-0 in our case
+ model = AE(input_shape=784).cuda(gpu)
+ # create an optimizer object
+ # Adam optimizer with learning rate 1e-3
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+ # Loss function
+ criterion = nn.MSELoss()
+
def train(gpu, args):
+ transform = torchvision.transforms.Compose([
+ torchvision.transforms.ToTensor()
+ ])
+
+ train_dataset = torchvision.datasets.MNIST(
+ root="./mnist_dataset", train=True, transform=transform, download=True
+ )
+
+ train_loader = torch.utils.data.DataLoader(
+ train_dataset, batch_size=128, shuffle=True, num_workers=4,
+ pin_memory=True
+ )
+
+ # load the model to the specified device, gpu-0 in our case
+ model = AE(input_shape=784).cuda(gpu)
+ # create an optimizer object
+ # Adam optimizer with learning rate 1e-3
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+ # Loss function
+ criterion = nn.MSELoss()
+
+ for epoch in range(args.epochs):
+ loss = 0
+ for batch_features, _ in train_loader:
+ # reshape mini-batch data to [N, 784] matrix
+ # load it to the active device
+ batch_features = batch_features.view(-1, 784).cuda(gpu)
+
+ # reset the gradients back to zero
+ # PyTorch accumulates gradients on subsequent backward passes
+ optimizer.zero_grad()
+
+ # compute reconstructions
+ outputs = model(batch_features)
+
+ # compute training reconstruction loss
+ train_loss = criterion(outputs, batch_features)
+
+ # compute accumulated gradients
+ train_loss.backward()
+ # perform parameter update based on current gradients
+ optimizer.step()
+
+ # add the mini-batch training loss to epoch loss
+ loss += train_loss.item()
+
+ # compute the epoch training loss
+ loss = loss / len(train_loader)
+
+ # display the epoch training loss
+ print("epoch: {}/{}, loss = {:.6f}".format(epoch+1, args.epochs, loss))
+
Finally, we write the main() function, which defines the required arguments and calls the train function.
+def main():
+ parser = ArgumentParser()
+ parser.add_argument('--ngpus', default=1, type=int,
+ help='number of gpus per node')
+
+ parser.add_argument('--epochs', default=2, type=int, metavar='N',
+ help='number of total epochs to run')
+ args = parser.parse_args()
+ train(0, args)
+
+if __name__ == '__main__':
+ main()
+
With multiprocessing, we run our training script on each node separately and ask PyTorch to handle the synchronisation between them. PyTorch makes sure that in each iteration every node holds the same network weights but uses different data for the forward pass. The gradients are then accumulated from every node to compute the weight update, which is sent back to each node. In short, the same network operates on different data on different nodes in parallel to make things faster. To let this internal communication happen between the nodes, we need a few pieces of information to set up the DistributedDataParallel environment: 1. how many nodes we are using, 2. the ip-address of the master node, and 3. the number of gpus in a single node. I have changed the order of the above code to make it easier to follow. We will start from the main function by defining all the necessary variables.
import torch
+import torch.nn as nn
+import torchvision
+import torch.multiprocessing as mp
+import torch.distributed as dist
+from argparse import ArgumentParser
+import os
+
+if __name__ == "__main__":
+
+ parser = ArgumentParser()
+ parser.add_argument('--nodes', default=1, type=int)
+ parser.add_argument('--local_ranks', default=0, type=int,
+ help="Node's order number in [0, num_of_nodes-1]")
+ parser.add_argument('--ip_adress', type=str, required=True,
+ help='ip address of the host node')
+ parser.add_argument("--checkpoint", default=None,
+ help="path to checkpoint to restore")
+ parser.add_argument('--ngpus', default=1, type=int,
+ help='number of gpus per node')
+ parser.add_argument('--epochs', default=2, type=int, metavar='N',
+ help='number of total epochs to run')
+
+ args = parser.parse_args()
+ # Total number of gpus available to us.
+ args.world_size = args.ngpus * args.nodes
+ # Add the ip-address to an environment variable so that it is easily
+ # available to every spawned process.
+ os.environ['MASTER_ADDR'] = args.ip_adress
+ print("ip_adress is", args.ip_adress)
+ os.environ['MASTER_PORT'] = '8888'
+ os.environ['WORLD_SIZE'] = str(args.world_size)
+ # nprocs: number of processes to spawn, which equals args.ngpus here.
+ # mp.spawn calls train(i, args) for each process index i in [0, nprocs-1],
+ # so the index is received as the gpu argument of train.
+ mp.spawn(train, nprocs=args.ngpus, args=(args,))
+
We define local_ranks as a unique number for each node, running from 0 to the number of nodes - 1. We give rank zero to the node whose ip-address is passed to main(), and we start the script on that node first. We will later use this number to compute a unique rank for every gpu process on that node.
Instead of calling the train function once, we spawn args.ngpus processes on each node, so that args.ngpus instances of the train function run in parallel.
Now let's define the train function so that it can handle these multiple processes.
def train(gpu, args):
+
+ args.gpu = gpu
+ print('gpu:',gpu)
+ # rank calculation for each process per gpu so that they can be identified uniquely.
+ rank = args.local_ranks * args.ngpus + gpu
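+ # Example: with --nodes=2 and --ngpus=4 (world_size = 8), the node with
+ # local_ranks=0 runs processes with gpu = 0..3 and global ranks 0..3,
+ # while the node with local_ranks=1 runs global ranks 4..7.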
+ print('rank:',rank)
+ # Boilerplate code to initialize the parallel process.
+ # It looks for ip-address and port which we have set as environ variable.
+ # If you don't want to set it in the main then you can pass it by replacing
+ # the init_method as ='tcp://<ip-address>:<port>' after the backend.
+ # More useful information can be found in
+ # https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html
+
+ dist.init_process_group(
+ backend='nccl',
+ init_method='env://',
+ world_size=args.world_size,
+ rank=rank
+ )
+ torch.manual_seed(0)
+ # start from the same randomness in different nodes. If you don't set it
+ # then networks can have different weights in different nodes when the
+ # training starts. We want exact copy of same network in all the nodes.
+ # Then it will progress from there.
+
+ # set the gpu for each processes
+ torch.cuda.set_device(args.gpu)
+
+
+ transform = torchvision.transforms.Compose([
+ torchvision.transforms.ToTensor()
+ ])
+
+ train_dataset = torchvision.datasets.MNIST(
+ root="~/mnist_dataset", train=True, transform=transform, download=True
+ )
+ # Ensures that each process gets a different part of the data.
+ train_sampler = torch.utils.data.distributed.DistributedSampler(
+ train_dataset, num_replicas=args.world_size, rank=rank
+ )
+
+ train_loader = torch.utils.data.DataLoader(
+ train_dataset,
+ # calculate the batch size for each process in the node.
+ batch_size=int(128/args.ngpus),
+ shuffle=(train_sampler is None),
+ num_workers=4,
+ pin_memory=True,
+ sampler=train_sampler
+ )
+
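One small detail the snippet above does not include (treat this as an optional, commonly recommended addition rather than part of the original code): when shuffling is delegated to the DistributedSampler, PyTorch suggests calling set_epoch at the beginning of every epoch so that the data is reshuffled differently in each epoch. A minimal sketch, reusing the train_sampler and train_loader defined above:

```python
for epoch in range(args.epochs):
    # reseed the sampler so that every epoch uses a different shuffling order
    train_sampler.set_epoch(epoch)
    for batch_features, _ in train_loader:
        ...  # the usual training step, as shown below
```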
The train_sampler, the manual_seed, and the modified batch size in the dataloader are important steps to remember while setting this up.

Finally, wrap the model with DistributedDataParallel and start the training.
def train(gpu, args):
+ args.gpu = gpu
+ print('gpu:',gpu)
+ rank = args.local_ranks * args.ngpus + gpu
+ # rank calculation for each process per gpu so that they can be
+ # identified uniquely.
+ print('rank:',rank)
+ # Boilerplate code to initialise the parallel process.
+ # It looks for ip-address and port which we have set as environ variable.
+ # If you don't want to set it in the main then you can pass it by replacing
+ # the init_method as ='tcp://<ip-address>:<port>' after the backend.
+ # More useful information can be found in
+ # https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html
+
+ dist.init_process_group(
+ backend='nccl',
+ init_method='env://',
+ world_size=args.world_size,
+ rank=rank
+ )
+ torch.manual_seed(0)
+ # start from the same randomness in different nodes.
+ # If you don't set it then networks can have different weights in different
+ # nodes when the training starts. We want exact copy of same network in all
+ # the nodes. Then it will progress from there.
+
+ # set the gpu for each processes
+ torch.cuda.set_device(args.gpu)
+
+
+ transform = torchvision.transforms.Compose([
+ torchvision.transforms.ToTensor()
+ ])
+
+ train_dataset = torchvision.datasets.MNIST(
+ root="./mnist_dataset", train=True, transform=transform, download=True
+ )
+ # Ensures that each process gets a different part of the data.
+ train_sampler = torch.utils.data.distributed.DistributedSampler(
+ train_dataset, num_replicas=args.world_size, rank=rank
+ )
+
+ train_loader = torch.utils.data.DataLoader(
+ train_dataset,
+ # calculate the batch size for each process in the node.
+ batch_size=int(128/args.ngpus),
+ shuffle=(train_sampler is None),
+ num_workers=4,
+ pin_memory=True,
+ sampler=train_sampler
+ )
+
+
+ # load the model to the gpu assigned to this process
+ model = AE(input_shape=784).cuda(args.gpu)
+ model = torch.nn.parallel.DistributedDataParallel(
+ model, device_ids=[args.gpu], find_unused_parameters=True
+ )
+ # create an optimizer object
+ # Adam optimizer with learning rate 1e-3
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+ # Loss function
+ criterion = nn.MSELoss()
+
+ for epoch in range(args.epochs):
+ loss = 0
+ for batch_features, _ in train_loader:
+ # reshape mini-batch data to [N, 784] matrix
+ # load it to the active device
+ batch_features = batch_features.view(-1, 784).cuda(args.gpu)
+
+ # reset the gradients back to zero
+ # PyTorch accumulates gradients on subsequent backward passes
+ optimizer.zero_grad()
+
+ # compute reconstructions
+ outputs = model(batch_features)
+
+ # compute training reconstruction loss
+ train_loss = criterion(outputs, batch_features)
+
+ # compute accumulated gradients
+ train_loss.backward()
+
+ # perform parameter update based on current gradients
+ optimizer.step()
+
+ # add the mini-batch training loss to epoch loss
+ loss += train_loss.item()
+
+ # compute the epoch training loss
+ loss = loss / len(train_loader)
+
+ # display the epoch training loss
+ print("epoch: {}/{}, loss = {:.6f}".format(epoch+1, args.epochs, loss))
+ if rank == 0:
+ dict_model = {
+ 'state_dict': model.state_dict(),
+ 'optimizer': optimizer.state_dict(),
+ 'epoch': args.epochs,
+ }
+ torch.save(dict_model, './model.pth')
+
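The --checkpoint argument defined in main() is not actually used in the code above. Below is a minimal sketch (my addition, not part of the original tutorial) of how the saved model.pth could be restored inside train(), right after the model and optimizer are created; map_location keeps the loaded tensors on the gpu of the current process, and the dictionary keys match the ones saved above:

```python
if args.checkpoint is not None:
    # every process loads the same file (assumed to be on a shared filesystem)
    checkpoint = torch.load(args.checkpoint, map_location='cuda:{}'.format(args.gpu))
    model.load_state_dict(checkpoint['state_dict'])    # model is the DDP-wrapped module here
    optimizer.load_state_dict(checkpoint['optimizer'])
    start_epoch = checkpoint['epoch']                  # e.g. to continue the epoch counter
```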
Save the script as train.py on the CSC or Narvi server and submit an interactive job with two gpu nodes (let's quickly test it on the gputest partition with srun --pty --account=Project_** --nodes=2 -p gputest --gres=gpu:v100:1,nvme:100 -t 00:15:00 --mem-per-cpu=20000 --ntasks-per-node=1 --cpus-per-task=8 /bin/bash -i). Once the job is allocated, ssh to each node in two terminals (ssh <node name>) and start the training by typing python train.py --ip_adress=**.**.**.** --nodes 2 --local_ranks 0 --ngpus 1 --epochs 1 on the first node and python train.py --ip_adress=<same as the first> --nodes 2 --local_ranks 1 --ngpus 1 --epochs 1 on the second. The two jobs should synchronise with each other and the training will begin soon after.
You can find the ip-address of the master node by running ping <node name>.
When we submit an interactive job, we know the exact node names and can obtain the ip-address beforehand. In a batch job, however, this needs to be programmed so that most of it is automated. We only have to make minimal changes to the existing code and write a .sh script to submit the job. The train.py script is modified only in the first few lines of the train() function, as follows:
def train(gpu, args):
+
+ args.gpu = gpu
+ print('gpu:',gpu)
+
+ # rank calculation for each process per gpu so that they can be
+ # identified uniquely.
+ rank = int(os.environ.get("SLURM_NODEID")) * args.ngpus + gpu
+ print('rank:',rank)
+ # Boilerplate code to initialise the parallel process.
+ # It looks for ip-address and port which we have set as environ variable.
+ # If you don't want to set it in the main then you can pass it by replacing
+ # the init_method as ='tcp://<ip-address>:<port>' after the backend.
+ # More useful information can be found in
+ # https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html
+
+ dist.init_process_group(
+ backend='nccl',
+ init_method='env://',
+ world_size=args.world_size,
+ rank=rank
+ )
+ torch.manual_seed(0)
+ # start from the same randomness in different nodes.
+ # If you don't set it then networks can have different weights in different
+ # nodes when the training starts. We want exact copy of same network in
+ # all the nodes. Then it will progress from there.
+
+ # set the gpu for each processes
+ torch.cuda.set_device(args.gpu)
+
The rank is now computed from the environment variable $SLURM_NODEID, which is unique for each slurm node.

Keeping everything else in the code the same, let's now write the batch script for CSC-Puhti. The same script can be used for Narvi.
+#!/bin/bash
+#SBATCH --job-name=name
+#SBATCH --account=Project_******
+#SBATCH -o out.txt
+#SBATCH -e err.txt
+#SBATCH --partition=gpu
+#SBATCH --time=08:00:00
+#SBATCH --ntasks-per-node=1
+#SBATCH --cpus-per-task=4
+#SBATCH --mem-per-cpu=8000
+#SBATCH --gres=gpu:v100:4
+#SBATCH --nodes=2
+module load gcc/8.3.0 cuda/10.1.168
+source <virtual environment name>
+
+# if some error happens in the initialisation of the parallel process then you can
+# get the debug info. This can easily increase the size of out.txt.
+export NCCL_DEBUG=INFO # comment it if you are not debugging distributed parallel setup
+
+export NCCL_DEBUG_SUBSYS=ALL # comment it if you are not debugging distributed parallel setup
+
+# find the ip-address of one of the node. Treat it as master
+ip1=`hostname -I | awk '{print $2}'`
+echo $ip1
+
+# Store the master node’s IP address in the MASTER_ADDR environment variable.
+export MASTER_ADDR=$(hostname)
+
+echo "r$SLURM_NODEID master: $MASTER_ADDR"
+
+echo "r$SLURM_NODEID Launching python script"
+
+srun python train.py --nodes=2 --ngpus 4 --ip_adress $ip1 --epochs 1
+
To Narvi users
+Change the ip1=hostname -I | awk '{print $2}'
line to ip1=hostname -I | awk '{print $1}'
to correctly parse the ip address.
To Mahti users
+For now add export NCCL_IB_DISABLE=1
to the batch script to prevent the occasional hang in the training loop. However, I am not sure whether this is happening because of Mahti or PyTorch 1.8.
If you use os.mkdir inside the script, always wrap it in try/except: multiple processes will try to create the same folder, and all but one of them will raise an error that the directory already exists.
If you restore the optimizer state from a checkpoint, remember to move its tensors to the gpu of the current process:

```python
for state in optimizer.state.values():
    for k, v in state.items():
        if isinstance(v, torch.Tensor):
            state[k] = v.cuda(gpu)
```
If you want to run the same code on a single node, you only need to set --nodes=1 in the batch script.
Consider using sync-batchnorm to get better batch statistics while using DistributedDataParallel.

I found this article really helpful when I was setting up my DistributedDataParallel framework. Many details that are skipped here, to keep the focus on practical matters, can be found there. If you have any suggestions then reach me at soumya.tripathy@tuni.fi
This is the guide for setting up the remote connection to your office machine and it will not tell you how to set up the machine itself (e.g. install Ubuntu, allocate disk space). If you would like some guidance on this topic, for example, you would want to make your remote machine to "look" like the rest of our machines, please refer to this unofficial guide – also, let us know if it was useful and whether the wiki might benefit from having it here.
+Warning
+Since this guide relies on ssh-forward.tuni.fi
as a proxy server, you may only
+set up the remote connection if you are a staff member of the university, i.e.
+a student cannot apply for access to ssh-forward.tuni.fi
.
+If you are a student, please contact the IT-Helpdesk and ask if they can
+give you access to ssh-forward.tuni.fi
.
A bit of motivation and how it will work. There is no official way of conneting remotely to a self-maintained machine. Here is how we can work around this problem. Your compute machine can be added to a research network pit.cs.tut.fi
which practically means that it will have a fixed IP (or FQDN) and you will not need to type your credentials every 24 hours. However, another problem here is that this research network can only be reachable using university-maintained devices which uses a university WiFi or a pre-installed VPN. The solution to this problem is to use a proxy ssh-server (ssh-forward.tuni.fi
) when connecting to your machine. This ssh-forwarding server is open to the public internet and from there you can reach pit.cs.tut.fi
.
We assume that you have a Linux desktop (host) at your office and you would like to access it remotely from any device, e.g. your laptop (client). Here we provide both: how to set up the host and the client sides. If the host is already set up and you would like to just learn how to connect to it, follow the guide for a client.
+it-helpdesk@tuni.fi
and ask them to connect your office machine to pit.cs.tut.fi
network. They will assign a fixed IP/FQDN and you will not need to type your credentials every 24 hours to have an internet connection. Specify the following information:it-helpdesk@tuni.fi
and be able to connect to the internet using the socket you specified. If so, check your IP and type host your_IP
in your terminal to find out the FQDN of the machine. In our case, it was be something like <IP.reversed> pointed to **********.pit.cs.tut.fi
.openssh-server
package on your host machine (via e.g. sudo apt-get install openssh-server
). This will allow ssh
connection to this machine.sudo nm-connection-editor
in your terminal (or just go to Edit connection
from the status menu on Ubuntu 16.04).Wired connection
) to automatically connect when available.Now your machine (host) should be reachable from TUNI-maintained computers connected to TUNI-STAFF
WiFi or a pre-installed VPN directly via ssh user@**********.pit.cs.tut.fi
. If you are using a non-university computer, you need to use the ssh-forwarding server (ssh-forward.tuni.fi
) to reach pit.cs.tut.fi
. Use the following guide to set up the connection to the forwarding server.
Even though university-maintained devices can reach pit.cs.tut.fi
without any proxy from university premises, we found that using the proxy even on university-maintained devices provides a more uniform experience. Otherwise you need to keep in mind which network you are using each time you ssh to your machine and adjust the CLI command accordingly (see the tip in the end of this section on how to configure an ssh
command to use proxy by default when connecting to a specific device).
ssh-forward.tuni.fi
. For this, proceed to id.tuni.fi/idm -> My user rights
-> Apply for a new user right
-> if required, select your Staff contract -> search for Linux Servers (LINUX-SERVERS) Personnel SSH tunneling service
and select it. In Application details
put something like For pit.cs.tut.fi connections
. Then, go to the Applications
tab and wait until the access is granted (1 min).ssh-forward.tuni.fi
is open to the public internet and it requires 2-factor authentication. For this, log in to the forwarding server using your TUNI credentials (ssh tuni_user@ssh-forward.tuni.fi
) while being connected to one of the University networks (roam.fi/eduroam/TUNI-STAFF
or a university VPN). If 2FA was not initialized, it will reject your password. Unfortunately, you cannot initialize 2-factor authentication from any network. You need to be physically at University and be connected to one of the networks there.ssh-forward.tuni.fi
, type google-authenticator
. It will ask you several questions and show a QR code (resize your window to see it). Answer the questions as follows:Do you want authentication tokens to be time-based...
-> yDo you want me to update your "/home/user/...google_authenticator" file
-> yDo you want to disallow multiple uses...
-> nBy default, tokens are good for 30 seconds...
-> nIf the computer that you are logging into...
-> yssh-forward.tuni.fi
from a non-university network (e.g. using internet shared from your cell-phone) (ssh your_tuni_username@ssh-forward.tuni.fi
).ssh -J your_tuni_username@ssh-forward.tuni.fi your_host_username@*********.pit.cs.tut.fi
(-J
means "jump" using the specified proxy)roam.fi/eduroam/TUNI-STAFF
or VPN), it will only ask for your TUNI and host passwords;Verification code
which is a temporal code from the 2FA app you installed on your smartphone. Then, you will need to type your TUNI and host passwords.Tip
+Config the ssh
connection in ~/.ssh/config
:
+
Host connection_name
+ HostName ***********.pit.cs.tut.fi
+ User your_host_username
+ ProxyCommand ssh your-tuni-username@ssh-forward.tuni.fi -W %h:%p
+
ssh connection_name
to ssh
directly to *********.pit.cs.tut.fi
, forward ports, and transfer large files using scp/rsync
. It is also useful if you are using VSCode
or any other text editor which supports remote development. Additionally, it is handy if you would like to mount folders from the host to your client. You can use sshfs connection_name:/path/to/remote_folder /path/to/local_folder
.
+ssh-forward.tuni.fi
proxy and, therefore, benefit from this set up. Please, let us know if you found
+a way around it.it-helpdesk@tuni.fi
and ask to activate and configure your wall internet socket in a special way. This might take some time.ssh-forward.tuni.fi
doesn't support key-pair authentication.roam.fi/eduroam/TUNI-STAFF
; and 2FA + your TUNI and host-machine passwords on other networks.ssh-forward.tuni.fi
has very limited disk space for each user (few MB). Therefore, can only be used as a proxy for your ssh
connection which is its main purpose.The TUNI intra documentation is here https://intra.tuni.fi/en/handbook?page=2638 but its kind of a mess and therefore below easy to follow information.
+eduVPN application is not available for Linux and thefore you must use openvpn. Generic instructions are available here https://eduvpn.tuni.fi/vpn-user-portal/documentation for various OS, but, for example, Ubuntu is described below.
+Install OpenVPN:
+~$ sudo apt-get install network-manager-openvpn-gnome
+
Generate your own at https://eduvpn.tuni.fi/vpn-user-portal/configurations (give name, e.g., "joni-laptop", and store somewhere).
+Start openvpn:
+~$ sudo openvpn <PATH_TO_MY_GEN_CONFIG>.ovpn
+
Note: You need to update the VPN configuration files every now and then (the expriry date by default is in the filename, e.g., "Downloads/eduvpn.tuni.fi_internet_20201102_joni-laptop.ovpn#)
+There is a Linux desktop client and Python API for eduVPN and here is the link to that: https://python-eduvpn-client.readthedocs.io/en/master/introduction.html#installation
+I tested it on Ubuntu 18.04 and 20.04 and it works fine. The steps are simple and listed below (taken from the above documentation link). +
$ sudo -s
+$ apt install apt-transport-https curl
+$ curl -L https://repo.eduvpn.org/debian/eduvpn.key | apt-key add -
+$ echo "deb https://repo.eduvpn.org/debian/ stretch main" > /etc/apt/sources.list.d/eduvpn.list
+$ apt update
+$ apt install eduvpn-client
+
This wiki page is a collection of tips and tricks, but also best practices for using the SAUNA machines. If you have a tip or trick that you would like to share, please add it to this page. If you are looking for a way to connect to SAUNA machines, please see the guide for setting up the remote access.
+General idea of different disks
+/home/hdd1
and /home/hdd2
Create folders with your own name and accumulate files under these! (If not done already)
+cd /home/nvme
+sudo mkdir ilpo
+sudo chown ilpo ilpo/
+
Now you can create files under this folder
+❗ Every user is responsible to keep their machines up to date ❗
+Every now and then, the machines need to be updated and upgraded. This is done by running the following commands:
+sudo apt update
+sudo apt upgrade
+sudo apt autoremove
+
❗ Do not do this if you are not certain of your actions ❗
+When you should upgrade the Linux version?
+If you are certain that you want to upgrade the Linux version, you can follow these steps:
+Backup your data! This is important since the upgrade process can fail and you might lose your data.
+Check that you have the latest updates installed
+sudo apt update
+sudo apt upgrade
+sudo apt autoremove
+
Do a reboot
+sudo reboot
+
Check that you have screen installed. If your SSH connection is lost during the upgrade, screen will allow you to continue the upgrade from where you left off. Upgrade process will start a screen session automatically.
+sudo apt install screen
+
Install Ubuntu update tool
+sudo apt install update-manager-core
+
Make sure you can SSH to port 1022. An additional sshd will be started on port 1022, so if something goes wrong, you can still connect to the machine and continue the upgrade.
+sudo ufw allow 1022/tcp
+
Start the upgrade
+sudo do-release-upgrade
+
Reboot the machine
+sudo reboot
+
Please see this video if you are unfamiliar with Conda:
+The only CONDA tutorial you'll need to watch to get started (YouTube)
+Install via terminal:
+ +Good cheat sheet for commands here
+Tip
+Time to time it is good to clean up the cached packages (there can be a lot of them). You can do it with the following command:
+conda clean --all
+
Tip
+By default conda environments will be installed to the home directory of the user (~/miniconda3/envs
). This is not ideal since the home directory is located on the SSD and the space is limited. Instead, you should install the environments to the NVMe disk, and make a symbolic link to the custom location to easily activate such environment. Here is how you can do it:
# Create environment
+conda create --prefix /home/nvme/$USER/.envs/$MY_ENV python=3.9
+# Create symbolic link
+ln -s /home/nvme/$USER/.envs/$MY_ENV ~/miniconda3/envs/$MY_ENV
+# Activate environment
+conda activate $MY_ENV
+
Install VSCode on the client machine (your laptop) and use the Remote - SSH extension to connect to the SAUNA machines. This way you can use the full power of the SAUNA machines while having a nice development environment on your laptop. See the guide on how to connect to a remote host from VSCode. It is recommended that you first add the SAUNA machine to your SSH config file; see the tip under the How to setup a client section.
+First add the user
+sudo adduser new_user
+
Add the user to the sudo group
+sudo adduser new_user sudo
+
Create folders for the user
+cd /home/nvme
+sudo mkdir new_user
+sudo chown new_user new_user/
+
+cd /home/hdd
+sudo mkdir new_user
+sudo chown new_user new_user/
+
Share the username and password with the new user. Tell them to change the password after the first login.
This amazing guide was originally posted on wiki.eduuni.fi and composed by Heikki Huttunen. We obtained his permission to use it here. It has been modernized and expanded since then.
+This document describes how to use the TUNI TCSC Narvi
computing cluster.
What is Narvi?
+Identity management
→ My user rights
→ Apply for a new user right
Linux Servers (LINUX-SERVERS) TCSC HPC Cluster
I need Narvi account. My supervisor is X.
.narvi.tut.fi
using ssh
.ssh
key-pair 🔐?Here is how to do it on Linux and Mac systems. The instructions for Windows can be easily found on google. +
ssh-keygen -f ~/.ssh/narvi_key
+
narvi_key
and narvi_key.pub
) in ~/.ssh/
folder. *.pub
is the public key.
+The first time you will use this key with ssh
it may complain about permissions (Permissions are too open.
). If so, you will need to change the permissions of the private key
+
chmod 600 ~/.ssh/narvi_key
+
Please write an e-mail to the admin (tcsc.tau@tuni.fi
) asking to add you to the GPU group. By default, you will only have access to CPU-only nodes.
To see the status of the queue, type +
squeue
+# for a specific partitions (e.g. `normal` or `gpu`).
+squeue -p gpu
+# for a specific user
+squeue -u <user>
+
Remember: do not use the login node for computation – it is slow and will degrade the performance of the login node for other users!
+There are two common ways to run a job at a slurm
cluster:
srun
sbatch
The main difference is that srun
is interactive which means the terminal will be attached to the current session. The experience is just like with any other command in your terminal. Note, that when the queue is full you will have to wait until you get resources.
If you use sbatch
, you submit your job to the slurm queue and get your terminal back; you can disconnect, kill your terminal, etc. with no consequence. In the case of srun
, killing the terminal would kill the job. Hence, sbatch
is recommended.
Here is the example srun
command which will ask the cluster to start an interactive shell with 1 GPU (--gres
) and 10 CPUs (--cpus-per-task
), 10 GB of RAM (--mem-per-cpu
) that will be available to you for 30 minutes (--time
):
+
srun \
+ --pty \
+ --job-name pepe_run \
+ --partition gpu \
+ --gres gpu:1 \
+ --mem-per-cpu 1G \
+ --ntasks 1 \
+ --cpus-per-task 10 \
+ --time 00:30:00 \
+ /bin/bash -i
+
and this is an example sbatch
command which will ask the cluster to run my_script.sh
with 1 GPU and 10 CPUs, 10 GB of RAM that will run for at most 30 minutes (if the script has finished execution the job will be ended), the output and error logs will be saved to log_JOBID.txt
(--output
, --error
):
+
sbatch \
+ --job-name pepe_run \
+ --partition gpu \
+ --gres gpu:1 \
+ --mem-per-cpu 1g \
+ --ntasks 1 \
+ --cpus-per-task 10 \
+ --time 00:30:00 \
+ --output log_%j.txt \
+ --error log_%j.txt \
+ my_script.sh
+
--constraint='kepler|pascal|volta'
in order to select a specific gpu architecture.
+Instead of specifying the resources and other information as command-line arguments, you may find it useful to list them inside of my_script.sh
and then just use sbatch my_script.sh
:
+
#!/bin/bash
+#SBATCH --job-name=pepe_run
+#SBATCH --gres=gpu:1
+#SBATCH --time=00:30:00
+# and so on. To comment SBATCH entry use `##SBATCH --arg ...`
+# here starts your script
+
To learn more sbatch
hacks, a reader is also referred to this nice tutorial.
To cancel a specific job you are running, use +
scancel <JobID>
+
The simplest way is to use scp
command
+
scp -i ~/.ssh/narvi_key -r ./folder user@narvi.tut.fi:/narvi/path/
+
-r
means to copy the folder with all files in it.
+However, once the internet connection is interrupted you will need to start all over again. To have an opportunity to resume the data transfer try rsync
instead
+
rsync -ahP -e "ssh -i ~/.ssh/narvi_key" ./folder user@narvi.tut.fi:/narvi/path/
+
-ah
means to preserve permissions symlinks, etc as in the original folder and h
makes the progress "human-readable", and P
allows to continue data transfer (sends missing files on the target path 🤓).
+Trailing /
in rsync
makes the difference
rsync /dir1/dir2/ /home/dir3
- copies the contents of /dir1/dir2
but not the dir2
folder itself.rsync /dir1/dir2 /home/dir3
– copies the folder dir2
along with all its contents.If you would like to see the files from a remote machine you may mount the folder locally. On Ubuntu/Debian install sshfs
and run this
+
mkdir narvi_folder
+sshfs -o IdentityFile=~/.ssh/narvi_key user@narvi.tut.fi:/narvi/folder/ ./narvi_folder
+
/narvi/folder
will be shown in ./narvi_folder
. Mind that the changes in either folder will be reflected in another one.
+To unmount the folder use +
umount ./narvi_folder
+
Before you do so, check if the software you would like to install is already installed by the admin (e.g. matlab, cuda, and gcc). These are set up using module
functionality. You can load a module by specifying module load <mod>
inside of your script. To see all available modules run module avail
.
If you are not satisfied with the selection you can install your own. Here we will focus on Python
packages and virtual environment manager conda
which is already installed on Narvi (try: which conda
).
conda
Has Many Linux Tools
Besides a ton of Python
packages, conda
has surprisingly many common Linux tools, e.g. tmux
, htop
, ffmpeg
, vim
, and more. This is especially useful if you would like to install them but do not have sudo
rights.
conda
EnvironmentLet's start by creating an empty conda environment +
conda create --name my_env
+
Activate it (meaning that all binaries installed in this environment will be used instead of the system-wise packages) +
conda activate my_env
+# if it didn't work try `source activate my_env`
+
Afterward, you can install conda
packages
+
conda install python pip matplotlib scikit-learn
+
If default conda
channels don't have some package you search for other conda
channels:
+
conda install dlib --channel=menpo
+
If your favorite package is not available anywhere in conda
OR you would like to install OpenCV
, try to install it via pip
:
+
# check if you are using the `pip` from your `conda` env
+which pip
+pip install opencv-python
+
conda
vs pip
inside of conda
env?According to official anaconda documentation, you should install as many requirements as possible with conda
, then use pip
.
+Another problem with pip
packages inside of conda
is associated with poor dependence handling and just bad experience when trying to replicate the same environment on another machine.
It is important that all your source code and documents are in version control since one day your computer will break down and that day you will thank yourself having them under version control. There are many other reasons, but that is the day when it really pays off.
+Below are instruction for various different purposes.
+This is what you should do for your personal documents, such as CV, job applications, list of publications, love letters etc. that you feel uncomfortable to put on public servers such as GitHub, but that you wish to be safe, versioned and backup.
+For that purpose the university ssh servers and Subversion provide suitable tools. First, connect id.tuni.fi and obtain rights to use these services: "manage your user rights" -> "My user rights" -> "Apply for a new user right" -> Choose student or staff contract -> "IT" -> "Linux Servers". Wait a few minutes and you should access to the servers.
+Create personal repository:
+$ ssh linux-ssh.tuni.fi
+$ mkdir svn_repos; cd svn_repos
+$ mkdir <personal_dir>
+$ svnadmin create /home/<my name>/svn_repos/<personal_dir>
+
+Now the repository is created and you can access it from your personal computer. First take out the repo:
+$ cd Work
+$ svn co svn+ssh://<my name>@linux-ssh.tuni.fi/home/<my name>/svn_repos/<personal_dir>
+
Now you have a personal repository that is stored and backed up in the university system. SVN (Subversion) is pretty similar to Git, but simpler. For example, you don't "push" and "pull"; instead you use "svn commit" to record your changes and "svn update" to bring in changes from the repository.
+A Fine SVN BOOK is available here http://svnbook.red-bean.com/
+Someone needs to write.
+ + + + + + + + + + + + + +If you are tired to use your phone as a wifi hotspot you may try one of the university wireless networks.
+Works in Windows and centrally maintained Linux boxes, but clearly making it available to self-maintained Linux boxes was beyond their skills. Do not try.
+Eduroam provides access in many universities world wide and is also easiest +wireless connection in TUNI premises.
+Easiest way to install Eduroam is to download a domain specific installation script +from https://cat.eduroam.org/ . Just follow the instructions and that's it!
+ + + + + + + + + + + + + +The BSc and MSc degrees offered by Computing Sciences consist of 1) mandatory courses for everyone and 2) major module and 3) minor module(s). Below are details of each degree program offered by Computing Sciences.
+Below are links to relevant information in TAU Web pages:
+The Signal Processing and Machine Learning (SPML) major is offered in the Bachelor of Science and Technology degree program of the Computing and Electrical Engineering. Our graduates are some of the most wanted in Finnish IT and EE companies and research institutions. The SPML major consists of three mandatory courses (tot 15 cr) and 15 cr from elective courses that can be selected from the list of suggested courses or by proposing a personal study plan.
+ +Similar to our BSc program the MSc program also consists of two mandatory courses (10 cr) and then the student is encouraged to pick one of our three special sub-modules focusing on Audio, Vision or Artificial Intelligence. Since some of these sub-modules contain one shared course that course is moved to elective module.
+ + + + + + + + + + + + + + +As a supervisor you need to confirm contracts to your group memers.
+For contracts send email to itc-hr.tau at tuni.fi
. You need to mention:
Also include the following persons to your email:
+MSc thesis project is the last step of your studies. It is typically six months of full-time coding, building, testing and writing.
+The thesis is about marketing yourself and therefore you should impress yourself, your family, your supervisors and your future employer.
+It is advisable to produce Github pages for your code and data with nice Wiki how to replicate the results and link to the thesis PDF. Its even better if the page also contains a Youtube video of your amazing work.
+When your studies are almost finished (2nd year of Master studies) you should start looking for a job or intern position where you can do your thesis. This could be at your current position, a university research group or perhaps you find a better place!
+You are young, full life ahead, so it is recommended that 1) you do something meaningful and difficult, 2) you learn a lot and 3) you impress people you work for.
+You could contact your professors and ask them for open topics in their research groups (paid and unpaid positions are available) or if they know any companies who are looking for a master's thesis worker. Be active and search until you find a place that suits you and you suit that place.
+Important: You need to understand what should be done in this thesis! You need to understand why this is an important topic (motivates that it needs to be done)! You must understand how to evaluate your results (otherwise it will be unclear what is the quality of your work).
+You should have two supervisors: 1) Academic supervisor who is a senior staff from university (professor, associate/assistant professor, lecturer etc. someone with doctoral degree) and 2) a technical supervisor from the company you work for (preferably someone with at least MSc degree so that they know what MSc thesis is all about). You may interview multiple professors to find who is the most suitable for you. You know, there is a huge difference between supervisors and how much they have time and interest for you.
+Company pays your salary so you must make your technical supervisor to agree what you do, especially if you do the work during your working hours. You also must know confidential things that cannot be put to your thesis as MSc theses are always public.
+Concrete action: Fill and agree the thesis supervision plan with your supervisors (official form)
+Important: There are three important things to bear in mind: 1) read what others have done (related work), 2) read what others have done (related work) and finally 3) read what others have done (related work).
+Before you can find the related works you must know the correct terminology of your problem! Only with the correct terms search engines (Google, you.com) can provide correct links to the existing code, articles and books.
+Steps:
+Work hard, be diligent and consult your supervisors often! Yes, talking with your supervisors is your responsibility, not theirs.
+This can happen parallel during stage 3, and its good to make notes all the time.
+Check out this Latex template: MSc thesis template (Latex)
+Examples of great theses (although your main supervisor may have different examples so ask him/her):
+Actions: You must attend the MSc thesis seminar course of you major. The seminar typically includes: 1) watching MSc presentations by others, 2) presenting your work (at least once, agree this with your supervisor); in CS the presentations are available in budjetti.cs.tut.fi, and 3) participating information literacy training by university Library. All details you will find from the course Moodle page or ask from the seminar course instructor.
+You must ask your academic supervisor comments for your thesis. Some supervisors comment multiple versions the manuscript, but some only the final draft i.e. the version that you think is pretty much ready. Ask your supervisor(s) what he/she prefers. Remember that you are evaluated every time you send something to your supervisor!
+After the supervisors are happy to the current version:
+Evaluation criteria and evaluation templates can be found from thise official page
+Enjoy your life, You deserve good life!
+However, keep updating your knowledge so that your knowledge and skills remain current for the future jobs and needs!
+ + + + + + + + + + + + + +So, you decided that doing a PhD thesis is good for your future. Along the long and winding road you may find the following instructions useful.
+You must be absolutely sure that you want a PhD degree since it means four years of super heavy work under (possibly) monstrous supervisor. On the positive side, you will learn how to make science and that opens the door to the academic career. In addition, many R&D labs of big companies appreciate PhD degree from their employees.
+In order to apply you need to fill and submit a number of super boring bureaucratic forms that insist your commitment and also commitment from your main supervisor (make sure you found a good one):
+ +Before applying send an email to doctoral studies office and ask for the list of things to be submitted and a link to the electronic application system.
+Welcome, you just started your jorney toward magic of science.
+Input: 1) you, 2) your supervisor, and 3) funding for your salary and research.
+This is the most challenging part, but don't worry, your supervisor will tell you all the necessary details and tricks. However, you may find the following links helpful or at least fun to read:
+Output: Novel scientific knowledge. More concretely, a number of peer-reviewed scientific articles.
+Input: Sufficient amount of scientific contributions, typically in the terms of the articles from the previous stage.
+It is difficult to tell a generic rule when you're ready to start writing your thesis as that varies a lot between the fields, even between fields close to each other. You should discuss with your supervisor each year what is the stage of your PhD studies and she/he certainly tells you when it is time to start writing.
+There are +recommendations +for PhD thesis from DPCEE. +It is worth to read through and perhaps adopt their structure, but of course you should follow +practices in your field and discuss with your supervisor. +However, it seems that the committee provides a lot of feedback if you do not follow these recommendations.
+Before writing you need a suitable template. Vast majority of people prefer Latex as it is just so convenient for writing scientific text.
+Output: A thesis manuscript that summarizes your research and results. The thesis can be a monograph or a compilation thesis that includes your original articles. Write using a meaningful structure and beautiful grammar and send it for comments to your supervisor(s) before proceeding to the next stage.
+Input: Final draft of thesis and fixed using the comments from your supervisor. The pre-examination form.
+Your supervisor selects two reviewers, makes sure they are available and willing and fills out the official form (see below). +You need to send the following two documents to your doctoral program academic officer +(contact details and submission deadlines):
+You may also want to read general guidelines from the university.
+Your documents will be processed by the Faculty council (they meet ~once a month) and you will receive an email notification once this is done. Your pre-examiners will also receive a formal invitation and the submitted PhD thesis. Now you cross your fingers, wait for two months and hope for positive reviews. Time to think about what you will do next in your life and perhaps start to look for a position in academia or industry!
+Also, start taking care of your ECTS. +Once you are done, send an email to the academic officer and ask if everything is ok.
+Output: Faculty council decision for pre-examination. Official invitations to the pre-examiners.
+Input: positive statement from the pre-examiners. Form to publish the thesis.
+You are almost done. Next, you need just a little bit of planning. +Now, you pick a date for your defence (a Friday but other days are also possible) and, if the defence is in person, book the room (via campus assistants - go to an info-desk/reception or find their email). +Your supervisor selects 1-2 opponents who will come and publicly "torture" you, i.e. ask nasty questions about your thesis.
+Submit the following documents to the Academic Officer of your doctoral program (contact details and submission deadlines):
+You also need to publish your thesis, i.e. make it publicly and openly available. +The official permission comes from the Faculty council (~1-2 weeks after the document submission deadline), but meanwhile, you can already proceed with the next steps to save time if you feel that you will get the permission to print (e.g. reviewers didn't raise major concerns etc). +Contact the library and start doing their 9-step checklist (see the bottom right) as you are waiting for permission to print and defend.
+After the Faculty council informs you that the permission to publish is granted, notify the library so they could publish your thesis in Trepo and the printing house so they will print you the physical copy. +Tell your supervisor to send the link to the thesis to the opponent(s) -- remember you can't directly contact your opponents.
+Tip
+Order at least 5 copies for personal use: your supervisors will probably ask for a signed copy as well as you will need to give/send a copy to each opponent.
+Output: permission for the public defence, date and room for defence, printed and published manuscript
+Now you should have the date and time of your defence and a booked room. +Here are some things you need to do before the defence:
+This stage is stressful but keep calm and carry on.
+Tip
+Since you can't contact your opponent directly, but you still want to let them know that you will have the karronka to celebrate your defence, you may ask your supervisor to send them the following: "May I let (your name) know your dietary preferences for the post-defence dinner?"
+Tip
+You can order two different cakes for the post-defence "coffee and cake" event.
+This is a formal "act" with pre-defined lyrics for custos, candidate and opponents. +More details are available on university website.
+After the defence, your opponents will discuss your grade -- give them time and opportunity to do that on the same day. +This report will be submitted to the faculty who will finalize the grading of your thesis. +To get the final decision and a grade, you need to wait until the next faculty meeting (deadlines are here).
+Tip
+Don't forget your charger for your laptop if the defence is in person!
+Input: successful defence, opponents' report, and a grade.
+Congratulations! You have done it! +The Faculty will contact you regarding the opponents' report which is similar to reviewers' comments.
+A few touches:
+Tip
+You can ask for the recording of your presentation from the AV team if your defence was recorded.
+Enjoy your life (finally ;-) !
+Output: degree certificate and freedom
+ + + + + + + + + + + + + +{"use strict";/*!
+ * escape-html
+ * Copyright(c) 2012-2013 TJ Holowaychuk
+ * Copyright(c) 2015 Andreas Lubbe
+ * Copyright(c) 2015 Tiancheng "Timothy" Gu
+ * MIT Licensed
+ */var Va=/["'&<>]/;qn.exports=za;function za(e){var t=""+e,r=Va.exec(t);if(!r)return t;var o,n="",i=0,s=0;for(i=r.index;i