diff --git a/doc/recent-changes.md b/doc/recent-changes.md
new file mode 100644
index 0000000..9a6c0d5
--- /dev/null
+++ b/doc/recent-changes.md
@@ -0,0 +1,39 @@
+# Recent Changes
+
+## Post-Training Quantization
+* Added to ONNX export.
+* Minimal accuracy loss for the ResNet18 model. The MobileNet variant becomes too noisy and is probably unusable.
+* Inference time reduced to ca. 60% of the float32 original.
+
+## Variance Parameterization
+* Back to the earlier implementation:
+  - Use the "smoothclip" function to force the diagonals of the covariance factors to be positive.
+  - Overparameterize with a scale factor that is applied to all covariances.
+* Remove batch norm (BN).
+
+## Training
+* Exponential LR warmup.
+* Train variance parameters 10x slower.
+* First train without NLL losses. After the LR warmup, ramp up the weight of the NLL losses.
+* Add the "shape plausibility loss" back in. It's based on a GMM probability density for the face shape parameters.
+* The warmup changes helped to get:
+  - Decent performance without BN in the variance heads
+  - Smoother loss curves than before
+
+Curve of the rotation loss at the time of publication:
+
+![Curve of rotation loss at time of publication](traincurve-paper.jpg)
+
+Now:
+
+![Curve of rotation loss now](traincurve-now.jpg)
+
+
+## Model
+* Add back learnable local pose offsets (looks like it doesn't help).
+* Simplify the ONNX graph.
+* Store the model config in the checkpoint. This allows loading without having to know the config in advance.
+
+## Evaluation
+* Add evaluation of the 2D NME on the 68-landmark AFLW2000-3D benchmark.
\ No newline at end of file
diff --git a/doc/traincurve-now.jpg b/doc/traincurve-now.jpg
new file mode 100644
index 0000000..c519bea
Binary files /dev/null and b/doc/traincurve-now.jpg differ
diff --git a/doc/traincurve-paper.jpg b/doc/traincurve-paper.jpg
new file mode 100644
index 0000000..f600d21
Binary files /dev/null and b/doc/traincurve-paper.jpg differ
diff --git a/readme.md b/readme.md
index f514618..ffa99c9 100644
--- a/readme.md
+++ b/readme.md
@@ -1,13 +1,24 @@
-# OpNet: On the power of data augmentation for head pose estimation networks
+OpenTrack "NeuralNet Tracker" Training & Evaluation
+===================================================
-A.K.A. OpenTrack's NeuralNet Tracker Training and Evaluation Code
+Code for [**"OpNet: On the power of data augmentation for head pose estimation"**](https://arxiv.org/abs/2407.05357)
-Intro
------
+If you are looking for the code for the publication, see the [`paper` branch](https://github.com/opentrack/neuralnet-tracker-traincode/tree/paper),
+which is a specially tailored snapshot for the publication.
-This branch contains the code for the publication. Beware, it also contains leftover things from past experiments.
+This repository contains the code to train the neural nets for the NeuralNet tracker plugin of [Opentrack](https://github.com/opentrack/opentrack). It allows head tracking with a simple webcam.
+
+
+Overview
+--------
+
+The tracker plugin is based on deep learning, i.e. neural network models optimized using data to perform their tasks.
+There are two parts: a localizer network and the actual pose estimation network.
+The localizer tries to find a single face and generates a bounding box around it, from which a crop is extracted for the pose network to analyze.
+
+The following outlines the steps to reproduce the networks
+delivered with OpenTrack, including training and evaluation. However, the instructions currently focus on the pose estimator. At the end there is a section on the localizer.
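+As a rough illustration of this two-stage flow, here is a minimal sketch. File names,
+input sizes, and output layouts are assumptions for illustration, not the actual
+interface of the models in this repository:
+
+```python
+# Two-stage head tracking: the localizer proposes a face box, the pose net analyzes the crop.
+import cv2
+import numpy as np
+import onnxruntime as ort
+
+localizer = ort.InferenceSession("localizer.onnx")   # hypothetical file names
+posenet = ort.InferenceSession("head-pose.onnx")
+
+def track_frame(gray: np.ndarray):
+    """gray: single-channel uint8 webcam frame (resizing/letterboxing omitted)."""
+    x = gray[None, None].astype(np.float32) / 255.0
+    # Stage 1: the localizer yields a confidence heatmap and a face bounding box.
+    heatmap, box = localizer.run(None, {localizer.get_inputs()[0].name: x})
+    x0, y0, x1, y1 = box.reshape(4).astype(int)
+    # Stage 2: the crop around the detected face goes to the pose estimation network.
+    crop = cv2.resize(gray[y0:y1, x0:x1], (129, 129))  # pose input size assumed
+    crop = crop[None, None].astype(np.float32) / 255.0
+    return posenet.run(None, {posenet.get_inputs()[0].name: crop})
+```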
-This readme contains instructions for evaluation and training.
 
 Install
 -------
@@ -42,13 +53,15 @@ Evaluation
 
 Download AFLW2000-3D from http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm.
 
+Biwi can be obtained from Kaggle: https://www.kaggle.com/datasets/kmader/biwi-kinect-head-pose-database. I couldn't find a better source that is still accessible.
+
 Download a pytorch model checkpoint.
 
 * Baseline Ensemble: https://drive.google.com/file/d/19LrssD36COWzKDp7akxtJFeVTcltNVlR/view?usp=sharing
 * Additionally trained on Face Synthetics (BL+FS): https://drive.google.com/file/d/19zN8KICVEbLnGFGB5KkKuWrPjfet-jC8/view?usp=sharing
 * Labeling Ensemble (RA-300W-LP from Table 3): https://drive.google.com/file/d/13LSi6J4zWSJnEzEXwZxr5UkWndFXdjcb/view?usp=sharing
 
-### Option 1
+### Option 1 (AFLW2000-3D)
 
 Run `scripts/AFLW20003dEvaluation.ipynb`
 It should give results pretty close to the paper. The face crop selection is different though and so the result won't be exactly the same.
@@ -58,59 +71,62 @@ It should give results pretty close to the paper. The face crop selection is dif
 
 Run the preprocessing and then the evaluation script.
 
 ```bash
-# The output filename "aflw2k.h5" must batch the hardcoded value in "pipelines.py"
-python scripts/dsaflw2k_processing.py $DATADIR/AFLW2000-3D.zip $DATADIR/aflw2k.h5`
+# Preprocess the data. The output filename "aflw2k.h5" must match the hardcoded value in "pipelines.py".
+python scripts/dsaflw2k_processing.py /AFLW2000-3D.zip $DATADIR/aflw2k.h5
 
 # Will look in $DATADIR for aflw2k.h5.
 python scripts/evaluate_pose_network.py --ds aflw2k3d
 ```
 
-It supports ONNX conversions as well as pytorch checkpoints. But the script must be adapted to the concrete model configuration for the checkpoint if that is used. If you wish to process the outputs further, like for averaging like in the paper, there is an option to generate json files.
+The evaluation script supports ONNX conversions as well as PyTorch checkpoints. For PyTorch, the script must be adapted to the concrete model configuration of the checkpoint. If you wish to process the outputs further, e.g. for averaging as in the paper, there is an option to generate JSON files.
 
+Evaluation on the Biwi benchmark works similarly. However, we use the annotations file from https://github.com/pcr-upm/opal23_headpose in order to adhere to the experimental protocol. It can be found under https://github.com/pcr-upm/opal23_headpose/blob/main/annotations/biwi_ann.txt.
+```bash
+# Preprocess the data.
+python scripts/dsprocess_biwi.py --opal-annotation /biwi_ann.txt /biwi.zip $DATADIR/biwi-v3.h5
+
+# Will look in $DATADIR for biwi-v3.h5.
+python scripts/evaluate_pose_network.py --ds biwi --roi-expansion 0.8 --perspective-correction
+```
+You want `--perspective-correction` for SOTA results. It corrects the orientation obtained from the face crop for camera perspective: with the Kinect's field of view, the assumption of orthographic projection no longer holds true. I.e. the pose from the crop is transformed into the global coordinate frame, and it is compared with the original labels w.r.t. this frame. Without the correction, the pose from the crop is taken directly for comparison with the labels. A sketch of the idea follows below.
+Setting `--roi-expansion 0.8` causes the cropped area to be smaller relative to the bounding box annotation. That is also necessary for good results because the annotations have much larger bounding boxes than the networks were trained with.
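+For intuition, here is a minimal sketch of such a correction, assuming a pinhole camera
+model and scipy rotations; the conventions and the actual implementation in
+`scripts/evaluate_pose_network.py` may differ:
+
+```python
+# Rotate a pose predicted from a crop into the camera frame: the prediction is
+# treated as valid in a virtual camera looking along the ray through the crop
+# center, so we compose it with the rotation taking the optical axis onto that ray.
+# All names and conventions here are assumptions for illustration.
+import numpy as np
+from scipy.spatial.transform import Rotation
+
+def correct_for_perspective(pred: Rotation, crop_center_px: np.ndarray,
+                            focal_px: float, principal_px: np.ndarray) -> Rotation:
+    ray = np.append((crop_center_px - principal_px) / focal_px, 1.0)
+    ray /= np.linalg.norm(ray)
+    axis = np.cross([0.0, 0.0, 1.0], ray)      # rotation axis: optical axis x ray
+    norm = np.linalg.norm(axis)
+    if norm < 1e-9:
+        return pred                            # crop centered on the optical axis
+    angle = np.arccos(np.clip(ray[2], -1.0, 1.0))
+    return Rotation.from_rotvec(axis / norm * angle) * pred
+```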
-https://github.com/opentrack/opentrack
-
-It currently has some older models though. Choose the "Neuralnet" tracker plugin.
 
 Integration in OpenTrack
 ------------------------
+Choose the "Neuralnet" tracker plugin. It currently comes with some older models which don't
+achieve the same SOTA benchmark results but are a little bit more noise resistant and invariant
+to eye movements.
 
 Training
 --------
-Several datasets are used. All of which are preprocessed and the result (partially) stored in h5 files.
+Rough guidelines for reproduction follow.
-Rough guidelines for reproduction follow. First to get the data there is
-the expositional script below which enumerates everything.
+### Datasets
-```bash
-# 300W-LP
-# Go to http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm and find the download for 300W-LP.zip.
-# Currently it's on google drive with the ID as used below. Better check it yourself.
-gdown 0B7OEHD3T4eCkVGs0TkhUWFN6N1k
-# Note: gdown is a pip installable tool for downloading from google drive. You can ofc use anything you want.
+#### 300W-LP & AFLW2000-3D
-# AFLW2000-3d
-wget www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/Database/AFLW2000-3D.zip
+There should be download links for `300W-LP.zip` and `AFLW2000-3D.zip` on http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm.
 
-#LaPa Megaface 3D Labeled "Large Pose" Extension
-#https://drive.google.com/file/d/1K4CQ8QqAVXj3Cd-yUt3HU9Z8o8gDmSEV/view?usp=drive_link
-$ gdown 1K4CQ8QqAVXj3Cd-yUt3HU9Z8o8gDmSEV
+#### 300W-LP Reproduction
+My version of 300W-LP with custom out-of-plane rotation augmentation applied.
+Includes "closed-eyes" augmentation as well as directional illumination.
+On Google Drive: https://drive.google.com/file/d/1uEqba5JCGQMzrULnPHxf4EJa04z_yHWw/view?usp=drive_link.
 
-#300W-LP Reproduction
-#https://drive.google.com/file/d/1uEqba5JCGQMzrULnPHxf4EJa04z_yHWw/view?usp=drive_link
-$ gdown 1uEqba5JCGQMzrULnPHxf4EJa04z_yHWw
+#### LaPa Megaface 3D Labeled "Large Pose" Extension
+My pseudo / semi-automatically labeled subset of the Megaface frames from LaPa.
+On Google Drive: https://drive.google.com/file/d/1K4CQ8QqAVXj3Cd-yUt3HU9Z8o8gDmSEV/view?usp=drive_link.
 
-#WFLW 3D Labeled "Large Pose" Extension
-#https://drive.google.com/file/d/1SY33foUF8oZP8RUsFmcEIjq5xF5m3oJ1/view?usp=drive_link
-$ gdown 1SY33foUF8oZP8RUsFmcEIjq5xF5m3oJ1
+#### WFLW 3D Labeled "Large Pose" Extension
+My pseudo / semi-automatically labeled subset of WFLW.
+On Google Drive: https://drive.google.com/file/d/1SY33foUF8oZP8RUsFmcEIjq5xF5m3oJ1/view?usp=drive_link.
 
-# Face Synthetics (https://github.com/microsoft/FaceSynthetics)
-wget --tries=0 --continue --server-response --timeout=0 --retry-connrefused https://facesyntheticspubwedata.blob.core.windows.net/iccv-2021/dataset_100000.zip
-```
+#### Face Synthetics
+There should be a download link on https://github.com/microsoft/FaceSynthetics for the 100k-sample variant `dataset_100000.zip`.
 
-Now some preprocessing and unpacking:
+### Preprocessing
 
 ```bash
 python scripts/dsprocess_aflw2k.py AFLW2000-3D.zip $DATADIR/aflw2k.h5
@@ -129,6 +145,8 @@
 unzip reproduction_300wlp-v12.zip -d ../$DATADIR/
 
 The processed files can be inspected in the notebook `DataVisualization.ipynb`.
 
+### Training Process
+
 Now training should be possible. For the baseline it should be:
 ```bash
 python scripts/train_poseestimator.py --lr 1.e-3 --epochs 1500 --ds "repro_300_wlp+lapa_megaface_lp:20000+wflw_lp" \
@@ -159,6 +177,20 @@ It will look at the environment variable `DATADIR` to find the datasets.
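+# (Aside: the --ds spec joins dataset names with '+'; a suffix like ':20000' appears
+# to cap the number of samples drawn from that dataset, judging from the baseline
+# command above. This is a reading of the examples, not documented behavior.)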
 Notable
 --ds "repro_300_wlp" # Train only on the 300W-LP reproduction
 --ds "repro_300_wlp+lapa_megaface_lp+wflw_lp+synface" # Train the "BL + FS" case which should give best performing models.
 ```
+### Deployment
+
+I use ONNX for deployment and most evaluation purposes. There is a script for conversion. WARNING: it is necessary to adapt its code to the model configuration. :-/ It is easy though: only the one statement where the model is instantiated needs to be changed. The script has two modes. For exports for OpenTrack, use
+```bash
+python scripts/export_model.py --posenet
+```
+It omits the landmark predictions and renames the output tensors (for historical reasons). The script performs sanity checks to ensure that the ONNX outputs are almost equal to the PyTorch outputs.
+To use the model in OpenTrack, find the directory with the other `.onnx` models and copy the new one there. Then, in the tracker settings in OpenTrack, there is a button to select the model file.
+
+For evaluation, use
+```bash
+python scripts/export_model.py --full --posenet
+```
+The model created in this way includes all outputs.
 
 Creation of 3D Labeled WFLW & LaPa Large Pose Expansions
 --------------------------------------------------------
@@ -209,4 +241,13 @@ encoded as JPG, else as PNG. When `storage` is set to `image_filename` then the
 files. The other label fields are label data and should be relatively self-explanatory.
 
 Relevant code for reading and writing those files can be found in `trackertraincode/datasets/dshdf5.py`,
-`trackertraincode/datasets/dshdf5pose.py` and the preprocessing scripts `scripts/dsprocess_*.py`.
\ No newline at end of file
+`trackertraincode/datasets/dshdf5pose.py` and the preprocessing scripts `scripts/dsprocess_*.py`.
+
+Localizer Network
+-----------------
+
+There is an old notebook to train this network.
+
+The training data is a processed version of the Wider Face dataset. The processing accounts for the fact that Wider Face contains images with potentially many faces. Therefore, sections which contain only one face or none are extracted.
+
+The localizer network is trained to generate a "heatmap" with a peak where it suspects the center of a face. In addition, the parameters of a bounding box are output, as sketched below.
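+A minimal sketch of how such outputs could be decoded (shapes, names, and the box
+encoding are assumptions for illustration, not the actual network interface):
+
+```python
+# Decode a localizer output pair: confidence heatmap plus box parameters.
+import numpy as np
+
+def decode_localizer(heatmap: np.ndarray, box: np.ndarray, threshold: float = 0.5):
+    """heatmap: (H, W) face-center confidence map; box: (4,) box encoding."""
+    iy, ix = np.unravel_index(np.argmax(heatmap), heatmap.shape)
+    confidence = float(heatmap[iy, ix])
+    if confidence < threshold:
+        return None                      # no face detected in this frame
+    # Assumed encoding: corner coordinates (x0, y0, x1, y1), normalized to [0, 1].
+    x0, y0, x1, y1 = box
+    return confidence, (ix, iy), (x0, y0, x1, y1)
+```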
\ No newline at end of file
diff --git a/run.sh b/run.sh
index 886ecba..2e5b5f0 100644
--- a/run.sh
+++ b/run.sh
@@ -6,4 +6,5 @@ python scripts/train_poseestimator.py --lr 1.e-3 --epochs 1500 --ds "repro_300_w
     --with-nll-loss \
     --roi-override original \
     --no-blurpool \
-    --backbone mobilenetv1
\ No newline at end of file
+    --backbone resnet18 \
+    --outdir model_files/
\ No newline at end of file
diff --git a/scripts/AFLW20003dEvaluation.ipynb b/scripts/AFLW20003dEvaluation.ipynb
index 79b1eea..bd4afb6 100644
--- a/scripts/AFLW20003dEvaluation.ipynb
+++ b/scripts/AFLW20003dEvaluation.ipynb
@@ -77,11 +77,7 @@
    "outputs": [],
    "source": [
     "modelfile = '../model_files/pub_synface_oroi/run2/swa_NetworkWithPointHead_mobilenetv1.ckpt'\n",
-    "net = trackertraincode.neuralnets.models.NetworkWithPointHead(\n",
-    "    enable_point_head=True,\n",
-    "    config='mobilenetv1', \n",
-    "    enable_uncertainty=True)\n",
-    "net.load_state_dict(torch.load(modelfile))\n",
+    "net = trackertraincode.neuralnets.models.load_model(modelfile)\n",
     "inputsize = net.input_resolution\n",
     "net.cuda()\n",
     "net.eval()"
@@ -240,7 +236,7 @@
    "outputs": [
     {
      "data": {
-      "image/png": "<base64 PNG data omitted>\n",
+      "image/png": "<base64 PNG data omitted>",
       "text/plain": [
        ""
       ]
@@ -257,7 +253,7 @@
    },
    {
     "data": {
-      "image/png": "<base64 PNG data omitted>\n",
+      "image/png": "<base64 PNG data omitted>"
G2ZHwpY+SQDnK0Pv/iF//6V/WH9tUPb5ukzhkTqHff48sv7+qy2j2mKSoG9Qdn2EKBM0oqA0B4tXa7Sr8w+EjHGMQFL3oBEna55OtDfnf+5Vd/ufuul69q/nBOrJiCXWS5O7zaqZt0LSW4OdEWUDYiIjDt27qWWgvb4XCoL8/HBiOGQ2zt6qZHwgIRKMLukz//5ZePv8s/vYuPPvn+KVscIl1iP7/6eAd1EFFLlCB4zWb2wHAlXFRQ51q4n/f15zsYHgbZXM/XYSgh22EGj/r4n/+rm79689ltZHz0z9aHte8mFqLuphJKg+FSIoK85tWNCosIMuBS6lwC+7lk3XyFuP7Xuckgb4H4waGCSPmXfzK/eZ83h1wT81RkVjpinna1JwmwRgnE6HJpE6WTQYMRKFEiplpcprqeq/Hz10jd20x7+D83WpNhPcf+zz//97/5/Mt6LqvXR00lu3alsBxupAAj6hSXBseGUQAGCZNRehxqc6kFdcrncw1TP5WAIMGrJXiZ9JAE2qov/2l+/c1x3nctxvncWUKpWus8k1ZB4XChMaghDBSEgxe1lhIoQVtxmOqGBHDJC2YQG38TFwnGL7Jk6+v+T//ir/9H/Ok+1U+1KNOM0Ij1JTBCO1G2Mpa0AzF4KRoyGSUC3em4eYW6jW2Iq+PFi8i8qgVh1zxS//SPbv7uu/MvPkd3WsiM3aJaD7UWt85aXTbjaSAeAIagFMd8AjHXTp+zFqrtq36agOGLAJsDjGYSZBfbMt38+pf//a/w2Rc3S9ohdU37c5v2ryep90ZtAlj+0IZYhtMmIp2o01ytRKnLskTl1hZfL/tSgE01Hi3/irs//nX5m9/q9m5lO6/dtsPyNB1el/O5d8e1t7HGNHisTYy2g6EoRZBQojYql/1TLR5DKxtmhH1ZsgJ9Ncgo7xmf/vIv/9N/PH366c35MdqqAF3co8Zux1NLMzha9zKY2A9jzEJpYIKakKZgy+i9R6/F41DCw9v5QSObJEMNE3T47PQfftc+/vIujic6laxIRDhgdDGilIhSTIAxRhM50mw3wkA6GIDrrOXdx1MvtdbYFkte5IXtK28Ezth8slFK/91vzzd3h+hCG2Zq5w6QWnHOElFLKaUkLGKblozYVm45powMN1nHw+TArl5TLwY3gEEMY2tMgARBBloczv/54Wk3PXqO8vrYMqD1dG6y3c/ZMoCIUgogKLiRfkHbLkVbdkYEsc7z7pWWjD5VeQwuCVSLMBKjNm883qgthHI6fndP7d6/3x3ualFFR7a190b1Z3RnhIMFY3QclLMHaZkqpWiAFjOK25Jx25rZs5IcWdF+0Zm9zNWMSBgITZ/2d7CWuv/kbn+4w+NyWJe+KkVHZB+rBBrrapDNAtIwaowMxRi4PJ7POa8tpooKkgTNBK6FwPiQI67dE1t83J/7ce1xPr89fMxZilq1NjnAGP0v0GMwb2mxwGAAEXTAVBR0Az49ld2EeV9Vde3rgQ+AlC+TlBywiaXcan37jEPB4zfPd3/21S6aorB30dVR6sUCMKikI2wPSlwOFprBDmDBrkx3PtBjK0wAcSlQgzv8aW4C6EgCh0/r264ZS9XzH3Q7A45QY6uiQkknLQYIhjlWZEaiJ2EEiJB5M3Nh3R907HXb7RkQyDIcIyXxCmERIEOEl3J3uHu4B7/49Ivl9O1hv78p0zx4n1YJZYQgOpyDhUehNVK9bc8Fvfh881HVw+nV3eTyVHFp0DUc/xKYQQ6gfSkuJpSu+yn49nhTJz3TKDX2bWqxcU4jasasCSyQSfLi4tumMdzPx4lNKIzpcEEoBBG0OKYNL2uEt9pCGK2X+fX+8alVudzc3e4n93nXKsEKIEoMbBgEWZj6MK1CINgbIMVx8dx3uU6MmwpAYW2Mwbb/yW0HDYYMUrZiCJHrbtpPWQ71brotoVLmw9lhmohaACJycx6TSDjCBgIQELY5R1+W2f2Uu4mV13mqN5/bIPolSYyRCpgAGEYv0+2nD/XTPUs4UzHV3coSYAkgAhChGh6NUZqsY0aQUolwqwefs9UZp9TBVaiRvrj8BgleUgYjuDCa+5A97/fTbpr3yE6EhWlPEAo4yTGAkAJiAFHMErQNWQpbdpn2u+O0I6f1+L7C1yjcPOLiBHEh/baGYcBmcdpP5WaeCtpSCtlVDyUHJJKKwgjCcBaAKNyIIVsIp+RsZV9LV0R7ePdNDWO0uJeTfAFKMXiwHDzf4A8AGtPNZ4gKxwSnu+JQeypqiVJIhCK2AB/ZLQhuHVN2NOTDw+B5+/Hx6VSLbMZly2b0uVvn5itaD187J4Y7J6KbtahJzXWuva+qhUGSLkEorsQfMQYoGLtDme28JiKs4/Oz5xrIIDRIHMMce274gNIGwOQgusxQy6d3/dXhdkZC6opeCtkGHUWOIhAgYoPAl3FZZoo4Pz1nFFhe+zxPlSSpjbG4NkiX6mhgMMrQlqkqnt69e/PD+tmrfpijhLXKc8yBy7YQI4DwmPeMye22x5ddhpfnE2+jwWYtt1FThLf+4JJ/ttHSBaleszNghL/57f0KrOXm5tXrWwZ6a/vdvqqj1OAIyIBNimRsp48/2NVzwVxvosNtNaGamHLLRx/Q2Tj+Q318+Qotj8dDrHraHe5eVx1PWTv3ZaIitpi5NCce6GBDzMgOrUtGTKVWtoxKrrUg3IcEFy7l0u2Nj/gzCeqXyW+FWrAu71mmNA+qO0aVevXWoNo0RMgaVc/u6B3KpgiuKL2tvQJZC0O8JtH0xthxYxBsIJhgMQAqGK++eGzk0ue+yO5lYurcysRtrrcR5WM6cL2P5ezp1hxBu1vVycNpqZW+0ETkyAH/8HUF7QDEw5c3On+/7nyogSyVKFwLa2yj6UEDEODYsMHY8O7Z1FvLEhylBpzRl7qDiH6ZKW58/vVYf5Dgmjfb+fmJvex3Ox6q1qBbllyIKATGZIdj5DqGZ4By27fsHaXEiPuOedZ6qrteEJHyVgTz0i5c/8dY4c7tvfLphx/7fjfvb17X6q7e2wKrI1y3KhYj11PiGJimhMzMnigRtIm+xkxnrzN6oGrjCMZCqF7eH7SC3IgtUnUfC3qVPKZIjlp6dyIwtlBAUBxzZKibSFtorSsRAYzJch5mndded5bHvH07kn6ZAC4esH1FmLtPPnrb0CMKytiJH+S0ndiISF1Gk5K6izOd7r1pQAigenXM0/OyqB4yrdSgGzg6tRcC/KyhA7r3r75a3jwzW6mAyVLAbTSJMijqzZ84SrKQqdSa6VqCJJHZyrSj5VIP4rZ3Yf/D+H8xYBoOIUv1S56fS2zpIsYoSxfKYwCdvllh+Efv6loNRtBRqpemXZTWRNXX6HUwzqlhJKJcn1/ggFpxwauO6HGaP/6k9+KohailxpQZ2/76eJzA5mXRDHCTpNZ6maICLJUyp3po67qWUg95nkpHQhuV+DMWwy9ZFQAoXfPdF1665l0QpdRSoyqZGw0swWN3NTNRuC0cmlFKYZBQYjfNU880SwWnua6X4u+f5oEtUfGlJ3TUj/crnloEBZZSKogYjw9Y4
cwxqeQYbmcqlcIUpZZggE6V3T5Y6nBKTKWQ8WK08/Jl/zQ2ACAqy7SbZnc5StQyHqmExtMlmZs7jnXTJknd01wiyCAJx26/h8asoqbGFj4jvK1DXYOBxgcQHxvZHrv5h/X+7EgWuNQYCB1pO1PpngCgUeUzhewITrWWAcWEebebpHM/i8o6HrzEy+kadcEK/5haauSP92shIkpBlCgbCwW7d3F7m2bHtvAdiLLbV0Bhm9N+PwMt0wFnbb2JYf008vnh/J+Xqmj9D+95iDJNDI4OKZ2ZmSn1GM+/9G3JAR7OVvc3cLpmetofSsJ23TVmr60Nf43BqY32bcPplwR5GRqadinr08P7fUTl4AQAOFvPLrkLCXfrsndLMlgmlrLb280BYj+XkBKTZiVc146IALcVhdGv8oMuLi3MaEyFsjs9nNeyVEM7kMzCnmvvGjsy6pKkBIASiBIg61SmqA6uZV+LG0LJndBWltrTjIgXqfiKDz9YZLAs4ydk5FKqcqMGWjjVcxAA6d5tyQWBqIFgwFFKDRs1UKaaFhGjKYqIGqWBAzxfJLjQzJsAG5t1xS5l2lW3LJBGl+yEBJPZskk0AhODERxLHAiSaoxyY6QMMorSdTcratRJEkMf8NWGdBMvUtFWdMPN06fHfFijBrJWWDKyj41rZiYqoxaCrBqtECsRoxeRLjoVMLfddMxalc0so55tPePLEd1PA5J2O9z9UcHjeFBgPPGTww0bi0lwjzpm9wNvDd/aCDt9iHvWed4RtWsdfPC1TeGHMTxf5IStfYKy3nxeHpe0pWVAWTP2B/aHhzqxlmoSofGkFbY0qFJ4Na9NMlsrRaotl0b2jUS/dmn/UAPb6A1cVe52d8d1Ve9txEup5eaurHPvrFMNgSXSDGzjjK2QSyU2WkLh7MkatSp7tibp0ixzg2neJrV+4Q62UZyM/XRrSa1DIFlK7CasnL4/NQRLIIJ0dYIlYmz0iAyCtZPOEitYatmfq6S+1Yarvn86eHphFRAOyRH1ADpHIQ6iFKb46aE9tcB+JIEg06y1kJRhRzAY2zwTBSpl3u3q5BKTmn6KhV5WCb5wnsHbwysxKJOBCnsULh27u9e392dMgRi8MkvdTROsxiESMPZytD2xWaapZktG4U9QyE8nwSAvaJ0wMwpSZDqzVthgdlSn5fnjx+NDSMlqBlxidzMzs3s80WLxwjQ5wIhpN/8fzlMLl4REwAYAAAAASUVORK5CYII=", "text/plain": [ "" ] @@ -317,7 +313,7 @@ "outputs": [ { "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAIEAAACBCAAAAADCy4aMAAAq3ElEQVR4nDW7SZNt2XUe9q219j7N7fJm5svM179qUUUABYA0SDQixMbB1mHJshRuqAHlgTV0hAcOhz3yQDNPHOGQI+yBRlaEw6YUDJG2KFIkRZEUBIAAQRZQKFShutdnnzdvc87Ze6+1PLgPef/A2SvPXl976B+PV/Eb4dffuY1l/2ha7dRhZKVfhrt6kS8Oqt2zy/N+dH/+tExlWsbkqCZtTstw9bnLw0dWpB9Pxn0ehnjB0U2mZlVQqvA05/km+nHxIVoIzdJvjpqF/9G9lD+7jlfNj27sft8+/4Q+5lnT/evdX3tnPHnK15PqYG/sfrq+GB3ZalXuxIO3P1g0e5/Ch9iZ7U1lsrxuZqPletGvfuLq8IlUk81ot7pWst2TqcLGu4AVxJGf5W6mSGtOSjl6QMAm5me+t5wvLyddbkZEdybkPArh47fvvfVXB7uPpiebV/boPITnfDnfWy8wxfjs38xKz69e41bVBlJ7542R52WyhN317vVsHR7N5zbQuhk9rHXEUjl7HJUa/XpztGFKa081kSB0bbkYrcc2ea77n+yXQaaKvRMx5fDD870332vrR018Scqz0XrIR93u6Pn1jRInH1zUp/X4VpPHOzzp+ke7Pwk3Lz0103m8nC71dLanPfpbzy/0hoUYoTHmUOf1eb69aLvUdyNGXaxSLjR+fI/Px9TFyzS+1qrvw9VamyZc1XfOdW+Hupo3lp/o9Ohsgg8DZIQnT8XmuNfkW7GRfHL65l6pBrtIE4674WKvD9fTtgdHevI43KhWVJuzy4SLPgs3dNR1aSg9h9gH4jZXvAmX3bjKTfPxLVVP/bTB3Eto7rXh1ub5zskzDokzpo93O9spd/ty+USi1zeq0XwdF9P3Vz/ppbbrXmZxtFOu59f1Bbd9lnsXl092x+PNvHLjOM710j+mHfV1ztkmQy3kRBT7uNm89/LzndFiOTq/32qZnvvt7vL+ebhbPqkvsgwJZOsxj9fTa1DT9Gn5tF3J5GgmozLu22/u/7RN1qtVifWkDklnXf241gs7uPmDYTnbQdgbaSdNHbw/v5xNBu1z71yJipYmiItJPHr4+ZCbi5Gc1+3aexrH9fXiNKQTr8YhX+b1NFce1tXp3kijhpOrCrvV3Rjr2HTxu6823ZA2iXg0HXkOo5Ve15NLusPf4HPZnRSarfOoqrycnbT3ujPUvWW2KhrCEEO1SURDHL0zWtx8JJiXxYwXPt4gdAjrWOeLw2U30jYWvb79FA03p688XrnU9W3eGXuzqH70qd3xohoohHHdqkm9DCVPzuc7T08mT27vU253irQ2zZuH6civ11H7bCShLlFMGETe9HLJb//GXyUufXtKWFfrdTm5eRU57zHml7l+PMYK/adWk5lOzl59WHOs5vd42hCd4XtH09Gi8JAne9PaEcXqVV9dj5unZ7x6ZW/AqC0+GY8vP3kyvqFXQ1wtB0NbReKmroJAqyGMWvG739lf3VnCrysvY+88LuolN7E/8Uk434nPmsWNvOTlwTuv/Kg+QZzPpdoRK+PHr7aTjTVXGO+MJ02HIMPTtC59+/y8jeOx7s2aKjZ5cXI1mvbnp93mrESuqjrUVV2zMBMFS8rrvPpO6/NLWpt0jLzpfFXqoFflVr4eZNZVZdKetVevvPfgfJKrZjyZlT0unJ7ujKEunU2qSPm6HpstRx++tNp73O1czto483GOw2Y92kknQx4yvJHAysLEkphjBsESk1K785S9UL98cEy6vjl5cudq4LFb10Vyq0Ien+nyDV8cx5zr2XgXe3VfDxe7RxZNNsvxtAqWmknyhV7eL9WqHs52x/tjaalaDLznJx8ul5vC0o4mo9i0TdMGB4uaqY2SyV7fT747Wr+0ZKRchumy1IUo9Lb3VGu/Gy+00TK81v35/blljGdViCHFPstEnPpcZm0IVlpYtVz2E5W8Opu01TSK27pruFstek9GMVKsxJ2YyI2YHASFGj0/+KBdd6vVzTPSXq6aZuj3ksQw8PGIT/er9fVschbun5y2DfpKZi014LCWNL+xlrXxPNQxQCl6uciU9p52Z/v1ZN6YZY+zlfV91zvXVWRUAUZk7MVIUIgJqtVluPz0ozz/7hexsy6prPaXdFFftlUwlGqzv1N6XtW35dHCpiMlmU4CMyGH1e5o8JIxlioKr1tQPgl
DnDxb6oG281ZLDq1lHzZrCxVVsWYBnMwZKsgEBjHnnBnrUd+slifxzvHVBtBGp4Wk5snmxrpUm0eXtFdt3ukX1QRF2ybEWKarspFoPvRUNyOGLaoYVmtfYLS+XE8w36k2icYNyma96KkZj3fnszZGDjBmDpHAMBcRJi1YbT766TXNv9PsTgzivSzsccdjXiNRi9MyaavqYulNXRtV1bzagBdtVq9zLl61FUvldSxdWa65xmk3pdlukxGaqN31Ym3NrN4ZT9rIgChFiQFBJAoxk7uZ5TDyb70JteuHR02v6pudaSusYXSSQnUy7LT71w+f78TDYbzOk1HXUdcOw9zbdydW2gZaFeWoEq/P6mG2WFi9OwmFmqi6uh6obmMVggVjY0lRi0RVZnFnN3cz8rJaj3aev/V7O9/+ss6rwRorT/rlroVqd5ne+/l3RnZ59p3JG12Yrerni5O9pDc+kqMnq2aWruhBJ2Ap4rLi65lxPuWDZseWExfv1qscqroKUQgQR1B2CmRUMYzA6g6KqVAotkjvP7hanOmd5yWUcXw+VlIerK1Hv/Xy79z43uKqpXFVHdn05IMfPr+gZLmP/V984+H64dXl9XJjVrrxlYWueTq9fXQryw3ltLi8ytJMmrYORAR3bP+YYEU1GREAd1Bax+v71+t+9JnNzvIAtBo97AeXEp7bwOm1j37+CX1v/w6L177L5yflbvh4zM+wzm2fjjfL0WS2Owm1LM7bVD/Fan+i7fr4gIZVkThuogcmZ3en4s7sDpgzETkyAJDUm7ja/d4r71RP237W2q3bm6vLesw2hLp8/NaNRS6XD+GjmkMLi3eG1Uk1uwiHQ7xsnx2k6nxnfX2xvzvl4yamy9DYQY5DudttisWmbhnuDrjBzJhdHQLOQ0mapQQFuzPWs9Xk2cGT+blY3a0nz9A2S8kcprVfr0cppX72qcudSc11GXWTUZe6JGfrna9+PVC7XLUns+dP9/baULzvY9xjKjLuVD3EtiKABYAbFFAoGIN1nhd+VpVc96wViCdOOWNy1czZ7ezsk089/twwrRehTXvjjx8FOmmrNK5bCUyxHS8rlzF/eHv4reVYyisnC/FmtV7S6zM+TnE6rQvTQEoh1kFg2/8/OUFhgKdU7ApLG7qryzrMsdo0tYyeBIsDWT/Z3D6WMunH/TCqQngUr/hqnGbd/nWpJXAQIi5lXbXPhvKXjjMyfTfPm9nEdpavj67qq735NBDIglPLgciJzBgAAXAtCtW8vj6fPg1rHc/b4XiobNM1MfvIEjWrXCnZVV9bM2a3MCJumg/iM3t0dL8KoMAcOF0s6SJ9dDHm8ybEIPtXi1Ob7h1277Z8vbseT1MAa64iERzuYIJ7MZSSN3mdz6kMVdkkIXRdKhhKV7oUveZ6g73FnYWsxmndJpVJDONnY1tVVT9Pt8sIJCFoFfToatmvedRXh6WOMhquKMzTo0+NjsfPb37w4DBHN/cRMbltb5sQKSFvhnXfrzf9erS8CKv2nm3ips65vxyKVWEsEiLVVF/C+sHGnZi3IaE/OEnhufFih8ASmlyRd3d/+EGPnbyz8+nbfxYP/uXB8OjVPUt/8VLgv759UfXjHXIOIDdndzdztlz6zfJ8oV2Qrn+Issv69mEeL1F0OeRLrWZzYhZezyNOdecyTS4PR5rCMuTL0V2/igeP4ewgtF7TzuLZ4ykw/uIX7/lfGjUn6L62WjyRv/rKN+P8ieza1UTIQWDNTnASH1YXl4tOh5TPF3exZ0HLOD2bn1XX1BklCu3N/bGbjtKNDS14HEfcdeMkoZKn0/Z0lT9ZmQo5M0GE9vwLcrw8vT3k8eYn/uS1v/3P13+jHb9398t//Ae8yp+mt9/cG1oCm5s6oDBeLZ+dDktZDGf6mN8pFSqADxe1D9Wy0iun2d1b0bnOPEXadE21upUn5wfjp8F81vjepcswKZqbIIHYg80/E767viqLi+oL/8Hv97f/+26yfvv4P73/t373m/3Dt+6fXVdTeAlsxO7qyP3y+LI77Z9crIZEq9gO3Twf6lP99nRv2OvPNctsWjFJ4dG64Hwz7nZyOnxelzqYtePjEylsOiQQkxBBYrzDm6F+sq42z77yd/+3n561V+Xdr71E8ul3T9M77T0WNUYJDjaDW7/sFleXTxeb7iwHObB0i6vT+hqzq3W5c7keeZge7NdwOHsW5f3z/dHFnSybaQ65bhYzF9fKtGJSgJg1yJxt9Gj00Qdn7/ybz/3sn768Xsy++p+sFieXP3n8bH59ucvkFAjsZupDt1o9PV9f9evV5U63+6nzZ7j+4pv/zy8dvP2tw1Q9dbbr2y/dnDhbD4cLhzC7uL2R04nlUWhtxIs10YbqK4cbOTm7R9vxccl8ubBvfTNOlsPBp/BPni7ltP3s42EdXMXdhZDgppvN1ZOz6XVZXvkvf+Z3Dl8Nb/yfNyfVEW78HP/5Z77TFj4f7+w1JKQKLoOW+cP9Zt14lFzF0I3DshsmA/Vhup6bGwgmDLJ5/ZWdneXq+uM8dOc+//6NpXin4WH5fKhRiUMccLilvPhwOYnDR0/DP/x7+Hq9960/nDY3dH2429351X917/i6tBhVTSBCSoo0Xg/rW9XVvOHgbmGYPxkN969BVc59LmYOghARczkcv7Y43uzzJ22+fvXyHlpcytXR7uTNo0sNLAxycfZ88ej6xs7jDx4d/NKXbPqrv3v1Gz+9uV+eNDesj8v0Q2njzqsvz2OEW0TRdBUff3q6mlUddO4l1LMu9k/i0dPQ0MrUrYRIAOBe0e7o5tH5/tlngn4ye+VQZDrZjJuJB5bAwmB2VU3ddX/w8vrs4f1ff2BD/fP/3p5Ob2/ee+/nq9XZJ39xmuc7N9+4M2uNS1YW7+LZ5k6mPCfPNXIIoeqPNvWyq8bNDmdPQwUNRKBKAxfm2/XBQcVhZ1LXHnznBnvVFOYGEUGgDLfc1y9NLy4W9sd7r7xx79X/6p89L9XV6otfWj/+tx8+pvr+vQd3d6PAMHjFkGeHz9tpunle75803vZhtB5XZ91Fy6mNbUot3EFOBAdDKi1z2h3YPRDX0SqnSANJ7UGYGc5kecBBs7f47OJH35Y/a3eO3vqps4/k9kvhWz/662NpZg8evDLab4Q8E1P2KvtmJw2HJY76pvY0CrvnN/JaRteHLg5116IIAMicGAT3Epxs34iJJcAJMW2sIgJARFCjhnk5fuszP/r28ZWs3/3on03GDf3polvncK/dv31nZxoqUgMAM95QN6fxKhzS1X6sCkIbq1XIHV/fjeRlMNeCwk4kICTjWHggOAWiKHBJgVQkxiBgh8HNdJqob2bnd+NiebFYn3fXV826Xc7C0f3JhF5uJ6OGiGBu7lo6sVy313uBmolyGYfnk6GffEjaX06a0o2HqjZCiU5S2IRVXSv3CONoxSuICRNg5jxU7tkRq1WdJogH3aTbnF136OzZ5JTiLO6HENp9bppI5u5FU7E0EFpa7lgo9TQVnwSS1dPVxb2PKyqXNvCEvITa3chJQS4VAtwpk4RiVa4puzhYAjMFVT
d2ilI2VV1iPenni5UlvByHZMNYymTSjtBGuLu7qSa96rVt4jCuUNdchdIE35TNCqGutL945uO7xbwMwdjIXZyJoioMHKynOroHEBhBAoBiJE4mwhaRao0htjtpuM6NXt5ZsBTsNLGuYKJmZlpK0eukhxo7iT7W/vBkh8Lo27c//GTvg7uXedzo7GTIvUR3dy8CyWylyg4oOTc+oHEhUjAHuANm7uQMl9Us1WBiUW72Lj2P0SYqPI3ccKDk7p5SSmnYoGrdvCJqvBQPTXht8d7TenxhYRLjNZXlpNEeUiqYBHWnUY6wUJSj0tgdTghgAZgCWLNb1VVKVEofZSUkpGWChJQrNKGSKogM7K7FS9HSXfX7416bvXUssxRGkzrs/8zJuvpwP2Dz+kVM6241HlTQGMQogohYhZWpkuIM90JCRExEcHInNQ0d3DUHkyAlCKtrUKqFWKQKLiUmN9UhDRt/NsQdkIzqYazjqo91CWP+lW9c8Xp08HAx6XZonfpWsmQWgJRcirmzkxSqMjlIiIgIYHKYg9Q0xEzuYFFJXAgMiNfmRCKBixncTYuXIZUFjyZDrfugZvBWwyQzVXv/Q4yz46ujp03JuxdZUzYrSbOZAg6SwLWgZhcJwgw3wN1B7oC7urMV1eIdrIKqOzgQxSqGWDMxS0kw037IdFbqhq2NtyROZpkxksQmtPvflA7rIa7mQzv4phtyGSyVoqYoREwEqYQdcDcQkTuRKYPIi2Uyk2CsVhEZhUBC5JCKY83szq4gUyubNHSnMfsY46OKxvVURIMRQ2bT+//gLIQ+Xs6coid1y5TMzbyoGcgVEpmI3MycCABAZBCYFyPnQCBSEmGJIcQYuKq2kpkd5qopp147PHfJYVS3hyYjkmmclQHc58Q3vvrFjU9DXh/IeF1yQUIyczd3EArJCxBwd0fODgKIFeauTGBmNElcKcYgW+AOURry6BzczYpp7hednNimxHpmO1pHlhEFoYE/fOfscj3+zXt5FayrLjfXedNnypSLlVTUzdTd3dnVAHMwDCDAmdThDrg6nIAgFCOxaIYQhVCP2Bmq2cqQ+o3l8rRwqKpZM5U6VFWgahg87NnCVMtP/d6rWl/emNXBSyrB3VTYuTgJw6KrqbubuwptJ8IEOEDkpu5ixkUQQERikMAgqLqrFh06HYbSjz8+gAiomXOWFlWpl7IqIZRdmKZffPl7H3V7j+u4MwqbihikmVwMZEKkIHd3ApEykzsEAJkaiEFUSDma5uBOMJCBAHMj12KDJk+a1nFzVVcdEk128yxWdYG267WHqWVXbRevvbT85rtnn1o03djLwEDUTCzsKGxE5A5XcyJ2wE3MmbSogbmkLlIACRggeBATwBWwrKVYV3JarXpo3cc1nNo6N95QvUQZ6DK0UFUruynXv/jlr3/9lctJwxU3OeTaKQWwV+4GsKsVBESAQGTsSiAmuIc8ZJm5cOVMobC5K7u5JHZV9cE2abkiW/LhqRDLNKxqZ8o+6tOyC7WbqQ0h2Ggz+bXP/jENx9ZWaTQUZWmUgUwuQDaACkRBzk5QZgKBhKAwC1xxZLirsbMYuJSiJSdNVrSsi2YWVPXVmJsxVVk9e9/QFU2DwE1MNBWe9XL4m3/8bhkf3zaqInHMTi5KWyVViimLQogAsh/7dg6CGpXtvnQ4MYqCFOa5aEl9Spu8KXUPWD9MmWTwEgby7IvM8yCAu6tKGZybu+c//9a/+v5L1WECayAQQzkwAe7mFNicWZyICAYAzKpqZqsWUAIziB0MU2CwDUruN9r3170TJ1r2s6q+1roUG2bLkqV/aR2IlOGAExkn38nVf/b0Xxa7FTVqH4MWVExqwloQzU0QxQiZNAKm7mBO1YoXt0lK4+QKRciVFVCvRVV7X69yaYvYkMd9MyGMLrmraZD1ZkUSYh/gxICLMluIcmP66l9+uH4wJxqVwCTFhLiokggFJhCREUDuTuLEJFWK7en1HsPciMVKjrkDkjkP3ab0m9wT8oaUr29Q3M9eE8AUqSFU4Z2bo0DGgMMDFDzq9u0XvvatHxwezazRgAzZMlNn34KyEzgAXEicGGQUY20ncxIiIyMPvGJKWbWUlHKXSl8qL25qmwkm0/VkZ3U97+D5Yhr70L/b3txlB0GciUBeB637L77++8ev3CRSeF2cVAGpDFt4ZApcAIKrErNLbDF+vh8rBDd1tsxe1IqXIZe+rLvMYmaj92I9X3NcXO9zwdCs7GrPVjyd84d/fVwEbkzF2U1ILDbzv3P4zffOrzJgOgzJECqSNBTOairGIXIgp0BOYKllqs82GoXBSCm7m/Yl9X3XrdepEDwXhcqtq729fjQUz0+frPMC9boPTcxt9/TJ7nwc01CUiAAhlar6lfCN5S1MoVFYOMK5KtfTgdCQIRGpOrEL3FOQ3v36sqpcycxhpIPBtAzDOnfqFij38nCUb17WoW/5sj2MuV+vrLoeBybZhDafP69HQCPMQcFUSYf2a8ffJ7o3ZmUiJhYp1eh5aEcWQM6k6jB4UuNKPVNa3WRXMgWp5x5ZU9eRZWgBFaW66DC9ZLf2iLlgxCPJCy1BlOp6iMy0oarjKhKBa0XlpfmVsx+u4s0gTOTOgUwm4+Xz6/HEgwQyU7Zc8noVPW9WVvVmBdnJybS4q5ac7Vq9eI3i9QcyPAiTcbhzT+sUKHReHz7vZgFCbFKk6nO3CSNmRFjiKhVu6q/987OH7SiUQCwEBIBmo5PONuNGgUKlz3l1Jeyr65640kLGubA5JS192XRD8tIX8jj0B8ubzWuPb8qbe7UJzJU4yLRPIcA9CqC9dsvN2WhaUyA2ZlCoj954h6Y7oYX7VioAgY/OraxIUWIZlrY47zfcWLJmZwJjdxIrqil1ala6DaWOKkpy8C4N9Xw5/dJeBQFFOLPnenIVGA4i1jpqWzVXazgTmJzhJPbrf9U9uTNluJmYAIASbqZ03Ddi6JfL5cXC6gmPheu2gRKV4lqyl4FzvxkcqfcYubeds93lq5v0C3tQA7k4MaFKVQxOYKhFtigmctERkwg5kTvw9Is/uFocmcBf/ChXTReb+MkyUrde50WZtpPpeEwxGAiAFbNixaT3MpRBNwaC2vi7o53uwfEv7hqEyJjc1SxruBkczhzNlTw0yHtnfRU4x4F9EOj8za+PTl8uxOIoTABEVYzk7tXleshO89hOxnUtLGqyhWTkYp0OubhbSUqTjFRhwPysvv2Gh0IwJleQcSX7s2DY7lk4V8Fqb5d9CMyEVGxg3+P+8moSyBRMKmLA1uEZ8W7J7qYx1hJDISJRNQOZWsopD0WRSs8BXAovSlx++vTvtSosW8Xj6hLaNgQlhoPAIKcaPitpiMzgVELSUFrdLA4DWLVSbKkiwcFNJIWV7Mwi7EHAWojYoZ4GsDCy5d7hmYwu+7S/+YmmRszRAZi5eQx1FS0og92DK8GrqBJb0xycuCg68LrK/bKkQETG5i7ERHCQiUQ3iWqZQcpqzgwnLymXDM1F85CsMoAy0TJMNf1cZeCtm+EAS10zc+jqkBGcHYGUIcSVelaGa
tpE1brXRR+iFylGIQIEYfMciFiJOFMQqLGqm5urQTbuKEMpqTcGnJQH1nGc322d/IVXSQi0hZawXI9DIDAMrs4AxWCAu5ZVqd3hQyqSLSt5hTa0sSIrxk4gES8g1pBhqh7gtg27TIpZ3yewwVGkKM/S9NMxbGUGiIiYQwCzh9GwbpxBHEjc3FRIrRBctSdWUA4pDVheXvZelWms5cadAxJ3hlBhYTL0TqogtWJiJTvU3HJf4DB2Lf3o6WEa3Qskjq3sZGIKkYjcwixtskZSsINgWeEcNCi0T+zkd798+/3nHD56+04LL7UNVf/08eFNikHrLNmILXPWvk1xYHXVom6uqkOSYh46SvWTt07ayxtvNqyMF3KLWISFzChUIl3qmy0BR84ZTg44NJdImTcfv9Z9+5cffN4+qTLzw1v9pr9/4/H6XlXy4mqxyDya1Q73Xo3d2E2LGpGaAkaWNXoB3/todmc3sgo5ACIQMwFwIAQRZ6iwg1STisFVFAoHK1m5bttfOA92UP/mn3z73/966mejp+svr9/fwbx0636pUjIfHlWKJGRMquoOL9lI4ZLULNw9nk1ferl2IQL8xQMwE9ydQjCuQ5LtkrKEoO7sympqUOX+3v/+pTvlL3/xG/Qv7sXZk7e+9q2L6er/e/n+3uavzq+uUEvrZflhdW9vJzbRtKiB3IqqmlEyNq3PH2AlDw4oeKXujq3S4a0R5UGcqhR9u6XMo5OaCgqZkpcupz9+PXz59LD/6p/+1tGbn66G//n1+/Xj1fLhcV1vPIDj0Fmu6u/lr0xvGgSpOCEnVTVzckUihLXeeN1r5i3RdOawfSHMiQOLOgD3Ym5qTEYE3wb5iqyDvD/7rT/42c9Pwk+88aMv/PKfVvPV9Jfy00f1gmJsQxu/JUg//SjS//u3jNiTA6pmuc+SB+p2TqvlyEQ+tx+wjaaJHSAicagTPDCKG7kak1sBkbq7mcHJlM1zVR7b+Pj9Y387PT6uHrWjz4+O19WD/Cd/e43aNo/vnj979Zfef7v7u4fa88bgxU1LylbUe+rBOq108kqusY3niQjMROzqRmTBncxBW4fEhExVVdxBzF7U+hKD/cyj396zL0yvn/zBrcWbxcf77//2108vfnPzCd0Jn/mJ07/8X6fhV6//r4O9+ibBMnLfZ5IsQy7Gm+netPuZfeYX9hNAzMTsTnCChyTFCNsKgcHIyZVBsEJwKI0M4fD+l85+0E8u+i9U9Xj9+OWuev0f3fi1f7IyfefmWewO+0++cvz7t89vXNi7XyH2NKSkGHRYU6qxOljXey+XukSwKAAGMZErwQxMYeuXeVEiuJHAnMhBJERKRonM3TfN/KOPb736/ihKeHTir9y/Wv53p26nH55+/NLptz/zP72ze+fj/X/4zd+efXS3S93GlDqzrJxmj27V4/xTVYBvdd52CtuQTpmAoGr+YgcBcNPi7OYgQEoxG9g1E/H++8u7mFS9+aaE7z+6vXy/nF9sLN05TaPwjfHRf/z4H/+j/OC1zYej81TVxwG2HL/5OF3vc5oe3XrRUFAnom004y+8SQQvRExZlQROMFUQKwD3kMxKbuCWrNq/89cP5M7i9eznfU7nP7Lxd7XqrrJO6SVa3+d/+q2/cXnc7qmdrPKruarPffdzc/7Cv+3aX+R7pYGi3h4SL2bgRkIAgrI6lZTZYRoNSsEAN3d2K1aVEt2VCG89fvhKvn1nv/Oz49H56dXFaBKvblztN+PUL//F9PrJx6/eHL/5ysF3X3mSvvr460es9Oo777511B1MY+1g5u2c3T0Y6MdMlIKbuxU1wJ3c1RhbCM/qKN6bb/d4u/ibf/jszc1ZLZcv48bNU1vz0fUct4OHsw+H/KnNzvnF+PIP6MmwPzod5c3e/b/+8HY6e/pSvDWqGQRXANuw3oi24zA4grlA3UE+SASyC2yrkOEO3wQHETnbvP/q7+4/+AHCh+PZsOfTIYwe7V3mxerJs1X1hWp33i2o//qzo785++b4/2g2O38yeXIi9U9+683bEnnrTr/YBwzjF4/ghuDuqmXrV7vBnEAguAcns7yhF5Aey+TGf/47J597t5k8XPH+xU9+JKPV+Hrhby/X7Rv7jVbrXSoHDy7+3eK//afrp3eeHf7RP/iO3774jbuhVRcObADImYkcvh00Gzm7m5kZOYKACLWrEdwJ7rrRKzVABebik52/P//m4NV6vbHh3330/Licbtbfv9D7bx3EXEojVlEz4snv/hc/87/8j8v/8I9Gv/Rfpv4jRg5BAjtAEuootPWMQOTmIPozmOWsKsEhXgZimGrOWvJwdV2+9zG3O699ca9umc1Sc/qHw+xgedwiSv9A0/XjPL5z2MRgeSipGkpOQ9ZHr939vdj/4FfTf/TD479DYwkkgY2dmIgJpEpEtE0qEBzmChDAZq4MJ4K7JjhUyuLFtWUSxGoZ9/7+8tlplnXP0jwb5Gpncq9pGVaUInMiTcMQ5+/00xN85kfz/3v4r3283QAgBSFsM4IX7MQdhABzN0eAgQjbfh0RAepm5BtydQITCKw7mzmPD3mdu8XFsijS/TgZOwpyUHPqck6b4mvQnzbpkA83uz9XTzTKNszfmk6AF2cA25gKCObYFg5c3Vgyh+xOCi5qSDQUtqGouhOHoK1AyOdkd1CGtHEbyDOstgJRNR26Il0/XBXY/jXqN44qrbYGKG9bDdtz+49pO8DBnTNTIhCZm8FdzbSoS1LzaKlGyrk4ObERgYUQ3Fy9kpgyezFvwA627N06a9+V56uKgDA9eqMKzEQgYKu8AfdtTrONSYg5wNW3xhhTzsS6xQvX4tvsN0veZCIr4sYMgOFqxASuIHVvtenaActltRk67+lsMXafIex+uoqRiWhLyF5oFd+qxBchhUgAFVcDwR0Oc2ggJ8AU7oUtpCyrZGasLGE7uBcdBVhwDZrVlIth6Pq+L9pdbgKs3dndvLxXy4tUanvGrQdiBiYyI8DILYDYYOQOcibQlqSqO1zdC4Q9XV0dNcKFDMRCbkbikK2nawZ3uGq2Yb3SIZ+tJZZxMx/fus/CIBA5zIQiiAxExgR3MwlQcguAEIZKHQSDKzwDvt3M6snEoVePHmhmUHRHIeYtq4NzdlXAiwO9b1aLlIflMO12T2/Epv10qGsiIiEYwLRlJ3jhB1FgdiFHAIHQoNgWspW8CDGBATdP7uSU37s9dQrm20yHyADSrSpwGEqxwTb5JA/WUbOK3M9o/vIkxFIRiAELYGd3+IsncIIwu5F5gENNihiBnJxcQChuzlzIs7Co4Oqbt47KJKtJYI1GgBlgRS3Ds3e+yteX7WW9Hg9JtO3bcXl9zpEZzKwgdgabE9xAYIGTAEZwJoYbsdqgKSmpuuWiCrgZeHBiY3KPz//8LHWcc9a1r0tJQ1FTMyfSvl9ryv3V4u2jK36OwNHzzF672VbERMIi9OLYtO2R2gvZBCAQNMCNicgJTKZGbgYwkVNf2MidCZyrH7ZfKd6qWljHwciEyN1MzVmX6IfhalmObzysTTTOlrpz1yMYDodF2ItVSMRb
E2ir4YFCFAJACsmhELwQiERdU0rulEHqzAQnCt9LX7UhyKhAyVpVENxzLl3X59IvHm8uSM7nKxaL9ebm5+eRApG7Ef+41MNbSP7xq0QAgqEEV2azavMitTHK1Ck52C2WwtsaLDuNnn/9s7sTUa6IwyqoOME056Hvuk13tjjlg0uNTYHQ3qOvjqMQs0cRgRkHoy1BVNt+2gGiQkTELiHHvIXpAnZnUzeqbBCzks3MwW4MojQ/+YvPHowDBhkxJzgcqfRp1XWr7ux4Ue8/qmWzWwbx9ZfvRwkCZ2YyEDNeiGYQ0xZrCREONafQJDYCm6oUcgVrgcHIrOpdibb7GxS6HflgfWNUtRg0ZCZSG4YhXa83q8XxILOTSfF6aEaa5z9bxRiJWJjcwYAyACMCb7n6lp+4MREHdiK2kooiqJIyi/eORJJKJGKzLTRTVaKlDVmRuilRCmlJXV+ur5cXV8YQQqSaStXcevOohBCdid2chIigDAOcSADGC/LL5CAKKWRwoSBOJZOySyncS4qqZE0ukbe00cGoSOrKug1RhYpyP6T+erg4SSSIi93eqmlbLOy96k0UqIatPHNx4xc+JIEJDqYtSVUiCqKkpiI59o4kxZHdiMIWHa00Tv6CrLIDhgkXWm1ScbO06br1tdTOODq2WpC0aXa/xjEwUSBmIsAiiN2IVLZ8eXtJQUJgEIIGdWJ3mLkV5hSDFmRDIBZ3Bv+Y2BLHyOrKI6ORatfl5EYydlfiauqVBapH1Ze8jiy0XYDOIcAdZixCzA6H09bJeKGkAyUiplw8iXqdEYsSVRsEJ7CjydWLDh6IFKGq6xgw5r6vm6Gp21z6jjIzHa1sFLVuXt6pWAA2JyZjcoO5OEACuDDBtx37UCBOoECSkQLAomqodBBWyYXdSYBUokDkBS9xc2EnioRRLPWS4yTlPpsTDxWqUaH68HYQKsETCxSCrQAx4sjicBixQMnBmRnbfhXYt/iZQGQUCkzdKyhJxJDEjAKMQFv/i2MFIqJRYdaShtQqzGADZkPb33jQcoC4R3fEohGMwsLC8uPuhBsczM7MwDY6gpEZK1dWjMwKCw1jZxcRh0SvVCO/IHXbbwVBQiTSupak2bQotIWLNPfCthclXQhSmJSMmCUABiEic3J3YiYy2oqnADCzmagZSY5a4Jrbvh2ECcyWmkwMwdbXIWazbSYPCCSIRnAZCjr0bfd61TiHIZBGz0OduCKQcGBy+PZrF2cxYXPZWgnwYO7QollEszdeO5sIC5pcJYTADYhEHMJuqqYaDczMylV2d4aZy2TldWK9VwfUlIWMHFyRgIiCk2ci5vJCMTkzRX3BTJ0CezbiOms2Z0tR3MEITCztAHXlbQcLcBdhMgMLzACHc2JSCK8Lc8oP9rwKRmKmbBLAwvAtLBJgBEAIDMC2uggwQ7CBGMnNDAEuXohI86SAEKgi2cb8TCWAS8kpqhYnAsyhhXwA6zCQr9KrY22iw409ROcQGSDm7b0npq1p8+KFMpgDyqyBKyQLPQhGsFqjpyxVYiPhQNn1x9qGrAAOtRIYYC1OmgNl8bzYGZb+YCyVaYSrEwcwE1gK3LaVDWJ+4Y3QC4sGBDeX4LCYc7AibCRm5tEVxKZSmkixBAIznM3cUXLtWcDOUIOLaUiRY1rZbm1SgrhSZAmVggUwJiKTrYUiTi5GbEQG3po2rBqIYu/N4EwWMJh4kmjmzsZwiRbwIhYCvAxjG6hSIpet7epmoev7bnzEJ34vGlAbYqBtN+lFT+ZFpsAgchJ6YV443GFu/z9atFMVU0ddBwAAAABJRU5ErkJggg==\n", + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAIEAAACBCAAAAADCy4aMAAAq3ElEQVR4nDW7SZNt2XUe9q219j7N7fJm5svM179qUUUABYA0SDQixMbB1mHJshRuqAHlgTV0hAcOhz3yQDNPHOGQI+yBRlaEw6YUDJG2KFIkRZEUBIAAQRZQKFShutdnnzdvc87Ze6+1PLgPef/A2SvPXl976B+PV/Eb4dffuY1l/2ha7dRhZKVfhrt6kS8Oqt2zy/N+dH/+tExlWsbkqCZtTstw9bnLw0dWpB9Pxn0ehnjB0U2mZlVQqvA05/km+nHxIVoIzdJvjpqF/9G9lD+7jlfNj27sft8+/4Q+5lnT/evdX3tnPHnK15PqYG/sfrq+GB3ZalXuxIO3P1g0e5/Ch9iZ7U1lsrxuZqPletGvfuLq8IlUk81ot7pWst2TqcLGu4AVxJGf5W6mSGtOSjl6QMAm5me+t5wvLyddbkZEdybkPArh47fvvfVXB7uPpiebV/boPITnfDnfWy8wxfjs38xKz69e41bVBlJ7542R52WyhN317vVsHR7N5zbQuhk9rHXEUjl7HJUa/XpztGFKa081kSB0bbkYrcc2ea77n+yXQaaKvRMx5fDD870332vrR018Scqz0XrIR93u6Pn1jRInH1zUp/X4VpPHOzzp+ke7Pwk3Lz0103m8nC71dLanPfpbzy/0hoUYoTHmUOf1eb69aLvUdyNGXaxSLjR+fI/Px9TFyzS+1qrvw9VamyZc1XfOdW+Hupo3lp/o9Ohsgg8DZIQnT8XmuNfkW7GRfHL65l6pBrtIE4674WKvD9fTtgdHevI43KhWVJuzy4SLPgs3dNR1aSg9h9gH4jZXvAmX3bjKTfPxLVVP/bTB3Eto7rXh1ub5zskzDokzpo93O9spd/ty+USi1zeq0XwdF9P3Vz/ppbbrXmZxtFOu59f1Bbd9lnsXl092x+PNvHLjOM710j+mHfV1ztkmQy3kRBT7uNm89/LzndFiOTq/32qZnvvt7vL+ebhbPqkvsgwJZOsxj9fTa1DT9Gn5tF3J5GgmozLu22/u/7RN1qtVifWkDklnXf241gs7uPmDYTnbQdgbaSdNHbw/v5xNBu1z71yJipYmiItJPHr4+ZCbi5Gc1+3aexrH9fXiNKQTr8YhX+b1NFce1tXp3kijhpOrCrvV3Rjr2HTxu6823ZA2iXg0HXkOo5Ve15NLusPf4HPZnRSarfOoqrycnbT3ujPUvWW2KhrCEEO1SURDHL0zWtx8JJiXxYwXPt4gdAjrWOeLw2U30jYWvb79FA03p688XrnU9W3eGXuzqH70qd3xohoohHHdqkm9DCVPzuc7T08mT27vU253irQ2zZuH6civ11H7bCShLlFMGETe9HLJb//GXyUufXtKWFfrdTm5eRU57zHml7l+PMYK/adWk5lOzl59WHOs5vd42hCd4XtH09Gi8JAne9PaEcXqVV9dj5unZ7x6ZW/AqC0+GY8vP3kyvqFXQ1wtB0NbReKmroJAqyGMWvG739lf3VnCrysvY+88LuolN7E/8Uk434nPmsWNvOTlwTuv/Kg+QZzPpdoRK+PHr7aTjTVXGO+MJ02HIMPTtC59+/y8jeOx7s2aKjZ5cXI1mvbnp93mrESuqjrUVV2zMBMFS8rrvPpO6/NLWpt0jLzpfFXqoFflVr4eZNZVZdKetVevvPfgfJKrZjyZlT0unJ7ujKEunU2qSPm6HpstRx++tNp73O1czto483GOw2Y92kknQx4yvJHAysLEkphjBsESk1K785S9UL98cEy6vjl5cudq4LFb10Vyq0Ien+nyDV8cx5zr2XgXe3VfDxe7RxZNNsvxtAqWmknyhV7eL9WqHs52x/tjaalaDLznJx8ul5vC0o4mo9i0TdMGB4uaqY2SyV7fT747Wr+0ZKRchumy1IUo9Lb3VGu/Gy+00TK81v35/blljGdViCHFPstEnPpcZm0IVlpYtVz2E5W8Opu01TSK27pruFstek9GMVKsxJ2YyI2YHASFGj0/+KBdd6vVzTPSXq6aZuj3ksQw8PGIT/er9fVschbun5y2DfpKZi014LCWNL+xlrXxPNQxQCl6uciU9p52Z/v1ZN6YZY+zlfV91zvXVWRUAUZk7MVIUIgJqtVluPz0ozz/7hexsy6prPaXdFFftlUwlGqzv1N6XtW35dHCpiMlmU4CMyGH1e5o8JIxlioKr1tQPglDnDxb6oG281ZLDq1lHzZrCxVVsWYBnMwZKsgEBjHnnBnrUd+slifxzvHVBtBGp4Wk5snmxrpUm0eXtFdt3ukX1QRF2ybEWKarspFoPvRUNyOGLaoYVmtfYLS+XE8w36k2icYNyma96KkZj3fnszZGDjBmDpHAMBcRJi1YbT766TXNv9PsTgzivSzsccdjXiNRi9MyaavqYulNXRtV1bzagBdtVq9zLl61FUvldSxdWa65xmk3pdlukxGaqN31Ym3NrN4ZT9rIgChFiQFBJAoxk7uZ5TDyb70JteuHR02v6pudaSusYXSSQnUy7LT71w+f78TDYbzOk1HXUdcOw9zbdydW2gZaFeWoEq/P6mG2WFi9OwmFmqi6uh6obmMVggVjY0lRi0RVZnFnN3cz8rJaj3aev/V7O9/+ss6rwRorT/rlroVqd5ne+/l3RnZ59p3JG12Yrerni5O9pDc+kqMnq2aWruhBJ2Ap4rLi65lxPuWDZseWExfv1qscqroKUQgQR1B2CmRUMYzA6g6KqVAotkjvP7hanOmd5yWUcXw+VlIerK1Hv/Xy79z43uKqpXFVHdn05IMfPr+gZLmP/V984+H64dXl9XJjVrrxlYWueTq9fXQryw3ltLi8ytJMmrYORAR3bP+YYEU1GREAd1Bax+v71+t+9JnNzvIAtBo97AeXEp7bwOm1j37+CX1v/w6L177L5yflbvh4zM+wzm2fjjfL0WS2Owm1LM7bVD/Fan+i7fr4gIZVkThuogcmZ3en4s7sDpgzETkyAJDUm7ja/d4r71RP237W2q3bm6vLesw2hLp8/NaNRS6XD+GjmkMLi3eG1Uk1uwiHQ7xsnx2k6nxnfX2xvzvl4yamy9DYQY5DudttisWmbhnuDrjBzJhdHQLOQ0mapQQFuzPWs9Xk2cGT+blY3a0nz9A2S8kcprVfr0cppX72qcudSc11GXWTUZe6JGfrna9+PVC7XLUns+dP9/baULzvY9xjKjLuVD3EtiKABYAbFFAoGIN1nhd+VpVc96wViCdOOWNy1czZ7ezsk089/twwrRehTXvjjx8FOmmrNK5bCUyxHS8rlzF/eHv4reVYyisnC/FmtV7S6zM+TnE6rQvTQEoh1kFg2/8/OUFhgKdU7ApLG7qryzrMsdo0tYyeBIsDWT/Z3D6WMunH/TCqQngUr/hqnGbd/nWpJXAQIi5lXbXPhvKXjjMyfTfPm9nEdpavj67qq735NBDIglPLgciJzBgAAXAtCtW8vj6fPg1rHc/b4XiobNM1MfvIEjWrXCnZVV9bM2a3MCJumg/iM3t0dL8KoMAcOF0s6S
J9dDHm8ybEIPtXi1Ob7h1277Z8vbseT1MAa64iERzuYIJ7MZSSN3mdz6kMVdkkIXRdKhhKV7oUveZ6g73FnYWsxmndJpVJDONnY1tVVT9Pt8sIJCFoFfToatmvedRXh6WOMhquKMzTo0+NjsfPb37w4DBHN/cRMbltb5sQKSFvhnXfrzf9erS8CKv2nm3ips65vxyKVWEsEiLVVF/C+sHGnZi3IaE/OEnhufFih8ASmlyRd3d/+EGPnbyz8+nbfxYP/uXB8OjVPUt/8VLgv759UfXjHXIOIDdndzdztlz6zfJ8oV2Qrn+Issv69mEeL1F0OeRLrWZzYhZezyNOdecyTS4PR5rCMuTL0V2/igeP4ewgtF7TzuLZ4ykw/uIX7/lfGjUn6L62WjyRv/rKN+P8ieza1UTIQWDNTnASH1YXl4tOh5TPF3exZ0HLOD2bn1XX1BklCu3N/bGbjtKNDS14HEfcdeMkoZKn0/Z0lT9ZmQo5M0GE9vwLcrw8vT3k8eYn/uS1v/3P13+jHb9398t//Ae8yp+mt9/cG1oCm5s6oDBeLZ+dDktZDGf6mN8pFSqADxe1D9Wy0iun2d1b0bnOPEXadE21upUn5wfjp8F81vjepcswKZqbIIHYg80/E767viqLi+oL/8Hv97f/+26yfvv4P73/t373m/3Dt+6fXVdTeAlsxO7qyP3y+LI77Z9crIZEq9gO3Twf6lP99nRv2OvPNctsWjFJ4dG64Hwz7nZyOnxelzqYtePjEylsOiQQkxBBYrzDm6F+sq42z77yd/+3n561V+Xdr71E8ul3T9M77T0WNUYJDjaDW7/sFleXTxeb7iwHObB0i6vT+hqzq3W5c7keeZge7NdwOHsW5f3z/dHFnSybaQ65bhYzF9fKtGJSgJg1yJxt9Gj00Qdn7/ybz/3sn768Xsy++p+sFieXP3n8bH59ucvkFAjsZupDt1o9PV9f9evV5U63+6nzZ7j+4pv/zy8dvP2tw1Q9dbbr2y/dnDhbD4cLhzC7uL2R04nlUWhtxIs10YbqK4cbOTm7R9vxccl8ubBvfTNOlsPBp/BPni7ltP3s42EdXMXdhZDgppvN1ZOz6XVZXvkvf+Z3Dl8Nb/yfNyfVEW78HP/5Z77TFj4f7+w1JKQKLoOW+cP9Zt14lFzF0I3DshsmA/Vhup6bGwgmDLJ5/ZWdneXq+uM8dOc+//6NpXin4WH5fKhRiUMccLilvPhwOYnDR0/DP/x7+Hq9960/nDY3dH2429351X917/i6tBhVTSBCSoo0Xg/rW9XVvOHgbmGYPxkN969BVc59LmYOghARczkcv7Y43uzzJ22+fvXyHlpcytXR7uTNo0sNLAxycfZ88ej6xs7jDx4d/NKXbPqrv3v1Gz+9uV+eNDesj8v0Q2njzqsvz2OEW0TRdBUff3q6mlUddO4l1LMu9k/i0dPQ0MrUrYRIAOBe0e7o5tH5/tlngn4ye+VQZDrZjJuJB5bAwmB2VU3ddX/w8vrs4f1ff2BD/fP/3p5Ob2/ee+/nq9XZJ39xmuc7N9+4M2uNS1YW7+LZ5k6mPCfPNXIIoeqPNvWyq8bNDmdPQwUNRKBKAxfm2/XBQcVhZ1LXHnznBnvVFOYGEUGgDLfc1y9NLy4W9sd7r7xx79X/6p89L9XV6otfWj/+tx8+pvr+vQd3d6PAMHjFkGeHz9tpunle75803vZhtB5XZ91Fy6mNbUot3EFOBAdDKi1z2h3YPRDX0SqnSANJ7UGYGc5kecBBs7f47OJH35Y/a3eO3vqps4/k9kvhWz/662NpZg8evDLab4Q8E1P2KvtmJw2HJY76pvY0CrvnN/JaRteHLg5116IIAMicGAT3Epxs34iJJcAJMW2sIgJARFCjhnk5fuszP/r28ZWs3/3on03GDf3polvncK/dv31nZxoqUgMAM95QN6fxKhzS1X6sCkIbq1XIHV/fjeRlMNeCwk4kICTjWHggOAWiKHBJgVQkxiBgh8HNdJqob2bnd+NiebFYn3fXV826Xc7C0f3JhF5uJ6OGiGBu7lo6sVy313uBmolyGYfnk6GffEjaX06a0o2HqjZCiU5S2IRVXSv3CONoxSuICRNg5jxU7tkRq1WdJogH3aTbnF136OzZ5JTiLO6HENp9bppI5u5FU7E0EFpa7lgo9TQVnwSS1dPVxb2PKyqXNvCEvITa3chJQS4VAtwpk4RiVa4puzhYAjMFVTd2ilI2VV1iPenni5UlvByHZMNYymTSjtBGuLu7qSa96rVt4jCuUNdchdIE35TNCqGutL945uO7xbwMwdjIXZyJoioMHKynOroHEBhBAoBiJE4mwhaRao0htjtpuM6NXt5ZsBTsNLGuYKJmZlpK0eukhxo7iT7W/vBkh8Lo27c//GTvg7uXedzo7GTIvUR3dy8CyWylyg4oOTc+oHEhUjAHuANm7uQMl9Us1WBiUW72Lj2P0SYqPI3ccKDk7p5SSmnYoGrdvCJqvBQPTXht8d7TenxhYRLjNZXlpNEeUiqYBHWnUY6wUJSj0tgdTghgAZgCWLNb1VVKVEofZSUkpGWChJQrNKGSKogM7K7FS9HSXfX7416bvXUssxRGkzrs/8zJuvpwP2Dz+kVM6241HlTQGMQogohYhZWpkuIM90JCRExEcHInNQ0d3DUHkyAlCKtrUKqFWKQKLiUmN9UhDRt/NsQdkIzqYazjqo91CWP+lW9c8Xp08HAx6XZonfpWsmQWgJRcirmzkxSqMjlIiIgIYHKYg9Q0xEzuYFFJXAgMiNfmRCKBixncTYuXIZUFjyZDrfugZvBWwyQzVXv/Q4yz46ujp03JuxdZUzYrSbOZAg6SwLWgZhcJwgw3wN1B7oC7urMV1eIdrIKqOzgQxSqGWDMxS0kw037IdFbqhq2NtyROZpkxksQmtPvflA7rIa7mQzv4phtyGSyVoqYoREwEqYQdcDcQkTuRKYPIi2Uyk2CsVhEZhUBC5JCKY83szq4gUyubNHSnMfsY46OKxvVURIMRQ2bT+//gLIQ+Xs6coid1y5TMzbyoGcgVEpmI3MycCABAZBCYFyPnQCBSEmGJIcQYuKq2kpkd5qopp147PHfJYVS3hyYjkmmclQHc58Q3vvrFjU9DXh/IeF1yQUIyczd3EArJCxBwd0fODgKIFeauTGBmNElcKcYgW+AOURry6BzczYpp7hednNimxHpmO1pHlhEFoYE/fOfscj3+zXt5FayrLjfXedNnypSLlVTUzdTd3dnVAHMwDCDAmdThDrg6nIAgFCOxaIYQhVCP2Bmq2cqQ+o3l8rRwqKpZM5U6VFWgahg87NnCVMtP/d6rWl/emNXBSyrB3VTYuTgJw6KrqbubuwptJ8IEOEDkpu5ixkUQQERikMAgqLqrFh06HYbSjz8+gAiomXOWFlWpl7IqIZRdmKZffPl7H3V7j+u4MwqbihikmVwMZEKkIHd3ApEykzsEAJkaiEFUSDma5uBOMJCBAHMj12KDJk+a1nFzVVcdEk128yxWdYG267WHqWVXbRevvbT85rtnn1o03djLw
EDUTCzsKGxE5A5XcyJ2wE3MmbSogbmkLlIACRggeBATwBWwrKVYV3JarXpo3cc1nNo6N95QvUQZ6DK0UFUruynXv/jlr3/9lctJwxU3OeTaKQWwV+4GsKsVBESAQGTsSiAmuIc8ZJm5cOVMobC5K7u5JHZV9cE2abkiW/LhqRDLNKxqZ8o+6tOyC7WbqQ0h2Ggz+bXP/jENx9ZWaTQUZWmUgUwuQDaACkRBzk5QZgKBhKAwC1xxZLirsbMYuJSiJSdNVrSsi2YWVPXVmJsxVVk9e9/QFU2DwE1MNBWe9XL4m3/8bhkf3zaqInHMTi5KWyVViimLQogAsh/7dg6CGpXtvnQ4MYqCFOa5aEl9Spu8KXUPWD9MmWTwEgby7IvM8yCAu6tKGZybu+c//9a/+v5L1WECayAQQzkwAe7mFNicWZyICAYAzKpqZqsWUAIziB0MU2CwDUruN9r3170TJ1r2s6q+1roUG2bLkqV/aR2IlOGAExkn38nVf/b0Xxa7FTVqH4MWVExqwloQzU0QxQiZNAKm7mBO1YoXt0lK4+QKRciVFVCvRVV7X69yaYvYkMd9MyGMLrmraZD1ZkUSYh/gxICLMluIcmP66l9+uH4wJxqVwCTFhLiokggFJhCREUDuTuLEJFWK7en1HsPciMVKjrkDkjkP3ab0m9wT8oaUr29Q3M9eE8AUqSFU4Z2bo0DGgMMDFDzq9u0XvvatHxwezazRgAzZMlNn34KyEzgAXEicGGQUY20ncxIiIyMPvGJKWbWUlHKXSl8qL25qmwkm0/VkZ3U97+D5Yhr70L/b3txlB0GciUBeB637L77++8ev3CRSeF2cVAGpDFt4ZApcAIKrErNLbDF+vh8rBDd1tsxe1IqXIZe+rLvMYmaj92I9X3NcXO9zwdCs7GrPVjyd84d/fVwEbkzF2U1ILDbzv3P4zffOrzJgOgzJECqSNBTOairGIXIgp0BOYKllqs82GoXBSCm7m/Yl9X3XrdepEDwXhcqtq729fjQUz0+frPMC9boPTcxt9/TJ7nwc01CUiAAhlar6lfCN5S1MoVFYOMK5KtfTgdCQIRGpOrEL3FOQ3v36sqpcycxhpIPBtAzDOnfqFij38nCUb17WoW/5sj2MuV+vrLoeBybZhDafP69HQCPMQcFUSYf2a8ffJ7o3ZmUiJhYp1eh5aEcWQM6k6jB4UuNKPVNa3WRXMgWp5x5ZU9eRZWgBFaW66DC9ZLf2iLlgxCPJCy1BlOp6iMy0oarjKhKBa0XlpfmVsx+u4s0gTOTOgUwm4+Xz6/HEgwQyU7Zc8noVPW9WVvVmBdnJybS4q5ac7Vq9eI3i9QcyPAiTcbhzT+sUKHReHz7vZgFCbFKk6nO3CSNmRFjiKhVu6q/987OH7SiUQCwEBIBmo5PONuNGgUKlz3l1Jeyr65640kLGubA5JS192XRD8tIX8jj0B8ubzWuPb8qbe7UJzJU4yLRPIcA9CqC9dsvN2WhaUyA2ZlCoj954h6Y7oYX7VioAgY/OraxIUWIZlrY47zfcWLJmZwJjdxIrqil1ala6DaWOKkpy8C4N9Xw5/dJeBQFFOLPnenIVGA4i1jpqWzVXazgTmJzhJPbrf9U9uTNluJmYAIASbqZ03Ddi6JfL5cXC6gmPheu2gRKV4lqyl4FzvxkcqfcYubeds93lq5v0C3tQA7k4MaFKVQxOYKhFtigmctERkwg5kTvw9Is/uFocmcBf/ChXTReb+MkyUrde50WZtpPpeEwxGAiAFbNixaT3MpRBNwaC2vi7o53uwfEv7hqEyJjc1SxruBkczhzNlTw0yHtnfRU4x4F9EOj8za+PTl8uxOIoTABEVYzk7tXleshO89hOxnUtLGqyhWTkYp0OubhbSUqTjFRhwPysvv2Gh0IwJleQcSX7s2DY7lk4V8Fqb5d9CMyEVGxg3+P+8moSyBRMKmLA1uEZ8W7J7qYx1hJDISJRNQOZWsopD0WRSs8BXAovSlx++vTvtSosW8Xj6hLaNgQlhoPAIKcaPitpiMzgVELSUFrdLA4DWLVSbKkiwcFNJIWV7Mwi7EHAWojYoZ4GsDCy5d7hmYwu+7S/+YmmRszRAZi5eQx1FS0og92DK8GrqBJb0xycuCg68LrK/bKkQETG5i7ERHCQiUQ3iWqZQcpqzgwnLymXDM1F85CsMoAy0TJMNf1cZeCtm+EAS10zc+jqkBGcHYGUIcSVelaGatpE1brXRR+iFylGIQIEYfMciFiJOFMQqLGqm5urQTbuKEMpqTcGnJQH1nGc322d/IVXSQi0hZawXI9DIDAMrs4AxWCAu5ZVqd3hQyqSLSt5hTa0sSIrxk4gES8g1pBhqh7gtg27TIpZ3yewwVGkKM/S9NMxbGUGiIiYQwCzh9GwbpxBHEjc3FRIrRBctSdWUA4pDVheXvZelWms5cadAxJ3hlBhYTL0TqogtWJiJTvU3HJf4DB2Lf3o6WEa3Qskjq3sZGIKkYjcwixtskZSsINgWeEcNCi0T+zkd798+/3nHD56+04LL7UNVf/08eFNikHrLNmILXPWvk1xYHXVom6uqkOSYh46SvWTt07ayxtvNqyMF3KLWISFzChUIl3qmy0BR84ZTg44NJdImTcfv9Z9+5cffN4+qTLzw1v9pr9/4/H6XlXy4mqxyDya1Q73Xo3d2E2LGpGaAkaWNXoB3/todmc3sgo5ACIQMwFwIAQRZ6iwg1STisFVFAoHK1m5bttfOA92UP/mn3z73/966mejp+svr9/fwbx0636pUjIfHlWKJGRMquoOL9lI4ZLULNw9nk1ferl2IQL8xQMwE9ydQjCuQ5LtkrKEoO7sympqUOX+3v/+pTvlL3/xG/Qv7sXZk7e+9q2L6er/e/n+3uavzq+uUEvrZflhdW9vJzbRtKiB3IqqmlEyNq3PH2AlDw4oeKXujq3S4a0R5UGcqhR9u6XMo5OaCgqZkpcupz9+PXz59LD/6p/+1tGbn66G//n1+/Xj1fLhcV1vPIDj0Fmu6u/lr0xvGgSpOCEnVTVzckUihLXeeN1r5i3RdOawfSHMiQOLOgD3Ym5qTEYE3wb5iqyDvD/7rT/42c9Pwk+88aMv/PKfVvPV9Jfy00f1gmJsQxu/JUg//SjS//u3jNiTA6pmuc+SB+p2TqvlyEQ+tx+wjaaJHSAicagTPDCKG7kak1sBkbq7mcHJlM1zVR7b+Pj9Y387PT6uHrWjz4+O19WD/Cd/e43aNo/vnj979Zfef7v7u4fa88bgxU1LylbUe+rBOq108kqusY3niQjMROzqRmTBncxBW4fEhExVVdxBzF7U+hKD/cyj396zL0yvn/zBrcWbxcf77//2108vfnPzCd0Jn/mJ07/8X6fhV6//r4O9+ibBMnLfZ5IsQy7Gm+netPuZfeYX9hNAzMTsTnCChyTFCNsKgcHIyZVBsEJwKI0M4fD+l85+0E8u+i9U9Xj9+OWuev0f3fi1f7IyfefmWewO+0++cvz7t89vXNi7XyH2NKSkGHRYU6qxOljXey+XukSwKAAGMZErwQxMYeuX
eVEiuJHAnMhBJERKRonM3TfN/KOPb736/ihKeHTir9y/Wv53p26nH55+/NLptz/zP72ze+fj/X/4zd+efXS3S93GlDqzrJxmj27V4/xTVYBvdd52CtuQTpmAoGr+YgcBcNPi7OYgQEoxG9g1E/H++8u7mFS9+aaE7z+6vXy/nF9sLN05TaPwjfHRf/z4H/+j/OC1zYej81TVxwG2HL/5OF3vc5oe3XrRUFAnom004y+8SQQvRExZlQROMFUQKwD3kMxKbuCWrNq/89cP5M7i9eznfU7nP7Lxd7XqrrJO6SVa3+d/+q2/cXnc7qmdrPKruarPffdzc/7Cv+3aX+R7pYGi3h4SL2bgRkIAgrI6lZTZYRoNSsEAN3d2K1aVEt2VCG89fvhKvn1nv/Oz49H56dXFaBKvblztN+PUL//F9PrJx6/eHL/5ysF3X3mSvvr460es9Oo777511B1MY+1g5u2c3T0Y6MdMlIKbuxU1wJ3c1RhbCM/qKN6bb/d4u/ibf/jszc1ZLZcv48bNU1vz0fUct4OHsw+H/KnNzvnF+PIP6MmwPzod5c3e/b/+8HY6e/pSvDWqGQRXANuw3oi24zA4grlA3UE+SASyC2yrkOEO3wQHETnbvP/q7+4/+AHCh+PZsOfTIYwe7V3mxerJs1X1hWp33i2o//qzo785++b4/2g2O38yeXIi9U9+683bEnnrTr/YBwzjF4/ghuDuqmXrV7vBnEAguAcns7yhF5Aey+TGf/47J597t5k8XPH+xU9+JKPV+Hrhby/X7Rv7jVbrXSoHDy7+3eK//afrp3eeHf7RP/iO3774jbuhVRcObADImYkcvh00Gzm7m5kZOYKACLWrEdwJ7rrRKzVABebik52/P//m4NV6vbHh3330/Licbtbfv9D7bx3EXEojVlEz4snv/hc/87/8j8v/8I9Gv/Rfpv4jRg5BAjtAEuootPWMQOTmIPozmOWsKsEhXgZimGrOWvJwdV2+9zG3O699ca9umc1Sc/qHw+xgedwiSv9A0/XjPL5z2MRgeSipGkpOQ9ZHr939vdj/4FfTf/TD479DYwkkgY2dmIgJpEpEtE0qEBzmChDAZq4MJ4K7JjhUyuLFtWUSxGoZ9/7+8tlplnXP0jwb5Gpncq9pGVaUInMiTcMQ5+/00xN85kfz/3v4r3283QAgBSFsM4IX7MQdhABzN0eAgQjbfh0RAepm5BtydQITCKw7mzmPD3mdu8XFsijS/TgZOwpyUHPqck6b4mvQnzbpkA83uz9XTzTKNszfmk6AF2cA25gKCObYFg5c3Vgyh+xOCi5qSDQUtqGouhOHoK1AyOdkd1CGtHEbyDOstgJRNR26Il0/XBXY/jXqN44qrbYGKG9bDdtz+49pO8DBnTNTIhCZm8FdzbSoS1LzaKlGyrk4ObERgYUQ3Fy9kpgyezFvwA627N06a9+V56uKgDA9eqMKzEQgYKu8AfdtTrONSYg5wNW3xhhTzsS6xQvX4tvsN0veZCIr4sYMgOFqxASuIHVvtenaActltRk67+lsMXafIex+uoqRiWhLyF5oFd+qxBchhUgAFVcDwR0Oc2ggJ8AU7oUtpCyrZGasLGE7uBcdBVhwDZrVlIth6Pq+L9pdbgKs3dndvLxXy4tUanvGrQdiBiYyI8DILYDYYOQOcibQlqSqO1zdC4Q9XV0dNcKFDMRCbkbikK2nawZ3uGq2Yb3SIZ+tJZZxMx/fus/CIBA5zIQiiAxExgR3MwlQcguAEIZKHQSDKzwDvt3M6snEoVePHmhmUHRHIeYtq4NzdlXAiwO9b1aLlIflMO12T2/Epv10qGsiIiEYwLRlJ3jhB1FgdiFHAIHQoNgWspW8CDGBATdP7uSU37s9dQrm20yHyADSrSpwGEqxwTb5JA/WUbOK3M9o/vIkxFIRiAELYGd3+IsncIIwu5F5gENNihiBnJxcQChuzlzIs7Co4Oqbt47KJKtJYI1GgBlgRS3Ds3e+yteX7WW9Hg9JtO3bcXl9zpEZzKwgdgabE9xAYIGTAEZwJoYbsdqgKSmpuuWiCrgZeHBiY3KPz//8LHWcc9a1r0tJQ1FTMyfSvl9ryv3V4u2jK36OwNHzzF672VbERMIi9OLYtO2R2gvZBCAQNMCNicgJTKZGbgYwkVNf2MidCZyrH7ZfKd6qWljHwciEyN1MzVmX6IfhalmObzysTTTOlrpz1yMYDodF2ItVSMRbE2ir4YFCFAJACsmhELwQiERdU0rulEHqzAQnCt9LX7UhyKhAyVpVENxzLl3X59IvHm8uSM7nKxaL9ebm5+eRApG7Ef+41MNbSP7xq0QAgqEEV2azavMitTHK1Ck52C2WwtsaLDuNnn/9s7sTUa6IwyqoOME056Hvuk13tjjlg0uNTYHQ3qOvjqMQs0cRgRkHoy1BVNt+2gGiQkTELiHHvIXpAnZnUzeqbBCzks3MwW4MojQ/+YvPHowDBhkxJzgcqfRp1XWr7ux4Ue8/qmWzWwbx9ZfvRwkCZ2YyEDNeiGYQ0xZrCREONafQJDYCm6oUcgVrgcHIrOpdibb7GxS6HflgfWNUtRg0ZCZSG4YhXa83q8XxILOTSfF6aEaa5z9bxRiJWJjcwYAyACMCb7n6lp+4MREHdiK2kooiqJIyi/eORJJKJGKzLTRTVaKlDVmRuilRCmlJXV+ur5cXV8YQQqSaStXcevOohBCdid2chIigDAOcSADGC/LL5CAKKWRwoSBOJZOySyncS4qqZE0ukbe00cGoSOrKug1RhYpyP6T+erg4SSSIi93eqmlbLOy96k0UqIatPHNx4xc+JIEJDqYtSVUiCqKkpiI59o4kxZHdiMIWHa00Tv6CrLIDhgkXWm1ScbO06br1tdTOODq2WpC0aXa/xjEwUSBmIsAiiN2IVLZ8eXtJQUJgEIIGdWJ3mLkV5hSDFmRDIBZ3Bv+Y2BLHyOrKI6ORatfl5EYydlfiauqVBapH1Ze8jiy0XYDOIcAdZixCzA6H09bJeKGkAyUiplw8iXqdEYsSVRsEJ7CjydWLDh6IFKGq6xgw5r6vm6Gp21z6jjIzHa1sFLVuXt6pWAA2JyZjcoO5OEACuDDBtx37UCBOoECSkQLAomqodBBWyYXdSYBUokDkBS9xc2EnioRRLPWS4yTlPpsTDxWqUaH68HYQKsETCxSCrQAx4sjicBixQMnBmRnbfhXYt/iZQGQUCkzdKyhJxJDEjAKMQFv/i2MFIqJRYdaShtQqzGADZkPb33jQcoC4R3fEohGMwsLC8uPuhBsczM7MwDY6gpEZK1dWjMwKCw1jZxcRh0SvVCO/IHXbbwVBQiTSupak2bQotIWLNPfCthclXQhSmJSMmCUABiEic3J3YiYy2oqnADCzmagZSY5a4Jrbvh2ECcyWmkwMwdbXIWazbSYPCCSIRnAZCjr0bfd61TiHIZBGz0OduCKQcGBy+PZrF2cxYXPZWgnwYO7QollEszdeO5sIC5pcJYTADYhEHMJuqqYaDczMylV2d4aZy2TldWK9VwfUlIWMHFyRgIiCk2ci5vJ
CMTkzRX3BTJ0CezbiOms2Z0tR3MEITCztAHXlbQcLcBdhMgMLzACHc2JSCK8Lc8oP9rwKRmKmbBLAwvAtLBJgBEAIDMC2uggwQ7CBGMnNDAEuXohI86SAEKgi2cb8TCWAS8kpqhYnAsyhhXwA6zCQr9KrY22iw409ROcQGSDm7b0npq1p8+KFMpgDyqyBKyQLPQhGsFqjpyxVYiPhQNn1x9qGrAAOtRIYYC1OmgNl8bzYGZb+YCyVaYSrEwcwE1gK3LaVDWJ+4Y3QC4sGBDeX4LCYc7AibCRm5tEVxKZSmkixBAIznM3cUXLtWcDOUIOLaUiRY1rZbm1SgrhSZAmVggUwJiKTrYUiTi5GbEQG3po2rBqIYu/N4EwWMJh4kmjmzsZwiRbwIhYCvAxjG6hSIpet7epmoev7bnzEJ34vGlAbYqBtN+lFT+ZFpsAgchJ6YV443GFu/z9atFMVU0ddBwAAAABJRU5ErkJggg==", "text/plain": [ "" ] @@ -334,7 +330,7 @@ }, { "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAIEAAACBCAAAAADCy4aMAAAc30lEQVR4nGV725IsR46cuyMyq7r7XMghZ5baXZtd6UFm+6IP1+/oUdLaaK9zI3l4+laVGYDrISKrD8kyGtnsS2YEAnA4HAj+T8P0/NAGALhcLtjVUbXe3T0sfvqhOz9/fjaYaZugYVCKoAAGAMMUCe8bmhpcWQaqMrO7ZFcZBozee9p0M8ZLARt2AQAN2wWUkRsfPrxTId73/dU9TdomAcMwiPkEk6ZJSrBtoejjlVWZac+X2N6zV5FAQ3kuCiwPG5RdBZR0QaHiq6/3T/tXbf/z554FBWGDTIMhkUBBJCG4qCAcUVV9vK0b5ey94CqwMp07L3uBbUHz3P0w4bABXAZQhiux3t192Dbd37222jtgmzQIghTH9mEbIEACIG62dWUa5cwqlE04C3bvu01CrQzPrbsw/aDsQhq1bXr3m2/f6fzVftdeXh5fIYgEAJAkCJgYR2GWSBEkaRfpYU7Dvfc0DJIMgmChnN3RErZhWKj5fiRczpTy8vjuw99/bNf7b52X10+fnlsDxbQNzqWQJEwXzZI0DOHxbVcahjMzAZF0k+nc1Cp7L7Y+bDXWMU/BKJeRtZfWb/723Dev/uHPf/zpkgtBUkYBw+VIktMpTUomh4+QI6rSqLLHosokZDcwCSRbAqSH9wO3aCyDe+366pvffLO+OC/bf/7hj58twKgCpTIBzFMxAVsEOR5iG6Q94nE8bzhcjW9LQQBk20hyuPOMFYOZBbou27uP333zsOxxffzhD//n8dpaEgWAjVVGKQjRBuWCKcKAc7qKXTaUmWbYZpVNoAAbDKjcdlHjDw0DdJVjHEtufvj22/vr5fX16fnPf/zxSokFgJBIJagQbZsiCUQ00mVTI7hAShZIwlVFH85WBQqW2y4pSh4AM86ibJf7XqePf/Ph+h+9P/3w448vqwp9D0kiRCBMBjm2ZYkMKVC2XWMdjbIqlyJRPTcPExpGURAKrStsHt/2EQuu3Lze331ojz/ufvnLX586WyViImABECm5ABKkQqJGLI7zhEyFBWcBzK3ncc42AdAmmu0Cx/FhIG6NR1jL2T89PX5/7f2nzxcbKEW50uN8qVibyHEELYIAYQMybTo9Y1U2wKq1U1ll2zkcXmhE0RPXQKCySpCYbqf1+sf++vpyuW69NhuxNF+v16zcu8l1XU9ra01CiE0FgwYDrOH+VTUyBV1seqWioxKVLW27OtrAUgADayc8F8yQ6um6X7fLdS94L9DudXndK/sIL1J01UJSkkY0ze2QKoMqjVgDwNXZ5RKKSQ+XaKaLE5RIw2UYLotkf3nqPfftkmTlCOttqwAXlrUEXfumdW1swymqfKQJgpArRGAifxhGoUARKFKu5nkIAAYwwYDLJLHvn39MM7OuKSq363XPQlvP69Jghij37m251xIjH8xcm4YNAgHOJ84wLlUYdoAGXO14PUwPwxCFqmitvz49PRfpzL3b7Nu15x6LSKo1IohAhiMkIVMzt3hsivI818kARhou16BAIEm1sbS5BNigCbgiTr48vr4a4dzSdd36ntJpOS0RANXQQo02JKyBbolVOHjLtO2xABbcq3rmnnayPJJKm78+fxcGQMvF1vD6+oIysvdCf3nZU+fl7tQowF1aWos1QiNPVlVKnNsH54YnPpHE3ntlr6p05UgiBTT87DPRJiHU9vr6ckXBWai67he39bRE7WwuZ+5nkDlAyFKhYED19jTSsnIkSRciaVcNwKxJStvtvW+rcEFLbI+fPj8zNCCsJ86xrkFfdi+rwGXdt9OytiUlRaw419475gKmWVkqmaqqKiAgukpk7c4q026/XICASrW1Xx4/PT4v98IO9szUqa0L3F9ffHYjjcp97UvPJi3yoqLQv1zBQWU52HjdfiJFJSoJ/+oUZnoPv3764fufXt/d0zAEk5aq98vTi7Su95F53bXmmhWhXkjBDDN/dqgDFzhIRGVm5syRZTYAaINs25gk0wq7+rr/8MfvX3hnppnSAqK2frm8Pl+XdNzfXZ73a+2oSoVOrmVZ1GZO4sgPhRqJgSlaVbll0ci0FzabLDTiVqcMLKONaE3ee+eebbp2CJv3bbt0+3o5bS3RtyxU9YhWvS/LuqyYwDwy3wA5AqO+cRVAEwRGCrWpt1MgJt91llpbTncPqKiaK2CAuV2vbovy0epIwJuz3y+Ve/a27ifIto+jH5HJCUSA2lgfKVMSmWk12D/zRJerbKwP7zr2xRMtFCKrZza1en3dkGe1yp6711YpZ8/uWGoQo7GEQeI5QcmhUN8JanCnAdgtYZAGeOQmIK+5XlPu+24IdrWl0QTVGG3fX/PlvC5yZk9GrjH+cOkcyVD0YYNRXroqi5wUnyIhgWBWmxXQDY9I45q97U/V920BqRKXNdABxMJo5aq0lkUu5/7UHx5aC0quFKHiYN8z141P7mmyDsqiGp6RvRXgNxizRWe9vPRt79ctl/PZu6y2LJlqCxamy47T6f7D6QWVvatnnsEWRNXY2kjNgAXA3SRqL1QTANkGGoIJor7AAwOACKD255fnS0+//+7vTs8/bNayhjF5aCbXu69+97uvl0/cX8WICCqEqlk63nbOkSaHe8Lebm9h0QpbbDczefgB7KLq818et/uvv/2H/17/9mNyWRZlKJqd1TMevv7tf/kvX7e1v16K93d3y1Gxj4qFnpzI81+2IRu9TNVBi8lwyzbqnWGC8U8mkZfn5/7wzd9/99vLX9fOFlK0EAkQOreH9+/uTl5O91e09+emEDArafpmA97O1pAO7lUGbbqKI8njWMIs/apyvzy94P7+9//jv358ufL90gFDklAOsEHafrz+mdf9Ye1aKAok1cYSfeMGb+5FBVAB5G5QMKqKzr23Ue7jkETsLGz9UvfLd//0T3+7/+lSH0+Pu50WskioAYiXdLb7d/dLoHeTVYzWZuV8YIxtomB
TVjjVUNXNMGzUDld3myE4ohZlBS5bP384ffV3f/egfff5rl32na7KgkgbkDpyP5+13LXaY8BQSKSGBvS291nECpaKkgiWZ4UPRGtliBzAARta8mnfP3z9ze+/WS//nrhfFc9Pe0/7apEt0yapUzvd34XLoFAiY9SOBAcMHi5gHzU+oWjILEtBCVWxzMRzUDUpFkbi4+/+9h/un/7fX9/95t09+n277BurewGHVKR2ev+wroFMu2ARCunmB+Np9IB5g5BFgZ68WaGgCsupkSAKhsbW6lLx8avvfv8+Xh9rWdclePpwwefsCbbB/RnLInCJWTebFCXqzQbDALyJZAApGCLJiIhFIlssayOGZGUYENk34Lv3f//7/V///fLw3UnIOL33fkF2LmMFENsSEfSoDAUHgppqytuHxzkQg6MMxihS0ShC6xJNs9AiNB6+8eHbb357/+Pn7/HxPdwJ3vuhydnUaASwLIw1so/dukIj2UowijAgm7BYIGUZAFVR2AtiSI2GlnV1m1KaQYv2xeJv/tt9/9e//nRaT1R679mvpRBd1QK0WwzSyuHTinkAswA3DqYEYBTzg5FEGa3CZoQERGPu7ablgbRzM9bffKc//K/Hu2/Oy+6ocvbXzgghswVDkJbzWul5vBHBEaWHHkYM1XXE+QgNSjbU1LOgRpHRol8bXNN7iOrZ24cP73D5/j/z/v6UmVn2vl+6JdllK4IRy6KDW4oYKs2BaeWZhqXByVQTbUbmIwtvkavmg9MUnfuGd//wLf758U/nOLGSuEKuzBzLdGWUFEs4h+sCAks3guFClcUCdGDdGy7I1FgUCzZ7aw+zcrVk71le3333N//2zz/Ux/OZ6eqbQnalTRVZBCAKaQqkSWvIygekZRWKCRU0Y//Qigc5g6ih64FuSxt4HHSvytOHd9/o+fOnl/v3980FD2ZfhloEItoaaiGj14SftxR4K45HCV7y8cfgOJcCoBblcKYhEXYj4aSwXXrjw++/je//90/73W++OtVulwOu6qW1shhtPS+QWNWLCiu+YBcFAmUOlgFM2dAwTZZdCS5tZQJWFRRG740DEZ3O0+nr737705//Oe8fPtzFrgmhlWW1Fqpo692pnM7Mog7cBWCWfRRcM9+7MKRtmxzcxwGFZJTDbMpMN5iU0zjdffvtB/7lT58yzvfc2r7tPXNW+bCtiOV0d+p7ZeXQdG+RDNM1fg91yJKcidEgoaIij5+ERdC5O5ppy1lqX/3jP/pf/vCXy/n0/q4/KrF15/ByOrPEtqynhpxarci3HDhLRA6JBKzxXr7Vg+RwjCQYbpthV3dro1gt8N3H9x+ef/qXH9+/e3gIPndh7wWCLIt2FMGlRVXvQ5EQqQgUJGGcftmuN2ikYJsasgLVQDiXIHd29SybzXbI5vrh7vn//vSn/fT+48Ode25kVQGQgKn5kKiooQhL4hAzZ5zJdTTMaqIh3I9QqZrFCkC1cL88xzuZVHPBKmg5+U8//nTR+w/vTste1U3OZ2X2UWkIlS23PrjGYBniIf5M4BlahSbKG6NGd5mCAFaLWL1tz8vdUpAaaHdE1JO///NrW+/ouvSKJUeli6q+b/vAlMod1z0VoCQxNCzEUWvZLqf9pTB4cxPMRAGUGY2srjKqyaiuldu/XJ5euGp77KulVftWCaGc23XvNfpRufet15ETxSHuDxbmo+wFNMii2W6qDomj7dK5Rqznyh2uaiCdbN7/8nlHC/fH16Wt59acFgj23HuvrDRdfcdervHewwtgyzI9tLP5NhvW8eWNu1Bm9RSW8+4EnI0AUSRi77WRvbzq/O4c4CJ5KxEY8otQ3bEbVZYnwx4INMxSA4wmUgxuJtSISWpEllB1ue5YymkGGgxTZnv46ukpN3Pv0vnh3XlZzvetawMyVX3P2foxWT2mXDhib7xsNAuJg3ePcxrcyyCg8V9GPW+V0S7FUQnCZDIejNfe3feOuF5fz6d7rwEC0ST3Prp1nh00zW0mOeBfE4riKNgIStSkSqOiLBiE+svFrQUcsUQbG9jF5X57QPVMoOFygc4fv30fPausECo7TAIaGoSGXktrurynDYYSMuLx4MngwLARJr0bkJY7t6XFVHUz2ykq9XxFa2q5vVyu+fVPv71fxL2TEqvDoKMGMSMqANRsEM1DmSEH0LYMehL0Gs1YAKyeWr0EGpaFbjDEJIG4Sy6ft+ZFsTlf61PTdm5RZQtEpbWdGDDgFFjMQVgwqLiKpmy7KMyW9ORIAozZpG5syhRjWVttjTEFt06dqq3XfQvxYb2/21U/Xc5LkwKa6u5gE0GiINk0UKPLPB3ShqyjHJ+Ze2C6jZbdsQrYtorTuUVF49SivSlO7WF/fXwu4PyQ7/N6eblcltNyFmNNzQArKwbMiiMLadK+SZaI+QNrLKFgN4qViNqqnVfhFVs8nCNKs3YGnKDWaiy8XPsqrXsfTTBFILK38I2XHSg30/IEPJAYMf9l7Xojca7BpPeNQmYBISqaR8PQZlYUtX44+fll6/TeS6e703ldg27LsmAoJWNno0gbnVJKHnufpenEhC9KFiJhO816zWD0Xeh7K6LVQYHoKgVjXZ+x7yDNhfcP62kJwtFaFEFTQ6UN6WaGIT/M9tVIlT5khAEMKM/2k2ovVwNWZ5JkG/Mvnskt3RTrElmhhdByH0GT1dqqAGhNXAsdNvbMfT7ifbSKGCMp6RanBYPBqL7vsdj9WlrUgHHaQ/wpMxF3/YJ4uCfVgoC7oKaVCYaII9A0iwQOnzvalaNoGqMaE4eGxB/eoabezcqTkJe+ch28adJ+GlWM03nPHqe7xlFzlJNSLB45rqg5+oApFd3I7K3pD0pDO8foYY4ircClLZ3Vua4B1w60g3GPeq6gpZ0iGtelLSjXTZlqi1GHSe0aezvsPnucI/9qdMWBAZhDaPRULKOt63rNU5MZ7PvRPiBqmLStyxJsy6lFoGob+S3LbKMEdRFGAZJnfxIuBMpDSEOYg15iSojjpMpGOWtdvL52Eoror5d2w3IOp060WO4WtCXoMcYBAFmxprrNEsHAFI1UUBz+OKRkz0pccBkF2wxXwSnWhuoswBu11/782H4uPtJULA90Q5BmtGHGSqNFtHTREeKtm42Z8yYOYL6SuMXIAV6uKmTfynCVCuj75eXSvpBgCVCqXqfOtarb1oLKKmdCKyqzdkIRhKaEK5JGzamPMRaEAyxvjAVzKMfVcx9zOma/Xi6v+YUNpuZhs3peiogIuRIFVFIevjUq+LHeCTe8yWVvGDw7fIOgFKoKRvb9uqVB0Ua/blui0TxMNSA397y+PL7y7nw6R6CEQq9O7gMGMz3YpzVOjbKHXgJxSLi8HQcAsCqdJrhvr5cN0rCJs3hqv+g3SvX68vz49MNnvX/3INUqK1zZDzEqbmL8Abk8oslv3PXAiomamePrvl1eUwtH2xsKr+fmI6sadsjXx5enp5fLFdVfHs/LqS00JGTVUmpLSWWKIRI1p4AYLipgQEkM4DgyhFFjyMvOnlQ7nbKz4LTarzrfwuU//vQ5e4D5GK
fTen/3cBYIp/u+LIwwusEI3oSs0SQbiYGmoeKtXTFKeVdVZl2Tjct5cdKVnezSL1YAonp/qnVBdVzW5fXueh/s1+tWaKub2YDyQcGmUEX4WMUUUIoo1IzRqjKcW6YZahIDSCQV/KUNGKevNm6XJIKoLfeXxyWovqXbWlyWhhhQT3mOwky5ADee5onAo5YoO6swmhuCkArMab0gf2WD028XXZBiEEClny0tBTDLWpcmiuVC4IigmZBMYHQ0ClNUh2F3ONMYcw9UIIlwZs8SxXbTomVIwOr+8SfmqXZUcO8uuDYpWEDfew8R8qDIkxGMcm1kJ930izpAqbLKzp5zipZOs0JVbK23G4YcfVedHv7mQ8b2dHULLUOUVRNraarsbsJRh97UYxocHTYf2ROHwJlVWeXeR48GU30dMcTRgxwodoBZrO/X4kvvaEs4FrlKEk0Ce4AD0kbEl46ScE7gDGF5fDFElapM96y9PMaKh+mqqGiVv/QD1H69fO7WjoXLKixrODtJdxeFHQShuZOae5mhN9W1Gu1up43DJyqPwd45n5QltRjV+5fNd9f++vjDlk1Y27Kua1tGbqjas0PjUY7R8jJKo54a/IRG0j+zQaFcVZVVnJOQJsyepSZWNt7sf0TSfn1+3ZbTeiLV7hWCyz0z+qA8A26OCe2pat9KptIh8QOFspHOrDzmf8ChsWXVEgSqHdXtnM0rrfd3cAhDh90UgpmVsE6ViUoLQNFhMzWmy46BZbvsTFOuMXfqnpnOMfMw9XhXuWJt7tf9Sz8gLPoOn+/7HqzslKgmTzlPA/aHXKfBS2hgNnYGGURHVVUUakxDeXQpAMXRUgB6GdEic/tyBVPZXPTu63px9dG4UA0RQiQRqMCc4BMx65LJluY4bsGj/VJZcNq9evfQ3o7Qr0pGW5TV+89sYKcD67vf6YenHS5mFudsjQJgaY6LeyhXuDX0bqcwqHGZqCpXlTfn6G6Rc/6les+Fa9iH3DgXQLsctTyE6rKJ5DFADQ49SPU2LTH7/ON/kvC0AexKk6gcI2gds3rHsERmZmHlwsz82USUQSGFfuLHx/vBUjULQykUdLFiBtvMQFNRpWuUf5O00uVyunpWP4RHUIpjCEiLqu97R8NxNgYgiZlN54f30UlzvJiNEWoolaZsfBPuR0qUqwbzmssbEz2VlVmCOOrcqWNI4rrUnr3Xr7Kz0Isnfbi2zVVal1EVKEIBuZAj5x4KQc08XOPCxZiMhcbAcjmzMGZY51A/XGWtEafYMLT1n6/AidL6Ifuua3VHxCg+xhcjlicajlQ8OQDHKHZVlSmm0x7rERDRhhA/hkjLijUaZ779xQqqYJ7e1/7q1ksah8qgpGF7mXQd85xzKB+ja+lD6Blh4BztyZEPpjM60dqytnEW/NVsHgAwACoa0dTHZDSliJosfQxUHxNnxwoMVh6srCeGzE5GE+eQyHQ4Le20tLKG+tjq0BhGsmMsy/XT/rx1Cy4pC1IwjnsjLrLGqOU46dlLvOkYrlEjpof/xICiASOJOEdbA+lt368dHBOSX5SOiuD+03659qMWx0hpGrzTR8U4GjDHKP7MuLNUrpmZgmqtSYfyC0MR5yVQVVWV4K/8AJLq5ftt49QpRVAMBvGWdj16gDhywbzzgcQgpkPKMyS1ZRnQNWIm2rIsC2DOVuiwwc9WgN2fP11zaZQUnBdn5gEAlc6R7auGcOTZ6vXR9x/6tTQmJdYFKDtQBbdYTouSgMqMlvmrFZCV/fHHjXeNipVRt7kCjiK+O2sOEObUiFzIepu0Gi3xCIJqp/OSroRAFFpbl3bAptqaG9y+GBspSJH95XrZopFGJ6Acd1cCqMrurKxRA83LaGP09zaVbYEabSiEWmtNSBhghNd5w6DGrZJAdrdReM60xmXZ6uWlduceBlFTvo4ouqr3rHQBnn44A8LzRtCooeJo8Te2JjbI6GA0LkQVaVcpUMy+/8oPiOy9J/toJecwkarIXs5efey0DLh6GVWHUgTNKxzjSpMiFCEGuMoVbRXsqiAKCNYYohjd/9vHVRD3zZ0WgNhGbahOJzwcbkYjaralCx7XtBhfDGRQY5KNBNRsSepTfh4PH8my4dauGjZQRAuhulSTd89p48SYRwBBZ09n1SEENo6FchD5eZcEAJxjgsueZdzodrlc0LI0DxvczFC74/xwX0Ymw46hzRgcN0+G8gA4q7B7CEcSx2CFph6Ho+EdY8hhDF1MLj3EPqrMZWmRbT6SIOR04SHycnq8IiiiLtFE0mnTlTnbuhRG5hPJCMGgQwVpInxE4FAWjKFIkqNjPQb2VubSlu1nTNUoh84f+gpsFRx3xqYqOkQIF6oyEbJJEE1UzEmYCb2eSWBqvgYKI6OPssAz+GJZlqY3jkSDcArtfheW18JUaibsjfEOIoTsve8RJhUtRnuBh5kxFzDfNueRvrhTMFaihM0W/IUNyNrlc7S7y77XoBmeNeFIcHenU1R//fTiaHO0CvDoZB2dtjEVoIGYcz6AutXGQ/lz7ltRPqYDDxuw9sCptfN12/pezsycgu1guuv9Xav9ca/CpE4GqHF16Oj2iHEQEgzWOCVEHBDPqixboS/uMx37LHatsfQ9s1zZc4YcGBGM85l1dY/LVpA8VJwgEQXNG36ivnQhwGMeod5EjxyRH/GLGxRjioBWazXYxkAdjKJLEiNcrbW7Hz9dxmWqoKa0yhjNfoXAoVu6WBxLmr1vDD3elBTS0vjF601oikDB2WAeV6SM0TINRpXR2Nb++podVDhEEGZMGUWDlAzd8uiG800MHydaotza8la9c046c1LRWeR+6cNJC3tPQ6d4WhenFdBov01ZlFQsETKM7pHcB2BOqWU0lebMChSNxzEM6ukxQJEzpKtGj9tQMtOhyJ7m0trD+0u/2E1VskbL20RELGtI5TRqavAEUJpDmH7TTMhoP1MwhiDJQ5W3y6njmKZSvNtApdv53eV6mdeVoBGQBNja6RSsGl1Yixp/yNJowo6OmGAJMU7hCzFtzA8xNLA0ACKJIsvFCBgN2c2q9X29PlKGO9uocmm307KsK4Hq3qk5ekIkgBqKEUOAStZSzb9SdX/mlphDhvjCXUUSpSqdL0vUYO+FINOWlvWuLUFbKOw3nXHU7by5HCCYFqJ9ca/t9kPSNwl/+u8bhzCIlqiMu+t5SVcnKyHapVhOpzUay6QaEjUk5iEA40hTM1+VGe2X/YWbAYAZ2piZ83CeMSFu93U9n087RpIaxCXWdV0ieEgOMe8ajmiZSibG1HUhs0TGmw3eDDQvAMxTOK7o3dpRRZjl4np338dVVtNJxLouTaLmDRYJt+kAHj2MMSJUEy6h5f8DUNDK5v3avawAAAAASUVORK5CYII=\n", + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAIEAAACBCAAAAADCy4aMAAAc30lEQVR4nGV725IsR46cuyMyq7r7XMghZ5baXZtd6UFm+6IP1+/oUdLaaK9zI3l4+laVGYDrISKrD8kyGtnsS2YEAnA4HAj+T8P0/NAGALhcLtjVUbXe3T0sfvqhOz9/fjaYaZugYVCKoAAGAMMUCe8bmhpcWQaqMrO7ZFcZBozee9p0M8ZLARt2AQAN2wWUkRsfPrxTId73/dU9TdomAcMwiPkEk6ZJSrBtoejjlVWZac+X2N6zV5FAQ3kuCiwPG5RdBZR0QaHiq6/3T/tXbf/z554FBWGDTIMhkUBBJCG4qCAcUVV9vK0b5ey94CqwMp07L3uBbUHz3P0w4bABXAZQhiux3t192Dbd37222jtgmzQIghTH9mEbIEACIG62dWUa5cwqlE04C3bvu01CrQzPrbsw/aDsQhq1bXr3m2/f6fzVftdeXh5fIYgEAJAkCJgYR2GWSBEkaRfpYU7Dvfc0DJIMgmChnN3RErZhWKj5fiRczpTy8vjuw99/bNf7b52X10+fnlsDxbQNzqWQJEwXzZI0DOHxbVcahjMzAZF0k+nc1Cp7L7Y+bDXWMU/BKJeRtZfWb/723Dev/uHPf/zpkgtBUkYBw+VIktMpTUomh4+QI6rSqLLHosokZDcwCSRbAqSH9wO3aCyDe+366pvffLO+OC/bf/7hj58twKgCpTIBzFMxAVsEOR5iG6Q94nE8bzhcjW9LQQBk20hyuPOMFYOZBbou27uP333zsOxxffzhD//n8dpaEgWAjVVGKQjRBuWCKcKAc7qKXTaUmWbYZpVNoAAbDKjcdlHjDw0DdJVjHEtufvj22/vr5fX16fnPf/zxSokFgJBIJagQbZsiCUQ00mVTI7hAShZIwlVFH85WBQqW2y4pSh4AM86ibJf7XqePf/Ph+h+9P/3w448vqwp9D0kiRCBMBjm2ZYkMKVC2XWMdjbIqlyJRPTcPExpGURAKrStsHt/2EQuu3Lze331ojz/ufvnLX586WyViImABECm5ABKkQqJGLI7zhEyFBWcBzK3ncc42AdAmmu0Cx/FhIG6NR1jL2T89PX5/7f2nzxcbKEW50uN8qVibyHEELYIAYQMybTo9Y1U2wKq1U1ll2zkcXmhE0RPXQKCySpCYbqf1+sf++vpyuW69NhuxNF+v16zcu8l1XU9ra01CiE0FgwYDrOH+VTUyBV1seqWioxKVLW27OtrAUgADayc8F8yQ6um6X7fLdS94L9DudXndK/sIL1J01UJSkkY0ze2QKoMqjVgDwNXZ5RKKSQ+XaKaLE5RIw2UYLotkf3nqPfftkmTlCOttqwAXlrUEXfumdW1swymqfKQJgpArRGAifxhGoUARKFKu5nkIAAYwwYDLJLHvn39MM7OuKSq363XPQlvP69Jghij37m251xIjH8xcm4YNAgHOJ84wLlUYdoAGXO14PUwPwxCFqmitvz49PRfpzL3b7Nu15x6LSKo1IohAhiMkIVMzt3hsivI818kARhou16BAIEm1sbS5BNigCbgiTr48vr4a4dzSdd36ntJpOS0RANXQQo02JKyBbolVOHjLtO2xABbcq3rmnnayPJJKm78+fxcGQMvF1vD6+oIysvdCf3nZU+fl7tQowF1aWos1QiNPVlVKnNsH54YnPpHE3ntlr6p05UgiBTT87DPRJiHU9vr6ckXBWai67he39bRE7WwuZ+5nkDlAyFKhYED19jTSsnIkSRciaVcNwKxJStvtvW+rcEFLbI+fPj8zNCCsJ86xrkFfdi+rwGXdt9OytiUlRaw419475gKmWVkqmaqqKiAgukpk7c4q026/XICASrW1Xx4/PT4v98IO9szUqa0L3F9ffHYjjcp97UvPJi3yoqLQv1zBQWU52HjdfiJFJSoJ/+oUZnoPv3764fufXt/d0zAEk5aq98vTi7Su95F53bXmmhWhXkjBDDN/dqgDFzhIRGVm5syRZTYAaINs25gk0wq7+rr/8MfvX3hnppnSAqK2frm8Pl+XdNzfXZ73a+2oSoVOrmVZ1GZO4sgPhRqJgSlaVbll0ci0FzabLDTiVqcMLKONaE3ee+eebbp2CJv3bbt0+3o5bS3RtyxU9YhWvS/LuqyYwDwy3wA5AqO+cRVAEwRGCrWpt1MgJt91llpbTncPqKiaK2CAuV2vbovy0epIwJuz3y+Ve/a27ifIto+jH5HJCUSA2lgfKVMSmWk12D/zRJerbKwP7zr2xRMtFCKrZza1en3dkGe1yp6711YpZ8/uWGoQo7GEQeI5QcmhUN8JanCnAdgtYZAGeOQmIK+5XlPu+24IdrWl0QTVGG3fX/PlvC5yZk9GrjH+cOkcyVD0YYNRXroqi5wUnyIhgWBWmxXQDY9I45q97U/V920BqRKXNdABxMJo5aq0lkUu5/7UHx5aC0quFKHiYN8z141P7mmyDsqiGp6RvRXgNxizRWe9vPRt79ctl/PZu6y2LJlqCxamy47T6f7D6QWVvatnnsEWRNXY2kjNgAXA3SRqL1QTANkGGoIJor7AAwOACKD255fnS0+//+7vTs8/bNayhjF5aCbXu69+97uvl0/cX8WICCqEqlk63nbOkSaHe8Lebm9h0QpbbDczefgB7KLq818et/uvv/2H/17/9mNyWRZlKJqd1TMevv7tf/kvX7e1v16K93d3y1Gxj4qFnpzI81+2IRu9TNVBi8lwyzbqnWGC8U8mkZfn5/7wzd9/99vLX9fOFlK0EAkQOreH9+/uTl5O91e09+emEDArafpmA97O1pAO7lUGbbqKI8njWMIs/apyvzy94P7+9//jv358ufL90gFDklAOsEHafrz+mdf9Ye1aKAok1cYSfeMGb+5FBVAB5G5QMKqKzr23Ue7jkETsLGz9UvfLd//0T3+7/+lSH0+Pu50WskioAYiXdLb7d/dLoHeTVYzWZuV8YIxtomBTVjjVUNXNMGzUDld3myE4ohZlBS5bP384ffV3f/egfff5rl32na7KgkgbkDpyP5+13LXaY8BQSKSGBvS291nECpaKkgiWZ4UPRGtliBzAARta8mnfP3z9ze+/WS//nrhfFc9Pe0/7apEt0yapUzvd34XLoFAiY9SOBAcMHi5gHzU+oWjILEtBCVWxzMRzUDUpFkbi4+/+9h/un/7fX9/95t09+n277BurewGHVKR2ev+wroFMu2ARCunmB+Np9IB5g5BFgZ68WaGgCsupkSAKhsbW6lLx8avvfv8+Xh9rWdclePpwwefsCbbB/RnLInCJWTebFCXqzQbDALyJZAApGCLJiIhFIlssayOGZGUYENk34Lv3f//7/V///fLw3UnIOL33fkF2LmMFENsSEfSoDAUHgppqytuHxzkQg6MMxihS0ShC6xJNs9AiNB6+8eHbb357/+Pn7/HxPdwJ3vuhydnUaASwLIw1so/dukIj2UowijAgm7BYIGUZAFVR2AtiSI2GlnV1m1KaQYv2xeJv/tt9/9e//nRaT1R679mvpRBd1QK0Ww
zSyuHTinkAswA3DqYEYBTzg5FEGa3CZoQERGPu7ablgbRzM9bffKc//K/Hu2/Oy+6ocvbXzgghswVDkJbzWul5vBHBEaWHHkYM1XXE+QgNSjbU1LOgRpHRol8bXNN7iOrZ24cP73D5/j/z/v6UmVn2vl+6JdllK4IRy6KDW4oYKs2BaeWZhqXByVQTbUbmIwtvkavmg9MUnfuGd//wLf758U/nOLGSuEKuzBzLdGWUFEs4h+sCAks3guFClcUCdGDdGy7I1FgUCzZ7aw+zcrVk71le3333N//2zz/Ux/OZ6eqbQnalTRVZBCAKaQqkSWvIygekZRWKCRU0Y//Qigc5g6ih64FuSxt4HHSvytOHd9/o+fOnl/v3980FD2ZfhloEItoaaiGj14SftxR4K45HCV7y8cfgOJcCoBblcKYhEXYj4aSwXXrjw++/je//90/73W++OtVulwOu6qW1shhtPS+QWNWLCiu+YBcFAmUOlgFM2dAwTZZdCS5tZQJWFRRG740DEZ3O0+nr737705//Oe8fPtzFrgmhlWW1Fqpo692pnM7Mog7cBWCWfRRcM9+7MKRtmxzcxwGFZJTDbMpMN5iU0zjdffvtB/7lT58yzvfc2r7tPXNW+bCtiOV0d+p7ZeXQdG+RDNM1fg91yJKcidEgoaIij5+ERdC5O5ppy1lqX/3jP/pf/vCXy/n0/q4/KrF15/ByOrPEtqynhpxarci3HDhLRA6JBKzxXr7Vg+RwjCQYbpthV3dro1gt8N3H9x+ef/qXH9+/e3gIPndh7wWCLIt2FMGlRVXvQ5EQqQgUJGGcftmuN2ikYJsasgLVQDiXIHd29SybzXbI5vrh7vn//vSn/fT+48Ode25kVQGQgKn5kKiooQhL4hAzZ5zJdTTMaqIh3I9QqZrFCkC1cL88xzuZVHPBKmg5+U8//nTR+w/vTste1U3OZ2X2UWkIlS23PrjGYBniIf5M4BlahSbKG6NGd5mCAFaLWL1tz8vdUpAaaHdE1JO///NrW+/ouvSKJUeli6q+b/vAlMod1z0VoCQxNCzEUWvZLqf9pTB4cxPMRAGUGY2srjKqyaiuldu/XJ5euGp77KulVftWCaGc23XvNfpRufet15ETxSHuDxbmo+wFNMii2W6qDomj7dK5Rqznyh2uaiCdbN7/8nlHC/fH16Wt59acFgj23HuvrDRdfcdervHewwtgyzI9tLP5NhvW8eWNu1Bm9RSW8+4EnI0AUSRi77WRvbzq/O4c4CJ5KxEY8otQ3bEbVZYnwx4INMxSA4wmUgxuJtSISWpEllB1ue5YymkGGgxTZnv46ukpN3Pv0vnh3XlZzvetawMyVX3P2foxWT2mXDhib7xsNAuJg3ePcxrcyyCg8V9GPW+V0S7FUQnCZDIejNfe3feOuF5fz6d7rwEC0ST3Prp1nh00zW0mOeBfE4riKNgIStSkSqOiLBiE+svFrQUcsUQbG9jF5X57QPVMoOFygc4fv30fPausECo7TAIaGoSGXktrurynDYYSMuLx4MngwLARJr0bkJY7t6XFVHUz2ykq9XxFa2q5vVyu+fVPv71fxL2TEqvDoKMGMSMqANRsEM1DmSEH0LYMehL0Gs1YAKyeWr0EGpaFbjDEJIG4Sy6ft+ZFsTlf61PTdm5RZQtEpbWdGDDgFFjMQVgwqLiKpmy7KMyW9ORIAozZpG5syhRjWVttjTEFt06dqq3XfQvxYb2/21U/Xc5LkwKa6u5gE0GiINk0UKPLPB3ShqyjHJ+Ze2C6jZbdsQrYtorTuUVF49SivSlO7WF/fXwu4PyQ7/N6eblcltNyFmNNzQArKwbMiiMLadK+SZaI+QNrLKFgN4qViNqqnVfhFVs8nCNKs3YGnKDWaiy8XPsqrXsfTTBFILK38I2XHSg30/IEPJAYMf9l7Xojca7BpPeNQmYBISqaR8PQZlYUtX44+fll6/TeS6e703ldg27LsmAoJWNno0gbnVJKHnufpenEhC9KFiJhO816zWD0Xeh7K6LVQYHoKgVjXZ+x7yDNhfcP62kJwtFaFEFTQ6UN6WaGIT/M9tVIlT5khAEMKM/2k2ovVwNWZ5JkG/Mvnskt3RTrElmhhdByH0GT1dqqAGhNXAsdNvbMfT7ifbSKGCMp6RanBYPBqL7vsdj9WlrUgHHaQ/wpMxF3/YJ4uCfVgoC7oKaVCYaII9A0iwQOnzvalaNoGqMaE4eGxB/eoabezcqTkJe+ch28adJ+GlWM03nPHqe7xlFzlJNSLB45rqg5+oApFd3I7K3pD0pDO8foYY4ircClLZ3Vua4B1w60g3GPeq6gpZ0iGtelLSjXTZlqi1GHSe0aezvsPnucI/9qdMWBAZhDaPRULKOt63rNU5MZ7PvRPiBqmLStyxJsy6lFoGob+S3LbKMEdRFGAZJnfxIuBMpDSEOYg15iSojjpMpGOWtdvL52Eoror5d2w3IOp060WO4WtCXoMcYBAFmxprrNEsHAFI1UUBz+OKRkz0pccBkF2wxXwSnWhuoswBu11/782H4uPtJULA90Q5BmtGHGSqNFtHTREeKtm42Z8yYOYL6SuMXIAV6uKmTfynCVCuj75eXSvpBgCVCqXqfOtarb1oLKKmdCKyqzdkIRhKaEK5JGzamPMRaEAyxvjAVzKMfVcx9zOma/Xi6v+YUNpuZhs3peiogIuRIFVFIevjUq+LHeCTe8yWVvGDw7fIOgFKoKRvb9uqVB0Ua/blui0TxMNSA397y+PL7y7nw6R6CEQq9O7gMGMz3YpzVOjbKHXgJxSLi8HQcAsCqdJrhvr5cN0rCJs3hqv+g3SvX68vz49MNnvX/3INUqK1zZDzEqbmL8Abk8oslv3PXAiomamePrvl1eUwtH2xsKr+fmI6sadsjXx5enp5fLFdVfHs/LqS00JGTVUmpLSWWKIRI1p4AYLipgQEkM4DgyhFFjyMvOnlQ7nbKz4LTarzrfwuU//vQ5e4D5GKfTen/3cBYIp/u+LIwwusEI3oSs0SQbiYGmoeKtXTFKeVdVZl2Tjct5cdKVnezSL1YAonp/qnVBdVzW5fXueh/s1+tWaKub2YDyQcGmUEX4WMUUUIoo1IzRqjKcW6YZahIDSCQV/KUNGKevNm6XJIKoLfeXxyWovqXbWlyWhhhQT3mOwky5ADee5onAo5YoO6swmhuCkArMab0gf2WD028XXZBiEEClny0tBTDLWpcmiuVC4IigmZBMYHQ0ClNUh2F3ONMYcw9UIIlwZs8SxXbTomVIwOr+8SfmqXZUcO8uuDYpWEDfew8R8qDIkxGMcm1kJ930izpAqbLKzp5zipZOs0JVbK23G4YcfVedHv7mQ8b2dHULLUOUVRNraarsbsJRh97UYxocHTYf2ROHwJlVWeXeR48GU30dMcTRgxwodoBZrO/X4kvvaEs4FrlKEk0Ce4AD0kbEl46ScE7gDGF5fDFElapM96y9PMaKh+mqqGiVv/QD1H69fO7WjoXLKixrODtJdxeFHQShuZOae5mhN9W1Gu1up43DJyqPwd45n5QltRjV+
diff --git a/scripts/dsprocess_biwi.py b/scripts/dsprocess_biwi.py
index c1b1dfc..9dd29da 100644
--- a/scripts/dsprocess_biwi.py
+++ b/scripts/dsprocess_biwi.py
@@ -1,6 +1,7 @@
 import os
 import sys
 import zipfile
+import pandas
 import numpy as np
 from os.path import join, dirname, basename, splitext
 import re
@@ -16,17 +17,44 @@ import io
 from collections import defaultdict
 from zipfile import ZipFile
-from typing import Tuple
+from typing import Tuple, Sequence, Any, Optional, Dict
+from numpy.typing import NDArray
+
+from facenet_pytorch import MTCNN
 
 from trackertraincode.datasets.preprocessing import imdecode, imencode, extract_image_roi
 from trackertraincode.datasets.dshdf5pose import create_pose_dataset, FieldCategory
+from dsprocess_lapa import improve_roi_with_face_detector
+from filter_dataset import filter_file_by_frames
+
 C = FieldCategory
 
+# See "FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation from a Single Image"
+# as reference for the evaluation protocol. Uses MTCNN to generate bounding boxes.
+# https://github.com/shamangary/FSA-Net/blob/master/data/TYY_create_db_biwi.py
+# Differences:
+# * Projection using the camera matrix.
+# * Keeping the aspect ratio in the face crop. (The bbox is extended along the shorter side.)
+# * Skip frames where MTCNN predicts a box far off the projected head center. (In FSA, frames were skipped
+#   based on the inter-frame movement difference.)
+# * Where there are multiple detections: take the one closest to the projected head center. (In FSA, the
+#   detection closest to a fixed position in the image was picked, approximating the heads' locations.)
+# * When generating the dataset with cropped images, a rotation correction is applied to account for perspective:
+#   the angle spanned between the forward direction and the head position is added to the head orientation.
+#   Currently, this assumes a prescribed FOV.
+# * Only a small number of images is affected by failed detections: 15074 of 15678 are good.
+
+# Update:
+# This script can now load the annotations from
+# https://github.com/pcr-upm/opal23_headpose/blob/main/annotations/biwi_ann.txt
+# for best reproducibility and fair comparison.
+
 PROJ_FOV = 65.
 HEAD_SIZE_MM = 100.
-PREFIX='faces_0/'
+PREFIX1='faces_0/'
+PREFIX2='kinect_head_pose_db/'
 
 
 def get_pose_from_mat(f):
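The ROI-selection rules from the comment block above are implemented by `improve_roi_with_face_detector` (imported from `dsprocess_lapa`, not shown in this diff). Below is a minimal sketch of that selection logic; it is purely illustrative and not the repository's implementation. `boxes` and `probs` stand for the arrays returned by facenet_pytorch's `MTCNN(keep_all=True).detect(image)`, and `projected_roi` is assumed to be a numpy `[x1,y1,x2,y2]` box.

```python
import numpy as np

def box_iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    lt = np.maximum(a[:2], b[:2])
    rb = np.minimum(a[2:], b[2:])
    inter = np.prod(np.clip(rb - lt, 0., None))
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def pick_detection(boxes, probs, projected_roi, iou_threshold=0.01, confidence_threshold=0.9):
    # Keep confident detections that overlap the projected head box, then take
    # the one whose center is nearest to the projected head center.
    center = 0.5 * (projected_roi[:2] + projected_roi[2:])
    candidates = [b for b, p in zip(boxes, probs)
                  if p >= confidence_threshold and box_iou(b, projected_roi) >= iou_threshold]
    if not candidates:
        return projected_roi, False   # fall back to the projected box; the caller may skip the frame
    best = min(candidates, key=lambda b: np.linalg.norm(0.5 * (b[:2] + b[2:]) - center))
    return np.asarray(best), True
```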
create_pose_dataset, FieldCategory +from dsprocess_lapa import improve_roi_with_face_detector +from filter_dataset import filter_file_by_frames + C = FieldCategory +# See "FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation from a Single Image" +# as reference for the evaluation protocol. Uses MTCNN to generate bounding boxes. +# https://github.com/shamangary/FSA-Net/blob/master/data/TYY_create_db_biwi.py +# Differences: +# * Projection using the camera matrix +# * Keeping the aspect ratio in the face crop. (The bbox is extended along the shorter side.) +# * Skip frames where MTCNN predicts a box far off the projected head center. (In FSA, frames were skipped +# based on the inter-frame movement difference.) +# * Where there are multiple detections: take the one closest to the projected head center. (In FSA, the detection +# closest to a fixed position in the image was picked, approximating the head locations.) +# * When generating the dataset with cropped images, a rotation correction is applied which is due to perspective. +# Thereby the angle spanned between the forward direction and the head position is added to the head orientation. +# Currently, this assumes a prescribed FOV. +# * Only a small number of images is affected by failed detections. 15074 of 15678 are good. + +# Update: +# This script can now load the annotations from +# https://github.com/pcr-upm/opal23_headpose/blob/main/annotations/biwi_ann.txt +# for best reproducibility and fair comparison. + PROJ_FOV = 65. HEAD_SIZE_MM = 100. -PREFIX='faces_0/' +PREFIX1='faces_0/' +PREFIX2='kinect_head_pose_db/' def get_pose_from_mat(f): @@ -47,6 +75,19 @@ def get_camera_extrinsics(zf : ZipFile, fn) -> Tuple[Rotation,np.ndarray]: return rot, pos +Affine = Tuple[Rotation,NDArray[Any]] + + +def affine_to_str(affine : Affine): + rot, pos = affine + return f"R={rot.as_matrix()}, T={pos}" + + +def assert_extrinsics_have_identity_rotation(extrinsics : Sequence[Tuple[Any,Affine]]): + for id_, (rot,_) in extrinsics: + assert np.allclose(rot.as_matrix(), np.eye(3),atol=0.04, rtol=0.), f"Rotation {rot.as_matrix()} of {id_} is far from identity" + + class PinholeCam(object): def __init__(self, fov, w, h): self.fov = fov @@ -78,36 +119,41 @@ def project_size_to_image(self, depth, scale): def compute_rotation_to_vector(pos): - # Computes a rotation which aligns - # the x axes with the given head position. + # Computes a rotation which aligns the z axis with the given position. + # It's done in terms of a rotation matrix which represents the new aligned + # coordinate frame, i.e. its z-axis will point towards "pos". This leaves + # a degree of rotation around this axis. This is resolved by constraining + # the x axis to the horizontal plane (perpendicular to the global y-axis). z = pos / np.linalg.norm(pos) x = np.cross([0.,1.,0.],z) + x = x / np.linalg.norm(x) y = -np.cross(x, z) + y = y / np.linalg.norm(y) M = np.array([x,y,z]).T rot = Rotation.from_matrix(M) return rot -def apply_local_head_origin_offset(rot, sz, offset): +def transform_local_to_screen_offset(rot, sz, offset): offset = rot.apply(offset)*sz # world to screen transform: offset = offset[:2] return offset -def find_image_file_names(zf : ZipFile): +def find_image_file_names(filelist : Sequence[str]): """ Returns dict of filename lists. Keys are person numbers. 
""" - regex = re.compile(PREFIX+r'(\d\d)/frame_(\d\d\d\d\d)_rgb.png') + regex = re.compile(PREFIX1+r'(\d\d)/frame_(\d\d\d\d\d)_rgb.png') samples = defaultdict(list) - for f in zf.filelist: - m = regex.match(f.orig_filename) + for f in filelist: + m = regex.match(f) if m is None: continue person = int(m.group(1)) frame = m.group(2) - samples[person].append((frame, f.orig_filename)) + samples[person].append((frame, f)) for k, v in samples.items(): # Sort by frame number then discard frame number v = sorted(v, key = lambda t: t[0]) @@ -116,7 +162,7 @@ def find_image_file_names(zf : ZipFile): def find_cal_files(zf : ZipFile): - regex = re.compile(PREFIX+r'(\d\d)/rgb.cal') + regex = re.compile(PREFIX1+r'(\d\d)/rgb.cal') cal_files = {} for f in zf.filelist: m = regex.match(f.orig_filename) @@ -127,8 +173,7 @@ def find_cal_files(zf : ZipFile): return cal_files - -def read_data(zf, imagefile, cal): +def read_data(zf, imagefile, cam_extrinsics_inv, mtcnn : MTCNN | None, box_annotation : tuple[int,int,int,int] | None): posefile = imagefile[:-len('_rgb.png')]+'_pose.txt' imgbuffer = zf.read(imagefile) @@ -138,26 +183,28 @@ def read_data(zf, imagefile, cal): with io.StringIO(zf.read(posefile).decode('ascii')) as f: rot, pos = get_pose_from_mat(f) - cam_inv = cal #utils.affine3d_inv(cal) - rot, pos = utils.affine3d_chain(cam_inv, (rot, pos)) - - rot_correction = compute_rotation_to_vector(pos) - rot = rot_correction.inv() * rot + rot, pos = utils.affine3d_chain(cam_extrinsics_inv, (rot, pos)) cam = PinholeCam(PROJ_FOV, w, h) x, y = cam.project_to_image(pos) size = cam.project_size_to_image(pos[2], HEAD_SIZE_MM) - roi = np.array([x-size, y-size, x+size, y+size]) - img, offset = extract_image_roi(img, roi, 0.5, return_offset=True) - roi[[0,1]] += offset - roi[[2,3]] += offset - x += offset[0] - y += offset[1] + if box_annotation: + roi = np.asarray(box_annotation) + else: + roi = np.array([x-size, y-size, x+size, y+size]) + + if mtcnn is not None: + roi, ok = improve_roi_with_face_detector(img, roi, mtcnn, iou_threshold=0.01, confidence_threshold=0.9) + if not ok: + print (f"WARNING: MTCNN didn't find a face that overlaps with the projected head position. Frame {imagefile}.") + else: + ok = True # Offset in local frame is given as argument. # It was found by eyemeasure. It could perhaps be improved by optimizing it during the training. - offset = apply_local_head_origin_offset(rot, size, np.array([0.03,-0.35,-0.2])) + # It does not affect the rotation, and thus also not the benchmarks, so I'm free to do this. 
+ offset = transform_local_to_screen_offset(rot, size, np.array([0.03,-0.35,-0.2])) x += offset[0] y += offset[1] @@ -166,39 +213,61 @@ def read_data(zf, imagefile, cal): 'coord' : np.array([x, y, size]), 'roi' : roi, 'image' : img, - } - - -def generate_hdf5_dataset(source_file, outfilename, count=None): + }, ok + + +def generate_hdf5_dataset(source_file, outfilename, opal_annotation : str | None, count : int | None =None): + mtcnn = None + sequence_frames = None + box_annotations = None + if opal_annotation: + dataframe = pandas.read_csv(opal_annotation, header=0, sep=';') + dataframe.columns = dataframe.columns[1:].append(pandas.Index([ 'dummy'])) + filelist : list[str] = dataframe['image'].values.tolist() + filelist = [ f.replace(PREFIX2,PREFIX1) for f in filelist ] + box_annotations = dataframe[list('tl_x;tl_y;br_x;br_y'.split(';'))].values.tolist() + box_annotations = dict(zip(filelist, box_annotations)) + sequence_frames = find_image_file_names(filelist) + assert sum(len(frames) for frames in sequence_frames.values()) == len(filelist) + else: + mtcnn = MTCNN(keep_all=True, device='cpu') every = 1 with ZipFile(source_file,mode='r') as zf: calibration_data = { k:get_camera_extrinsics(zf,fn) for k,fn in find_cal_files(zf).items() } - print ("Found calibration files: ", calibration_data.keys()) - sequence_frames = find_image_file_names(zf) + #print ("Found calibration files: ", calibration_data.keys()) + print ("Sample camera params: ", affine_to_str(next(iter(calibration_data.values())))) + assert_extrinsics_have_identity_rotation(calibration_data.items()) + if opal_annotation is None: + sequence_frames = find_image_file_names([ f.orig_filename for f in zf.filelist]) if count or every: for k, v in sequence_frames.items(): sequence_frames[k] = v[slice(0,count,every)] - sequence_lengths = [len(v) for v in sequence_frames.values()] - print ([(k,len(v)) for k,v in sequence_frames.items()]) - N = sum(sequence_lengths) + max_num_frames = sum(len(v) for v in sequence_frames.values()) + print ("Found videos (id,length): ",[(k,len(v)) for k,v in sequence_frames.items()]) with h5py.File(outfilename, mode='w') as f: - ds_img = create_pose_dataset(f, C.image, count=N) - ds_roi = create_pose_dataset(f, C.roi, count=N) - ds_quats = create_pose_dataset(f, C.quat, count=N) - ds_coords = create_pose_dataset(f, C.xys, count=N) - f.create_dataset('sequence_starts', data = np.cumsum([0]+sequence_lengths)) + ds_img = create_pose_dataset(f, C.image, count=max_num_frames) + ds_roi = create_pose_dataset(f, C.roi, count=max_num_frames) + ds_quats = create_pose_dataset(f, C.quat, count=max_num_frames) + ds_coords = create_pose_dataset(f, C.xys, count=max_num_frames) i = 0 - with tqdm.tqdm(total=N) as bar: + sequence_starts = [0] + with tqdm.tqdm(total=max_num_frames) as bar: for ident, frames in sequence_frames.items(): for fn in frames: - sample = read_data(zf, fn, calibration_data[ident]) - ds_img[i] = sample['image'] - ds_quats[i] = sample['pose'] - ds_coords[i] = sample['coord'] - ds_roi[i] = sample['roi'] - i += 1 + sample, ok = read_data(zf, fn, calibration_data[ident], mtcnn, box_annotations[fn] if box_annotations else None) + if ok: + ds_img[i] = sample['image'] + ds_quats[i] = sample['pose'] + ds_coords[i] = sample['coord'] + ds_roi[i] = sample['roi'].tolist() + i += 1 bar.update(1) + assert i != sequence_starts[-1], "Every sequence should have at least one good frame!" 
+ sequence_starts.append(i) + for ds in [ds_img, ds_roi, ds_quats, ds_coords]: + ds.resize(i, axis=0) + f.create_dataset('sequence_starts', data = sequence_starts) if __name__ == '__main__': @@ -206,7 +275,8 @@ def generate_hdf5_dataset(source_file, outfilename, count=None): parser.add_argument('source', help="source file", type=str) parser.add_argument('destination', help='destination file', type=str, nargs='?', default=None) parser.add_argument('-n', dest = 'count', type=int, default=None) + parser.add_argument('--opal-annotation', help='Use annotations from opal paper', type=str, nargs='?', default=None) args = parser.parse_args() dst = args.destination if args.destination else \ splitext(args.source)[0]+'.h5' - generate_hdf5_dataset(args.source, dst, args.count) \ No newline at end of file + generate_hdf5_dataset(args.source, dst, args.opal_annotation, args.count) \ No newline at end of file diff --git a/scripts/dsprocess_lapa.py b/scripts/dsprocess_lapa.py index 09e37f6..0c96493 100644 --- a/scripts/dsprocess_lapa.py +++ b/scripts/dsprocess_lapa.py @@ -72,14 +72,21 @@ def poor_mans_roi(points: np.ndarray): return np.array([x0, y0, x1, y1]) -def improve_roi_with_face_detector(img, roi, mtcnn : MTCNN): - new_roi, _ = mtcnn.detect(img) +def improve_roi_with_face_detector(img, roi, mtcnn : MTCNN, iou_threshold : float = 0.25, confidence_threshold : float = 0.): + new_roi, confidences = mtcnn.detect(img) if new_roi is not None: + # Filter by confidence + mask = confidences > confidence_threshold + new_roi = new_roi[mask] + confidences = confidences[mask] + if len(new_roi) <= 0: + return roi, False + # Find the box with highest iou overlap with input roi iou = box_iou(roi, new_roi) i = np.argmax(iou) new_roi = new_roi[i] iou = iou[i] - if iou > 0.25: + if iou > iou_threshold: return new_roi, True return roi, False diff --git a/scripts/evaluate_pose_network.py b/scripts/evaluate_pose_network.py index 038c4cd..65df64f 100644 --- a/scripts/evaluate_pose_network.py +++ b/scripts/evaluate_pose_network.py @@ -6,17 +6,17 @@ import torch.multiprocessing torch.multiprocessing.set_sharing_strategy('file_system') -from typing import Any, List, NamedTuple, Tuple, Dict +from typing import Any, List, NamedTuple, Tuple, Dict, Callable, Literal import numpy as np import argparse import tqdm import tabulate import json import os +import copy import pprint from numpy.typing import NDArray from matplotlib import pyplot -from os.path import basename import functools import torch from torch import Tensor @@ -25,15 +25,19 @@ from trackertraincode.datasets.batch import Batch, Metadata import trackertraincode.datatransformation as dtr -import trackertraincode.eval import trackertraincode.pipelines import trackertraincode.vis as vis import trackertraincode.utils as utils -from trackertraincode.eval import load_pose_network, predict +from trackertraincode.eval import load_pose_network, predict, compute_opal_paper_alignment, PerspectiveCorrector load_pose_network = functools.lru_cache(maxsize=1)(load_pose_network) +# According to this https://gmv.cast.uark.edu/scanning/hardware/microsoft-kinect-resourceshardware/ +# The horizontal field of view of the kinect is .. +BIWI_HORIZONTAL_FOV = 57. 
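For orientation, the `--perspective-correction` path used for Biwi boils down to composing the crop-local rotation with a look-at rotation towards the head position, using the focal length implied by the FOV constant above. Below is a minimal sketch of that idea, assuming a pinhole camera with the principal point at the image center; `look_at_rotation` and `correct_pose` are illustrative names, not the repository's API (the actual implementation is `PerspectiveCorrector` in `trackertraincode.eval`):

```python
# Minimal sketch, not the repository implementation. Assumes a pinhole camera
# with the principal point at the image center.
import numpy as np
from scipy.spatial.transform import Rotation

def look_at_rotation(pos):
    """Rotation whose z-axis points towards `pos`, x-axis kept horizontal."""
    z = pos / np.linalg.norm(pos)
    x = np.cross([0., 1., 0.], z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return Rotation.from_matrix(np.stack([x, y, z], axis=-1))

def correct_pose(fov_deg, image_size, coord, crop_rotation):
    """Transform a crop-local head rotation into the camera frame."""
    w, h = image_size
    f = 0.5 * w / np.tan(0.5 * np.radians(fov_deg))  # focal length in pixels
    view_dir = np.array([coord[0] - 0.5 * w, coord[1] - 0.5 * h, f])
    return look_at_rotation(view_dir) * crop_rotation

# A head at the right image border with 90 deg horizontal FOV gets ~45 deg of yaw:
print(correct_pose(90., (200, 100), (200., 50.), Rotation.identity()).as_rotvec(degrees=True))
```

This is consistent with the `test_perspective_corrector` cases in `test/test_eval.py` further down.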
+ + class RoiConfig(NamedTuple): expansion_factor : float = 1.1 center_crop : bool = False @@ -43,7 +47,6 @@ def __str__(self): crop = ['ROI','CC'][self.center_crop] return f'{"(H_roi)" if self.use_head_roi else "(F_roi)"}{crop}{self.expansion_factor:0.1f}' -normal_roi_configs = [ RoiConfig() ] comprehensive_roi_configs = [ RoiConfig(*x) for x in [ (1.2, False), (1.1, False), @@ -61,7 +64,17 @@ def determine_roi(sample : Batch, use_center_crop : bool): return torch.tensor([0,0,h,w], dtype=torch.float32) -def compute_predictions_and_targets(loader, net, keys, roi_config : RoiConfig): +EvalResults = Dict[str, Tensor] +BatchPerspectiveCorrector = Callable[[Tensor,EvalResults],EvalResults] + +AlignmentScheme = Literal['perspective','opal23_headpose','none'] + + +def compute_predictions_and_targets(loader, net, keys, roi_config : RoiConfig) -> Tuple[EvalResults, EvalResults]: + """ + Return: + Prediction and GT, each in a dict. + """ preds = defaultdict(list) targets = defaultdict(list) first = True @@ -86,7 +99,50 @@ def compute_predictions_and_targets(loader, net, keys, roi_config : RoiConfig): return preds, targets -def iterate_predictions(loader, preds : Batch): +def compute_predictions_and_targets_biwi(loader, net, keys, roi_config : RoiConfig, alignment : AlignmentScheme) -> Tuple[EvalResults, EvalResults]: + """ + Return: + Prediction and GT, each in a dict. + """ + preds = defaultdict(list) + targets = defaultdict(list) + other = defaultdict(list) + bar = tqdm.tqdm(total = len(loader.dataset)) + first = True + for batch in utils.iter_batched(loader, 32): + images = [ sample['image'] for sample in batch ] + rois = torch.stack([ determine_roi(sample, roi_config.center_crop) for sample in batch ]) + pred = predict( + net, + images, + rois=rois, + focus_roi_expansion_factor=roi_config.expansion_factor) + if first: + keys = list(frozenset(pred.keys()).intersection(frozenset(keys))) + first = False + for k in keys: + preds[k].append(pred[k]) + targets[k].append(torch.stack([sample[k] for sample in batch])) + other['image_sizes'].extend([ img.shape[:2][::-1] for img in images ]) + other['individual'].extend([ sample['individual'] for sample in batch ]) + bar.update(len(batch)) + preds = { k:torch.cat(v) for k,v in preds.items() } + targets = { k:torch.cat(v) for k,v in targets.items() } + other = { k:np.asarray(v) for k,v in other.items() } + if alignment == 'perspective': + corrector = PerspectiveCorrector(BIWI_HORIZONTAL_FOV) + assert 'pt3d_68' not in targets, "Unsupported. Must be computed after correction." + preds['pose'] = corrector.corrected_rotation(torch.from_numpy(other['image_sizes']), preds['coord'], preds['pose']) + elif alignment == 'opal23_headpose': + assert 'pt3d_68' not in targets, "Unsupported. Must be computed after correction." + preds['pose'] = compute_opal_paper_alignment(preds['pose'],targets['pose'], other['individual']) + return preds, targets + + +def zip_gt_with_pred(loader, preds : Batch): + ''' + Returns iterator over tuples of the Batch type. 
+ ''' pred_iter = (dtr.to_numpy(s) for s in preds.iter_frames()) sample_iter = (dtr.to_numpy(sample) for batch in loader for sample in dtr.undo_collate(batch)) yield from zip(sample_iter, pred_iter) @@ -127,15 +183,11 @@ def __init__(self): self._header = [ 'Data', 'Pitch°', 'Yaw°', 'Roll°', 'Mean°', 'Geodesic°', 'XY%', 'S%', 'NME3d%', 'NME2d%_30', 'NME2d%_60', 'NME2d%_90', 'NME2d%_avg' ] self._entries_by_model = defaultdict(list) - def add_row(self, model : str, data : str, euler_angles : List[float], geodesic : float, rmse_pos : float, rmse_size : float, uw_nme_3d, nme_2d, data_aux_string = None): - # maxlen = 30 - # if len(model) > maxlen+3: - # model = '...'+model[-maxlen:] - - uw_nme_3d = uw_nme_3d*100 if uw_nme_3d is not None else 'n/a' - nme_2d_30, nme_2d_60, nme_2d_90, nme_2d_avg = [(x*100 if nme_2d is not None else 'n/a') for x in nme_2d] + def add_row(self, model : str, data : str, euler_angles : List[float], geodesic : float, rmse_pos : float, rmse_size : float, unweighted_nme_3d, nme_2d, data_aux_string = None): + unweighted_nme_3d = unweighted_nme_3d*100 if unweighted_nme_3d is not None else 'n/a' + nme_2d_30, nme_2d_60, nme_2d_90, nme_2d_avg = ['n/a' for _ in range(4)] if nme_2d is None else [x*100 for x in nme_2d] data = self.data_name_table.get(data, data) + (data_aux_string if data_aux_string is not None else '') - self._entries_by_model[model] += [[data] + euler_angles + [ np.average(euler_angles).tolist(), geodesic, rmse_pos, rmse_size, uw_nme_3d, nme_2d_30, nme_2d_60, nme_2d_90, nme_2d_avg]] + self._entries_by_model[model] += [[data] + euler_angles + [ np.average(euler_angles).tolist(), geodesic, rmse_pos, rmse_size, unweighted_nme_3d, nme_2d_30, nme_2d_60, nme_2d_90, nme_2d_avg]] def build(self) -> str: prefix = commonprefix(list(self._entries_by_model.keys())) @@ -165,7 +217,11 @@ def model_table(rows): def report(net_filename, data_name, roi_config : RoiConfig, args : argparse.Namespace, builder : TableBuilder): loader = trackertraincode.pipelines.make_validation_loader(data_name, use_head_roi=roi_config.use_head_roi) net = load_pose_network(net_filename, args.device) - preds, targets = compute_predictions_and_targets(loader, net, ['coord','pose', 'roi', 'pt3d_68'], roi_config) + if data_name == 'biwi': + preds, targets = compute_predictions_and_targets_biwi(loader, net, ['coord','pose', 'roi'], roi_config, args.alignment_scheme) + + else: + preds, targets = compute_predictions_and_targets(loader, net, ['coord','pose', 'roi', 'pt3d_68'], roi_config) # Position and size errors are measured relative to the ROI size. Hence in percent. 
poseerrs = trackertraincode.eval.PoseErr()(preds, targets) eulererrs = trackertraincode.eval.EulerAngleErrors()(preds, targets) @@ -185,7 +241,7 @@ def report(net_filename, data_name, roi_config : RoiConfig, args : argparse.Name rmse_pos=(rmse_pos*100.).tolist(), rmse_size=(rmse_size*100.).tolist(), data_aux_string=' / ' + str(roi_config), - uw_nme_3d=np.average(uw_nme_3d) if uw_nme_3d is not None else None, + unweighted_nme_3d=np.average(uw_nme_3d) if uw_nme_3d is not None else None, nme_2d=nme_2d ) @@ -202,7 +258,7 @@ def report(net_filename, data_name, roi_config : RoiConfig, args : argparse.Name order = np.ascontiguousarray(np.argsort(quantity)[::-1]) loader = trackertraincode.pipelines.make_validation_loader(data_name, order=order) new_preds = Batch(Metadata(0, batchsize=len(order)), **{k:v[order] for k,v in preds.items()}) - worst_rot_iter = iterate_predictions(loader, new_preds) + worst_rot_iter = zip_gt_with_pred(loader, new_preds) history = DrawPredictionsWithHistory(data_name + '/' + net_filename) fig, btn = vis.matplotlib_plot_iterable(worst_rot_iter, history) fig.suptitle(data_name + ' / ' + str(roi_config) + ' / ' + net_filename) @@ -214,7 +270,17 @@ def report(net_filename, data_name, roi_config : RoiConfig, args : argparse.Name def run(args): gui = [] table_builder = TableBuilder() - roi_configs = comprehensive_roi_configs if args.comprehensive_roi else normal_roi_configs + + if not args.comprehensive_roi: + if args.roi_expansion is not None: + roi_configs = [ RoiConfig(expansion_factor=args.roi_expansion) ] + else: + roi_configs = [ RoiConfig() ] + else: + assert args.roi_expansion is None, "Conflicting arguments" + roi_configs = comprehensive_roi_configs + + datasets = args.ds.split('+') for net_filename in args.filenames: for name in datasets: @@ -240,6 +306,8 @@ def run(args): parser.add_argument('--vis', dest='vis', help='visualization of worst', default='none', choices=['none','kpts','rot','size']) parser.add_argument('--device', help='select device: cpu or cuda', default='cuda', type=str) parser.add_argument('--comprehensive-roi', action='store_true', default=False) + parser.add_argument('--alignment-scheme', choices=['perspective','opal23_headpose','none'], default='none') + parser.add_argument('--roi-expansion', default=None, type=float) parser.add_argument('--json', type=str, default=None) parser.add_argument('--ds', type=str, default='aflw2k3d') args = parser.parse_args() diff --git a/scripts/evaluate_stability.py b/scripts/evaluate_stability.py index 4830e78..32c8b41 100644 --- a/scripts/evaluate_stability.py +++ b/scripts/evaluate_stability.py @@ -185,9 +185,9 @@ def make_nice(axes): axes[k+3].plot(pred.coord_scales[...,1,0]) axes[k+4].plot(pred.coord_scales[...,2,0]) axes[k+5].plot(pred.coord_scales[...,2,1]) - for i, label in zip(range(k,k+6),['y-sz', 'x-sz', 'sz', 'x', 'y', 'x-y']): + for i, label in zip(range(k,k+6),['sz', 'x', 'y', 'x-y', 'y-sz', 'x-sz' ]): axes[i].set(ylabel=label) - else: + elif pred.coord_scales is not None: axes[k].plot(pred.coord_scales[...,2]) axes[k].set(ylabel='sz') make_nice(axes) @@ -470,6 +470,9 @@ def compute_metrics_for_roi(gt : torch.Tensor, preds : torch.Tensor): prefix = os.path.commonprefix([p for p,_,_ in metrics_by_network_and_noise_and_quantity.keys()]) metrics_by_network_and_noise_and_quantity = { (os.path.relpath(k,prefix),n,q):v for (k,n,q),v in metrics_by_network_and_noise_and_quantity.items() } + with open('/tmp/noise_resist_result_v2.pkl','wb') as f: + pickle.dump(metrics_by_network_and_noise_and_quantity, f) + # 
Mean and average over ensemble metrics_by_network_and_quantity = defaultdict(list) for (cp, noise, quantity), results in metrics_by_network_and_noise_and_quantity.items(): @@ -485,9 +488,6 @@ def compute_metrics_for_roi(gt : torch.Tensor, preds : torch.Tensor): rows = zip(noise, avg[:,0], maybe_std[:,0], avg[:,1], maybe_std[:,1]) print (tabulate.tabulate(rows, [ 'noise', 'center', '+/-', 'spread', '+/-', 'err', '+/-' ], tablefmt='github', floatfmt=".2f")) - with open('/tmp/noise_resist_result.pkl','wb') as f: - pickle.dump(metrics_by_network_and_quantity, f) - if 1: # vis checkpoints = frozenset([cp for cp,_ in metrics_by_network_and_quantity.keys()]) for quantity in ['pose','roi']: diff --git a/scripts/export_model.py b/scripts/export_model.py index ed9c2cf..18e11ab 100644 --- a/scripts/export_model.py +++ b/scripts/export_model.py @@ -4,6 +4,13 @@ import os import torch import copy +import tqdm +import itertools + +from torch.ao.quantization import get_default_qconfig_mapping, QConfig, QConfigMapping +from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx +from torch.ao.quantization import fake_quantize +from torch.ao.quantization import observer import torch.onnx import torch.nn as nn @@ -14,7 +21,9 @@ ort = None print ("Warning cannot import ONNX runtime: Runtime checks disabled") +from trackertraincode.neuralnets.bnfusion import fuse_convbn import trackertraincode.neuralnets.models +import trackertraincode.pipelines # Only single thread for inference time measurement os.environ["OMP_NUM_THREADS"] = "1" @@ -26,7 +35,7 @@ def clear_denormals(state_dict, threshold=1.e-20): # decrease compared to pretrained weights from torchvision. # The real denormals start below 2.*10^-38 # Denormals make computations on CPU very slow .. at least back then ... 
- state_dict = copy.deepcopy(state_dict) + state_dict = { k:v.detach().clone() for k,v in state_dict.items() } print ("Denormals or zeros:") for k, v in state_dict.items(): if v.dtype == torch.float32: @@ -38,6 +47,64 @@ def clear_denormals(state_dict, threshold=1.e-20): return state_dict + +def quantize_backbone(original : trackertraincode.neuralnets.models.NetworkWithPointHead): + original = copy.deepcopy(original) + + dsid = trackertraincode.pipelines.Id + train_loader, _, _ = trackertraincode.pipelines.make_pose_estimation_loaders( + inputsize = original.input_resolution, + batchsize = 128, + datasets = [dsid.REPO_300WLP,dsid.SYNFACE,dsid], + dataset_weights = {}, + use_weights_as_sampling_frequency=True, + enable_image_aug=True, + rotation_aug_angle=30., + roi_override='original', + device=None) + example_input = (next(iter(train_loader))[0]['image'],) + original.eval() + + # Configuration chosen as per advice from + # https://oscar-savolainen.medium.com/how-to-quantize-a-neural-network-model-in-pytorch-an-in-depth-explanation-d4a2cdf632a4 + config = QConfig(activation=fake_quantize.FusedMovingAvgObsFakeQuantize.with_args(observer=observer.MovingAverageMinMaxObserver, + quant_min=0, + quant_max=255, + dtype=torch.quint8, + qscheme=torch.per_tensor_affine), + weight=fake_quantize.FakeQuantize.with_args(observer=fake_quantize.MovingAveragePerChannelMinMaxObserver, + quant_min=-128, + quant_max=127, + dtype=torch.qint8, + qscheme=torch.per_channel_symmetric)) + #qconfig_mapping = get_default_qconfig_mapping("x86") + #qconfig_mapping = get_default_qconfig_mapping("fbgemm") + #qconfig_mapping = get_default_qconfig_mapping("qnnpack") + qconfig_mapping = QConfigMapping() + qconfig_mapping.set_global(config) + # Disable quantization after the convolutional layers. + # The final relu+global pooling seems to be fast enough to do in float32 without + # significant slowdown. + qconfig_mapping = qconfig_mapping.set_object_type(nn.AdaptiveAvgPool2d, None) + if original.config == 'resnet18': + # TODO: better SW design? 
+ # Disables quantization of the input of the AveragePooling + qconfig_mapping = qconfig_mapping.set_module_name('layers.7.1.relu', None) + + convnet = prepare_fx( + fuse_convbn(torch.fx.symbolic_trace(original.convnet)), + qconfig_mapping, + example_input) + + for _, batches in tqdm.tqdm(zip(range(20), train_loader)): + for batch in batches: + convnet(batch['image']) + + original.convnet = convert_fx(convnet) + + return original + + class ModelForOpenTrack(nn.Module): """ Rearranges the model output @@ -94,19 +161,6 @@ def forward(self, x): return tuple(y[k] for k in self.output_names) -def load_posemodel(args): - sd = torch.load(args.posemodelfilename) - net = trackertraincode.neuralnets.models.NetworkWithPointHead( - enable_point_head=True, - enable_face_detector=False, - config='mobilenetv1', - enable_uncertainty=True, - backbone_args = {'use_blurpool' : False} - ) - net.load_state_dict(sd, strict=True) - return net - - def load_facelocalizer(args): sd = torch.load(args.localizermodelfilename) net = trackertraincode.neuralnets.models.LocalizerNet() @@ -137,8 +191,10 @@ def compare_network_outputs(torchmodel, ort_session, inputs): @torch.no_grad() -def convert_posemodel_onnx(net : nn.Module, filename, for_opentrack=True): +def convert_posemodel_onnx(net : nn.Module, filename, for_opentrack=True, quantize=False): net.load_state_dict(clear_denormals(net.state_dict())) + if quantize: + net = quantize_backbone(net) if for_opentrack: net = ModelForOpenTrack(net) else: @@ -153,7 +209,7 @@ def convert_posemodel_onnx(net : nn.Module, filename, for_opentrack=True): B, C, H, W = inputs[0].shape - destination = splitext(filename)[0]+('.onnx' if for_opentrack else '_complete.onnx') + destination = splitext(filename)[0]+('_ptq' if quantize else '')+('.onnx' if for_opentrack else '_complete.onnx') print (f"Exporting {net.__class__}, input size = {H},{W} to {destination}") @@ -174,6 +230,13 @@ def convert_posemodel_onnx(net : nn.Module, filename, for_opentrack=True): dynamic_axes = dynamic_axes, verbose=False) + # torch.onnx.dynamo_export( + # net, # model being run + # inputs, # model input (or a tuple for multiple inputs) + # export_options=torch.onnx.ExportOptions( + # dynamic_shapes=False, + # )).save(destination) + onnxmodel = onnx.load(destination) onnxmodel.doc_string = 'Head pose prediction' onnxmodel.model_version = 4 # This must be an integer or long. 
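Independent of the quantization path, the exported file can be smoke-tested with onnxruntime. A hedged sketch: the file name below is a placeholder, and the (1, 1, H, W) grayscale input layout mirrors the export code above (substitute the network's real input resolution, `net.input_resolution`, for the placeholder size):

```python
# Hypothetical smoke test for an exported model. "model_ptq.onnx" is a
# placeholder name; 129 is a placeholder input resolution.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model_ptq.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 1, 129, 129).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
for meta, out in zip(session.get_outputs(), outputs):
    print(meta.name, np.asarray(out).shape)
```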
@@ -233,10 +296,11 @@ def convert_localizer(args): parser.add_argument('--posenet', dest = 'posemodelfilename', help="filename of model checkpoint", type=str, default=None) parser.add_argument('--full', action='store_true', default=False) parser.add_argument('--localizer', dest = 'localizermodelfilename', help="filename of model checkpoint", type=str, default=None) + parser.add_argument('--quantize', action='store_true', default=False) args = parser.parse_args() if args.posemodelfilename: - net = load_posemodel(args) - convert_posemodel_onnx(net, args.posemodelfilename, for_opentrack=not args.full) + net = trackertraincode.neuralnets.models.load_model(args.posemodelfilename) + convert_posemodel_onnx(net, args.posemodelfilename, for_opentrack=not args.full, quantize=args.quantize) if args.localizermodelfilename: convert_localizer(args) if not args.posemodelfilename and not args.localizermodelfilename: diff --git a/scripts/show_train_test_splits.py b/scripts/show_train_test_splits.py index 4eb6ddf..6ead42e 100644 --- a/scripts/show_train_test_splits.py +++ b/scripts/show_train_test_splits.py @@ -1,3 +1,4 @@ +#!/usr/bin/env python3 import numpy as np import torch from matplotlib import pyplot @@ -21,7 +22,7 @@ def iterate_predictions(): the_iter = itertools.chain.from_iterable(loader) if loader_outputs_list_of_batches else loader for subset in the_iter: print(subset.meta.tag) - subset = dtr.to_device('cpu', subset) + subset = subset.to('cpu') subset['image'] = trackertraincode.pipelines.unwhiten_image(subset['image']) subset = dtr.unnormalize_batch(subset) subset = dtr.to_numpy(subset) diff --git a/scripts/train_poseestimator.py b/scripts/train_poseestimator.py index acc1895..83f814a 100644 --- a/scripts/train_poseestimator.py +++ b/scripts/train_poseestimator.py @@ -15,6 +15,7 @@ import torch.optim as optim import torch +import torch.nn as nn import trackertraincode.neuralnets.losses as losses import trackertraincode.neuralnets.models as models import trackertraincode.neuralnets.negloglikelihood as NLL @@ -96,18 +97,26 @@ def setup_datasets(args : MyArgs): return train_loader, test_loader, ds_size -def parameter_groups_with_decaying_learning_rate(parameter_groups, slow_lr, fast_lr): - # Note: Parameters are enumerated in the order from the input to the output layers - factor = (fast_lr/slow_lr)**(1./(len(parameter_groups)-1)) +def find_variance_parameters(net : nn.Module): + if isinstance(net,(NLL.FeaturesAsTriangularScale,NLL.FeaturesAsDiagonalScale,NLL.DiagonalScaleParameter)): + return list(net.parameters()) + else: + return sum((find_variance_parameters(x) for x in net.children()), start=[]) + + +def setup_lr_with_slower_variance_training(net, base_lr): + variance_params = find_variance_parameters(net) + other_params = list(frozenset(net.parameters()).difference(frozenset(variance_params))) return [ - { 'params' : p, 'lr' : slow_lr*factor**i } for i,p in enumerate(parameter_groups) + { 'params' : other_params, 'lr' : base_lr }, + { 'params' : variance_params, 'lr' : 0.1*base_lr } ] def create_optimizer(net, args : MyArgs): - to_optimize = net.parameters() - #optimizer = optim.AdamW(to_optimize, lr=args.lr, weight_decay=1.e-3) - optimizer = optim.Adam(to_optimize, lr=args.lr) + optimizer = optim.Adam( + setup_lr_with_slower_variance_training(net,args.lr), + lr=args.lr,) if args.find_lr: print ("LR finding mode!") n_epochs = args.epochs @@ -116,7 +125,8 @@ def create_optimizer(net, args : MyArgs): scheduler = optim.lr_scheduler.LambdaLR(optimizer, lambda e: base**e, verbose=True) else: 
n_epochs = args.epochs - scheduler = train.LinearUpThenSteps(optimizer, max(1,n_epochs//(2*10)), 0.1, [n_epochs//2]) + #scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [n_epochs//2], 0.1) + scheduler = train.ExponentialUpThenSteps(optimizer, max(1,n_epochs//(10)), 0.1, [n_epochs//2]) return optimizer, scheduler, n_epochs @@ -133,8 +143,6 @@ class SaveBestSpec(NamedTuple): def setup_losses(args : MyArgs, net): C = train.Criterion - - chasface = [ C('hasface', losses.HasFaceLoss(), 0.01) ] cregularize = [ C('quatregularization1', losses.QuaternionNormalizationSoftConstraint(), 1.e-6), ] @@ -145,24 +153,29 @@ def setup_losses(args : MyArgs, net): shapeparamloss = [] if args.with_nll_loss: - nllw = 0.01 + def ramped_up_nll_weight(multiplier): + def wrapped(step): + strength = min(1., max(0., (step / args.epochs - 0.1) * 10.)) + return 0.01 * strength * multiplier + return wrapped + #return multiplier * 0.01 poselosses += [ - C('nllrot', NLL.QuatPoseNLLLoss().to('cuda'), 0.3*nllw), - C('nllcoord', NLL.CorrelatedCoordPoseNLLLoss().cuda(), 0.3*nllw) + C('nllrot', NLL.QuatPoseNLLLoss().to('cuda'), ramped_up_nll_weight(0.5)), + C('nllcoord', NLL.CorrelatedCoordPoseNLLLoss().cuda(), ramped_up_nll_weight(0.5)) ] if args.with_roi_train: roilosses += [ - C('nllbox', NLL.BoxNLLLoss(distribution='gaussian'), 0.01*nllw) + C('nllbox', NLL.BoxNLLLoss(distribution='gaussian'), ramped_up_nll_weight(0.01)) ] if args.with_pointhead: pointlosses += [ - C('nllpoints3d', NLL.Points3dNLLLoss(chin_weight=0.8, eye_weight=0., distribution='laplace').cuda(), 0.7*nllw) + C('nllpoints3d', NLL.Points3dNLLLoss(chin_weight=0.8, eye_weight=0., distribution='gaussian').cuda(), ramped_up_nll_weight(0.5)) ] pointlosses25d = [ - C('nllpoints3d', NLL.Points3dNLLLoss(chin_weight=0.8, eye_weight=0., pointdimension=2, distribution='laplace').cuda(), 0.7*nllw) + C('nllpoints3d', NLL.Points3dNLLLoss(chin_weight=0.8, eye_weight=0., pointdimension=2, distribution='gaussian').cuda(), ramped_up_nll_weight(0.5)) ] shapeparamloss += [ - C('nllshape', NLL.ShapeParamsNLLLoss(distribution='gaussian'), 0.05*nllw) + #C('nllshape', NLL.ShapeParamsNLLLoss(distribution='gaussian'), ramped_up_nll_weight(0.01)) ] if 1: poselosses += [ @@ -176,29 +189,32 @@ def setup_losses(args : MyArgs, net): ] if args.with_pointhead: pointlosses += [ - C('points3d', losses.Points3dLoss('l1', chin_weight=0.8, eye_weights=0.).cuda(), 0.7) + C('points3d', losses.Points3dLoss('l2', chin_weight=0.8, eye_weights=0.).cuda(), 0.5), + ] + pointlosses25d += [ + C('points3d', losses.Points3dLoss('l2', pointdimension=2, chin_weight=0.8, eye_weights=0.).cuda(), 0.5), ] - pointlosses25d += [ C('points3d', losses.Points3dLoss('l1', pointdimension=2, chin_weight=0.8, eye_weights=0.).cuda(), 0.7) ] shapeparamloss += [ - C('shp_l2', losses.ShapeParameterLoss(), 0.05), + C('shp_l2', losses.ShapeParameterLoss(), 0.1), + ] + cregularize += [ + C('nll_shp_gmm', losses.ShapePlausibilityLoss().cuda(), 0.1), ] train_criterions = { - Tag.ONLY_POSE : train.MultiTaskLoss(poselosses + cregularize), - Tag.POSE_WITH_LANDMARKS : train.MultiTaskLoss(poselosses + cregularize + pointlosses + shapeparamloss + roilosses), - Tag.POSE_WITH_LANDMARKS_3D_AND_2D : train.MultiTaskLoss(poselosses + cregularize + pointlosses + shapeparamloss + roilosses), - Tag.ONLY_LANDMARKS : train.MultiTaskLoss(pointlosses + cregularize), - Tag.ONLY_LANDMARKS_25D : train.MultiTaskLoss(pointlosses25d + cregularize), - Tag.FACE_DETECTION : train.MultiTaskLoss(chasface + roilosses), + Tag.ONLY_POSE : 
train.CriterionGroup(poselosses + cregularize), + Tag.POSE_WITH_LANDMARKS : train.CriterionGroup(poselosses + cregularize + pointlosses + shapeparamloss + roilosses), + Tag.POSE_WITH_LANDMARKS_3D_AND_2D : train.CriterionGroup(poselosses + cregularize + pointlosses + shapeparamloss + roilosses), + Tag.ONLY_LANDMARKS : train.CriterionGroup(pointlosses + cregularize), + Tag.ONLY_LANDMARKS_25D : train.CriterionGroup(pointlosses25d + cregularize), } test_criterions = { - Tag.POSE_WITH_LANDMARKS : train.DefaultTestFunc(poselosses + pointlosses + roilosses + shapeparamloss), - Tag.FACE_DETECTION : train.DefaultTestFunc(chasface + roilosses), + Tag.POSE_WITH_LANDMARKS : train.DefaultTestFunc(poselosses + pointlosses + roilosses + shapeparamloss + cregularize), } savebest = SaveBestSpec( - [l.w for l in poselosses], - [ l.name for l in poselosses]) + [ 1.0, 1.0, 1.0], + [ 'rot', 'xy', 'sz' ]) return train_criterions, test_criterions, savebest @@ -212,7 +228,6 @@ def create_net(args : MyArgs): backbone_args={'use_blurpool' : args.with_blurpool} ) - def main(): np.seterr(all='raise') cv2.setNumThreads(1) @@ -281,11 +296,11 @@ def main(): if args.swa: swa_filename = join(args.outdir,f'swa_{swa_model.module.name}.ckpt') - torch.save(swa_model.module.state_dict(), swa_filename) + models.save_model(swa_model.module, swa_filename) last_save_filename = join(args.outdir,f'last_{net.name}.ckpt') - torch.save(net.state_dict(), last_save_filename) + models.save_model(net, last_save_filename) if args.export_onnx: from scripts.export_model import convert_posemodel_onnx @@ -293,7 +308,7 @@ def main(): net.to('cpu') convert_posemodel_onnx(net, filename=last_save_filename) - net.load_state_dict(torch.load(save_callback.filename)) + net.load_state_dict(torch.load(save_callback.filename)['state_dict']) convert_posemodel_onnx(net, filename=save_callback.filename) if args.swa: diff --git a/test/test_affine_img_trafo.py b/test/test_affine_img_trafo.py index 094121b..0528bb4 100644 --- a/test/test_affine_img_trafo.py +++ b/test/test_affine_img_trafo.py @@ -1,5 +1,5 @@ import numpy as np -from typing import Callable, Set, Sequence, Union, List, Tuple, Dict, Optional, NamedTuple +from typing import Callable, Set, Sequence, Union, List, Tuple, Dict, Optional, NamedTuple, Any, Literal from matplotlib import pyplot import pytest import numpy.testing @@ -9,13 +9,11 @@ import torch.nn.functional as F import trackertraincode.datatransformation as dtr -from trackertraincode.datatransformation.affinetrafo import ( - transform_image_torch, - transform_image_pil, - croprescale_image_cv2) - -from trackertraincode.datasets.batch import Batch, Metadata +from trackertraincode.datatransformation.image_geometric_torch import croprescale_image_torch, affine_transform_image_torch +from trackertraincode.datatransformation.image_geometric_cv2 import croprescale_image_cv2, affine_transform_image_cv2, DownFilters, UpFilters from trackertraincode.neuralnets.affine2d import Affine2d +from trackertraincode.neuralnets.math import affinevecmul +from trackertraincode.datasets.batch import Batch, Metadata from kornia.geometry.subpix import spatial_expectation2d @@ -32,108 +30,64 @@ ''' -def no_randomization(B, scaling_mode) -> dtr.RoiFocusRandomizationParameters: +def no_randomization(B, filter_args) -> dtr.RoiFocusRandomizationParameters: return dtr.RoiFocusRandomizationParameters( scales = torch.tensor(1.), angles = torch.tensor(0.), translations = torch.tensor([0.,0.]), - scaling_mode = scaling_mode) + **filter_args) -def 
with_some_similarity_trafo(B, scaling_mode) -> dtr.RoiFocusRandomizationParameters: + +def with_some_similarity_trafo(B, filter_args) -> dtr.RoiFocusRandomizationParameters: return dtr.RoiFocusRandomizationParameters( - scales = torch.tensor(1.5), + scales = torch.tensor(0.75), angles = torch.tensor(20.*np.pi/180.), - translations = torch.tensor([-0.30,-0.05]), - scaling_mode = scaling_mode) + translations = torch.tensor([-0.1, 0.03]), + **filter_args) +UpDownSampleHint = Literal["up","down"] -class Case(NamedTuple): +class TestData(NamedTuple): S : int # size R : int # new size - X : int # point position - Y : int # point position - batch : Batch - tol : List[float] - - -def make_batch(scale_up_or_down, aligned_corners): - tolerances = [ 1.5, 0.1, 1.5] - if aligned_corners: - if scale_up_or_down == 'down': - S = 101 - R = (S-1)//10+1 - X, Y = 30, 5 - else: - S = 10 # S-1 segments. Every segment will have S-1 more points - R = (S-1)*10+1 - X, Y = 3, 2 - tolerances = [ 4., 1., 4. ] - else: - if scale_up_or_down == 'down': - S = 100 - R = 10 - X, Y = 30, 10 - else: - S = 10 - R = 100 - X, Y = 3, 2 - - img = torch.zeros((3,S,S), dtype=torch.float32) - img[0,0,0] = 255. - img[1,Y,X] = 255. - img[2,S-1,S-1] = 255. - points = torch.tensor([[0.,0.,0.],[X,Y,0.],[S-1,S-1,0.]], dtype=torch.float32) - roi = torch.tensor([0.,0.,S-1,S-1]) - if not aligned_corners: - points += 0.5 - roi = torch.tensor([0.,0.,S,S]) - batch = Batch( - Metadata(_imagesize = S, batchsize=0, categories= - {'image' : dtr.FieldCategory.image, - 'pt3d_68' : dtr.FieldCategory.points, - 'roi' : dtr.FieldCategory.roi}), { - 'image' : img, - 'pt3d_68' : points, - 'roi' : roi - }) - return Case(S, R, X, Y, batch, tolerances) - - -def make_batch_with_room_around_roi(scale_up_or_down, aligned_corners): - tolerances = [ 1.5, 0.1, 1.5] - if aligned_corners: - if scale_up_or_down == 'down': - pad = 50 - S = 101 - R = (S-1)//10+1 - X, Y = 30, 5 - else: - pad = 2 - S = 10 # S-1 segments. Every segment will have S-1 more points - R = (S-1)*10+1 - X, Y = 3, 2 - tolerances = [ 4., 1., 4. ] + batch : Batch # Contains image, 3d points, and roi. + tol : float # Pixel tolerance for point reconstruction based on heatmap. + + +def make_test_data(scale_up_or_down : UpDownSampleHint): + '''Creates a heatmap with 3 peaks and corresponding 3d points and an roi bounding the full image. + + When downscaling is used, the image is made up of 10x10 blocks, so that the downscaling + will produce 1 pixel per block and the points can be reconstructed accurately. + + Otherwise a single pixel is set per point. + ''' + if scale_up_or_down == 'down': + S = 200 + R = 20 + points = torch.tensor([[15,15,0],[45,35,0],[85,85,0]], dtype=torch.float32) + points += 50 else: - if scale_up_or_down == 'down': - pad = 50 - S = 100 - R = 10 - X, Y = 30, 10 - else: - pad = 2 - S = 10 - R = 100 - X, Y = 3, 2 - - img = torch.zeros((3,S+2*pad,S+2*pad), dtype=torch.float32) - img[0,pad,pad] = 255. - img[1,Y+pad,X+pad] = 255. - img[2,S+pad-1,S+pad-1] = 255. 
- points = torch.tensor([[pad,pad,0.],[X+pad,Y+pad,0.],[S+pad-1,S+pad-1,0.]], dtype=torch.float32) - roi = torch.tensor([pad,pad,S+pad-1,S+pad-1], dtype=torch.float32) - if not aligned_corners: + S = 20 + R = 200 + points = torch.tensor([[1,1,0],[4,3,0],[8,8,0]], dtype=torch.float32) + # For align_corners=False when pixel values are cell centered: points += 0.5 - roi = torch.tensor([pad,pad,S+pad,S+pad], dtype=torch.float32) + points += 5 + + img = torch.zeros((3,20,20), dtype=torch.float32) + # Leave space at border to account for blurring so that the points can be + # reconstructed from the peaks very precisely. + img[0,5+1,5+1] = 255. + img[1,5+3,5+4] = 255. + img[2,5+8,5+8] = 255. + if scale_up_or_down == 'down': + img = img.repeat_interleave(10,dim=1).repeat_interleave(10,dim=2) + + # For align_corners=False, the roi subsumes the area from the first + # to the last pixel completely + roi = torch.tensor([0.,0.,S,S]) + batch = Batch( Metadata(_imagesize = S, batchsize=0, categories= {'image' : dtr.FieldCategory.image, @@ -143,132 +97,95 @@ def make_batch_with_room_around_roi(scale_up_or_down, aligned_corners): 'pt3d_68' : points, 'roi' : roi }) - return Case(S, R, X, Y, batch, tolerances) - + return TestData(S, R, batch, 0.01) -def check(result : Batch, td : Case, align_corners : bool): - hm = result['image'][None,...] - assert hm.shape == (1,len(td.batch['pt3d_68']),td.R,td.R) +def check(td : TestData, image, points): + """Check if heatmap and points match.""" + hm = image[None,...] + assert hm.shape == (1,len(points),td.R,td.R) hm /= hm.sum(dim=[-1,-2],keepdims=True) heatmap_points = spatial_expectation2d(hm, normalized_coordinates=False)[0] - if not align_corners: - # Because the vertices of the heatmap are centered in the middle of the pixel areas - heatmap_points += 0.5 - coord_points = result['pt3d_68'][:,:2] - for i, (a, b, tol) in enumerate(zip(heatmap_points, coord_points, td.tol)): - numpy.testing.assert_allclose(a, b, atol = tol, err_msg=f'Mismatch at point {i}') - - -def vis(td, result, align_corners): - fig, ax = pyplot.subplots(1,1) - extent = [-0.5,td.R-1+0.5,td.R-1+0.5,-0.5] if align_corners else [0.,td.R,td.R,0.] - img = ax.imshow( - dtr._ensure_image_nhwc(result['image']).mean(dim=-1), - interpolation='bilinear' if align_corners else 'nearest', - vmin=0., - extent=extent) - ax.scatter(*result['pt3d_68'].T[:2]) - fig.colorbar(img, ax=ax) - pyplot.show() - - -@pytest.mark.parametrize('scaling_way', [ 'up','down']) -@pytest.mark.parametrize('sampling_method', [ - dtr.ScalingMode.TORCH_GRID_SAMPLE_NO_ALIGN_CORNERS, - dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS, - dtr.ScalingMode.PIL_HAMMING_WINDOW, - dtr.ScalingMode.OPENCV_AREA -]) -def test_scalingtrafo(scaling_way, sampling_method): - align_corners = sampling_method==dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS - td = make_batch(scaling_way, align_corners) - augmentation = dtr.RandomFocusRoi(new_size = td.R) - augmentation.make_randomization_parameters = partial(no_randomization, scaling_mode=sampling_method) - result = augmentation(td.batch) - #vis(td, result, align_corners) - if sampling_method in (dtr.ScalingMode.PIL_HAMMING_WINDOW, dtr.ScalingMode.OPENCV_AREA) and scaling_way == 'down': - # At least one pixel tolerance, due to aliasing. 
- td = td._replace(tol = [max(1.,t) for t in td.tol]) - check(result,td,align_corners) - - -@pytest.mark.parametrize('scaling_way', ['down', 'up']) -@pytest.mark.parametrize('sampling_method', [ - dtr.ScalingMode.TORCH_GRID_SAMPLE_NO_ALIGN_CORNERS, - dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS, - dtr.ScalingMode.PIL_HAMMING_WINDOW, - dtr.ScalingMode.OPENCV_AREA -]) -def test_scalingtrafo_with_randomizer(scaling_way, sampling_method): - align_corners = sampling_method==dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS - td = make_batch_with_room_around_roi(scaling_way, align_corners) - augmentation = dtr.RandomFocusRoi(new_size = td.R) - augmentation.make_randomization_parameters = partial(with_some_similarity_trafo, scaling_mode=sampling_method) - result = augmentation(td.batch) - #vis(td, result, align_corners) - if sampling_method in (dtr.ScalingMode.PIL_HAMMING_WINDOW, dtr.ScalingMode.OPENCV_AREA) and scaling_way == 'down': - # At least one pixel tolerance, due to aliasing. - td = td._replace(tol = [max(1.,t) for t in td.tol]) - check(result,td,align_corners) - - -@pytest.mark.parametrize('scaling_mode', [ - dtr.ScalingMode.PIL_HAMMING_WINDOW, - dtr.ScalingMode.OPENCV_AREA, - dtr.ScalingMode.TORCH_GRID_SAMPLE_NO_ALIGN_CORNERS, - dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS, -]) -@pytest.mark.parametrize('dt',[torch.float32, torch.uint8]) -def test_transform_image_only(scaling_mode, dt): - img = 1.*torch.ones((1,8,18),dtype=torch.float32) - img = torch.nn.functional.pad(img, [1,1,1,1], mode='constant', value=1.) - if dt == torch.uint8: - img = (img * 255).to(dt) - - if scaling_mode == dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS: - tr = Affine2d.range_remap_2d([15.,5.],[24.,14.], [0., 0.], [99.,99.]) - else: - roi = torch.tensor([15, 5, 25, 15], dtype=torch.float32) - tr = Affine2d.range_remap_2d([15,5],[25.,15.], [0., 0.], [100.,100.]) - # x x x x o o corner = pixel - # | | - # - # x x x x o o cell-center = pixel - # | | | - if scaling_mode in (dtr.ScalingMode.TORCH_GRID_SAMPLE_NO_ALIGN_CORNERS, dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS): - new_img = transform_image_torch(img, tr, 100, scaling_mode==dtr.ScalingMode.TORCH_GRID_SAMPLE_ALIGN_CORNERS, dtr.FieldCategory.image) - else: - new_img = { - dtr.ScalingMode.PIL_HAMMING_WINDOW : transform_image_pil, - dtr.ScalingMode.OPENCV_AREA : croprescale_image_cv2 - }[scaling_mode](img, roi, 100, dtr.FieldCategory.image) + # Because the vertices of the heatmap are centered in the middle of the pixel areas + heatmap_points += 0.5 + coord_points = points[:,:2] + for i, (a, b) in enumerate(zip(heatmap_points, coord_points)): + numpy.testing.assert_allclose(a, b, atol = td.tol, err_msg=f'Mismatch at point {i}') - nonzeroval, zero, tol = { - torch.float32 : (1., 0., 1.e-3), - torch.uint8 : (255, 0, 1) - }[dt] +def vis(td, image, points): if 0: - print (scaling_mode) - pyplot.imshow(new_img[0]) + fig, ax = pyplot.subplots(1,1) + extent = [0.,td.R,td.R,0.] 
+ img = ax.imshow( + dtr._ensure_image_nhwc(image).mean(dim=-1), + interpolation='nearest', + vmin=0., + extent=extent) + ax.scatter(*points.T[:2]) + fig.colorbar(img, ax=ax) pyplot.show() - assert new_img.shape==(1,100,100) - numpy.testing.assert_allclose(new_img[:,:40,:40], np.array(nonzeroval), atol=tol) - numpy.testing.assert_allclose(new_img[:,60:,:], np.array(zero), atol=tol) - numpy.testing.assert_allclose(new_img[:,:,60:], np.array(zero), atol=tol) -# def test_transform_image_pil_aliasing(): -# img = 255.*torch.eye(100,dtype=torch.float32).expand(1,-1,-1) -# roi = torch.tensor([0., 0., 99., 99.], dtype=torch.float32) +up_down_sample_configs = [ + ('up', { 'upfilter' : 'linear' }, 0.,), + ('up', { 'upfilter' : 'cubic' }, 0.5,), + ('up', { 'upfilter' : 'lanczos' }, 0.5,), + ('down', {'downfilter' : 'gaussian' }, 0.), + ('down', {'downfilter' : 'hamming' }, 0.), + ('down', {'downfilter' : 'area' }, 0.) +] -# new_img = dtr.transform_image_opencv(img, roi, 10, dtr.FieldCategoryPoseDataset.image) -# pyplot.imshow(new_img[0]) -# pyplot.show() +@pytest.mark.parametrize('scaling_way, filter_args, tol', up_down_sample_configs) +def test_scalingtrafo(scaling_way, filter_args, tol): + td = make_test_data(scaling_way) + augmentation = dtr.RandomFocusRoi(new_size = td.R) + augmentation.make_randomization_parameters = partial(no_randomization, filter_args=filter_args) + td = td._replace(tol = td.tol + tol) + result = augmentation(td.batch) + vis(td, result['image'], result['pt3d_68']) + check(td, result['image'], result['pt3d_68']) -if __name__ == '__main__': - #pytest.main(["-s","-x",__file__, "-k", "test_transform_image_only"]) - pytest.main(["-s","-x",__file__]) \ No newline at end of file +@pytest.mark.parametrize('scaling_way, filter_args, tol', up_down_sample_configs) +def test_scalingtrafo_with_randomizer(scaling_way, filter_args, tol): + td = make_test_data(scaling_way) + augmentation = dtr.RandomFocusRoi(new_size = td.R) + augmentation.make_randomization_parameters = partial(with_some_similarity_trafo, filter_args=filter_args) + result = augmentation(td.batch) + td = td._replace(tol = td.tol + tol + (0.2 if scaling_way=='down' else 0.5)) + vis(td, result['image'], result['pt3d_68']) + check(td, result['image'], result['pt3d_68']) + + +@pytest.mark.parametrize('scaling_way', ['up','down']) +def test_image_affine_transform(scaling_way : str): + td = make_test_data(scaling_way) + tr = Affine2d.range_remap_2d([0,0], [td.S, td.S], [0,0], [td.R, td.R]) + tr = Affine2d.trs( + translations=torch.tensor([3.,-td.R*0.45]), + angles = torch.tensor(20.*np.pi/180.), + scales = torch.tensor(1.5)) @ tr + td = td._replace(tol = 1.0) + cv2result = affine_transform_image_cv2(td.batch['image'], tr, td.R) + torchresult = affine_transform_image_torch(td.batch['image'], tr, td.R) + points = affinevecmul(tr.tensor(), td.batch['pt3d_68'][...,:2]) + assert torch.sqrt(torch.nn.functional.mse_loss(cv2result, torchresult)).item() <= (2. if scaling_way=='up' else 10.) 
+ vis(td, cv2result, points) + check(td, cv2result, points) + check(td, torchresult, points) + + +@pytest.mark.parametrize('scaling_way', ['up','down']) +def test_image_crop_rescale(scaling_way : str): + td = make_test_data(scaling_way) + td.batch['roi'] = td.batch['roi'].to(torch.int32) + tr = Affine2d.range_remap_2d(td.batch['roi'][:2], td.batch['roi'][2:], [0,0], [td.R, td.R]) + cv2result = croprescale_image_cv2(td.batch['image'], roi=td.batch['roi'], new_size=(td.R, td.R)) + torchresult = croprescale_image_torch(td.batch['image'], roi=td.batch['roi'], new_size=(td.R, td.R)) + points = affinevecmul(tr.tensor(), td.batch['pt3d_68'][...,:2]) + assert torch.sqrt(torch.nn.functional.mse_loss(cv2result, torchresult)).item() <= 7. + vis(td, cv2result, points) + check(td, cv2result, points) + check(td, torchresult, points) \ No newline at end of file diff --git a/test/test_datatransformation.py b/test/test_datatransformation.py new file mode 100644 index 0000000..75ca337 --- /dev/null +++ b/test/test_datatransformation.py @@ -0,0 +1,22 @@ +import torch +import pytest + +from trackertraincode.datatransformation.sample_geometric import GeneralFocusRoi + +@pytest.mark.parametrize('bbox,f,t,bbs, expected',[ + ([-10,-10,10,10], 1., [-1.,0.], 0.3, [-16,-10,4,10]), + ([-10,-10,10,10], 1., [ 1.,0.], 0.3, [-4,-10,16,10]), + ([-10,-10,10,10], 1., [0.,-1.], 0.3, [-10,-16,10,4]), + ([-10,-10,10,10], 1., [0., 1.], 0.3, [-10,-4,10,16]), + ([-10,-10,10,10], 2., [0., 0.], 0.3, [-20,-20,20,20]), + ([-10,-10,10,10], 2., [-1., 0.], 0.3, [-36,-20,4,20]), + ([-10,-10,10,10], 0.5, [0., 0.], 0.3, [-5,-5,5,5]), + ([-10,-10,10,10], 0.5, [-1., 0.], 0.3, [-13,-5,-3,5]), +]) +def test_compute_view_roi(bbox, f, t, bbs, expected): + outbox = GeneralFocusRoi._compute_view_roi( + face_bbox = torch.tensor(bbox, dtype=torch.float32), + enlargement_factor = torch.tensor(f), + translation_factor = torch.tensor(t), + beyond_border_shift = bbs) + assert outbox.numpy().tolist() == expected \ No newline at end of file diff --git a/test/test_eval.py b/test/test_eval.py new file mode 100644 index 0000000..1fc24b1 --- /dev/null +++ b/test/test_eval.py @@ -0,0 +1,104 @@ +from torch import nn +from torch import Tensor +import torch + +import math +from scipy.spatial.transform import Rotation +import pytest +import numpy as np + +from trackertraincode.eval import PerspectiveCorrector, _compute_mean_rotation, compute_opal_paper_alignment + + +def _make_rotations_around_mean(mean : Rotation, count : int, spread_deg : float, random_state : np.random.RandomState | None): + output = Rotation.identity(num=0) + while len(output) < count: + random_rots = Rotation.random(count, random_state) + mask = (mean.inv() * random_rots).magnitude() < spread_deg * np.pi / 180. + random_rots = random_rots[mask] + output = Rotation.concatenate([output,random_rots]) + return output[:count] + + + +def test_mean_rotation(): + # Setup + # Warning: It works only for small rotations up to pi/2 at the very most + center_rot = Rotation.from_euler('XYZ', [20.,10.,5.], degrees=True) + random_rots = _make_rotations_around_mean(center_rot, 100, 60., np.random.RandomState(seed=123456)) + # Test if the average matches the known mean + mean = _compute_mean_rotation(random_rots) + assert (mean.inv() * center_rot).magnitude() < 5. * np.pi / 180. 
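`_compute_mean_rotation` is only exercised above, not shown. A common way to realize such a mean, and essentially what scipy's `Rotation.mean` does, is the chordal-L2 quaternion mean: the principal eigenvector of the accumulated quaternion outer products, which is invariant to the q/-q sign ambiguity. A sketch (the repository's implementation may differ in details):

```python
# Chordal-L2 mean of a set of rotations; a standard construction, not
# necessarily identical to _compute_mean_rotation in trackertraincode.eval.
import numpy as np
from scipy.spatial.transform import Rotation

def quaternion_mean(rotations: Rotation) -> Rotation:
    q = rotations.as_quat()                    # (N, 4); q and -q encode the same rotation
    M = np.einsum('ni,nj->ij', q, q)           # outer products cancel the sign ambiguity
    _, eigvecs = np.linalg.eigh(M)             # eigenvalues in ascending order
    return Rotation.from_quat(eigvecs[:, -1])  # principal eigenvector

rots = Rotation.random(100, random_state=42)
print((quaternion_mean(rots).inv() * rots.mean()).magnitude())  # ~0
```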
+ + +def test_opal_alignment(): + rand_state = np.random.RandomState(seed=123456) + N = 100 + center1 = Rotation.from_euler('XYZ', [20.,10.,5.], degrees=True) + center2 = Rotation.from_euler('XYZ', [4.,2.,30.], degrees=True) + center3 = Rotation.from_euler('XYZ', [3.,5.,10.], degrees=True) + center4 = Rotation.from_euler('XYZ', [5.,20.,5.], degrees=True) + #rots1, rots2, rots3, rots4 = (Rotation.from_quat(np.broadcast_to(c.as_quat(), (N,4))) for c in [center1,center2,center3,center4]) + rots1, rots2, rots3, rots4 = (_make_rotations_around_mean(c, N, 30., rand_state) for c in [center1,center2,center3,center4]) + rots12 = Rotation.concatenate([rots1,rots2]) + rots34 = Rotation.concatenate([rots3,rots4]) + clusters = np.concatenate([np.full((N,), 0, dtype=np.int32), np.full((N,), 1, dtype=np.int32)]) + + print(f"expected delta1: {(center1.inv()*center3).magnitude()*180./np.pi}") + print(f"expected delta2: {(center2.inv()*center4).magnitude()*180./np.pi}") + print(f"actual delta: {(rots12[:N].mean().inv() * center3).magnitude()*180./np.pi}") + print(f"actual delta: {(rots12[N:].mean().inv() * center4).magnitude()*180./np.pi}") + + assert (rots12[:N].mean().inv() * center3).magnitude() > 15. * np.pi / 180. + assert (rots12[N:].mean().inv() * center4).magnitude() > 30. * np.pi / 180. + + aligned12 = Rotation.from_quat(compute_opal_paper_alignment(torch.from_numpy(rots12.as_quat()), torch.from_numpy(rots34.as_quat()), clusters).numpy()) + + print(f"aligned delta: {(aligned12[:N].mean().inv() * center3).magnitude()*180./np.pi}") + print(f"aligned delta: {(aligned12[N:].mean().inv() * center4).magnitude()*180./np.pi}") + + assert (aligned12[:N].mean().inv() * center3).magnitude() < 3. * np.pi / 180. + assert (aligned12[N:].mean().inv() * center4).magnitude() < 3. * np.pi / 180. + + + +def fov_h(fov, aspect): + # w/h = aspect + # w/f = 2*tan(fov_w/2) + # h/f = 2*tan(a) + # aspect = tan(fov_w/2)/tan(a) + # -> a = atan(1/aspect * tan(fov_w/2)) + return 2.*math.atan(1./aspect*math.tan(fov/2.*math.pi/180.))*180./math.pi + + +@pytest.mark.parametrize("fov, image_size, coord, pose, expected", [ + # Rotation matches the fov angle when position is at the edge of the screen (horizontal) + (90.0, [200,100], [200.,50.,1.], Rotation.identity(), Rotation.from_rotvec([0.,45.,0.],degrees=True)), + # Rotation matches the fov angle when position is at the edge of the screen (vertical) + (90.0, [200,100], [100.,100.,1.], Rotation.identity(), Rotation.from_rotvec([-fov_h(90.,2.)/2.,0.,0.],degrees=True)), + # Returns identity for position in the center + (90.0, [200,100], [100.,50.,1.], Rotation.identity(), Rotation.identity()), + # Test if original rotation is considered + (90.0, [200,100], [100.,50.,1.], Rotation.from_rotvec([10.,20.,30.],degrees=True), Rotation.from_rotvec([10.,20.,30.],degrees=True)), +]) +def test_perspective_corrector(fov, image_size, coord, pose, expected): + corrector = PerspectiveCorrector(fov) + result = Rotation.from_quat(corrector.corrected_rotation( + image_sizes = torch.as_tensor(image_size,dtype=torch.long), + coord = torch.as_tensor(coord, dtype=torch.float32), + pose = torch.from_numpy(pose.as_quat()).to(dtype=torch.float32) + ).numpy()) + assert Rotation.approx_equal(expected, result, atol=0.01, degrees=True), f"Converted to quats: expected = {expected.as_quat()} vs result = {result.as_quat()}" + + +def test_make_look_at_matrix(): + m = PerspectiveCorrector.make_look_at_matrix(torch.as_tensor([0.,0.,1.])).numpy() + np.testing.assert_allclose(m, np.eye(3)) + + SQRT3 = math.sqrt(3.) 
+ m = PerspectiveCorrector.make_look_at_matrix(torch.as_tensor([1.,1.,1.])).numpy() + np.testing.assert_allclose(m[:,2], np.asarray([1./SQRT3,1./SQRT3,1./SQRT3])) + assert np.abs(np.dot(m[:,0],np.asarray([0.,1.,0.]))) < 1.e-6 + assert m[0,0] > 0.1 + assert m[1,1] > 0.1 + diff --git a/test/test_io.py b/test/test_io.py new file mode 100644 index 0000000..338068d --- /dev/null +++ b/test/test_io.py @@ -0,0 +1,36 @@ +from trackertraincode.neuralnets import io + +from torch import nn +from torch import Tensor +import torch + +import pytest + +def test_load_save(tmp_path): + class Model(nn.Module): + def __init__(self, input_size : int, width : int): + super().__init__() + self._input_size = input_size + self._width = width + self.layers = nn.Sequential( + nn.Linear(input_size, width), + nn.ReLU(), + nn.Linear(width, 1) + ) + def __call__(self, x : Tensor): + return self.layers(x) + def get_config(self): + return { + 'input_size' : self._input_size, + 'width' : self._width + } + + m = Model(42, 32) + m(torch.zeros((1,42))) # Model is executable? + + filename = tmp_path / 'model.ckpt' + io.save_model(m, filename) + + restored : Model = io.load_model(filename, [Model]) + for p, q in zip(restored.parameters(), m.parameters()): + torch.testing.assert_close(p,q) \ No newline at end of file diff --git a/test/test_landmarks.py b/test/test_landmarks.py index 82e7c56..cf8e012 100644 --- a/test/test_landmarks.py +++ b/test/test_landmarks.py @@ -3,7 +3,6 @@ from matplotlib import projections, pyplot from mpl_toolkits.mplot3d import Axes3D import numpy as np -from scipy.spatial.transform import Rotation import random from functools import partial @@ -17,6 +16,7 @@ from trackertraincode.datasets.dshdf5pose import Hdf5PoseDataset import trackertraincode.vis as vis +# TODO: rename to test_modelcomponents.py def test_landmarks(): @@ -52,5 +52,6 @@ def test_landmarks(): assert torch.max(diff) < 0.01, f"Landmark reconstruction error too large: {torch.max(diff)}" + if __name__ == '__main__': - test_landmarks() \ No newline at end of file + raise RuntimeError("Run pytest") \ No newline at end of file diff --git a/test/test_math.py b/test/test_math.py index e0296d9..faad29f 100644 --- a/test/test_math.py +++ b/test/test_math.py @@ -1,11 +1,12 @@ from scipy.spatial.transform import Rotation from trackertraincode.neuralnets.math import affinevecmul, random_choice -from trackertraincode.neuralnets.torchquaternion import mult, rotate, tomatrix, from_rotvec, iw, to_rotvec, slerp, geodesicdistance +from trackertraincode.neuralnets.torchquaternion import mult, rotate, tomatrix, from_rotvec, iw, to_rotvec, slerp, geodesicdistance, from_matrix, positivereal from trackertraincode.neuralnets.affine2d import Affine2d, roi_normalizing_transform import torch import numpy as np - +import onnx +import io def test_quaternions(): us = Rotation.from_rotvec(np.random.uniform(0.,1.,size=(7,3))) @@ -53,6 +54,56 @@ def test_quaternions(): expected_angle_diff = (rots.inv() * rots2).magnitude() assert np.allclose(angle_difference, expected_angle_diff) + for rots in [Rotation.random(100), Rotation.identity()]: + input_mat = rots.as_matrix() + expected_quat = positivereal(torch.from_numpy(rots.as_quat())) + output_quat = from_matrix(torch.from_numpy(input_mat)) + np.testing.assert_allclose(expected_quat.numpy(), output_quat.numpy()) + + + +def _export_func(model, inputs, filename): + torch.onnx.export( + model, # model being run + inputs, # model input (or a tuple for multiple inputs) + filename, + training=torch.onnx.TrainingMode.EVAL, + 
export_params=True,        # store the trained parameter weights inside the model file
+        opset_version=13,          # the ONNX version to export the model to
+        do_constant_folding=True,  # whether to execute constant folding for optimization
+        keep_initializers_as_inputs=False,
+        verbose=False)
+
+
+def test_quaternion_mult_onnx_export(tmp_path):
+    class Model(torch.nn.Module):
+        def __call__(self, q, p):
+            return mult(q,p)
+
+    q = torch.from_numpy(Rotation.random(10).as_quat())
+    p = torch.from_numpy(Rotation.random(10).as_quat())
+
+    _export_func(
+        Model(),  # model being run
+        (q,p),    # model input (or a tuple for multiple inputs)
+        tmp_path / 'mult_model.onnx')
+
+
+def test_quaternion_rotate_onnx_export(tmp_path):
+    class Model(torch.nn.Module):
+        def __call__(self, q, p):
+            return rotate(q,p)
+
+    q = torch.from_numpy(Rotation.random(10).as_quat())
+    p = torch.from_numpy(np.random.uniform(-10.,10.,size=(10,3)))
+
+    _export_func(
+        Model(),  # model being run
+        (q,p),    # model input (or a tuple for multiple inputs)
+        tmp_path / 'rotate_model.onnx')
+
+
 def test_transforms():
     # Multiplying a transform with its inverse should result in identity matrix
diff --git a/test/test_models_sanity.py b/test/test_models_sanity.py
index 493c733..61d715b 100644
--- a/test/test_models_sanity.py
+++ b/test/test_models_sanity.py
@@ -43,13 +43,17 @@ def onnx_export_and_inference_speed(net):
     print (f"ONNX Inference time: {time/N*1000:.01f} ms averaged over {N} runs")


-def test_pose_network_sanity():
+def test_pose_network_sanity(tmp_path):
     torch.set_num_threads(1)
     net = trackertraincode.neuralnets.models.NetworkWithPointHead(config='mobilenetv1', enable_uncertainty=True)
     net.eval()

     timing_and_output(net, torch.rand(1, 1, net.input_resolution, net.input_resolution))

+    filename = tmp_path / 'model.onnx'
+    trackertraincode.neuralnets.models.save_model(net, filename)
+    trackertraincode.neuralnets.models.load_model(filename)
+
     onnx_export_and_inference_speed(net)

     # Check if gradients can be computed.
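The export tests above only assert that `torch.onnx.export` runs through. A natural follow-up, not part of this changeset, is a numerical round-trip check against ONNX Runtime. A minimal sketch, assuming `onnxruntime` is installed; the helper name `_check_onnx_roundtrip` is hypothetical:

```python
import numpy as np
import onnxruntime
import torch

def _check_onnx_roundtrip(model, inputs, filename, atol=1.e-5):
    # Export as in _export_func above, then compare ONNX Runtime output
    # against the eager PyTorch output on the same inputs.
    _export_func(model, inputs, filename)
    session = onnxruntime.InferenceSession(str(filename))
    feeds = {arg.name: t.numpy() for arg, t in zip(session.get_inputs(), inputs)}
    (onnx_result,) = session.run(None, feeds)
    np.testing.assert_allclose(onnx_result, model(*inputs).numpy(), atol=atol)
```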
diff --git a/test/test_negloglikelihood.py b/test/test_negloglikelihood.py new file mode 100644 index 0000000..65eff4a --- /dev/null +++ b/test/test_negloglikelihood.py @@ -0,0 +1,49 @@ + +import torch +import torch.nn as nn + +from trackertraincode.neuralnets.negloglikelihood import ( + FeaturesAsTriangularScale, + TangentSpaceRotationDistribution, + FeaturesAsDiagonalScale) + + +def test_tangent_space_rotation_distribution(): + with torch.autograd.set_detect_anomaly(True): + B = 5 + q = torch.rand((B, 4), requires_grad=True) + cov_features = torch.rand((B, 6), requires_grad=True) + r = torch.rand((B, 4)) + cov_converter = FeaturesAsTriangularScale(6,3) + dist = TangentSpaceRotationDistribution(q, cov_converter(cov_features)) + val = dist.log_prob(r).sum() + val.backward() + assert q.grad is not None + assert cov_features.grad is not None + + +def test_feature_to_variance_mapping(): + B = 1 + N = 7 + M = 3 + q = torch.zeros((B, N), requires_grad=True) + m = FeaturesAsDiagonalScale(N,M).eval() + v = m(q) + val = v.sum() + val.backward() + assert next(iter(m.parameters())).grad is not None + assert q.grad is not None + torch.testing.assert_close(v, torch.ones((B,M)), atol=0.1, rtol=0.1) + +def test_feature_as_triangular_cov_factor(): + B = 1 + N = 7 + M = 3 + q = torch.zeros((B,N), requires_grad=True) + m = FeaturesAsTriangularScale(N, M).eval() + v = m(q) + val = v.sum() + val.backward() + assert next(iter(m.parameters())).grad is not None + assert q.grad is not None + torch.testing.assert_close(v, torch.eye(M)[None,...], atol=0.1, rtol=0.1) \ No newline at end of file diff --git a/test/test_train.py b/test/test_train.py index 3c86b74..0728e82 100644 --- a/test/test_train.py +++ b/test/test_train.py @@ -10,20 +10,22 @@ from trackertraincode.datatransformation import PostprocessingDataLoader -def update_fun(net, batch : Batch, optimizer : torch.optim.Optimizer, state : train.State, loss): +def update_fun(net, batch : Batch, optimizer : torch.optim.Optimizer, state : train.State, loss : train.CriterionGroup): optimizer.zero_grad() y = net(batch['image']) - lossvals : List[train.LossVal] = loss(y, batch) - l = sum((l.val for l in lossvals), 0.) + lossvals : List[train.LossVal] = loss.evaluate(y, batch, state.step) + lossvals = [ v._replace(weight = v.val.new_full(v.val.shape, v.weight)) for v in lossvals ] + l = sum((l.val.sum() for l in lossvals), 0.) 
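+    # The criterion now returns unreduced per-sample losses; the scalar weights
+    # were broadcast above to tensors of matching shape, so the returned records
+    # carry per-sample values alongside their weights.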
l.backward()
     optimizer.step()
-    return [ (l.name,l.val) for l in lossvals ]
+    lossvals = [ v._replace(val = v.val.detach().to('cpu', non_blocking=True)) for v in lossvals ]
+    return [lossvals]


 def test_run_the_training():
     class LossMock(object):
         def __call__(self, pred, batch):
-            return torch.nn.functional.mse_loss(pred, batch['y'])
+            return torch.nn.functional.mse_loss(pred, batch['y'], reduction='none')
     class MockDataset(Dataset):
         def __init__(self, n):
             self.n = n
@@ -38,6 +40,7 @@ def __getitem__(self, i):
         torch.nn.Linear(5,128),
         torch.nn.ReLU(),
         torch.nn.Linear(128,5))
+    net.get_config = lambda : {}

     trainloader = DataLoader(MockDataset(20), batch_size=2, collate_fn=Batch.collate)
     testloader = PostprocessingDataLoader(MockDataset(8), batch_size=2, collate_fn=Batch.collate, unroll_list_of_batches=True)
@@ -55,7 +58,7 @@ def cbsleep(state):
         net,
         trainloader,
         testloader,
-        functools.partial(update_fun,loss=train.MultiTaskLoss([ c1, c3 ])),
+        functools.partial(update_fun,loss=train.CriterionGroup([ c1, c3 ])),
         train.DefaultTestFunc([c1, c2]),
         callbacks = [cbsleep, train.SaveBestCallback(net, 'c2', model_dir='/tmp',retain_max=3)],
         close_plot_on_exit=True,
diff --git a/test/test_utils.py b/test/test_utils.py
index ba65544..9d9fcdc 100644
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -9,7 +9,7 @@ def test_affine3d():
     t1 = np.random.rand(3)
     inv = utils.affine3d_inv((R1,t1))
     R2, t2 = utils.affine3d_chain((R1,t1), inv)
-    np.testing.assert_allclose(R2.as_matrix(), np.eye(3))
+    np.testing.assert_allclose(R2.as_matrix(), np.eye(3), atol = 1.e-15)
     np.testing.assert_allclose(t2, 0., atol = 1.e-9)


diff --git a/trackertraincode/backbones/resnet.py b/trackertraincode/backbones/resnet.py
index e084974..75296d3 100644
--- a/trackertraincode/backbones/resnet.py
+++ b/trackertraincode/backbones/resnet.py
@@ -52,7 +52,6 @@ def __init__(self, *args, **kwargs):
         super().__init__()

         use_blurpool = kwargs.pop('use_blurpool')
-        print ("ResNet blurpool = ", use_blurpool)

         kwargs['block']=CustomBlock if use_blurpool else torchvision.models.resnet.BasicBlock

diff --git a/trackertraincode/datasets/batch.py b/trackertraincode/datasets/batch.py
index e1c1c87..b4c9509 100644
--- a/trackertraincode/datasets/batch.py
+++ b/trackertraincode/datasets/batch.py
@@ -142,6 +142,14 @@ def undo_collate(self) -> List["Batch"]:
     def pin_memory(self):
         return Batch(self.meta, pin_memory(self._data))

+    def copy(self):
+        '''Shallow copy.'''
+        return Batch(self.meta, **self._data)
+
+    def to(self, *args, **kwargs):
+        assert all(isinstance(x,torch.Tensor) for x in self._data.values()), "Only applicable to PyTorch"
+        return Batch(self.meta, ((k,v.to(*args,**kwargs)) for k,v in self._data.items()))
+

 class Collation(object):
     def __init__(self, divide_by_tag : bool = True, divide_by_image_size : bool = False, ragged_categories : Optional[Set[Any]] = None):
diff --git a/trackertraincode/datasets/dshdf5.py b/trackertraincode/datasets/dshdf5.py
index 54308e1..6bea2ae 100644
--- a/trackertraincode/datasets/dshdf5.py
+++ b/trackertraincode/datasets/dshdf5.py
@@ -81,6 +81,9 @@ def __setitem__(self, index : int, value) :
     def __len__(self):
         return len(self.ds)

+    def resize(self, size, axis):
+        return self.ds.resize(size, axis)
+
     @cached_property
     def attrs(self):
         return self.ds.attrs
@@ -125,6 +128,9 @@ def __getitem__(self, index : int):
     def __len__(self):
         return len(self._filelist)

+    def resize(self, size, axis):
+        return self._ds.resize(size, axis)
+
     @cached_property
     def attrs(self):
         return self._ds.attrs
@@ -183,6 +189,9 @@ def attrs(self):
     def
__len__(self): return len(self.ds) + def resize(self, size, axis): + return self.ds.resize(size, axis) + @staticmethod def create(g : h5py.Group, name : str, size : int, sample_dimensionality : int, maxsize : Optional[int] = None): dt = np.dtype([('shape','i4',(sample_dimensionality,)), ('minval', 'f4'), ('maxval','f4'), ('buffer',variable_length_hdf5_buffer_dtype)]) diff --git a/trackertraincode/datatransformation/__init__.py b/trackertraincode/datatransformation/__init__.py index 8abbc4d..5add010 100644 --- a/trackertraincode/datatransformation/__init__.py +++ b/trackertraincode/datatransformation/__init__.py @@ -1,12 +1,11 @@ +from trackertraincode.datatransformation.misc import PutRoiFromLandmarks, StabilizeRoi +from trackertraincode.datatransformation.image_geometric_torch import croprescale_image_torch, affine_transform_image_torch +from trackertraincode.datatransformation.image_geometric_cv2 import affine_transform_image_cv2, croprescale_image_cv2 from trackertraincode.datatransformation.affinetrafo import ( position_normalization, position_unnormalization, - apply_affine2d, - affine_transform_image_torch, - affine_transform_image_cv2, - croprescale_image_torch, - croprescale_image_cv2) + apply_affine2d) -from trackertraincode.datatransformation.imageaugment import ( +from trackertraincode.datatransformation.image_intensity import ( KorniaImageDistortions, RandomBoxBlur, RandomPlasmaBrightness, @@ -32,8 +31,7 @@ collate_list_of_batches, undo_collate, DeleteKeys, - WhitelistKeys, - to_device) + WhitelistKeys) from trackertraincode.datatransformation.normalization import ( normalize_batch, @@ -51,9 +49,7 @@ from_numpy_or_tensor ) -from trackertraincode.datatransformation.otheraugment import ( - PutRoiFromLandmarks, - StabilizeRoi, +from trackertraincode.datatransformation.sample_geometric import ( RandomFocusRoi, FocusRoi, RoiFocusRandomizationParameters, diff --git a/trackertraincode/datatransformation/affinetrafo.py b/trackertraincode/datatransformation/affinetrafo.py index deba3a5..769dfa4 100644 --- a/trackertraincode/datatransformation/affinetrafo.py +++ b/trackertraincode/datatransformation/affinetrafo.py @@ -1,21 +1,11 @@ import numpy as np -from copy import copy -from typing import Callable, Set, Sequence, Union, List, Tuple, Dict, Optional, NamedTuple, Any -from PIL import Image -from numpy.typing import NDArray -import enum -import cv2 +from typing import Callable, Set, Sequence, Union, List, Optional, NamedTuple, Literal import torch -from torch import Tensor -import torch.nn.functional as F -from torchvision.transforms.functional import crop, resize -import kornia.filters from trackertraincode.datasets.dshdf5pose import FieldCategory, imagelike_categories from trackertraincode.neuralnets.affine2d import Affine2d from trackertraincode.neuralnets.math import affinevecmul -from trackertraincode.datatransformation.core import _ensure_image_nchw, _ensure_image_nhwc def position_normalization(w : int ,h : int): @@ -26,154 +16,6 @@ def position_unnormalization(w : int, h : int): return Affine2d.range_remap_2d([-1.,-1.], [1.,1.], [0., 0.], [w, h]) -def _extract_size_tuple(new_size : Union[int, Tuple[int,int]]): - try: - new_w, new_h = new_size - except TypeError: - assert int(new_size), "Must be convertible to single integer" - new_w = new_h = new_size - return new_w, new_h - - -def _fixup_image_format_for_resample(img : Tensor): - original_dtype = img.dtype - assert original_dtype in (torch.uint8, torch.float32, torch.float16) - if img.device == torch.device('cpu'): - new_dtype = 
torch.float32 # Float32 and Uint8 are not supported - else: - new_dtype = torch.float16 if original_dtype==torch.uint8 else original_dtype - img = img.to(new_dtype) - def restore(x : Tensor): - if original_dtype == torch.uint8: - x = x.clip(0., 255.).to(dtype=torch.uint8) - return x - return img, restore - - -def _numpy_extract_roi(img : np.ndarray, roi : Tensor): - # TODO: use OpenCV's copy border function? - h, w, c = img.shape - x0, y0, x1, y1 = tuple(map(int,roi)) - xmin = min(x0,0) - ymin = min(y0,0) - xmax = max(x1,w) - ymax = max(y1,h) - canvas_size = (ymax-ymin,xmax-xmin,c) - canvas = np.zeros(canvas_size, dtype=img.dtype) - x0 -= xmin - x1 -= xmin - y0 -= ymin - y1 -= ymin - canvas[0-ymin:h-ymin,0-xmin:w-xmin,:] = img - img = np.ascontiguousarray(canvas[y0:y1,x0:x1,:]) - return img - - -def croprescale_image_cv2(img : Tensor, roi : Tensor, new_size): - new_w,new_h = _extract_size_tuple(new_size) - img = _ensure_image_nhwc(img) - img = _numpy_extract_roi(img.numpy(), roi) - interpolation = cv2.INTER_AREA if (img.shape[-2] >= new_size) else cv2.INTER_LINEAR - img = cv2.resize(img, (new_w, new_h), interpolation=interpolation) - img = torch.from_numpy(img) - if img.ndim == 2: # Add back channel dimension which might have been removed by opencv - img = img[...,None] - img = _ensure_image_nchw(img) - return img - - -def affine_transform_image_cv2(img : Tensor, tr : Affine2d, new_size : Union[int, Tuple[int,int]]): - new_w,new_h = _extract_size_tuple(new_size) - img = _ensure_image_nhwc(img) - scale_factor = float(tr.scales.numpy()) - if scale_factor > 1.: - img = cv2.warpAffine( - img.numpy(), - M=tr.tensor().numpy(), - dsize=(new_w,new_h), - flags=cv2.INTER_LINEAR, - borderMode=cv2.BORDER_CONSTANT, - borderValue=None) - else: - rot_w, rot_h = round(new_w/scale_factor), round(new_h/scale_factor) - scale_compensation = rot_h / new_h - scaletr = Affine2d.trs(scales=torch.tensor(scale_compensation)) - rotated = cv2.warpAffine( - img.numpy(), - M=(scaletr @ tr).tensor().numpy(), - dsize=(rot_w,rot_h), - flags=cv2.INTER_LINEAR, - borderMode=cv2.BORDER_CONSTANT, - borderValue=None) - img = cv2.resize(rotated, dsize=(new_w, new_h), interpolation=cv2.INTER_AREA) - if img.ndim == 2: # Add back channel dimension which might have been removed by opencv - img = img[...,None] - return torch.from_numpy(_ensure_image_nchw(img)) - - -def _normalize_transform(tr : Affine2d, wh : Tuple[int,int], new_wh : Tuple[int,int]): - '''Normalize an affine image transform so that the input/output domain is [-1,1] instead of pixel ranges. - Image size is given as (width,height) tuples. - ''' - w, h = wh - new_w, new_h = new_wh - m1= Affine2d.range_remap_2d([-1,-1], [1,1], [0, 0], [w, h])[None,...] - m2 = Affine2d.range_remap_2d([0, 0], [new_w, new_h], [-1,-1], [1, 1])[None,...] - return m2 @ tr @ m1 - - -def croprescale_image_torch(img : Tensor, roi : Tensor, new_size : Union[int, Tuple[int,int]]): - assert roi.dtype == torch.int32 - assert img.ndim == 3 - assert roi.ndim == 1 - img, restore_dtype = _fixup_image_format_for_resample(img) - new_w, new_h = _extract_size_tuple(new_size) - lt = roi[:2] - wh = roi[2:]-roi[:2] - img = crop(img, lt[1], lt[0], wh[1], wh[0]) - img = resize(img, (new_h, new_w), antialias=True) - assert img.shape[-2:] == (new_h, new_w), f"Torchvision resize failed. Expected shape ({new_h,new_w}). Got {img.shape[-2:]}." 
-    img = restore_dtype(img)
-    return img
-
-
-def affine_transform_image_torch(tmp : Tensor, tr : Affine2d, new_size : Union[int, Tuple[int,int]], antialias=False):
-    '''Basically torch.grid_sample.
-
-    WARNING: tr is defined w.r.t. pixel ranges, not [-1,1]
-    '''
-    # For regular images. TODO: semseg
-    tmp, restore_dtype = _fixup_image_format_for_resample(tmp)
-    new_w, new_h = _extract_size_tuple(new_size)
-    C, H, W = tmp.shape
-    tmp = tmp[None,...] # Add batch dim
-    tr = tr[None,...]
-    tr_normalized = _normalize_transform(tr, (W,H), (new_w, new_h))
-    tr_normalized = tr_normalized.inv().tensor()
-    if antialias:
-        # A little bit of Anti-aliasing
-        # WARNING: this is inefficient as the filter size gets larger
-        scaling = tr.scales
-        sampling_distance = 1./scaling
-        ks = 0.4*sampling_distance
-        intks = max(3,int(ks*3))
-        intks = intks if (intks&1)==1 else (intks+1)
-        tmp = kornia.filters.gaussian_blur2d(tmp, (intks,intks), (ks,ks), border_type='constant', separable=True)
-    grid = F.affine_grid(
-        tr_normalized.to(device=tmp.device),
-        [1, C, new_h, new_w],
-        align_corners=False)
-    if 0: # Debugging
-        from matplotlib import pyplot
-        pyplot.imshow(tmp[0,0], extent=[-1.,1.,1.,-1.])
-        pyplot.scatter(grid[0,:,:,0].ravel(), grid[0,:,:,1].ravel())
-        pyplot.show()
-    tmp = F.grid_sample(tmp, grid, align_corners=False, mode='bilinear', padding_mode='zeros')
-    tmp = tmp[0] # Remove batch dim
-    tmp = restore_dtype(tmp)
-    return tmp
-
-
 def handle_backtransform_insertion(sample : dict, W : int, H : int, tr : Affine2d, type : str = 'tensor'):
     assert type in ('tensor','ndarray')
     if (prev_tr := sample.get('image_backtransform', None)) is not None:
diff --git a/trackertraincode/datatransformation/image_geometric_cv2.py b/trackertraincode/datatransformation/image_geometric_cv2.py
new file mode 100644
index 0000000..882856f
--- /dev/null
+++ b/trackertraincode/datatransformation/image_geometric_cv2.py
@@ -0,0 +1,137 @@
+import numpy as np
+from numpy.typing import NDArray
+from typing import Literal, Any, Tuple, Union, Optional
+
+from trackertraincode.datatransformation.core import _ensure_image_nchw, _ensure_image_nhwc
+from trackertraincode.neuralnets.affine2d import Affine2d
+
+import scipy.signal.windows
+import torch
+from torch import Tensor
+import cv2
+
+
+DownFilters = Literal['gaussian','hamming','area']
+UpFilters = Literal['linear','cubic','lanczos']
+
+
+def _extract_size_tuple(new_size : Union[int, Tuple[int,int]]):
+    try:
+        new_w, new_h = new_size
+    except TypeError:
+        assert int(new_size), "Must be convertible to single integer"
+        new_w = new_h = new_size
+    return new_w, new_h
+
+
+def _numpy_extract_roi(img : NDArray, roi : Tensor):
+    # TODO: use OpenCV's copy border function?
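+    # The ROI may extend beyond the image borders. The image is therefore first
+    # embedded into a zero-filled canvas large enough to contain both, so the
+    # final crop is a plain slice. E.g. a 100x100 image with roi=(-10,-10,50,50)
+    # yields a 110x110 canvas and a 60x60 crop.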
+    h, w, c = img.shape
+    x0, y0, x1, y1 = tuple(map(int,roi))
+    xmin = min(x0,0)
+    ymin = min(y0,0)
+    xmax = max(x1,w)
+    ymax = max(y1,h)
+    canvas_size = (ymax-ymin,xmax-xmin,c)
+    canvas = np.zeros(canvas_size, dtype=img.dtype)
+    x0 -= xmin
+    x1 -= xmin
+    y0 -= ymin
+    y1 -= ymin
+    canvas[0-ymin:h-ymin,0-xmin:w-xmin,:] = img
+    img = np.ascontiguousarray(canvas[y0:y1,x0:x1,:])
+    return img
+
+
+def _apply_antialias_filter(img : NDArray[Any], scale_factor : float, filter : str) -> NDArray[Any]:
+    if filter == 'gaussian':
+        ks = 0.5 / scale_factor
+        return cv2.GaussianBlur(img, (0,0), ks, ks, cv2.BORDER_REPLICATE)
+    elif filter == 'hamming':
+        ks = 1.0 / scale_factor
+        intks = max(1,round(ks*2+1))
+        intks = intks if (intks&1) else (intks+1) # Make it odd
+        # kern becomes a 1d array containing the hamming window
+        kern = scipy.signal.windows.hamming(intks)
+        kern /= np.sum(kern)
+        # Pretend it was separable. But it actually isn't. This function applies
+        # the filter first along rows then along columns.
+        return cv2.sepFilter2D(img, -1, kern, kern)
+    else:
+        raise NotImplementedError(f"Filter: {filter}")
+
+
+def _resize(img, new_w, new_h, downfilter : DownFilters, upfilter : UpFilters) -> NDArray[Any]:
+    old_h, old_w = img.shape if len(img.shape)==2 else img.shape[-3:-1]
+    scale_factor = 0.5*(new_w/old_w + new_h/old_h)
+    filter = downfilter if scale_factor<1. else upfilter
+    if filter not in ('gaussian','hamming'):
+        interp = {
+            'linear' : cv2.INTER_LINEAR,
+            'cubic' : cv2.INTER_CUBIC,
+            'lanczos' : cv2.INTER_LANCZOS4,
+            'area' : cv2.INTER_AREA
+        }[filter]
+        return cv2.resize(img, dsize=(new_w, new_h), interpolation=interp)
+    else:
+        return cv2.resize(_apply_antialias_filter(img, scale_factor, filter), dsize=(new_w, new_h), interpolation=cv2.INTER_LINEAR)
+
+
+def affine_transform_image_cv2(img : Tensor, tr : Affine2d, new_size : Union[int, Tuple[int,int]], downfilter : Optional[DownFilters] = None, upfilter : Optional[UpFilters] = None):
+    '''Anti-aliased warpAffine.
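+
+    Upscaling warps the image directly with the chosen interpolation. Downscaling
+    first warps to an intermediate image of roughly the source resolution and then
+    resizes down with an anti-aliasing filter, since cv2.warpAffine itself does
+    not anti-alias.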
+ Args: + img: Input in hwc format + tr: Affine transformation that can be provided to cv2.warpAffine + new_size: Output size that can be provided to cv2.warpAffine + downfilter: Filter when downsampling + upfilter: Filter when upsampling + ''' + upfilter = 'linear' if upfilter is None else upfilter + downfilter = 'area' if downfilter is None else downfilter + new_w,new_h = _extract_size_tuple(new_size) + img = _ensure_image_nhwc(img) + scale_factor = float(tr.scales.numpy()) + if scale_factor > 1.: + upscale_warp_interp = { + 'linear' : cv2.INTER_LINEAR, + 'cubic' : cv2.INTER_CUBIC, + 'lanczos' : cv2.INTER_LANCZOS4 + }[upfilter] + tr = tr @ Affine2d.trs(translations=torch.tensor([0.5,0.5])) + img = cv2.warpAffine( + img.numpy(), + M=tr.tensor().numpy(), + dsize=(new_w,new_h), + flags=upscale_warp_interp, + borderMode=cv2.BORDER_CONSTANT, + borderValue=None) + else: + rot_w, rot_h = round(new_w/scale_factor), round(new_h/scale_factor) + scale_compensation = rot_h / new_h + scaletr = Affine2d.trs(scales=torch.tensor(scale_compensation)) + rotated = cv2.warpAffine( + img.numpy(), + M=(scaletr @ tr).tensor().numpy(), + dsize=(rot_w,rot_h), + flags=cv2.INTER_LINEAR, + borderMode=cv2.BORDER_CONSTANT, + borderValue=None) + img = _resize(rotated, new_w, new_h, downfilter, upfilter) + if img.ndim == 2: # Add back channel dimension which might have been removed by opencv + img = img[...,None] + return torch.from_numpy(_ensure_image_nchw(img)) + + +def croprescale_image_cv2(img : Tensor, roi : Tensor, new_size : Union[int,Tuple[int,int]], downfilter : Optional[DownFilters] = None, upfilter : Optional[UpFilters] = None): + upfilter = 'linear' if upfilter is None else upfilter + downfilter = 'area' if downfilter is None else downfilter + new_w,new_h = _extract_size_tuple(new_size) + img = _ensure_image_nhwc(img) + img = _numpy_extract_roi(img.numpy(), roi) + img = _resize(img, new_w, new_h, downfilter, upfilter) + img = torch.from_numpy(img) + if img.ndim == 2: # Add back channel dimension which might have been removed by opencv + img = img[...,None] + img = _ensure_image_nchw(img) + return img \ No newline at end of file diff --git a/trackertraincode/datatransformation/image_geometric_torch.py b/trackertraincode/datatransformation/image_geometric_torch.py new file mode 100644 index 0000000..2009bdd --- /dev/null +++ b/trackertraincode/datatransformation/image_geometric_torch.py @@ -0,0 +1,89 @@ +from typing import Tuple, Union + +import torch.nn.functional as F +import torch +from torch import Tensor +from torchvision.transforms.functional import crop, resize + +import kornia.filters + +from trackertraincode.datatransformation.image_geometric_cv2 import _extract_size_tuple +from trackertraincode.neuralnets.affine2d import Affine2d + + +def _fixup_image_format_for_resample(img : Tensor): + original_dtype = img.dtype + assert original_dtype in (torch.uint8, torch.float32, torch.float16) + if img.device == torch.device('cpu'): + new_dtype = torch.float32 # Float16 and Uint8 are not supported + else: + new_dtype = torch.float16 if original_dtype==torch.uint8 else original_dtype + img = img.to(new_dtype) + def restore(x : Tensor): + if original_dtype == torch.uint8: + x = x.clip(0., 255.).to(dtype=torch.uint8) + return x + return img, restore + + +def croprescale_image_torch(img : Tensor, roi : Tensor, new_size : Union[int, Tuple[int,int]]): + assert roi.dtype == torch.int32 + assert img.ndim == 3 + assert roi.ndim == 1 + img, restore_dtype = _fixup_image_format_for_resample(img) + new_w, new_h = 
_extract_size_tuple(new_size) + lt = roi[:2] + wh = roi[2:]-roi[:2] + img = crop(img, lt[1], lt[0], wh[1], wh[0]) + img = resize(img, (new_h, new_w), antialias=True) + assert img.shape[-2:] == (new_h, new_w), f"Torchvision resize failed. Expected shape ({new_h,new_w}). Got {img.shape[-2:]}." + img = restore_dtype(img) + return img + + +def _normalize_transform(tr : Affine2d, wh : Tuple[int,int], new_wh : Tuple[int,int]): + '''Normalize an affine image transform so that the input/output domain is [-1,1] instead of pixel ranges. + Image size is given as (width,height) tuples. + ''' + w, h = wh + new_w, new_h = new_wh + m1= Affine2d.range_remap_2d([-1,-1], [1,1], [0, 0], [w, h])[None,...] + m2 = Affine2d.range_remap_2d([0, 0], [new_w, new_h], [-1,-1], [1, 1])[None,...] + return m2 @ tr @ m1 + + +def affine_transform_image_torch(tmp : Tensor, tr : Affine2d, new_size : Union[int, Tuple[int,int]], antialias=False): + '''Basically torch.grid_sample. + + Args: + tr The image transform. It is defined w.r.t. pixel ranges, not [-1,1] + ''' + tmp, restore_dtype = _fixup_image_format_for_resample(tmp) + new_w, new_h = _extract_size_tuple(new_size) + C, H, W = tmp.shape + tmp = tmp[None,...] # Add batch dim + tr = tr[None,...] + tr_normalized = _normalize_transform(tr, (W,H), (new_w, new_h)) + tr_normalized = tr_normalized.inv().tensor() + if antialias: + # A little bit of Anti-aliasing + # WARNING: this is inefficient as the filter size gets larger + scaling = tr.scales + sampling_distance = 1./scaling + ks = 0.4*sampling_distance + intks = max(3,int(ks*3)) + intks = intks if (intks&1)==1 else (intks+1) + tmp = kornia.filters.gaussian_blur2d(tmp, (intks,intks), (ks,ks), border_type='constant', separable=True) + grid = F.affine_grid( + tr_normalized.to(device=tmp.device), + [1, C, new_h, new_w], + align_corners=False) + if 0: # Debugging + from matplotlib import pyplot + pyplot.imshow(tmp[0,0], extent=[-1.,1.,1.,-1.]) + pyplot.scatter(grid[0,:,:,0].ravel(), grid[0,:,:,1].ravel()) + pyplot.show() + tmp = F.grid_sample(tmp, grid, align_corners=False, mode='bilinear', padding_mode='zeros') + tmp = tmp[0] # Remove batch dim + tmp = restore_dtype(tmp) + return tmp \ No newline at end of file diff --git a/trackertraincode/datatransformation/imageaugment.py b/trackertraincode/datatransformation/image_intensity.py similarity index 100% rename from trackertraincode/datatransformation/imageaugment.py rename to trackertraincode/datatransformation/image_intensity.py diff --git a/trackertraincode/datatransformation/loader.py b/trackertraincode/datatransformation/loader.py index 52d4991..46b6832 100644 --- a/trackertraincode/datatransformation/loader.py +++ b/trackertraincode/datatransformation/loader.py @@ -117,11 +117,4 @@ def __call__(self, sample): for k in list(sample.keys()): if not k in self.keys: del sample[k] - return sample - -# TODO move to core -def to_device(device : str, batch : Batch): - batch = copy(batch) - for k, v in batch.items(): - batch[k] = v.to(device, non_blocking=True) - return batch \ No newline at end of file + return sample \ No newline at end of file diff --git a/trackertraincode/datatransformation/misc.py b/trackertraincode/datatransformation/misc.py new file mode 100644 index 0000000..e95ba34 --- /dev/null +++ b/trackertraincode/datatransformation/misc.py @@ -0,0 +1,55 @@ +from trackertraincode.facemodel.bfm import BFMModel, ScaledBfmModule +from trackertraincode.neuralnets.modelcomponents import PosedDeformableHead +from trackertraincode.pipelines import Batch + + +import torch + + 
+class PutRoiFromLandmarks(object): + def __init__(self, extend_to_forehead = False): + self.extend_to_forehead = extend_to_forehead + self.headmodel = PosedDeformableHead(ScaledBfmModule(BFMModel())) + + def _create_roi(self, landmarks3d, sample): + if self.extend_to_forehead: + vertices = self.headmodel( + sample['coord'], + sample['pose'], + sample['shapeparam']) + min_ = torch.amin(vertices[...,:2], dim=-2) + max_ = torch.amax(vertices[...,:2], dim=-2) + else: + min_ = torch.amin(landmarks3d[...,:2], dim=-2) + max_ = torch.amax(landmarks3d[...,:2], dim=-2) + roi = torch.cat([min_, max_], dim=0).to(torch.float32) + return roi + + def __call__(self, sample : Batch): + if 'pt3d_68' in sample: + sample['roi'] = self._create_roi(sample['pt3d_68'], sample) + return sample + + +class StabilizeRoi(object): + def __init__(self, alpha=0.01, destination='roi'): + self.roi_filter_alpha = alpha + self.last_roi = None + self.last_id = None + self.destination = destination + + def filter_roi(self, sample): + roi = sample['roi'] + id_ = sample['individual'] if 'individual' in sample else None + if id_ == self.last_id and self.last_roi is not None: + roi = self.roi_filter_alpha*roi + (1.-self.roi_filter_alpha)*self.last_roi + # print (f"Filt: {id_}") + # else: + # print (f"Raw: {id_}") + self.last_roi = roi + self.last_id = id_ + return roi + + def __call__(self, batch): + batch[self.destination] = self.filter_roi(batch) + return batch \ No newline at end of file diff --git a/trackertraincode/datatransformation/otheraugment.py b/trackertraincode/datatransformation/sample_geometric.py similarity index 69% rename from trackertraincode/datatransformation/otheraugment.py rename to trackertraincode/datatransformation/sample_geometric.py index 2790605..fbf5e5d 100644 --- a/trackertraincode/datatransformation/otheraugment.py +++ b/trackertraincode/datatransformation/sample_geometric.py @@ -1,8 +1,6 @@ import numpy as np from copy import copy from typing import Callable, Set, Sequence, Union, List, Tuple, Dict, Optional, NamedTuple -from functools import partial -import enum import torch from torch import Tensor @@ -10,27 +8,24 @@ from trackertraincode.pipelines import Batch from trackertraincode.datasets.batch import Metadata -from trackertraincode.datasets import preprocessing from trackertraincode.neuralnets.affine2d import Affine2d from trackertraincode.datasets.dshdf5pose import FieldCategory, imagelike_categories -from trackertraincode.neuralnets.math import random_choice, random_uniform from trackertraincode.datatransformation.core import get_category +from trackertraincode.datatransformation.image_geometric_cv2 import ( + affine_transform_image_cv2, croprescale_image_cv2, DownFilters, UpFilters) from trackertraincode.datatransformation.affinetrafo import ( apply_affine2d, - croprescale_image_torch, - croprescale_image_cv2, - affine_transform_image_cv2, - affine_transform_image_torch, position_normalization, position_unnormalization) -from trackertraincode.facemodel.bfm import ScaledBfmModule, BFMModel -from trackertraincode.neuralnets.modelcomponents import PosedDeformableHead + class RoiFocusRandomizationParameters(NamedTuple): scales : torch.Tensor # Shape B angles : torch.Tensor # Shape B translations : torch.Tensor # Shape (B, 2) + upfilter : Optional[UpFilters] = None + downfilter : Optional[DownFilters] = None def RandomFocusRoi(new_size, roi_variable='roi', rotation_aug_angle : float = 30., extension_factor = 1.1, insert_backtransform=False): @@ -48,7 +43,6 @@ def FocusRoi(new_size, 
extent_factor, roi_variable='roi', insert_backtransform=F
         roi_variable,
         insert_backtransform)

-
 class MakeRoiRandomizationParameters(object):
     def __init__(self, rotation_aug_angle, extension_factor):
         self.rotation_aug_angle = rotation_aug_angle
@@ -58,10 +52,13 @@ def __call__(self, B : tuple) -> RoiFocusRandomizationParameters:
         scales = torch.randn(size=B).mul(0.1).clip(-0.5,0.5).add(self.extension_factor)
         translations = torch.randn(size=B+(2,)).mul(0.5).clip(-1., 1.)
         angles = self._pick_angles(B, self.rotation_aug_angle) if self.rotation_aug_angle else torch.zeros(size=B)
+
         return RoiFocusRandomizationParameters(
             scales = scales,
             angles = angles,
-            translations = translations)
+            translations = translations,
+            upfilter = 'linear',
+            downfilter = 'area')

     def _pick_angles(self, B : tuple, angle : float):
         angles = torch.full(B, fill_value=np.pi*angle/180.)
@@ -89,30 +86,46 @@ def __init__(self, make_randomization_parameters, new_size, roi_variable, insert
         self._max_beyond_border_shift = 0.3
         self.make_randomization_parameters = make_randomization_parameters

-
-    def _compute_view_roi(self, face_roi : torch.Tensor, enlargement_factor : torch.Tensor, translation_distribution : torch.Tensor, beyond_border_shift : float):
+
+    @staticmethod
+    def _compute_view_roi(face_bbox : torch.Tensor, enlargement_factor : torch.Tensor, translation_factor : torch.Tensor, beyond_border_shift : float):
         '''
-        enlargement_factor: By how much the current roi is scaled up
-        translation_distribution: Random number between -1 and 1 indicating the movement of the face roi within the expanded roi
-        beyond_border_shift: Fraction of the original roi by which the face can be moved beyond the border of the expanded roi
+        Computes the expanded and shifted ROI based on the face bounding box.
+
+        Case 1: small roi
+              |--- bbox ----|
+            |-roi-|
+            <-> At most [beyond_border_shift] of ROI side length
+        Case 2: large roi
+              |--- bbox ----|
+            |-------- roi -------|
+            <--> At most [beyond_border_shift] of bounding box side length
+
+        Args:
+            enlargement_factor: By how much the face bounding box is scaled up
+            translation_factor: Random number between -1 and 1 indicating the movement of the face roi within the expanded roi
+            beyond_border_shift: By how much ROI and original bounding box may be shifted apart so that one sticks out beyond the other, as a fraction of the smaller of the two side lengths.
         '''
-        x0, y0, x1, y1 = torch.moveaxis(face_roi, -1, 0)
-        rx, ry = translation_distribution.moveaxis(-1,0)
-        roi_w = x1-x0
-        roi_h = y1-y0
+        assert face_bbox.shape[:-1] == enlargement_factor.shape
+        assert face_bbox.shape[:-1] == translation_factor.shape[:-1]
+        x0, y0, x1, y1 = face_bbox.unbind(-1)
+        rx, ry = translation_factor.unbind(-1)
+        # Size and center of the BBox.
+        bbox_w = x1-x0
+        bbox_h = y1-y0
         cx = 0.5*(x1+x0)
         cy = 0.5*(y1+y0)
-        size = torch.maximum(roi_w, roi_h)*enlargement_factor
-        wiggle_room_x = F.relu(size-roi_w)
-        wiggle_room_y = F.relu(size-roi_h)
-        tx = (wiggle_room_x * 0.5 + roi_w * beyond_border_shift) * rx
-        ty = (wiggle_room_y * 0.5 + roi_h * beyond_border_shift) * ry
+        # Size of the expanded ROI.
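+        # Worked example (cf. test_compute_view_roi): for bbox=(-10,-10,10,10),
+        # enlargement_factor=1, translation_factor=(-1,0) and beyond_border_shift=0.3
+        # the expanded size below is 20, wiggle_room_x = 0 + 0.3*20 = 6, and the
+        # resulting ROI is (-16,-10,4,10).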
+ size = torch.maximum(bbox_w, bbox_h)*enlargement_factor + wiggle_room_x = 0.5*torch.abs(size-bbox_w) + beyond_border_shift*torch.minimum(size, bbox_w) + wiggle_room_y = 0.5*torch.abs(size-bbox_h) + beyond_border_shift*torch.minimum(size, bbox_h) + tx = wiggle_room_x * rx + ty = wiggle_room_y * ry x0 = cx - size*0.5 + tx x1 = cx + size*0.5 + tx y0 = cy - size*0.5 + ty y1 = cy + size*0.5 + ty new_roi = torch.stack([x0, y0, x1, y1], dim=-1) - new_roi = torch.round(new_roi).to(torch.int32) return new_roi @@ -156,13 +169,15 @@ def __call__(self, sample : Batch): self._maybe_account_for_video(sample.meta, params) view_roi = self._compute_view_roi(roi, params.scales, params.translations, self._max_beyond_border_shift) + view_roi = torch.round(view_roi).to(torch.int32) tr = self._compute_point_transform_from_roi(B, view_roi, self.new_size) tr = self._center_rotation_tr(params.angles) @ tr + # TODO: this won't work for videos ... if params.angles.item() != 0.: - image_transform_function = lambda img: affine_transform_image_cv2(img, tr, self.new_size) + image_transform_function = lambda img: affine_transform_image_cv2(img, tr, self.new_size, downfilter=params.downfilter, upfilter=params.upfilter) else: - image_transform_function = lambda img: croprescale_image_cv2(img, view_roi, self.new_size) + image_transform_function = lambda img: croprescale_image_cv2(img, view_roi, self.new_size, downfilter=params.downfilter, upfilter=params.upfilter) for k, v in sample.items(): c = get_category(sample, k) @@ -179,55 +194,6 @@ def __call__(self, sample : Batch): return sample -class PutRoiFromLandmarks(object): - def __init__(self, extend_to_forehead = False): - self.extend_to_forehead = extend_to_forehead - self.headmodel = PosedDeformableHead(ScaledBfmModule(BFMModel())) - - def _create_roi(self, landmarks3d, sample): - if self.extend_to_forehead: - vertices = self.headmodel( - sample['coord'], - sample['pose'], - sample['shapeparam']) - min_ = torch.amin(vertices[...,:2], dim=-2) - max_ = torch.amax(vertices[...,:2], dim=-2) - else: - min_ = torch.amin(landmarks3d[...,:2], dim=-2) - max_ = torch.amax(landmarks3d[...,:2], dim=-2) - roi = torch.cat([min_, max_], dim=0).to(torch.float32) - return roi - - def __call__(self, sample : Batch): - if 'pt3d_68' in sample: - sample['roi'] = self._create_roi(sample['pt3d_68'], sample) - return sample - - -class StabilizeRoi(object): - def __init__(self, alpha=0.01, destination='roi'): - self.roi_filter_alpha = alpha - self.last_roi = None - self.last_id = None - self.destination = destination - - def filter_roi(self, sample): - roi = sample['roi'] - id_ = sample['individual'] if 'individual' in sample else None - if id_ == self.last_id and self.last_roi is not None: - roi = self.roi_filter_alpha*roi + (1.-self.roi_filter_alpha)*self.last_roi - # print (f"Filt: {id_}") - # else: - # print (f"Raw: {id_}") - self.last_roi = roi - self.last_id = id_ - return roi - - def __call__(self, batch): - batch[self.destination] = self.filter_roi(batch) - return batch - - def horizontal_flip_and_rot_90(p_rot : float, sample : Batch): assert sample.meta.batchsize == 0 do_flip = np.random.randint(0,2) == 0 diff --git a/trackertraincode/eval.py b/trackertraincode/eval.py index a54bbc7..4c4d060 100644 --- a/trackertraincode/eval.py +++ b/trackertraincode/eval.py @@ -4,6 +4,8 @@ from typing import Iterable, Optional, Tuple, Dict, Any, Union, List, NamedTuple from copy import copy from numpy.typing import NDArray +from scipy.spatial.transform import Rotation +import math import torch from 
torch import nn @@ -13,6 +15,7 @@ from trackertraincode.neuralnets.affine2d import Affine2d import trackertraincode.datatransformation as dtr +import trackertraincode.neuralnets.torchquaternion as torchquaternion import trackertraincode.utils as utils from trackertraincode.pipelines import whiten_image @@ -118,14 +121,7 @@ def __call__(self, batch): class PytorchPoseNetwork(InferenceNetwork): def __init__(self, modelfile, device): - # The checkpoints are not self-describing. So we must have a matching network in code. - net = trackertraincode.neuralnets.models.NetworkWithPointHead( - enable_point_head=True, - enable_face_detector=False, - config='mobilenetv1', - enable_uncertainty=True) - state_dict = torch.load(modelfile) - net.load_state_dict(state_dict) + net = trackertraincode.neuralnets.models.load_model(modelfile) net.eval() net.to(device) self._net = net @@ -158,8 +154,11 @@ def _apply_backtrafo(backtrafo : Affine2d, batch : Batch): @torch.no_grad() def predict(net : InferenceNetwork, images : List[Tensor], rois : Optional[Tensor] = None, focus_roi_expansion_factor : float = 1.2) -> Batch: ''' - Unnormalized uint8 images - rois according to standard conventions + Args: + net : Inference function + images : Unnormalized uint8 images in HWC format + rois: Tensor in x0,y0,x1,y1 format + focus_roi_expansion_factor: Factor by which to enlarge the cropping region. Normally it's the squareified ROI. ''' B = len(images) H,W,C = images[-1].shape @@ -201,7 +200,7 @@ def create_batch(image, roi): preds = dtr.unnormalize_batch(preds) if net.device_for_input != input_device: - preds = dtr.to_device(input_device, preds) + preds = preds.to(input_device) if roi_focus is not None: batch = dtr.unnormalize_batch(batch) @@ -335,3 +334,103 @@ def _compute_bin_masks(self, pose_gt : Tensor): ((a <= abs_yaw_deg) & (abs_yaw_deg < b)) for (a,b) in bounds_list ] return masks + + +def _compute_displacement(mean_rot : Rotation, rots : Rotation): + return (mean_rot.inv() * rots).as_rotvec() + + +def _compute_mean_rotation(rots : Rotation, tol=0.0001, max_iter=100000): + # Adapted from https://github.com/pcr-upm/opal23_headpose/blob/main/test/evaluator.py#L111C1-L126C27 + # Exclude samples outside the sphere of radius pi/2 for convergence + rots = rots[rots.magnitude() < np.pi/2] + mean_rot = rots[0] + for _ in range(max_iter): + displacement = _compute_displacement(mean_rot, rots) + displacement = np.mean(displacement, axis=0) + d_norm = np.linalg.norm(displacement) + if d_norm < tol: + break + mean_rot = mean_rot * Rotation.from_rotvec(displacement) + return mean_rot + + +def compute_opal_paper_alignment(pose_pred : Tensor, pose_target : Tensor, cluster_ids : NDArray[np.int32]): + assert pose_pred.get_device() == -1 # CPU + assert pose_target.get_device() == -1 # CPU + clusters = np.unique(cluster_ids) + out = torch.empty_like(pose_pred) + print ("Aligning clusters", clusters) + for id_ in clusters: + mask = cluster_ids == id_ + pred_rot = Rotation.from_quat(pose_pred[mask].numpy()) + target_rot = Rotation.from_quat(pose_target[mask].numpy()) + align_rot = _compute_mean_rotation(target_rot.inv()*pred_rot) + #print (f"id = {id_}, align = {align_rot.magnitude()*180./np.pi}, {np.count_nonzero(mask)} items") + # (P (T^-1 * P)^-1 )^-1 T + # ----+---- + # align_rot + # => (P P^-1 T)^-1 T = Identity + pred_rot = pred_rot * align_rot.inv() + out[mask] = torch.from_numpy(pred_rot.as_quat()).to(pose_pred.dtype) + return out + + +class PerspectiveCorrector: + def __init__(self, fov): + self._fov = fov + self.f = 1. 
/ math.tan(fov*math.pi/180.*0.5)
+
+    def corrected_rotation(self, image_sizes : Tensor, coord : Tensor, pose : Tensor):
+        '''
+            Explanation through top view
+
+                         ^ face-local z-axis
+            z-axis ^     |     ^ direction under which the CNN "sees" the face through its crop
+                   |    _|__ /
+                   |   /    \
+                   |  | face |
+                   |   \ __ /
+                   |     /      Note: <----> marks the face crop
+                   |    /
+        -----------------------<-x->-------------- screen
+                   |   /   xy_normalized
+                 f |  /
+                   | /
+                   |/
+        camera     x ------> x-axis
+
+        Thus, it is apparent that the CNN sees the face approximately under an angle spanned by the forward
+        direction and the 3d position of the face. The more wide-angle the lens is the stronger the effect.
+        As usual, perspective distortion within the crop is neglected.
+        Hence, we assume that the detected rotation is given w.r.t. a coordinate system whose z-axis is
+        aligned with the position vector as illustrated. Consequently, the resulting pose is simply the
+        CNN output transformed into the world coordinate system.
+
+        Beware, position correction is handled in the evaluation scripts. It's much simpler as we only have
+        to consider the offset and scaling due to the cropping and resizing to the CNN input size.
+
+        Args:
+            image_sizes: B x [Width, Height]
+        '''
+        xy_image = coord[...,:2]
+        half_image_size_tensor = 0.5*image_sizes
+        xy_normalized = (xy_image - half_image_size_tensor) / half_image_size_tensor[0]
+        fs = torch.as_tensor(self.f, device=xy_image.device).expand_as(xy_normalized[...,:-1])
+        xyz = torch.cat([xy_normalized, fs],dim=-1)
+        m = PerspectiveCorrector.make_look_at_matrix(xyz)
+        out = torchquaternion.mult(torchquaternion.from_matrix(m), pose)
+        return out
+
+    @staticmethod
+    def make_look_at_matrix(pos : Tensor):
+        '''Computes a rotation matrix where the z axis is aligned with the argument vector.
+
+        This leaves a degree of rotation around this axis. This is resolved by constraining
+        the x axis to the horizontal plane (perpendicular to the global y-axis).
+        '''
+        z = pos / torch.norm(pos, dim=-1, keepdim=True)
+        x = torch.cross(*torch.broadcast_tensors(pos.new_tensor([0.,1.,0.]),z),dim=-1)
+        x = x / torch.norm(x, dim=-1, keepdim=True)
+        y = torch.cross(z, x, dim=-1)
+        y = y / torch.norm(y, dim=-1, keepdim=True)
+        M = torch.stack([x,y,z],dim=-1)
+        return M
+
\ No newline at end of file
diff --git a/trackertraincode/neuralnets/bnfusion.py b/trackertraincode/neuralnets/bnfusion.py
new file mode 100644
index 0000000..cc80d62
--- /dev/null
+++ b/trackertraincode/neuralnets/bnfusion.py
@@ -0,0 +1,58 @@
+from typing import Tuple, Dict, Any
+import copy
+
+import torch
+import torch.nn as nn
+import torch.fx as fx
+
+
+def _split_name(target : str) -> Tuple[str, str]:
+    """
+    Splits a ``qualname`` into parent path and last atom.
+    For example, `foo.bar.baz` -> (`foo.bar`, `baz`)
+    """
+    *parent, name = target.rsplit('.', 1)
+    return parent[0] if parent else '', name
+
+
+def replace_node_module(node: fx.Node, modules: Dict[str, Any], new_module: torch.nn.Module):
+    assert(isinstance(node.target, str))
+    parent_name, name = _split_name(node.target)
+    setattr(modules[parent_name], name, new_module)
+
+
+def fuse_convbn(net : fx.GraphModule):
+    '''From https://pytorch.org/tutorials/intermediate/fx_conv_bn_fuser.html'''
+    net = copy.deepcopy(net)
+    modules = dict(net.named_modules())
+    for node in net.graph.nodes:
+        # The FX IR contains several types of nodes, which generally represent
+        # call sites to modules, functions, or methods. The type of node is
+        # determined by `Node.op`.
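+        # Fusion folds the BN affine transform into the convolution:
+        #   W' = W * gamma / sqrt(running_var + eps)
+        #   b' = (b - running_mean) * gamma / sqrt(running_var + eps) + beta
+        # which is what torch.nn.utils.fuse_conv_bn_eval computes below.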
+        if node.op != 'call_module': # If our current node isn't calling a Module then we can ignore it.
+            continue
+        # For call sites, `Node.target` represents the module/function/method
+        # that's being called. Here, we check `Node.target` to see if it's a
+        # batch norm module, and then check `Node.args[0].target` to see if the
+        # input `Node` is a convolution.
+        if type(modules[node.target]) is nn.BatchNorm2d and type(modules[node.args[0].target]) is nn.Conv2d:
+            if len(node.args[0].users) > 1: # Output of conv is used by other nodes
+                continue
+            print ("FUSING: ", node.target)
+            conv = modules[node.args[0].target]
+            bn = modules[node.target]
+            fused_conv = torch.nn.utils.fuse_conv_bn_eval(conv, bn)
+            replace_node_module(node.args[0], modules, fused_conv)
+            # As we've folded the batch norm into the conv, we need to replace all uses
+            # of the batch norm with the conv.
+            node.replace_all_uses_with(node.args[0])
+            # Now that all uses of the batch norm have been replaced, we can
+            # safely remove the batch norm.
+            net.graph.erase_node(node)
+    net.graph.lint()
+    # After we've modified our graph, we need to recompile our graph in order
+    # to keep the generated code in sync.
+    net.recompile()
+    #print ("FUSION RESULT: ")
+    #net.graph.print_tabular()
+    return net
\ No newline at end of file
diff --git a/trackertraincode/neuralnets/io.py b/trackertraincode/neuralnets/io.py
new file mode 100644
index 0000000..b03dafc
--- /dev/null
+++ b/trackertraincode/neuralnets/io.py
@@ -0,0 +1,44 @@
+from os.path import splitext
+import argparse
+import numpy as np
+import os
+import torch
+import copy
+from typing import Any, Protocol, Container
+
+import torch.onnx
+import torch.nn as nn
+
+
+class SavableModel(Protocol):
+    def state_dict(self) -> dict[str,Any]:
+        ...
+    def get_config(self) -> dict[str,Any]:
+        ...
+    def load_state_dict(self, d : dict[str,Any], strict : bool) -> None:
+        ...
+
+
+def save_model(model : SavableModel, filename : str):
+    contents = {
+        'state_dict' : model.state_dict(),
+        'class_name' : model.__class__.__name__,
+        'config' : model.get_config()
+    }
+    torch.save(contents, filename)
+
+
+class InvalidFileFormatError(Exception):
+    def __init__(self,msg):
+        super().__init__(msg)
+
+
+def load_model(filename : str, class_candidates : Container[type]):
+    contents = torch.load(filename, weights_only=True)
+    if not all(x in contents for x in ['state_dict','class_name','config']):
+        raise InvalidFileFormatError(f'Bad dict contents.
Got {list(contents.keys())}') + class_name = contents['class_name'] + class_ = { c.__name__:c for c in class_candidates }[class_name] + instance : SavableModel = class_(**contents['config']) + instance.load_state_dict(contents['state_dict'], strict=True) + return instance \ No newline at end of file diff --git a/trackertraincode/neuralnets/losses.py b/trackertraincode/neuralnets/losses.py index e208f99..c2d6ec5 100644 --- a/trackertraincode/neuralnets/losses.py +++ b/trackertraincode/neuralnets/losses.py @@ -5,6 +5,7 @@ import torch import torch.nn as nn import torch.nn.functional as F +import pickle import trackertraincode.facemodel.keypoints68 as kpts68 from trackertraincode.neuralnets.gmm import unpickle_scipy_gmm @@ -75,6 +76,25 @@ def __call__(self, pred, sample): return self.eval_on_params(pred['shapeparam'], sample['shapeparam']) +class ShapePlausibilityLoss(nn.Module): + def __init__(self): + super().__init__() + with open(join(dirname(__file__),'../facemodel/shapeparams_gmm.pkl'), 'rb') as f: + scipy_gmm = pickle.load(f) + self.register_buffer('weights', torch.from_numpy(scipy_gmm.weights_))#.to(torch.float32)) + self.register_buffer('scales', torch.from_numpy(scipy_gmm.covariances_).rsqrt())#.to(torch.float32)) + self.register_buffer('means',torch.from_numpy(scipy_gmm.means_))#.to(torch.float32)) + self.register_buffer('fudge_factor', torch.as_tensor(0.001/scipy_gmm.n_components)) #,dtype=torch.float32)) + + def _eval(self, x): + return -torch.logsumexp(-0.5*((x - self.means) * self.scales).square().sum(dim=-1) + torch.log(self.weights) + torch.log(self.scales).sum(dim=-1),dim=-1)*self.fudge_factor + + def __call__(self, pred, sample): + mean_nll = self._eval(pred['shapeparam'][:,None,:].to(torch.float64)).to(torch.float32) + assert len(mean_nll.shape) == 1 + return mean_nll + + class QuaternionNormalizationSoftConstraint(object): def __init__(self, prefix=''): self._prefix = prefix diff --git a/trackertraincode/neuralnets/modelcomponents.py b/trackertraincode/neuralnets/modelcomponents.py index 179c290..5cfd09a 100644 --- a/trackertraincode/neuralnets/modelcomponents.py +++ b/trackertraincode/neuralnets/modelcomponents.py @@ -1,7 +1,7 @@ from __future__ import annotations from os.path import join, dirname -from typing import Union, Optional +from typing import Tuple, Union, Optional import numpy as np import torch import torch.nn as nn @@ -67,10 +67,6 @@ def __init__(self, num_shape=40, num_expr=10): keyeigvecs = torch.from_numpy(full.scaled_bases[:,full.keypoints,:]).contiguous() self.register_buffer('keypts', keypts) self.register_buffer('keyeigvecs', keyeigvecs) - self.eye_left_top = torch.tensor([ 37, 38 ], dtype=torch.long) - self.eye_left_bottom = torch.tensor([ 41, 40 ], dtype=torch.long) - self.eye_right_top = torch.tensor([ 43, 44 ], dtype=torch.long) - self.eye_right_bottom = torch.tensor([ 47, 46 ], dtype=torch.long) def _deformvector(self, shapeparams): # (..., num_eigvecs, 68, 3) x (... , B, num_eigvecs). -> Need for broadcasting and unsqueezing. @@ -154,10 +150,7 @@ def _compute_correction_quat(self): return torch.cat([torch.sin(c), c.new_zeros((2,)), torch.cos(c) ]) def _compute_correction_offset(self): - q = self.p.new_empty((3,)) - q[0] = 0. 
- q[1:] = self.p[1:3] - return q + return torch.cat([self.p.new_zeros((1,)), self.p[1:3]]) def _compute_correction_scale(self): return smoothclip0(self.p[3]) @@ -220,6 +213,21 @@ def freeze_norm_stats(m): p.requires_grad = False +def quaternion_from_features(z : Tensor): + ''' + Returns: + (quaternions, unnormalized quaternions) + ''' + assert torchquaternion.iw == 3 + # The real component can be positive because -q is the same rotation as q. + # Seems easier to learn like so. + quats_unnormalized = torch.cat([ + z[...,torchquaternion.iijk], + smoothclip0(z[...,torchquaternion.iw:])], dim=-1) + quats = torchquaternion.normalized(quats_unnormalized) + return quats, quats_unnormalized + + # TODO: proper test def _test_local_to_global_transform_offset(): from scipy.spatial.transform import Rotation @@ -240,19 +248,6 @@ def _test_local_to_global_transform_offset(): print (pred_c, expect_c, expect_scale) -def quaternion_from_features(z : Tensor): - ''' - Returns: - (quaternions, unnormalized quaternions) - ''' - quats_unnormalized = torch.empty_like(z) - # The real component can be positive because -q is the same rotation as q. - # Seems easier to learn like so. - quats_unnormalized[...,torchquaternion.iw] = smoothclip0(z[...,torchquaternion.iw]) - quats_unnormalized[...,torchquaternion.iijk] = z[...,torchquaternion.iijk] - quats = torchquaternion.normalized(quats_unnormalized) - return quats, quats_unnormalized - if __name__ == '__main__': _test_local_to_global_transform_offset() \ No newline at end of file diff --git a/trackertraincode/neuralnets/models.py b/trackertraincode/neuralnets/models.py index 0d875b5..5c725e7 100644 --- a/trackertraincode/neuralnets/models.py +++ b/trackertraincode/neuralnets/models.py @@ -7,6 +7,7 @@ from torch import Tensor import torchvision.models from trackertraincode.neuralnets.math import inv_smoothclip0, smoothclip0 +import trackertraincode.neuralnets.io from trackertraincode.neuralnets.modelcomponents import ( freeze_norm_stats, @@ -109,8 +110,8 @@ def __init__(self, num_features, enable_uncertainty=False): if self.enable_uncertainty: # pointscales = NLL.FeaturesAsUncorrelatedVariance(num_features, 68, torch.full((68,))) # shapescales = NLL.FeaturesAsUncorrelatedVariance(num_features, 50, torch.full((50,))) - self.point_distrib_scales = NLL.UncorrelatedVarianceParameter(68) - self.shape_distrib_scales = NLL.UncorrelatedVarianceParameter(50) + self.point_distrib_scales = NLL.DiagonalScaleParameter(68) + self.shape_distrib_scales = NLL.DiagonalScaleParameter(50) def forward(self, z, quats, coords) -> Dict[str, Tensor]: shapeparam = self.shapenet(z) @@ -136,7 +137,7 @@ def __init__(self, num_features, enable_uncertainty = False): self.linear = nn.Linear(num_features, 4, bias=True) self.linear.bias.data[torchquaternion.iw] = inv_smoothclip0(torch.as_tensor(0.1)) if enable_uncertainty: - self.uncertainty_net = NLL.FeaturesAsTriangularCovFactor( + self.uncertainty_net = NLL.FeaturesAsTriangularScale( num_features, 3) def forward(self, x) -> Dict[str, Tensor]: @@ -161,7 +162,7 @@ def __init__(self, num_features, enable_uncertainty = False): self.linear = nn.Linear(num_features, 4) self.linear.bias.data[...] 
= torch.tensor([0.0, 0.0, 0.5, 0.5]) if enable_uncertainty: - self.scales = NLL.UncorrelatedVarianceParameter(4) + self.scales = NLL.DiagonalScaleParameter(4) def forward(self, x : Tensor) -> Dict[str, Tensor]: z = self.linear(x) @@ -187,8 +188,7 @@ def __init__(self, num_features, enable_uncertainty = False): self.linear_size = nn.Linear(num_features, 1) self.linear_size.bias.data.fill_(0.5) if enable_uncertainty: - #self.scales = NLL.FeaturesAsUncorrelatedVariance(num_features, 3) - self.scales = NLL.FeaturesAsTriangularCovFactor(num_features, 3) + self.scales = NLL.FeaturesAsTriangularScale(num_features, 3) def forward(self, x : Tensor): coord = torch.cat([ @@ -216,11 +216,13 @@ def create_pose_estimator_backbone(config : str, args : Dict[str,Any]): class NetworkWithPointHead(nn.Module): def __init__( - self, enable_point_head=True, + self, + enable_point_head=True, enable_face_detector=False, config='mobilenetv1', enable_uncertainty=False, dropout_prob = 0.5, + use_local_pose_offset = True, backbone_args = None): super(NetworkWithPointHead, self).__init__() self.enable_point_head = enable_point_head @@ -228,22 +230,35 @@ def __init__( self.finetune = False self.config = config self.enable_uncertainty = enable_uncertainty - if backbone_args is None: - backbone_args = {} + self.use_local_pose_offset = use_local_pose_offset + self._backbone_args = {} if (backbone_args is None) else backbone_args self._input_resolution = (129, 97) - self.convnet = create_pose_estimator_backbone(config, backbone_args) + self.convnet = create_pose_estimator_backbone(config, self._backbone_args) num_features = self.convnet.num_features self.dropout = nn.Dropout(dropout_prob) self.boxnet = BoundingBox(num_features, enable_uncertainty) self.posnet = PositionSizeOutput(num_features, enable_uncertainty) self.quatnet = DirectQuaternionWithNormalization(num_features, enable_uncertainty) + self.local_pose_offset = LocalToGlobalCoordinateOffset() + self.local_pose_offset_kpts = LocalToGlobalCoordinateOffset() if enable_point_head: self.landmarks = Landmarks3dOutput(num_features, enable_uncertainty) if enable_face_detector: self.face_detector = nn.Linear(num_features, 1, bias=True) + def get_config(self): + return { + 'enable_point_head' : self.enable_point_head, + 'enable_face_detector' : self.enable_face_detector, + 'config' : self.config, + 'enable_uncertainty' : self.enable_uncertainty, + 'dropout_prob' : self.dropout.p, + 'use_local_pose_offset' : self.use_local_pose_offset, + 'backbone_args' : self._backbone_args + } + @property def input_resolutions(self) -> Tuple[int]: return self._input_resolution if isinstance(self._input_resolution,tuple) else (self._input_resolution,) @@ -256,14 +271,6 @@ def input_resolution(self) -> int: def name(self) -> str: return type(self).__name__+'_'+self.config - - def load_partial(self, state_dict): - mine = self.state_dict() - assert (not frozenset(state_dict.keys()).difference(frozenset(mine.keys()))), f"Failed to load model dict. 
Keys {frozenset(state_dict.keys()).difference(frozenset(mine.keys()))} not found in present model"
-        mine.update(state_dict)
-        self.load_state_dict(mine)
-
-
     def forward(self, x):
         assert x.shape[2] in self.input_resolutions and \
             x.shape[3] == x.shape[2]
@@ -275,10 +282,23 @@ def forward(self, x):
 
         out.update(self.posnet(x))
         out.update(self.quatnet(x))
+
+        if self.use_local_pose_offset:
+            out.update({
+                'hidden_pose' : out['pose'],
+                'hidden_coord' : out['coord']
+            })
+            quats, coords = self.local_pose_offset(out.pop('pose'), out.pop('coord'))
+            out.update({
+                'pose' : quats,
+                'coord' : coords
+            })
 
         quats, coords = out['pose'], out['coord']
 
         if self.enable_point_head:
+            if self.use_local_pose_offset:
+                quats, coords = self.local_pose_offset_kpts(out['hidden_pose'], out['hidden_coord'])
             out.update(self.landmarks(x, quats, coords))
 
         if self.enable_face_detector:
@@ -308,3 +328,21 @@ def train(self, mode=True):
             self.convnet.apply(freeze_norm_stats)
 
 
+save_model = trackertraincode.neuralnets.io.save_model
+
+
+def load_model(filename : str):
+    def load_legacy(filename : str):
+        sd = torch.load(filename)
+        net = NetworkWithPointHead(
+            enable_point_head=True,
+            enable_face_detector=False,
+            config='resnet18',
+            enable_uncertainty=True,
+            backbone_args = {'use_blurpool' : False}
+        )
+        net.load_state_dict(sd, strict=True)
+        return net
+    try:
+        return trackertraincode.neuralnets.io.load_model(filename, [NetworkWithPointHead])
+    except trackertraincode.neuralnets.io.InvalidFileFormatError:
+        return load_legacy(filename)
\ No newline at end of file
diff --git a/trackertraincode/neuralnets/negloglikelihood.py b/trackertraincode/neuralnets/negloglikelihood.py
index 30a9176..ebeb7d6 100644
--- a/trackertraincode/neuralnets/negloglikelihood.py
+++ b/trackertraincode/neuralnets/negloglikelihood.py
@@ -1,4 +1,5 @@
 from typing import Dict, NamedTuple, Optional, Literal
+import sys
 import torch
 import numpy as np
 from os.path import join, dirname
@@ -16,39 +17,47 @@
 inv_make_positive = inv_smoothclip0
 
 
-class FeaturesAsUncorrelatedVariance(nn.Module):
+class Neck(nn.Module):
     def __init__(self, num_in_features, num_out_features):
         super().__init__()
         self.num_out_features = num_out_features
         self.num_in_features = num_in_features
-        self.lin = nn.Linear(num_in_features, num_out_features, bias=False)
-        self.bn = nn.BatchNorm1d(num_out_features)
-        self.bn.weight.data[...] *= 1.e-4
-        self.bn.bias.data[...] = 1.
-        self.eps = torch.tensor(1.e-2) # Prevent numerical problems due to too small variance
+        self.lin = nn.Linear(num_in_features, num_out_features+1)
+        self.lin.bias.data[...] = inv_make_positive(torch.ones((num_out_features+1)))
+
+    def set_biases(self, x : Tensor):
+        self.lin.bias.data[...,1:] = x
+
+    def forward(self, x : Tensor):
+        x = self.lin(x)
+        return x[...,1:], make_positive(x[...,:1])
+
+
+class FeaturesAsDiagonalScale(nn.Module):
+    def __init__(self, num_in_features, num_out_features):
+        super().__init__()
+        self.neck = Neck(num_in_features, num_out_features)
+        self.eps = torch.tensor(1.e-6) # Prevent numerical problems due to too small variance
 
     def forward(self, x : Tensor):
-        x = self.bn(self.lin(x))
-        x = torch.square(x) + self.eps
+        x, multiplier = self.neck(x)
+        x = make_positive(x) * multiplier + self.eps
         return x
 
 
-class UncorrelatedVarianceParameter(nn.Module):
+class DiagonalScaleParameter(nn.Module):
     '''
     Provides a trainable, input-independent scale parameter which starts off
     as 1 and is guaranteed to be always positive.
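+
+    Illustrative usage: `DiagonalScaleParameter(3)()` yields a positive (3,)-shaped
+    tensor that starts off at 1 (up to the small `eps`); internally `num_out_features+1`
+    values are stored because the first entry acts as a shared multiplier, see `forward`.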
    '''
    def __init__(self, num_out_features):
        super().__init__()
-        self.num_out_features = num_out_features
-        self.num_in_features = num_out_features
-        self.hidden_scale = nn.Parameter(
-            torch.ones((self.num_in_features)).requires_grad_(True),
-            requires_grad=True)
-        self.eps = torch.tensor(1.e-2)
+        initial_values = inv_make_positive(torch.ones((num_out_features+1,)))
+        self.hidden_scale = nn.Parameter(initial_values.requires_grad_(True), requires_grad=True)
+        self.eps = torch.tensor(1.e-6)
 
     def forward(self):
-        return torch.square(self.hidden_scale) + self.eps
+        return make_positive(self.hidden_scale[:1]) * make_positive(self.hidden_scale[1:]) + self.eps
 
 
 SimpleDistributionSwitch = Literal['gaussian','laplace']
@@ -70,15 +79,29 @@ def __call__(self, preds, sample):
         return -self.distribution_class(pred, scale).log_prob(target).mul(self.weights[None,:]).mean(dim=-1)
 
 
+class MixWithUniformProbability(nn.Module):
+    def __init__(self, state_space_volume):
+        super().__init__()
+        self.register_buffer("log_uniform_prob", -torch.as_tensor([state_space_volume]).log())
+        self.register_buffer("log_weights", torch.as_tensor([[ 0.999, 0.001 ]]).log())
+
+    def __call__(self, log_prob):
+        log_uniform = torch.broadcast_to(self.log_uniform_prob, log_prob.shape)
+        return torch.logsumexp(torch.stack([ log_prob, log_uniform ], dim=-1) + self.log_weights, dim=-1)
+
+
 class CorrelatedCoordPoseNLLLoss(nn.Module):
     def __init__(self):
         super().__init__()
+        # Space volume = [-1,1]x[-1,1]x[0,1]
+        self.uniform_mixing = MixWithUniformProbability(4.)
 
     def __call__(self, preds, sample):
         target = sample['coord']
         pred = preds['coord']
         scale : Tensor = preds['coord_scales']
-        return -MultivariateNormal(pred, scale_tril=scale, validate_args=False).log_prob(target)
+        log_prob = MultivariateNormal(pred, scale_tril=scale, validate_args=not sys.flags.optimize).log_prob(target)
+        return -self.uniform_mixing(log_prob)
 
 
 class BoxNLLLoss(nn.Module):
@@ -127,25 +150,29 @@ def __call__(self, preds, sample):
 ## There is the little problem that this distribution
 ## is not normalized over SO3 ...
 
-@torch.jit.script
 def _fill_triangular_matrix(dim : int, z : Tensor):
     '''
         dim: Matrix dimension
         z: Tensor with values to fill into the lower triangular part. First the diagonal amounting to `dim` values. Then offdiagonals.
     '''
-    m = z.new_zeros(z.shape[:-1]+(dim,dim))
+    if dim == 3:
         # Special case for our application because ONNX does not support tril_indices
-        m[:,0,0] = z[:,0]
-        m[:,1,1] = z[:,1]
-        m[:,2,2] = z[:,2]
-        m[:,1,0] = z[:,3]
-        m[:,2,0] = z[:,4]
-        m[:,2,1] = z[:,5]
+        m = z[...,(
+            0, 0, 0,
+            3, 1, 0,
+            4, 5, 2,
+        )].view(*z.shape[:-1],3,3)
+        m = m * z.new_tensor([
+            [ 1., 0., 0. ],
+            [ 1., 1., 0.],
+            [ 1., 1., 1.]
+        ])
         return m
     else:
         # General case
+        m = z.new_zeros(z.shape[:-1]+(dim,dim))
        idx = torch.tril_indices(dim, dim, -1, device=z.device)
        irow, icol = idx[0], idx[1]
        # Ellipsis for the batch dimensions is not supported by the script compiler.
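The `dim == 3` branch above replaces `tril_indices` with a fixed index gather plus a constant mask, which sidesteps the missing ONNX support. A minimal standalone sketch of the same fill (illustrative values, assumes only `torch`):

```python
import torch

# Packed layout per the docstring: diagonal first, then offdiagonals (1,0), (2,0), (2,1).
z = torch.tensor([[1., 2., 3., 4., 5., 6.]])
m = z[..., (0, 0, 0,
            3, 1, 0,
            4, 5, 2)].view(*z.shape[:-1], 3, 3)
m = m * z.new_tensor([[1., 0., 0.],
                      [1., 1., 0.],
                      [1., 1., 1.]])
print(m)  # [[1,0,0],[4,2,0],[5,6,3]] -- the same lower-triangular layout tril_indices yields
```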
@@ -156,51 +183,33 @@ def _fill_triangular_matrix(dim : int, z : Tensor): return m -def _mult_cols_to_make_diag_positive(m: Tensor): - if m.size(-1) == 3 and not m.requires_grad: # Fix for ONNX export - f1 = torch.sign(m[...,0,0]) - f2 = torch.sign(m[...,1,1]) - f3 = torch.sign(m[...,2,2]) - m = m.clone() - m[...,:,0] *= f1[...,None] - m[...,:,1] *= f2[...,None] - m[...,:,2] *= f3[...,None] - return m - else: - return m*torch.sign(torch.diagonal(m, dim1=-2, dim2=-1))[...,None,:] - - -class FeaturesAsTriangularCovFactor(nn.Module): +class FeaturesAsTriangularScale(nn.Module): def __init__(self, num_in_features, dim): super().__init__() self.dim = dim self.num_matrix_params = (dim*(dim+1))//2 - self.num_features = self.num_matrix_params - self.lin = nn.Linear(num_in_features, self.num_features, bias=False) - self.bn = nn.BatchNorm1d(self.num_features) - self.bn.weight.data[...] *= 1.e-4 - self.bn.bias.data[...] = 1. - self.bn.bias.data[:self.dim] = 1. - self.bn.bias.data[self.dim:] = 0. - self.min_diag = 1.e-2 + self.neck = Neck(num_in_features, self.num_matrix_params) + bias_init = inv_make_positive(torch.ones((self.num_matrix_params))) + bias_init[self.dim:] = 0. # Offdiagonals + self.neck.set_biases(bias_init) + min_diag = torch.full((self.num_matrix_params,), 1.e-6) + min_diag[self.dim:] = 0. # Offdiagonals + self.register_buffer("min_diag", min_diag) + def forward(self, x : Tensor): - x = self.bn(self.lin(x)) - x_diags = x[...,:self.dim] - x_offdiags = x[...,self.dim:] - z = x.new_empty(tuple(x.shape[:-1])+(self.num_matrix_params,)) - z[...,:self.dim] = x_diags - z[...,self.dim:] = x_offdiags - m = _fill_triangular_matrix(self.dim, z) - # Equivalent to cholesky(m @ m.mT) - m = _mult_cols_to_make_diag_positive(m) - m += self.min_diag*torch.eye(self.dim, device=x.device).expand(*x.shape[:-1],3,3) - return m + x, multiplier = self.neck(x) + z = torch.cat([ + make_positive(x[...,:self.dim]), + x[...,self.dim:]], + dim=-1) + z = multiplier * z + self.min_diag + return _fill_triangular_matrix(self.dim, z) class TangentSpaceRotationDistribution(object): def __init__(self, quat : Tensor, scale_tril : Optional[Tensor] = None, precision : Optional[Tensor] = None): - self.dist = MultivariateNormal(quat.new_zeros(quat.shape[:-1]+(3,)), scale_tril=scale_tril, precision_matrix=precision, validate_args=False) + self.dist = MultivariateNormal(quat.new_zeros(quat.shape[:-1]+(3,)), scale_tril=scale_tril, precision_matrix=precision, validate_args=not sys.flags.optimize) self.quat = quat def log_prob(self, otherquat : Tensor): @@ -209,143 +218,17 @@ def log_prob(self, otherquat : Tensor): class QuatPoseNLLLoss(nn.Module): + def __init__(self): + super().__init__() + r = torch.pi + v = r*r*r*torch.pi*4./3. + self.uniform_mixing = MixWithUniformProbability(v) + def __call__(self, preds, sample): target = sample['pose'] quat = preds['pose'] cov = preds['pose_scales_tril'] - return -TangentSpaceRotationDistribution(quat, cov).log_prob(target) - - - -########################################################### -## Rotation Laplace Distribution -########################################################### - -# Reimplemented from https://github.com/yd-yin/RotationLaplace/blob/master/rotation_laplace.py -# Grids file taken unchanged from that repo. 
- -class SO3DistributionParams(NamedTuple): - mode : Tensor # Where the peak of the distribution is - cholesky_factor : Tensor # Cholesky decomposition of V S Vt - + log_prob = TangentSpaceRotationDistribution(quat, cov).log_prob(target) + return -self.uniform_mixing(log_prob) -class RotationLaplaceLoss(nn.Module): - def __init__(self): - super().__init__() - grids = torch.from_numpy(np.load(join(dirname(__file__), 'rotation-laplace-grids3.npy'))) - self.register_buffer('grids', grids) - - - def power_function(self, matrix_r : Tensor, cov_factor_tril : Tensor): - '''sqrt(tr(S - At R)) = - - sqrt(tr(cov - cov R0_t R)) - ''' - - # a = (LLt)^-1 = Lt^-1 L^-1 - # a_t = a_t - # out = Lt^-1 L^-1 R - # -> solve L Z1 = R - # -> solve Lt out = Z1 - - # m_z = torch.linalg.solve_triangular(cov_factor_tril, matrix_r, upper=False) - # m_z = torch.linalg.solve_triangular(cov_factor_tril.transpose(-1,-2), m_z, upper=True) - #m_z = torch.linalg.inv(torch.matmul(cov_factor_tril, cov_factor_tril.transpose(-1,-2))) - m_z = torch.cholesky_inverse(cov_factor_tril) - m_cov_diags = torch.diagonal(m_z, dim1=-2, dim2=-1) - m_z = torch.matmul(m_z, matrix_r) - trace_quantity = (m_cov_diags - torch.diagonal(m_z, dim1=-2, dim2=-1)).sum(-1) - if trace_quantity.min() < -1.e-6: - print (f"Warning: Rotation Laplace failure. Trace negative: {trace_quantity.min()}") - print (f"cov factor = ", repr(cov_factor_tril.detach().cpu().numpy())) - power = torch.sqrt(torch.clamp_min(trace_quantity, 1.e-8)) - return power - - - def log_normalization(self, grids : Tensor, cov_factor_tril : Tensor): - # Integral over rotations of exp(-P)/P - # Numerically: log (sum 1/N exp(-P_i)/P_i) - # = log [ 1/N sum exp(-P_i)/P_i ] - - # log [ 1/N sum exp(-P_i -c + c)/P_i ] = - # log [ 1/N sum exp(-P_i +c)*exp(-c)/P_i ] = - # log [ 1/N *exp(-c)* sum exp(-P_i +c)/P_i ] - - grids : Tensor = grids[None,:,:,:] # (1, N, 3, 3) - cov_factor_tril = cov_factor_tril[:, None, :,:] # (B, 1, 3, 3) - N = grids.size(1) - - inv_log_weight = torch.log(torch.tensor(N,dtype=torch.float32)) - # Shape B x N - powers = self.power_function(grids, cov_factor_tril) - stability = torch.amin(powers, dim=-1) - powers = powers.to(torch.float64) - log_exp_sum = torch.log((torch.exp(-powers+stability[:,None])/powers).sum(dim=1)).to(torch.float32) - logF = log_exp_sum - inv_log_weight - stability - return logF - - - def compute_nll(self, gt_quat : Tensor, dist_params : SO3DistributionParams): - # The NLL is log(F(A)) + P + log(P) - # where P = sqrt(tr(S - At R)) - matrix_r = Q.tomatrix(gt_quat) - - # At = V S Vt R0t. - # Therefore - # A = R0 V S Vt - # The choleksy factor is L, and LLt = V S Vt - mr0 = Q.tomatrix(dist_params.mode) - cov_factor = dist_params.cholesky_factor - # Beware of correct transpose axis order! 
- matrix_r = torch.matmul(mr0.transpose(-1,-2), matrix_r) - power = self.power_function(matrix_r, cov_factor) - logF = self.log_normalization(self.grids, cov_factor) - nll = logF + power + torch.log(power) - return nll - - def forward(self, preds, sample): - target = sample['pose'] - - dist_params = SO3DistributionParams( - mode = preds['pose'], - cholesky_factor = preds['pose_scales_tril']) - - return self.compute_nll(target, dist_params) - - -########################################################### -## Tests -########################################################### -# TODO: proper tests -def test_tangent_space_rotation_distribution(): - B = 5 - S = 7 - q = torch.rand((B, 4), requires_grad=True) - cov_features = torch.rand((B, 6), requires_grad=True) - r = torch.rand((B, 4)) - cov_converter = FeaturesAsTriangularCovFactor(3) - dist = TangentSpaceRotationDistribution(q, cov_converter(cov_features)) - dist.rsample((S,)) - val = dist.log_prob(r).sum() - val.backward() - assert cov_converter.scales.grad is not None - assert q.grad is not None - assert cov_features.grad is not None - - -def test_feature_to_variance_mapping(): - B = 5 - N = 7 - q = torch.rand((B, N), requires_grad=True) - m = FeaturesAsUncorrelatedVariance(N) - v = m(q) - val = v.sum() - val.backward() - assert m.scales.grad is not None - assert q.grad is not None - -if __name__ == '__main__': - with torch.autograd.set_detect_anomaly(True): - test_tangent_space_rotation_distribution() - test_feature_to_variance_mapping() \ No newline at end of file diff --git a/trackertraincode/neuralnets/rotation-laplace-grids3.npy b/trackertraincode/neuralnets/rotation-laplace-grids3.npy deleted file mode 100644 index 461d2c8..0000000 Binary files a/trackertraincode/neuralnets/rotation-laplace-grids3.npy and /dev/null differ diff --git a/trackertraincode/neuralnets/torchquaternion.py b/trackertraincode/neuralnets/torchquaternion.py index a082b27..c5b9452 100644 --- a/trackertraincode/neuralnets/torchquaternion.py +++ b/trackertraincode/neuralnets/torchquaternion.py @@ -19,7 +19,22 @@ iijk : Final[slice] = slice(0,3) -def mult(u, v): +def _mat_repr(u : Tensor, indices=(iw,ii,ij,ik,ii,iw,ik,ij,ij,ik,iw,ii,ik,ij,ii,iw)): + umat = u[...,indices] + umat = umat * umat.new_tensor([1.,-1.,-1.,-1.,1.,1.,-1.,1.,1.,1.,1.,-1.,1.,-1.,1.,1.]) + umat = umat.view(*u.shape,4) + return umat + + +def _vec_repr(v : Tensor): + return v[...,[iw,ii,ij,ik]].view(*v.shape,1) + + +def _quat_repr(vec : Tensor): + return vec.view(*vec.shape[:-1])[...,[1,2,3,0]] + + +def mult(u : Tensor, v : Tensor): """ Multiplication of two quaternions. @@ -27,12 +42,7 @@ def mult(u, v): are ordered as (i,j,k,w), i.e. real component last. The other dimension have to match. """ - out = torch.empty_like(u) - out[...,iw] = u[...,iw]*v[...,iw] - u[...,ii]*v[...,ii] - u[...,ij]*v[...,ij] - u[...,ik]*v[...,ik] - out[...,ii] = u[...,iw]*v[...,ii] + u[...,ii]*v[...,iw] + u[...,ij]*v[...,ik] - u[...,ik]*v[...,ij] - out[...,ij] = u[...,iw]*v[...,ij] - u[...,ii]*v[...,ik] + u[...,ij]*v[...,iw] + u[...,ik]*v[...,ii] - out[...,ik] = u[...,iw]*v[...,ik] + u[...,ii]*v[...,ij] - u[...,ij]*v[...,ii] + u[...,ik]*v[...,iw] - return out + return _quat_repr(torch.matmul(_mat_repr(u), _vec_repr(v))) def rotate(q, p): @@ -43,28 +53,18 @@ def rotate(q, p): are ordered as (i,j,k,w), i.e. real component last. The other dimensions follow the default broadcasting rules. 
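+
+    Implementation note: `_mat_repr(u)` materializes the 4x4 left-multiplication
+    matrix of u in (w,i,j,k) ordering, so both quaternion products below reduce to
+    `torch.matmul`; `tmp` is q*p with p treated as a purely imaginary quaternion.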
""" - shape = torch.broadcast_shapes(p.shape[:-1], q.shape[:-1]) - qi = q[...,ii] - qj = q[...,ij] - qk = q[...,ik] - qw = q[...,iw] - pi = p[...,ii] - pj = p[...,ij] - pk = p[...,ik] - tmp = q.new_empty(shape+(4,)) - out = p.new_empty(shape+(3,)) - # Compute tmp = q*p, identifying p with a purly imaginary quaternion. - tmp[...,iw] = - qi*pi - qj*pj - qk*pk - tmp[...,ii] = qw*pi + qj*pk - qk*pj - tmp[...,ij] = qw*pj - qi*pk + qk*pi - tmp[...,ik] = qw*pk + qi*pj - qj*pi + qmat = _mat_repr(q) + pvec = p[...,None] + tmp = torch.matmul(qmat[...,:,1:], pvec) # Compute tmp*q^-1. - out[...,ii] = -tmp[...,iw]*qi + tmp[...,ii]*qw - tmp[...,ij]*qk + tmp[...,ik]*qj - out[...,ij] = -tmp[...,iw]*qj + tmp[...,ii]*qk + tmp[...,ij]*qw - tmp[...,ik]*qi - out[...,ik] = -tmp[...,iw]*qk - tmp[...,ii]*qj + tmp[...,ij]*qi + tmp[...,ik]*qw - return out + tmpmat = _mat_repr(tmp.view(tmp.shape[:-1]), (0,1,2,3, + 1,0,3,2, + 2,3,0,1, + 3,2,1,0)) + out = torch.matmul(tmpmat[...,1:,:], _vec_repr(conjugate(q))) + return out.view(out.shape[:-1]) def tomatrix(q): @@ -91,6 +91,61 @@ def tomatrix(q): return out +def from_matrix(m : Tensor): + # See https://en.wikipedia.org/wiki/Rotation_matrix#Quaternion + # Also inspired by + # https://pytorch3d.readthedocs.io/en/latest/_modules/pytorch3d/transforms/rotation_conversions.html#matrix_to_quaternion + shape = m.shape[:-2] + # Prefix with "batch" dimension. Flatten if multiple leading dimensions. + m = m = m[None,:] if shape == () else m.flatten(0,-3) + + # 4 possibilties to compute the quaternion. Unstable computation with divisions + # by zero or close to zero can occur. Further down, the best conditioned solution + # is picked. + qk_from_k = 0.5*torch.sqrt(m[:,2,2] - m[:,1,1] - m[:,0,0] + 1.0) + qj_from_j = 0.5*torch.sqrt(m[:,1,1] - m[:,2,2] - m[:,0,0] + 1.0) + qi_from_i = 0.5*torch.sqrt(m[:,0,0] - m[:,1,1] - m[:,2,2] + 1.0) + qw_from_w = 0.5*torch.sqrt(m[:,0,0] + m[:,1,1] + m[:,2,2] + 1.0) # Using that qj*qj + qk*qk + qi*qi = qw*qw - 1 + + qw_from_k = 0.25 * (m[:,1,0] - m[:,0,1]) / qk_from_k + qi_from_k = 0.25 * (m[:,2,0] + m[:,0,2]) / qk_from_k + qj_from_k = 0.25 * (m[:,1,2] + m[:,2,1]) / qk_from_k + + qw_from_j = 0.25 * (m[:,0,2] - m[:,2,0]) / qj_from_j + qi_from_j = 0.25 * (m[:,1,0] + m[:,0,1]) / qj_from_j + qk_from_j = 0.25 * (m[:,1,2] + m[:,2,1]) / qj_from_j + + qw_from_i = 0.25 * (m[:,2,1] - m[:,1,2]) / qi_from_i + qj_from_i = 0.25 * (m[:,1,0] + m[:,0,1]) / qi_from_i + qk_from_i = 0.25 * (m[:,0,2] + m[:,2,0]) / qi_from_i + + qi_from_w = 0.25 * (m[:,2,1] - m[:,1,2]) / qw_from_w + qj_from_w = 0.25 * (m[:,0,2] - m[:,2,0]) / qw_from_w + qk_from_w = 0.25 * (m[:,1,0] - m[:,0,1]) / qw_from_w + + quat_candidates = torch.stack([ + torch.stack([qi_from_i, qj_from_i, qk_from_i, qw_from_i], dim=-1), + torch.stack([qi_from_j, qj_from_j, qk_from_j, qw_from_j], dim=-1), + torch.stack([qi_from_k, qj_from_k, qk_from_k, qw_from_k], dim=-1), + torch.stack([qi_from_w, qj_from_w, qk_from_w, qw_from_w], dim=-1), + ], dim=1) + + quat_pick = torch.argmax(torch.nan_to_num(torch.stack([ + qi_from_i,qj_from_j,qk_from_k,qw_from_w + ],dim=-1),-1000.),dim=-1) + + mask = torch.nn.functional.one_hot(quat_pick,4)==1 + quat = quat_candidates[mask] + quat = positivereal(quat) + quat = quat.view(*shape,4) + return quat + + +def conjugate(q : Tensor): + assert iw == 3 + return q * q.new_tensor([-1.,-1.,-1.,1.]) + + def from_rotvec(r, eps=1.e-12): shape = r.shape[:-1] q = r.new_empty(shape+(4,)) diff --git a/trackertraincode/pipelines.py b/trackertraincode/pipelines.py index 86e88fa..90ed5f9 100644 --- 
+++ b/trackertraincode/pipelines.py
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 # coding: utf-8
 
-from typing import List, Dict, Set, Sequence, Any, Type
+from typing import List, Dict, Set, Sequence, Any, Type, Optional
 from os.path import join, dirname
 import numpy as np
 import os
@@ -83,7 +83,7 @@ def whiten_batch(batch : Batch):
 
 
 def make_biwi_datasest(transform=None):
-    filename = join(os.environ['DATADIR'],'biwi.h5')
+    filename = join(os.environ['DATADIR'],'biwi-v3.h5')
     return Hdf5PoseDataset(filename, transform=transform, dataclass=Tag.ONLY_POSE)
 
 
@@ -254,7 +254,8 @@ def make_pose_estimation_loaders(
     use_weights_as_sampling_frequency : bool = True,
     enable_image_aug : bool = True,
     rotation_aug_angle : float = 30.,
-    roi_override : str = True
+    roi_override : str = True,
+    device : Optional[str] = 'cuda',
 ):
     C = transforms.Compose
@@ -367,10 +368,9 @@ def make_pose_estimation_loaders(
         dataset = ds_train,
         weights = train_sets_frequencies)
 
-    loader_trafo_test = [
-        partial(dtr.to_device, 'cuda'),
-        whiten_batch,
-    ]
+    loader_trafo_test = [ whiten_batch ]
+    if device is not None:
+        loader_trafo_test = [ lambda b: b.to(device) ] + loader_trafo_test
 
     if enable_image_aug:
         image_augs = [
@@ -389,7 +389,9 @@ def make_pose_estimation_loaders(
         ]
     else:
         image_augs = []
-    loader_trafo_train = [ partial(dtr.to_device, 'cuda') ] + image_augs + [ whiten_batch ]
+    loader_trafo_train = [ lambda b: b.to(device) ] if device is not None else []
+    loader_trafo_train += image_augs + [ whiten_batch ]
+
     train_loader = dtr.PostprocessingDataLoader(ds_train,
         unroll_list_of_batches = False,
@@ -412,7 +414,7 @@ def make_pose_estimation_loaders(
     return train_loader, test_loader, len(ds_train)
 
 
-def make_validation_loader(name, order = None, use_head_roi = True, num_workers=None):
+def make_validation_loader(name, order = None, use_head_roi = True, num_workers=None, batch_size = 32):
     test_trafo = transforms.Compose([
         dtr.offset_points_by_half_pixel, # For when pixels are considered grid cell centers
         dtr.PutRoiFromLandmarks(extend_to_forehead=use_head_roi)
     ])
@@ -445,7 +447,7 @@ def make_validation_loader(name, order = None, use_head_roi = True, num_workers=
     return dtr.PostprocessingDataLoader(
         ds,
-        batch_size=32,
+        batch_size=batch_size,
         shuffle=False,
         num_workers = num_workers,
         postprocess = None,
diff --git a/trackertraincode/train.py b/trackertraincode/train.py
index e619bb5..463f819 100644
--- a/trackertraincode/train.py
+++ b/trackertraincode/train.py
@@ -1,3 +1,4 @@
+import itertools
 from matplotlib import pyplot
 from collections import namedtuple, defaultdict
 import numpy as np
@@ -7,9 +8,11 @@
 import multiprocessing
 import queue
 import time
+import pickle
 import dataclasses
 import copy
 import os
+import math
 
 from torch import Tensor
 import torch
@@ -18,13 +21,74 @@
 
 from trackertraincode.datasets.batch import Batch
+import trackertraincode.neuralnets.io
 import trackertraincode.utils as utils
 
 
 def weighted_mean(x : Tensor, w : Tensor, dim) -> Tensor:
     return torch.sum(x*w, dim).div(torch.sum(w,dim))
 
-Criterion = namedtuple('Criterion', 'name f w', defaults=(None,None,1.))
+
+class LossVal(NamedTuple):
+    val : Tensor
+    weight : float
+    name : str
+
+
+def concatenated_lossvals_by_name(vals : list[LossVal]):
+    '''Sorts by name and concatenates.
+
+    Assumes that names can occur multiple times. Then corresponding weights and
+    values are concatenated. Useful for concatenating the loss terms from different
+    sub-batches.
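+
+    Illustrative example (hypothetical value tensors t1, t2 and weight tensors w1, w2):
+    [LossVal(t1, w1, 'pose'), LossVal(t2, w2, 'pose')] maps to
+    {'pose': (torch.concat([t1, t2]), torch.concat([w1, w2]))}.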
+ + Return: + Dict[name,(values,weights)] + ''' + value_lists = defaultdict(list) + weight_lists = defaultdict(list) + for v in vals: + value_lists[v.name].append(v.val) + weight_lists[v.name].append(v.weight) + return { + k:(torch.concat(value_lists[k]),torch.concat(weight_lists[k])) for k in value_lists + } + + +class Criterion(NamedTuple): + name : str + f : Callable[[Batch,Batch],Tensor] + w : Union[float,Callable[[int],float]] + + def evaluate(self, pred, batch, step) -> List[LossVal]: + val = self.f(pred,batch) + w = self._eval_weight(step) + return [ LossVal(val, w, self.name) ] + + def _eval_weight(self, step): + if isinstance(self.w, float): + return self.w + else: + return self.w(step) + + +class CriterionGroup(NamedTuple): + criterions : List[Union['CriterionGroup',Criterion]] + name : str = '' + w : Union[float,Callable[[int],float]] = 1.0 + + def _eval_weight(self, step): + if isinstance(self.w, float): + return self.w + else: + return self.w(step) + + def evaluate(self, pred, batch, step) -> List[LossVal]: + w = self._eval_weight(step) + lossvals = sum((c.evaluate(pred, batch, step) for c in self.criterions), start=[]) + lossvals = [ LossVal(v.val,v.weight*w,self.name+v.name) for v in lossvals ] + return lossvals + @dataclasses.dataclass class History: @@ -34,11 +98,6 @@ class History: logplot : bool = True -class LossVal(NamedTuple): - val : Tensor - weight : float - name : str - # From https://stackoverflow.com/questions/15411967/how-can-i-check-if-code-is-executed-in-the-ipython-notebook def in_notebook(): @@ -154,9 +213,14 @@ def summarize_single_train_history(k, h : History): return epochs, values = zip(*h.current_train_buffer) try: + if next(iter(values)).shape != (): + values = np.concatenate(values) + else: + values = np.stack(values) h.train.append((np.average(epochs), np.average(values), np.std(values))) except FloatingPointError: - print (f"Floating point error at {k} in epochs {np.average(epochs)} with values:\n {str(values)}\n") + with np.printoptions(precision=4, suppress=True, threshold=20000): + print (f"Floating point error at {k} in epochs {np.average(epochs)} with values:\n {str(values)} of which there are {len(values)}\n") h.train.append((np.average(epochs), np.nan, np.nan)) h.current_train_buffer = [] @@ -281,10 +345,9 @@ def run_the_training( net.train() for batch in train_iter: - lossvals = update_func(net, batch, optimizer, state) - state.lossvals = lossvals - for name, val in lossvals: - plotter.add_train_point(epoch, state.step, name, val.detach().to('cpu',non_blocking=True)) + trainlossvals = update_func(net, batch, optimizer, state) + for name, (val, _) in concatenated_lossvals_by_name(itertools.chain.from_iterable(trainlossvals)).items(): + plotter.add_train_point(epoch, state.step, name, val) state.step += 1 if state.grad_norm is not None: plotter.add_train_point(epoch, state.step, '|grad L|', state.grad_norm) @@ -333,7 +396,75 @@ def filename(self): def save(self): os.makedirs(self.model_dir, exist_ok=True) - torch.save(self.net.state_dict(), self.filename) + trackertraincode.neuralnets.io.save_model(self.net, self.filename) + + +class DebugData(NamedTuple): + parameters : dict[str,Tensor] + batches : list[Batch] + preds : dict[str,Tensor] + lossvals : list[list[LossVal]] + + def is_bad(self): + '''Checks data for badness. + + Currently NANs and input value range. + + Return: + True if so. 
+ ''' + #TODO: decouple for name of input tensor + for k,v in self.parameters.items(): + if torch.any(torch.isnan(v)): + print(f"{k} is NAN") + return True + for b in self.batches: + for k, v in b.items(): + if torch.any(torch.isnan(v)): + print(f"{k} is NAN") + return True + inputs = b['image'] + if torch.amin(inputs)<-2. or torch.amax(inputs)>2.: + print(f"Input image {inputs.shape} exceeds value limits with {torch.amin(inputs)} to {torch.amax(inputs)}") + return True + for k,v in self.preds.items(): + if torch.any(torch.isnan(v)): + print(f"{k} is NAN") + return True + for lv_list in self.lossvals: + for lv in lv_list: + if torch.any(torch.isnan(lv.val)): + print(f"{lv.name} is NAN") + return True + return False + +class DebugCallback(): + '''For dumping a history of stuff when problems are detected.''' + def __init__(self): + self.history_length = 3 + self.debug_data : List[DebugData] = [] + self.filename = '/tmp/notgood.pkl' + + def observe(self, net_pre_update : nn.Module, batches : list[Batch], preds : dict[str,Tensor], lossvals : list[list[LossVal]]): + '''Record and check. + Args: + batches: Actually sub-batches + lossvals: One list of loss terms per sub-batch + ''' + dd = DebugData( + {k:v.detach().to('cpu', non_blocking=True,copy=True) for k,v in net_pre_update.state_dict().items()}, + [b.to('cpu', non_blocking=True,copy=True) for b in batches ], + {k:v.detach().to('cpu', non_blocking=True,copy=True) for k,v in preds.items()}, + lossvals + ) + if len(self.debug_data) >= self.history_length: + self.debug_data.pop(0) + self.debug_data.append(dd) + torch.cuda.current_stream().synchronize() + if dd.is_bad(): + with open(self.filename, 'wb') as f: + pickle.dump(self.debug_data, f) + raise RuntimeError("Bad state detected") class SaveBestCallback(SaveCallback): @@ -402,28 +533,6 @@ def __call__(self, state : State): self.save() -def _check_loss(loss, pred, batch, name): - if not torch.isfinite(loss).all(): - import pickle - with open('/tmp/pred.pkl', 'wb') as f: - pickle.dump(pred,f) - with open('/tmp/batch.pkl', 'wb') as f: - pickle.dump(batch, f) - raise RuntimeError(f"Non-finite value created by loss {name}") - - -def checked_criterion_eval(lossfunc : Callable, pred : Dict, batch : Dict) -> List[LossVal]: - loss = lossfunc(pred, batch) - if isinstance(loss, Tensor): - # Only enable for debugging: - # _check_loss(loss, pred, batch, type(lossfunc).__name__) - return [LossVal(loss,1.,'')] - elif isinstance(loss, LossVal): - return [loss] - else: - return loss - - def compute_inf_norm_of_grad(net : nn.Module): device = next(iter(net.parameters())).device result = torch.zeros((), device=device, dtype=torch.float32, requires_grad=False) @@ -434,107 +543,70 @@ def compute_inf_norm_of_grad(net : nn.Module): return result -def _convert_multi_task_loss_list(multi_task_terms: Dict[str,List[Tuple[Tensor,float,int]]], device : str) -> Dict[str,Tuple[Tensor,Tensor,Tensor]]: - # Convert list of list of tuples to list of tuples of tensors - def _cvt_item(k, vals_weights_idx): - vals, weights, idxs = zip(*vals_weights_idx) - #print (f"CVT {k}: v {[v.shape for v in vals]}, w {[w.shape for w in weights]}") - vals = [ (val*w).mean() for val, w in zip(vals, weights) ] - vals = torch.stack(vals) - #weights = torch.as_tensor(weights, dtype=torch.float32).to(device, non_blocking=True) - #weights = torch.stack(weights) - weights = torch.stack([w.mean() for w in weights]) - idxs = torch.as_tensor(idxs) - return vals, weights, idxs - return { k:_cvt_item(k,v) for k,v in multi_task_terms.items() } - - -def 
_accumulate_losses_over_batches(multi_task_terms: Sequence[Tuple[Tensor,Tensor,Tensor]], batchsizes : Tensor): - all_lossvals = 0. - for vals, weights, idxs in multi_task_terms: - all_lossvals = all_lossvals + torch.sum(vals*batchsizes[idxs]) - all_lossvals = all_lossvals / torch.sum(batchsizes) - return all_lossvals +# g_debug = DebugCallback() -def default_update_fun(net, batch : List[Batch], optimizer : torch.optim.Optimizer, state : State, loss): - assert isinstance(batch, list) - +def default_update_fun(net, batch : List[Batch], optimizer : torch.optim.Optimizer, state : State, loss : dict[Any, Criterion | CriterionGroup] | Criterion | CriterionGroup): + # global g_debug + optimizer.zero_grad() inputs = torch.concat([b['image'] for b in batch], dim=0) - - assert torch.amin(inputs)>=-2., f"Input out of normal image bounds: {torch.amin(inputs)}" - assert torch.amax(inputs)<= 2., f"Input out of normal image bounds: {torch.amax(inputs)}" preds = net(inputs) - all_multi_task_terms = defaultdict(list) - batchsizes = torch.tensor([ subset.meta.batchsize for subset in batch ], dtype=torch.float32).to(inputs.device, non_blocking=True) + lossvals_by_name = defaultdict(list) + all_lossvals : list[list[LossVal]] = [] + # Iterate over different datasets / loss configurations offset = 0 - for subset_idx, subset in enumerate(batch): + for subset in batch: frames_in_subset, = subset.meta.prefixshape subpreds = { k:v[offset:offset+frames_in_subset,...] for k,v in preds.items() } - loss_func_of_subset = loss[subset.meta.tag] if isinstance(loss, dict) else loss - multi_task_terms = checked_criterion_eval(loss_func_of_subset, subpreds, subset) + # Get loss function and evaluate + loss_func_of_subset : Union[Criterion,CriterionGroup] = loss[subset.meta.tag] if isinstance(loss, dict) else loss + multi_task_terms : List[LossVal] = loss_func_of_subset.evaluate(subpreds, subset, state.epoch) + # Support loss weighting by datasets if 'dataset_weight' in subset: - dataset_weights = subset['dataset_weight'] - assert dataset_weights.size(0) == subset.meta.batchsize + dataset_weight = subset['dataset_weight'] + assert dataset_weight.size(0) == subset.meta.batchsize + multi_task_terms = [ v._replace(weight=v.weight*dataset_weight) for v in multi_task_terms ] else: - dataset_weights = torch.ones((frames_in_subset,), device=inputs.device) + # Else, make the weight member a tensor the same shape as the loss values + multi_task_terms = [ v._replace(weight=v.val.new_full(size=v.val.shape,fill_value=v.weight)) for v in multi_task_terms ] - for elem in multi_task_terms: - weight = dataset_weights * elem.weight - assert weight.shape == elem.val.shape, f"Bad loss {elem.name}" - all_multi_task_terms[elem.name].append((elem.val, weight)) - if state.num_samples_per_loss is not None: - state.num_samples_per_loss[elem.name] += frames_in_subset - + all_lossvals.append(multi_task_terms) del multi_task_terms, loss_func_of_subset offset += frames_in_subset - def _concat_over_subsets(items : List[Tuple[Tensor,Tensor]]): - values, weights = zip(*items) - return ( - torch.concat(values), - torch.concat(weights)) - all_multi_task_terms = { k:_concat_over_subsets(v) for k,v in all_multi_task_terms.items() } + batchsize = sum(subset.meta.batchsize for subset in batch) + # Concatenate the loss values over the sub-batches. + lossvals_by_name = concatenated_lossvals_by_name(itertools.chain.from_iterable(all_lossvals)) + # Compute weighted average, dividing by the batch size which is equivalent to substituting missing losses by 0. 
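+    # Illustrative (hypothetical sizes): with sub-batches of 3 and 1 samples where only the
+    # first one yields an 'nll' term, that term contributes a (3,)-vector of value*weight
+    # products, yet the sum is divided by the full batch size of 4, exactly as if the
+    # missing sample had contributed a zero loss.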
+    loss_sum = torch.concat([ (values*weights) for values,weights in lossvals_by_name.values() ]).sum() / batchsize
+
+    # Transfer to CPU
+    for loss_list in all_lossvals:
+        for i, v in enumerate(loss_list):
+            loss_list[i] = v._replace(val = v.val.detach().to('cpu', non_blocking=True))
 
-    loss_sum = torch.concat([ (values*weights) for values,weights in all_multi_task_terms.values() ]).sum() / batchsizes.sum()
     loss_sum.backward()
-
+
+    # g_debug.observe(net, batch, preds, all_lossvals)
+
     if 1:
         state.grad_norm = compute_inf_norm_of_grad(net).to('cpu', non_blocking=True)
         # Gradients get very large more often than looks healthy ... Loss spikes a lot.
         # Gradient magnitudes below 0.1 seem to be normal. Initially gradients are larger,
-        nn.utils.clip_grad_norm_(net.parameters(), 0.1, norm_type=float('inf'))
+        nn.utils.clip_grad_norm_(net.parameters(), 1.0, norm_type=float('inf'))
 
     optimizer.step()
 
-    # This is only for logging
-    for k, (vals, weights) in all_multi_task_terms.items():
-        all_multi_task_terms[k] = weighted_mean(vals, weights, 0)
-
-    return list(all_multi_task_terms.items())
-
-
-class MultiTaskLoss(object):
-    def __init__(self, criterions : Sequence[Criterion]):
-        self.criterions = criterions
-
-    def __iadd__(self, crit : Criterion):
-        self.criterions += crit
-        return self
-
-    def __call__(self, pred, batch):
-        def _eval_crit(crit : Criterion):
-            return [
-                LossVal(lv.val, lv.weight*crit.w, crit.name+lv.name) for lv in checked_criterion_eval(crit.f, pred, batch) ]
-        return sum((_eval_crit(c) for c in self.criterions), start=[])
+    torch.cuda.current_stream().synchronize()
+    return all_lossvals
 
 
 class DefaultTestFunc(object):
@@ -596,4 +668,23 @@ def lr_func(i):
     else:
         step_index = [j for j,step in enumerate(steps) if i>step][-1]
         return gamma**step_index
+    return LambdaLR(optimizer, lr_func)
+
+
+def ExponentialUpThenSteps(optimizer, num_up, gamma, steps):
+    steps = [0] + steps
+    def lr_func(i):
+        eps = 1.e-2
+        scale = math.log(eps)
+        if i < num_up:
+            f = ((i+1)/num_up)
+            #return torch.sigmoid((f - 0.5) * 15.)
+            # Want a * exp(f/l) == 1 at f=1 and a * exp(f/l) ~= eps at f=0,
+            # which gives a = eps and 1/l = ln(1/eps) = -scale.
+            return eps * math.exp(-scale*f)
+        else:
+            step_index = [j for j,step in enumerate(steps) if i>step][-1]
+            return gamma**step_index
     return LambdaLR(optimizer, lr_func)
\ No newline at end of file
diff --git a/trackertraincode/utils.py b/trackertraincode/utils.py
index 3289253..d1de63b 100644
--- a/trackertraincode/utils.py
+++ b/trackertraincode/utils.py
@@ -13,10 +13,18 @@ def identity(arg):
     return arg
 
 
 def as_hpb(rot):
+    '''This uses an aeronautic-like convention.
+
+    Rotations are applied (in terms of extrinsic rotations) as follows in the given order:
+    Roll - around the forward direction.
+    Pitch - around the world lateral direction.
+    Heading - around the world vertical direction.
+    '''
     return rot.as_euler('YXZ')
 
 
 def from_hpb(hpb):
+    '''See "as_hpb"'''
     return Rotation.from_euler('YXZ', hpb)
 
 
diff --git a/trackertraincode/vis.py b/trackertraincode/vis.py
index 5c3a469..59ab474 100644
--- a/trackertraincode/vis.py
+++ b/trackertraincode/vis.py
@@ -54,8 +54,10 @@ def draw_axis(img, rot, tdx=None, tdy=None, size = 100, brgt = 255, lw=3, color
     return img
 
 
-def draw_points3d(img, pt3d, size=3, color=(255,255,255), labels=False):
+def draw_points3d(img, pt3d, size=3, color = None, labels=False):
     assert pt3d.shape[-1] in (2,3)
+    if color is None:
+        color = (255,255,255)
     r,g,b = color
     for i, p in enumerate(pt3d[:,:2]):
         p = tuple(p.astype(int))
@@ -129,9 +131,8 @@ def draw_semseg_logits(semseg : np.ndarray):
     return colored
 
 
-def _draw_sample(img : np.ndarray, sample : Union[Batch,dict], is_prediction : bool, labels : bool = True):
+def _draw_sample(img : np.ndarray, sample : Union[Batch,dict], labels : bool = True, color : Optional[tuple[int,int,int]] = None):
     linewidth = 2
-    color = PRED_COLOR if is_prediction else GT_COLOR
     if 'seg_image' in sample:
         semseg = draw_semseg_class_indices(sample['seg_image'])
         img //= 2
@@ -156,15 +157,15 @@ def _draw_sample(img : np.ndarray, sample : Union[Batch,dict], is_prediction : b
 def draw_prediction(sample_pred : Tuple[Batch,dict]):
     sample, pred = sample_pred
     img = np.ascontiguousarray(_with3channels_hwc(sample['image'].copy()))
-    _draw_sample(img, sample, False, False)
-    _draw_sample(img, pred, True, False)
+    _draw_sample(img, sample, False, GT_COLOR)
+    _draw_sample(img, pred, False, PRED_COLOR)
     return img
 
 
 def draw_dataset_sample(sample : Batch, label=False):
     sample = dict(sample.items())
     img = np.ascontiguousarray(_with3channels_hwc(sample['image'].copy()))
-    _draw_sample(img, sample, False, label)
+    _draw_sample(img, sample, label, None)
     return img
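For reference, a quick standalone sanity check of the warmup factor computed by the `ExponentialUpThenSteps` schedule added in `trackertraincode/train.py` (parameter values are illustrative, not from the repository):

```python
import math

# Re-computes the LambdaLR factor: exponential ramp from ~eps up to 1, then step decay.
def lr_factor(i, num_up=10, gamma=0.1, steps=(50,)):
    steps = [0] + list(steps)
    eps = 1.e-2
    scale = math.log(eps)
    if i < num_up:
        f = (i + 1) / num_up
        return eps * math.exp(-scale * f)
    return gamma ** [j for j, s in enumerate(steps) if i > s][-1]

print(lr_factor(0))   # ~0.0158, start of the ramp
print(lr_factor(9))   # 1.0, warmup complete
print(lr_factor(60))  # 0.1, after the decay step at 50
```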