Do you want to keep your inference going while those long training jobs are running? The multi-stream-multi-model-multi-GPU version of TrainYourOwnYOLO (now available here) lets you do just that. If you only have one GPU, limit the memory used by your inference streams so that Train_YOLO.py has enough GPU RAM to work with (experiment! see the sketch below). Training will proceed at reduced speed. If you have two GPUs in your machine, move the inference jobs to the second GPU (run_on_gpu: 1 in MultiDetect.conf). Training will grab all memory on GPU #0 and run at full speed, while inference runs at full speed on GPU #1. Training doesn't seem to be smart enough to grab GPU #1 when it's available and GPU #0 is busy.
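A minimal sketch of one way to experiment with that memory cap on the inference side, using TensorFlow's virtual-device API rather than anything MultiDetect exposes itself; the 2048 MB figure is only an illustrative starting point, not a recommended value:

```python
# Sketch: cap this process's footprint on GPU #0 so a concurrent Train_YOLO.py
# run on the same card still has room. Must be called before any GPU op runs.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)],
    )
```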
2021-03-01 13:49:58.768272: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "Detector.py", line 21, in
from keras_yolo3.yolo import YOLO, detect_video, detect_webcam
File "/content/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo.py", line 18, in
from keras.utils import multi_gpu_model
ImportError: cannot import name 'multi_gpu_model' from 'keras.utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py)
How can I solve this?
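This looks like the removal of multi_gpu_model from newer Keras/TensorFlow releases (it was dropped around the 2.4 line). Two common workarounds, neither an official fix from this repo: pin an older TensorFlow/Keras where keras.utils.multi_gpu_model still exists, or port the call to tf.distribute.MirroredStrategy. A minimal sketch of the latter, with a toy Sequential model standing in for the YOLO graph that keras_yolo3/yolo.py actually builds:

```python
# Sketch: MirroredStrategy replaces keras.utils.multi_gpu_model in TF 2.x.
# Any Keras model built and compiled inside strategy.scope() is replicated
# across all visible GPUs; model.fit() then trains on them in sync.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Stand-in model; in the repo this would be the YOLO model construction.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```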