
Importance of pretraining? #25

Open
tian0810 opened this issue Dec 21, 2020 · 3 comments

Comments

@tian0810

Great paper! But if I train this network from scratch, will the cropping done by the local and part modules still train successfully? That is, if I don't use pretraining, how much would the experimental results be affected? Looking forward to your reply!

@ZF4444
Owner

ZF4444 commented Dec 21, 2020

Thanks!
ImageNet-pretrained weights are usually used, since they give better performance.
For the method in this paper, without pretrained weights the regions proposed by the local and part modules early in training would have a negative effect; whether the network could still converge later on has not been verified.
However, you could first train only the first branch on the full image, and then add the other two branches to training once the network has acquired some fitting ability. That should achieve better performance than training the first branch alone, though how much better has not been experimentally verified.
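The staged schedule described above could be sketched as follows. This is only an illustration, not code from this repo: the `warmup_epochs` hyperparameter and the branch names (`raw`, `local`, `part`) are hypothetical stand-ins for the full-image branch and the two cropped branches.

```python
# Hypothetical sketch of the staged training schedule: train only the
# full-image branch for a warm-up period, then enable the local and part
# branches once the backbone has some fitting ability.
# `warmup_epochs` and the branch names are illustrative, not from the repo.

def active_branches(epoch, warmup_epochs=10):
    """Return which branches participate in training at the given epoch."""
    branches = ["raw"]                # first branch: full image, always on
    if epoch >= warmup_epochs:        # after warm-up, add the cropped branches
        branches += ["local", "part"]
    return branches
```

With a 10-epoch warm-up, `active_branches(3)` trains only the raw-image branch, while `active_branches(12)` trains all three; the losses of whichever branches are inactive would simply be skipped in the training loop.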

@tian0810
Author

Thank you for your reply!

@yunmi02

yunmi02 commented May 31, 2022

It feels like the accuracy gains in this paper rely entirely on the pretrained weights...


3 participants