This repository has been archived by the owner on Jun 11, 2020. It is now read-only.
Hello! I used your network to train on Chinese and English. The char_vector length is 1000+, with characters like '®·、。〇《》一七万三上下不专且世业东丝两个中丰串临丶主丽举久义乐乒乔九也习书买了事' and so on.
The training process is as follows:
---- 50 ----
GT: 水果捞
PREDICT:
---- 50 ----
GT: 品牌电脑
PREDICT:
---- 50 ----
[50] Iteration loss: 482.9037551879883 Error rate: 1.0
step: 51
Every PREDICT line is blank.
Where is the problem?
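An all-blank PREDICT at iteration 50 is a common symptom of early CTC training: until the loss drops, the blank class tends to dominate every timestep, so greedy decoding collapses to an empty string. A minimal NumPy sketch of greedy CTC decoding (assuming a blank index of 0; this is illustrative, not this repository's code) shows the effect:

```python
import numpy as np

def greedy_ctc_decode(logits, charset, blank_index=0):
    """Collapse repeated labels and drop blanks from the per-timestep argmax path."""
    best_path = np.argmax(logits, axis=-1)        # shape (T,)
    decoded, prev = [], None
    for idx in best_path:
        if idx != prev and idx != blank_index:
            decoded.append(charset[idx - 1])      # charset excludes the blank at index 0
        prev = idx
    return "".join(decoded)

# Early in CTC training the blank logit often dominates every timestep,
# so the best path is all blanks and the decoded string is empty.
T, num_chars = 20, 1000                           # 1000 characters + 1 blank class
rng = np.random.default_rng(0)
logits = rng.normal(scale=0.01, size=(T, num_chars + 1))
logits[:, 0] += 5.0                               # blank class dominates every timestep
charset = [chr(0x4E00 + i) for i in range(num_chars)]
print(repr(greedy_ctc_decode(logits, charset)))   # -> ''
```

If the error rate stays at 1.0 and the predictions stay empty for many more iterations, the character mapping or the number of output classes is a likely place to look.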
To be completely honest, I never tried to train this model with Chinese, and while I think it should work with minor modifications, I doubt that the code as-is can.
I'll have a look after my midterms if possible, but in the meantime, you might want to redirect your attention to other implementations that were designed with Chinese in mind.
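One example of the kind of modification that may be needed (an assumption, not something confirmed in this thread) is keeping the character vector and the output layer consistent for a 1000+ character alphabet, with one extra class reserved for the CTC blank:

```python
# Hypothetical sketch (not this repository's actual API): derive the character
# set from the training labels and reserve index 0 for the CTC blank, so the
# model's output layer is sized as len(charset) + 1.
labels = ["水果捞", "品牌电脑"]                   # stand-in for the real training labels
charset = sorted(set("".join(labels)))            # 1000+ unique characters in practice
char_to_index = {c: i + 1 for i, c in enumerate(charset)}   # 0 reserved for blank
index_to_char = {i: c for c, i in char_to_index.items()}

num_classes = len(charset) + 1                    # size of the model's output layer
encoded = [[char_to_index[c] for c in text] for text in labels]
print(num_classes, encoded)
```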