Historical word embeddings #12
@piskvorky can you be more specific about which embeddings need to be added (there are many)?
All of them, preferably (and the non-English ones are particularly interesting).
@piskvorky got it!
@piskvorky problem: each "zip" contains many separately named models. It is probably worth closing this issue (because it does not apply to us).
I don't understand. What is the problem?
@piskvorky for example, this archive contains many files (pairs of matrix + vocab), i.e. it contains 20 distinct models (the same is true for the other links). To use these models for their intended purpose, all of them are needed at once (they do not make sense separately). In our case, adding 20 separate models (which are useless apart from each other) is a very bad idea; moreover, it would be extremely inconvenient for the user to figure out how to use them all together.
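For context, a minimal sketch of how one such matrix + vocab pair could be turned into a gensim KeyedVectors object. The file names (`1990-w.npy`, `1990-vocab.pkl`) and the assumption that the vocab pickle is a list of words row-aligned with the matrix are guesses about the HistWords archive layout, not something taken from the actual loader:

```python
# A rough sketch, not the real gensim-data loader.
import pickle

import numpy as np
from gensim.models import KeyedVectors

def load_decade(matrix_path, vocab_path):
    vectors = np.load(matrix_path)        # (vocab_size, dim) embedding matrix
    with open(vocab_path, "rb") as f:
        vocab = pickle.load(f)            # assumed: list of words, row-aligned with `vectors`
    kv = KeyedVectors(vector_size=vectors.shape[1])
    kv.add_vectors(vocab, vectors)        # gensim 4.x API; older releases use kv.add()
    return kv

kv_1990 = load_decade("1990-w.npy", "1990-vocab.pkl")  # hypothetical file names
```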
I see what you mean, but don't see it as a problem. Why couldn't the dataset loader just return a dictionary of models?
You suggest joining all of this into one large pickle (a dict of KeyedVectors) and returning it to the user, am I right?
No, I mean a dictionary where the key is a particular model name string (year?) and the value is the relevant Python object (Word2Vec or whatever). If, as you say, the models are worthless in isolation, then we should return them all in bulk.
We can store only one file then (a single dict with all the models).
Aha, I see. Yes, that is a possibility -- if the models are sufficiently small, we could pickle everything as a single file.
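To make the dict-of-models idea concrete, here is a hedged sketch of a loader that returns one object covering all 20 decades. `load_decade` is the hypothetical helper from the sketch above, and the 1800-1990 decade range and directory layout are assumptions:

```python
# One dict mapping decade -> KeyedVectors, so the user gets all 20 models at once
# instead of 20 separate datasets.
def load_all_decades(directory):
    models = {}
    for decade in range(1800, 2000, 10):  # 20 decades: 1800, 1810, ..., 1990
        matrix_path = f"{directory}/{decade}-w.npy"
        vocab_path = f"{directory}/{decade}-vocab.pkl"
        models[decade] = load_decade(matrix_path, vocab_path)
    return models

models = load_all_decades("eng-all_sgns")   # hypothetical directory name
models[1900].most_similar("broadcast", topn=5)
```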
Sorry for exhuming an old issue, but I was wondering if adding these pre-trained historical word embeddings is still under consideration. These would be very valuable to research I am conducting. Thank you.
…by Stanford, https://nlp.stanford.edu/projects/histwords/
We released pre-trained historical word embeddings (spanning all decades from 1800 to 2000) for multiple languages (English, French, German, and Chinese). Embeddings constructed from many different corpora and using different embedding approaches are included.
Paper: Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change
Code: Github
License: Public Domain Dedication and License
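If this dataset were eventually exposed through gensim-data, usage might look roughly like the following. The dataset name passed to api.load() is purely hypothetical (only gensim.downloader.load itself is a real API), and it assumes the loader returns the decade-keyed dict discussed above:

```python
import gensim.downloader as api

# "histwords-eng-sgns" is a made-up dataset name for illustration only.
models = api.load("histwords-eng-sgns")   # assumed: dict of decade -> KeyedVectors
for decade in (1850, 1900, 1950, 1990):
    print(decade, models[decade].most_similar("broadcast", topn=3))
```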