The dataset evaluates to what extent word embeddings of identifiers capture semantic relatedness and similarity. It contains identifier pairs annotated by developers with relatedness, similarity, and contextual similarity ratings. IdBench aims to provide a gold standard that guides the development of novel embeddings.
The repository contains the following files:
- Tagged dataset.
- Trained identifier embedding models.
- Cosine similarity scores for each model.
For each of the three tasks, we provide a small, medium, and large benchmark, which differ by the thresholds used during data cleaning. The smaller benchmarks use stricter thresholds and hence offer higher agreement among the participants, whereas the larger benchmarks contain more pairs.
| Size   | Relatedness | Similarity | Contextual similarity |
|--------|-------------|------------|-----------------------|
| Small  | 167         | 167        | 115                   |
| Medium | 247         | 247        | 145                   |
| Large  | 291         | 291        | 176                   |
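A benchmark of a given size can be loaded as a plain CSV of identifier pairs and ratings. The sketch below is illustrative only: the file name and column names are assumptions, so check the actual CSVs in the repository for the real schema. A `StringIO` stand-in replaces the file here to keep the example self-contained.

```python
import csv
import io

# Stand-in for e.g. open("small_benchmark.csv"); the rows and the
# column names (id1, id2, relatedness, similarity) are made up.
sample_csv = io.StringIO(
    "id1,id2,relatedness,similarity\n"
    "count,counter,0.91,0.74\n"
    "idx,index,0.95,0.88\n"
)

pairs = list(csv.DictReader(sample_csv))
for row in pairs:
    # Each row pairs two identifiers with developer-assigned ratings.
    print(row["id1"], row["id2"], float(row["relatedness"]))
```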
To assess how well existing embedding methods represent the relatedness and similarity of identifiers, we evaluate five vector representations:
- the continuous bag-of-words and skip-gram variants of Word2vec (“w2v-cbow” and “w2v-sg”)
- FastText, a sub-word extension of Word2vec (“FT-cbow” and “FT-sg”)
- embeddings trained on a tree-based representation of code (“path-based”)
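Each of these models maps an identifier to a vector, and two identifiers are compared by the cosine similarity of their vectors. A minimal sketch, with toy vectors standing in for real trained embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings" for two identifiers (made up for
# illustration; real models produce much higher-dimensional vectors).
vec_count = np.array([0.2, 0.8, 0.1, 0.4])
vec_counter = np.array([0.25, 0.75, 0.05, 0.5])
print(cosine_similarity(vec_count, vec_counter))
```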
In addition to neural embeddings of identifiers, we also evaluate two string distance functions: Levenshtein’s edit distance and the Needleman-Wunsch distance. For further investigation, we report results for all identifier pairs in pair_wise_similarity_score.csv.
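Unlike the learned embeddings, the string distance baselines score a pair directly from the characters of the two identifiers. A sketch of the Levenshtein baseline, with a length normalization that turns the distance into a similarity in [0, 1] (the exact normalization used for the reported scores may differ):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: minimum number of
    # insertions, deletions, and substitutions turning a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def edit_similarity(a, b):
    # Normalize by the longer string so identical identifiers score 1.0.
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

print(edit_similarity("count", "counter"))
```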