Dear Authors,

Firstly, thank you for your great work, "Making Text Embedders Few-Shot Learners". It was very interesting to see how you improved the performance of text embedding by leveraging the intrinsic capabilities of LLMs!

While studying the paper, I had a question. In Section 3.1, you note that embedding models have a limited ability to follow unseen embedding task instructions and to conduct complex retrieval tasks, and you then explore whether embedding models can be enhanced by leveraging ICL.

During the few-shot contrastive training, my understanding is that the same instruction was applied as the "task_definition" for each specific dataset. In that respect, this seems not much different from previous studies, where the model was also trained with a limited set of instructions tied to the training datasets. Could you elaborate on how this paper's use of ICL differs from those previous studies?
Thank you for your time and for contributing such valuable research to the community.
Thank you for your interest in our work. ICL does not add semantic information to the task definition itself. Instead, it introduces in-context examples that help the model better understand the task intent. By simply providing task-relevant examples, ICL can achieve better results on out-of-domain tasks than using the task definition alone.
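To make the distinction concrete, here is a minimal sketch of how such an ICL-style embedding input could be assembled: the task definition stays fixed per dataset, while optional (query, response) examples are prepended to convey task intent. The `<instruct>`/`<query>`/`<response>` markers and the `build_icl_prompt` helper are illustrative assumptions, not necessarily the authors' exact template.

```python
def build_icl_prompt(task_definition, query, examples=()):
    """Hypothetical sketch: prepend few-shot (query, response) examples
    to an instructed embedding query. The tag names are assumptions,
    not the paper's verified prompt format."""
    parts = []
    for ex_query, ex_response in examples:
        # Each in-context example repeats the same task definition,
        # followed by an example query and its relevant response.
        parts.append(
            f"<instruct>{task_definition}\n"
            f"<query>{ex_query}\n"
            f"<response>{ex_response}"
        )
    # The actual query to embed comes last, with no response attached.
    parts.append(f"<instruct>{task_definition}\n<query>{query}")
    return "\n".join(parts)
```

In this sketch the zero-shot input (empty `examples`) is exactly the instruction-plus-query format of prior instruction-tuned embedders; the few-shot input differs only by the prepended examples, which is where the additional task signal comes from.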