Overlapping between meta-training classes and meta-testing classes #55

tuananhbui89 opened this issue Dec 21, 2021 · 1 comment

@tuananhbui89

Hi. Thank you for publishing the code. I am trying to run an experiment on few-shot learning benchmark datasets (i.e., CIFAR-FS and FC100) with the data loader. As I understand it, there should be no overlap between the meta-training classes and the meta-testing classes (see, e.g., the description of the FC100 dataset here: https://paperswithcode.com/dataset/fc100).

However, when I check the label sets of the train, validation, and test splits, there is still substantial overlap, as shown below:

import numpy as np

print('** Labels set: TRAIN **')
print(np.unique(dataset_train.labels))
print(np.shape(dataset_train.data))
print('** Labels set: VAL **')
print(np.unique(dataset_val.labels))
print(np.shape(dataset_val.data))
print('** Labels set: TEST **')
print(np.unique(dataset_test.labels))
print(np.shape(dataset_test.data))

Output:

** Labels set: TRAIN **                                                                                                                                             
[ 0 1 5 8 9 10 12 13 16 17 20 22 23 25 27 28 29 32 33 37 39 40 41 44 47 48 49 51 52 53 54 56 57 58 59 60 61 62 67 68 69 70 71 73 76 78 81 82 83 84 85 86 87 89 90 91 92 93 94 96] 
(36000, 32, 32, 3)
** Labels set: VAL **                                                                                                                                               
[ 0 1 3 5 8 9 10 12 13 15 16 17 19 20 21 22 23 25 26 27 28 29 31 32 33 36 37 38 39 40 41 42 43 44 45 47 48 49 50 51 52 53 54 56 57 58 59 60  61 62 65 67 68 69 70 71 73 74 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 96 97 99]                                                                                                                                           
(48000, 32, 32, 3)                                                                                                                                                  
** Labels set: TEST **                                                                                                                                              
[ 0 1 2 4 5 6 7 8 9 10 11 12 13 14 16 17 18 20 22 23 24 25 27 28 29 30 32 33 34 35 37 39 40 41 44 46 47 48 49 51 52 53 54 55 56 57 58 59 60 61 62 63 64 66 67 68 69 70 71 72 73 75 76 78 81 82 83 84 85 86 87 89 90 91 92 93 94 95 96 98]                                                                                                                                           
(48000, 32, 32, 3)

Because I am new to the few-shot learning setting, this is probably a misunderstanding on my part, but I cannot figure out where.
Could you please help?
Thanks a lot.
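
(For reference, a minimal sketch of how the overlap can be quantified directly, assuming the same dataset_train / dataset_val / dataset_test objects as above; under a disjoint few-shot split every intersection below should be empty.)

import numpy as np

# Pairwise class intersections between the splits; each should be empty
# if the meta-train / meta-val / meta-test classes are disjoint.
train_classes = set(np.unique(dataset_train.labels))
val_classes = set(np.unique(dataset_val.labels))
test_classes = set(np.unique(dataset_test.labels))

print('train & val :', sorted(train_classes & val_classes))
print('train & test:', sorted(train_classes & test_classes))
print('val   & test:', sorted(val_classes & test_classes))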

woreom commented May 31, 2024

Hi, I have found the same problem in MiniImageNet: both the validation and test sets have 64 additional classes.
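
(For context, the standard MiniImageNet protocol splits its 100 classes into 64 meta-training, 16 meta-validation, and 20 meta-testing classes with no overlap. A quick sanity check, as a sketch assuming MiniImageNet loaders that expose .labels the same way as the snippet above:)

import numpy as np

# Expected class counts for the standard MiniImageNet split: 64 / 16 / 20,
# with no classes shared between the three sets.
for name, ds, expected in [('train', dataset_train, 64),
                           ('val', dataset_val, 16),
                           ('test', dataset_test, 20)]:
    n_classes = len(np.unique(ds.labels))
    print(f'{name}: {n_classes} classes (expected {expected})')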
