Instantiating LlamaAttention without passing `layer_idx` is not recommended and will lead to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` when creating this class.
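This warning is emitted by transformers' `LlamaAttention.__init__` when the layer is constructed with `layer_idx=None`; without the index, the per-layer KV cache cannot be addressed correctly during generation. A minimal sketch of the fix, assuming the model code instantiates the attention layers directly (the exact mPLUG-Owl2 call site is an assumption, and the config values below are toy sizes):

```python
# Minimal sketch: pass the decoder layer's index into LlamaAttention so the
# KV cache entries can be routed to the right layer during generation.
from transformers.models.llama.configuration_llama import LlamaConfig
from transformers.models.llama.modeling_llama import LlamaAttention

config = LlamaConfig(hidden_size=128, num_attention_heads=4,
                     num_key_value_heads=4, intermediate_size=256)
attn = LlamaAttention(config, layer_idx=0)  # providing layer_idx silences the warning
```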
Loading checkpoint shards: 100%|██████████| 33/33 [00:22<00:00, 1.47it/s]
Some weights of MPLUGOwl2LlamaForCausalLM were not initialized from the model checkpoint at D:\桌面\mplug_owl2_7b_448_qinstruct_preview_v0.1 and are newly initialized: ['model.visual_abstractor.encoder.layers.4.crossattention.attention.k_pos_embed', 'model.visual_abstractor.encoder.layers.0.crossattention.attention.q_pos_embed', 'model.visual_abstractor.encoder.layers.5.crossattention.attention.k_pos_embed', 'model.visual_abstractor.encoder.layers.5.crossattention.attention.q_pos_embed', 'model.visual_abstractor.encoder.layers.2.crossattention.attention.k_pos_embed', 'model.visual_abstractor.encoder.layers.1.crossattention.attention.q_pos_embed', 'model.visual_abstractor.encoder.layers.3.crossattention.attention.q_pos_embed', 'model.visual_abstractor.encoder.layers.4.crossattention.attention.q_pos_embed', 'model.visual_abstractor.encoder.layers.0.crossattention.attention.k_pos_embed', 'model.visual_abstractor.encoder.layers.2.crossattention.attention.q_pos_embed', 'model.visual_abstractor.encoder.layers.3.crossattention.attention.k_pos_embed', 'model.visual_abstractor.encoder.layers.1.crossattention.attention.k_pos_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
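The "newly initialized" warning means these `*_pos_embed` parameter names exist in the model class but were not found in the checkpoint's state dict. One way to verify this against the checkpoint itself, assuming a sharded checkpoint with the standard `pytorch_model.bin.index.json` index file (the filename is an assumption; safetensors checkpoints use `model.safetensors.index.json` instead):

```python
# Hedged check: list checkpoint tensor names containing "pos_embed".
# If the list prints empty, the checkpoint genuinely lacks these tensors
# and the warning is expected rather than a loading bug.
import json

index_path = r"D:\桌面\mplug_owl2_7b_448_qinstruct_preview_v0.1\pytorch_model.bin.index.json"
with open(index_path, encoding="utf-8") as f:
    weight_map = json.load(f)["weight_map"]

print([name for name in weight_map if "pos_embed" in name])
```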
WARNING:root:Some parameters are on the meta device because they were offloaded to the cpu.
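This accelerate warning indicates that `device_map="auto"` could not fit the whole model in GPU memory and offloaded part of it to the CPU, which also slows evaluation. If the GPU has enough memory, pinning the model to a single device avoids the offload; the load call below is an illustrative sketch, not Q-Align's actual loader:

```python
# Hedged sketch: force all weights onto GPU 0 instead of letting accelerate
# offload some of them to the CPU (requires enough GPU memory for the model).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    r"D:\桌面\mplug_owl2_7b_448_qinstruct_preview_v0.1",
    device_map={"": 0},  # place everything on cuda:0
    trust_remote_code=True,
)
```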
Evaluating [E:\Project\Q-Align-main\playground\data\test_jsons\AGIQA-3K.json]: 100%|██████████| 2982/2982 [24:42<00:00, 2.01it/s]