Traceback (most recent call last):
File "/model/irqlora.py", line 858, in
train()
File "/model/irqlora.py", line 721, in train
model, tokenizer = get_accelerate_model(args, checkpoint_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/model/irqlora.py", line 409, in get_accelerate_model
model = get_my_model(model, model_fp, args.blocksize2, args.tau_lambda, args.tau_n)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/model/utils.py", line 18, in get_my_model
model.model = _replace_with_ours_lora_4bit_linear(model.model, model_fp=model_fp, blocksize2=blocksize2, tau_range=tau_range, tau_n=tau_n)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/model/utils.py", line 171, in _replace_with_ours_lora_4bit_linear
_ = _replace_with_ours_lora_4bit_linear(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/model/utils.py", line 171, in _replace_with_ours_lora_4bit_linear
_ = _replace_with_ours_lora_4bit_linear(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/model/utils.py", line 171, in _replace_with_ours_lora_4bit_linear
_ = _replace_with_ours_lora_4bit_linear(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 1 more time]
File "/model/utils.py", line 167, in _replace_with_ours_lora_4bit_linear
model._modules[name] = IRQLoraLinear4bit(model._modules[name], model_fp=model_fp._modules[name], blocksize2=blocksize2, tau_range=tau_range, tau_n=tau_n)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/model/utils.py", line 95, in init
compress_statistics, quant_type, device = self.base_layer.weight.compress_statistics, self.base_layer.weight.quant_type, self.base_layer.weight.device
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1695, in getattr
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'IRQLoraLinear4bit' object has no attribute 'base_layer'. Did you mean: 'update_layer'?
self.base_layer.weight is used to extract the weights $W$ of the LoRA layer.
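For context, here is a minimal sketch (not the repository's exact code) of what "extracting $W$" from such a layer looks like, assuming the base layer is a bitsandbytes `Linear4bit` whose weight is a `Params4bit` that already carries a `quant_state` (i.e., the model has been moved to the GPU and quantized):

```python
# Illustrative only: recover the dense weight W from a 4-bit base layer.
import bitsandbytes.functional as F

def extract_dense_weight(lora_layer):
    w4 = lora_layer.base_layer.weight  # Params4bit holding the packed 4-bit data
    # Dequantize back to a dense tensor using the stored quantization state
    # (absmax, blocksize, quant_type such as "nf4", ...).
    return F.dequantize_4bit(w4.data, w4.quant_state)
```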
The issue you mentioned is likely caused by a mismatch in peft library versions. We are using version 0.6.2. You can either install the same version as ours, or modify this part of the code to match the latest library version.
As a supplement, we have uploaded our complete requirements.txt; we hope it helps.
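If you prefer to adapt the code instead of pinning peft, one possible compatibility shim is sketched below. It is a hedged example, not the repository's code: it simply resolves the wrapped quantized linear module regardless of whether the installed peft release exposes `get_base_layer()` / `base_layer`, or has the LoRA layer subclass `bnb.nn.Linear4bit` directly. The helper name `resolve_base_layer` is illustrative.

```python
# Hypothetical helper, not part of IR-QLoRA: resolve the underlying quantized
# linear module across peft releases that expose it in different ways.
def resolve_base_layer(lora_layer):
    if hasattr(lora_layer, "get_base_layer"):   # releases providing an accessor method
        return lora_layer.get_base_layer()
    if hasattr(lora_layer, "base_layer"):       # releases storing the wrapped layer as an attribute
        return lora_layer.base_layer
    return lora_layer                           # layer itself subclasses bnb.nn.Linear4bit

# Inside IRQLoraLinear4bit.__init__ the quantization metadata would then be
# read from whichever object actually owns the 4-bit weight:
#   base = resolve_base_layer(self)
#   w = base.weight
#   compress_statistics, quant_type, device = w.compress_statistics, w.quant_type, w.device
```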
Is there a requirements.txt?