AttributeError: 'IRQLoraLinear4bit' object has no attribute 'base_layer'. Did you mean: 'update_layer' #3

Open
xiaoleyang2018 opened this issue Mar 22, 2024 · 1 comment


@xiaoleyang2018

Is there a requirements.txt?

```
Traceback (most recent call last):
  File "/model/irqlora.py", line 858, in <module>
    train()
  File "/model/irqlora.py", line 721, in train
    model, tokenizer = get_accelerate_model(args, checkpoint_dir)
  File "/model/irqlora.py", line 409, in get_accelerate_model
    model = get_my_model(model, model_fp, args.blocksize2, args.tau_lambda, args.tau_n)
  File "/model/utils.py", line 18, in get_my_model
    model.model = _replace_with_ours_lora_4bit_linear(model.model, model_fp=model_fp, blocksize2=blocksize2, tau_range=tau_range, tau_n=tau_n)
  File "/model/utils.py", line 171, in _replace_with_ours_lora_4bit_linear
    _ = _replace_with_ours_lora_4bit_linear(
  File "/model/utils.py", line 171, in _replace_with_ours_lora_4bit_linear
    _ = _replace_with_ours_lora_4bit_linear(
  File "/model/utils.py", line 171, in _replace_with_ours_lora_4bit_linear
    _ = _replace_with_ours_lora_4bit_linear(
  [Previous line repeated 1 more time]
  File "/model/utils.py", line 167, in _replace_with_ours_lora_4bit_linear
    model._modules[name] = IRQLoraLinear4bit(model._modules[name], model_fp=model_fp._modules[name], blocksize2=blocksize2, tau_range=tau_range, tau_n=tau_n)
  File "/model/utils.py", line 95, in __init__
    compress_statistics, quant_type, device = self.base_layer.weight.compress_statistics, self.base_layer.weight.quant_type, self.base_layer.weight.device
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'IRQLoraLinear4bit' object has no attribute 'base_layer'. Did you mean: 'update_layer'?
```

@Xingyu-Zheng
Collaborator

`self.base_layer.weight` is used to extract the weight matrix $W$ of the LoRA layer.
The issue you are seeing is most likely caused by a version mismatch in the peft library; we use version 0.6.2. You can either install the same version as ours, or modify this part of the code to match the layer layout of the peft version you have installed.

As a supplement, we have uploaded our complete requirements.txt; we hope it helps.
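As an illustration only (the helper name `get_base_weight` is hypothetical, not part of IR-QLoRA or peft), one way to tolerate both layer layouts is to fall back to the layer itself when no `base_layer` attribute exists — some peft releases store the wrapped quantized module as `base_layer`, while others make the LoRA layer subclass the quantized Linear directly, so `weight` lives on the layer:

```python
def get_base_weight(lora_layer):
    """Return the (quantized) weight of a LoRA linear layer.

    If the peft version wraps the original module as `base_layer`,
    read the weight from there; otherwise assume the layer itself
    carries the weight (older subclass-style layers).
    """
    base = getattr(lora_layer, "base_layer", lora_layer)
    return base.weight
```

With this shim, line 95 of `utils.py` could read `get_base_weight(self)` instead of `self.base_layer.weight`, avoiding the `AttributeError` on either peft version.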
