feat(docs): add half-precision training section in using_simulator docs #678
base: master
Conversation
Signed-off-by: Pablo Carmona Gonzalez <pablocarmonagonzalez@gmail.com>
Force-pushed from e017345 to 0877c23
@coreylammie @anu-pub Take a look at it and let me know what you think, thanks!
Compilation is currently not working for the half-precision type. To reproduce:
Error:
@maljoras I think this is something that you could maybe fix easily, or you could tell me how I could resolve it.
dataset = datasets.MNIST("data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model = model.to(device=device, dtype=torch.bfloat16)
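For context, a self-contained sketch of what such a bfloat16 training setup could look like is below. This is hypothetical and not taken from the PR: the AnalogLinear model, the TorchInferenceRPUConfig choice, and the learning rate are illustrative assumptions, and it presumes the installed simulator actually supports the chosen dtype (which is exactly what this thread is questioning).

import torch
import torch.nn.functional as F
from torchvision import datasets, transforms

from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import TorchInferenceRPUConfig

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
transform = transforms.ToTensor()

dataset = datasets.MNIST("data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=32)

# Hypothetical model choice for illustration; the docs example may differ.
model = AnalogLinear(28 * 28, 10, rpu_config=TorchInferenceRPUConfig())
model = model.to(device=device, dtype=torch.bfloat16)

optimizer = AnalogSGD(model.parameters(), lr=0.1)
optimizer.regroup_param_groups(model)

# Single training step, just to show that the inputs must be cast to the
# same dtype as the model.
images, labels = next(iter(train_loader))
images = images.view(images.shape[0], -1).to(device=device, dtype=torch.bfloat16)
labels = labels.to(device)
optimizer.zero_grad()
loss = F.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()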
Does this also work with tiles other than TorchInference? I had added a HALF type, as in the above: rpu_config.runtime.data_type = RPUDataType.HALF,
because HALF can be either bfloat16 or float16 depending on the compilation options.
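To make that suggestion concrete, a minimal sketch of the RPUDataType route might look like the following. This is hedged: the runtime.data_type attribute and the HALF member follow the comment above and the linked enums module, and the exact layout may differ between aihwkit versions.

from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.parameters.enums import RPUDataType

rpu_config = SingleRPUConfig()
# HALF resolves to bfloat16 or float16 depending on how the simulator
# was compiled, as noted above.
rpu_config.runtime.data_type = RPUDataType.HALF

model = AnalogLinear(28 * 28, 10, rpu_config=rpu_config)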
There is also a runtime.data_type.as_torch() function (or something similar) to convert it to the corresponding torch type, see https://github.com/IBM/aihwkit/blob/master/src/aihwkit/simulator/parameters/enums.py#L23
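That would look roughly like the snippet below (a hedged sketch: the exact helper name is uncertain per the comment above, and rpu_config, model, and device are reused from the earlier sketches).

# Pick the torch dtype that matches the configured RPU data type and
# cast the model (and the inputs) accordingly.
torch_dtype = rpu_config.runtime.data_type.as_torch()
model = model.to(device=device, dtype=torch_dtype)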
Yes, one has to enable the FP16 auto-conversion during compilation, if I remember correctly. You have to make sure that …
You could additionally try to set …
You might also want to use bfloat as the default FP16 type (with …
Actually, it could also be related to the rather old compiler you are using? One way to go about it might be to change each conversion …
I suspect the compiler version is the problem. Do you know which compiler version worked? On the system I am using to check this, it is not working.
Regarding the other things you mentioned: I tried …
@jubueche Have you tried a different compiler version? What is the status of this testing?
Related issues
#623
#677
Description