Security doc update #503

Open
MGeorgie wants to merge 7 commits into main

Conversation

MGeorgie

Update Security documentation

- update the section about delta-exact CKKS
- update Recommendation for applicative countermeasures
- add Multiparty FHE section
- small modifications
MGeorgie added the documentation label on Oct 15, 2024
Pro7ech (Collaborator) commented on Oct 15, 2024

I have a few comments on the updated SECURITY.md. I would usually refrain from doing this here, but this is an open PR on the topic and the fastest communication channel.

1. It says there are two approaches to mitigate the attack: the one currently implemented in Lattigo (specifying the bit-precision during the decoding step) and the one proposed by Bossuat et al. (rounding off the noise). They are actually the same, since they both round off the noise (whether it should be done in the canonical embedding or in the ring is another question):

```go
buffCmplx[i] = complex(math.Round(real(buffCmplx[i])*scale)/scale, math.Round(imag(buffCmplx[i])*scale)/scale)
```

The contribution of Bossuat et al. is to give tight bounds on the noise so that this rounding can be performed as efficiently as possible, limiting the loss in precision/efficiency. (A minimal runnable sketch of this rounding step is given after these two points.)

2. Although rounding off the noise is sufficient to thwart the attack in the case where values are disclosed by an honest client to third parties (which is the vast majority of cases), it unfortunately does not yet provide IND-CPA-D security, as it does not protect against an adversary that can craft arbitrary plaintexts or ask for the evaluation of a circuit that maps a plaintext to carefully chosen values. The reason is that, since the error is mixed with the message, it cannot always be cleanly rounded off: there will always exist messages for which the rounding triggers a carry that propagates and flips one or more bits above the rounding position, enabling at the minimum an efficient distinguisher.
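For illustration, here is a minimal, self-contained sketch of the rounding step discussed in point 1. It is illustrative only, not Lattigo's actual decoder; the function name `roundSlots` and the parameter `prec` are assumptions of this sketch.

```go
package main

import (
	"fmt"
	"math"
)

// roundSlots rounds the real and imaginary part of every decoded slot to
// prec fractional bits. Assuming the message itself lies on the 2^-prec
// grid, any decryption error of magnitude below 2^-(prec+1) is discarded.
// This is the same operation as the library snippet quoted above, with
// scale = 2^prec.
func roundSlots(slots []complex128, prec int) {
	scale := math.Exp2(float64(prec))
	for i, v := range slots {
		slots[i] = complex(
			math.Round(real(v)*scale)/scale,
			math.Round(imag(v)*scale)/scale,
		)
	}
}

func main() {
	// A decoded slot: the message 0.5+0.25i plus a small decryption error.
	slots := []complex128{0.5 + 1.2e-9 + (0.25-3.4e-10)*1i}
	roundSlots(slots, 20) // keep 20 bits of fractional precision
	fmt.Println(slots[0]) // (0.5+0.25i): the error has been rounded off
}
```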

MGeorgie self-assigned this on Oct 15, 2024
MGeorgie (Author) commented

> 1. It says there are two approaches to mitigate the attack: the one currently implemented in Lattigo (specifying the bit-precision during the decoding step) and the one proposed by [Bossuat et al.](https://eprint.iacr.org/2024/853) (rounding off the noise). They are actually the same, since they both round off the noise (whether it should be done in the canonical embedding or in the ring is another question):
>
> `buffCmplx[i] = complex(math.Round(real(buffCmplx[i])*scale)/scale, math.Round(imag(buffCmplx[i])*scale)/scale)`
>
> The contribution of Bossuat et al. is to give tight bounds on the noise so that this rounding can be performed as efficiently as possible, limiting the loss in precision/efficiency.

Thanks for the clarification, I will update accordingly.

> 2. Although rounding off the noise is sufficient to thwart the attack in the case where values are disclosed by an honest client to third parties (which is the vast majority of cases), it unfortunately does not yet provide IND-CPA-D security, as it does not protect against an adversary that can craft arbitrary plaintexts or ask for the evaluation of a circuit that maps a plaintext to carefully chosen values. The reason is that, since the error is mixed with the message, it cannot always be cleanly rounded off: there will always exist messages for which the rounding triggers a carry that propagates and flips one or more bits above the rounding position, enabling at the minimum an efficient distinguisher.

If I understand correctly, the rounding is not enough and we need to add an exponential amount of noise (noise flooding) to the decrypted value?
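For context, here is a purely illustrative sketch of what such noise flooding could look like on decoded slots. This is not how Lattigo or the MHE protocols implement it: a real implementation would draw the noise with a cryptographically secure sampler and apply it to the ring coefficients, and the names `floodSlots`, `errBound`, and `lambda` are assumptions of this sketch.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// floodSlots adds, to every decoded slot, fresh noise whose magnitude is up
// to 2^lambda times the bound errBound on the decryption error,
// statistically drowning that error at the cost of roughly lambda bits of
// precision on the released values.
func floodSlots(slots []complex128, errBound float64, lambda int) {
	bound := errBound * math.Exp2(float64(lambda))
	for i := range slots {
		slots[i] += complex(
			(2*rand.Float64()-1)*bound, // uniform in [-bound, bound]
			(2*rand.Float64()-1)*bound,
		)
	}
}

func main() {
	slots := []complex128{0.5 + 0.25i}
	floodSlots(slots, 1e-9, 30) // assume |error| <= 1e-9, 30 bits of flooding
	fmt.Println(slots[0])       // message preserved only up to ~2^30 * 1e-9
}
```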

update CKKS section
Pro7ech (Collaborator) commented on Oct 16, 2024

> If I understand correctly, the rounding is not enough and we need to add an exponential amount of noise (noise flooding) to the decrypted value?

Yes, that is one option, but it is not very practical depending on the number of queries that are allowed. The other option is to ensure that the error never touches the plaintext, so that it can be removed without triggering a carry that propagates into the plaintext, for example by adopting a BFV-style encoding on top of the CKKS encoding: setting the x lower bits of every plaintext polynomial coefficient (in the ring) to zero leaves enough space for the error to grow without touching the plaintext. How much to quantize can be derived using these precise noise estimates. But doing so would require non-trivial changes in the library, so I would leave it for another PR. Alternatively, a homomorphic sanitization can be done by performing homomorphic bit/byte-extraction and bit/byte-cleaning, which also enables a clean removal of the error at decryption (or flooding in the case of MHE).
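To make the quantization idea above concrete, here is a minimal, purely illustrative sketch. It is not a Lattigo API: `quantize`, `removeError`, and the parameter `x` are assumptions of this sketch, it works on plain integer coefficients, and it assumes non-negative coefficients and errors for simplicity. In an actual implementation the quantization would be applied to the encoded plaintext polynomial, with x chosen from the noise bounds discussed above.

```go
package main

import "fmt"

// quantize clears the x lower bits of every plaintext coefficient before
// encryption, reserving them as head-room for the error.
func quantize(coeffs []int64, x uint) {
	mask := ^int64((1 << x) - 1)
	for i := range coeffs {
		coeffs[i] &= mask
	}
}

// removeError rounds every (noisy) coefficient to the nearest multiple of
// 2^x after decryption; as long as |error| < 2^(x-1), the quantized
// plaintext is recovered exactly, without a carry reaching the message bits.
func removeError(coeffs []int64, x uint) {
	step := int64(1) << x
	for i := range coeffs {
		coeffs[i] = ((coeffs[i] + step/2) / step) * step
	}
}

func main() {
	pt := []int64{123456789}
	quantize(pt, 8) // reserve the 8 lower bits of each coefficient for the error

	noisy := []int64{pt[0] + 57} // decryption returns plaintext + small error
	removeError(noisy, 8)
	fmt.Println(noisy[0] == pt[0]) // true: the error is removed cleanly
}
```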
