This PR closes #292. We now allow a different marginalisation mask to be applied to each entry in a batch.
At the same time, we also allow a single scope to be broadcast across all entries in a batch, so this change should be backward compatible.
More concretely, the input to `IntegrateQuery` can now be:

1. a list of scopes, one per entry in the batch, or
2. a single scope, to be broadcast across all entries in the batch.

Due to 1., each entry in the batch can have a scope over a different number of variables. This is a problem in PyTorch, since tensors have a fixed size along each dimension. The current solution is to expand the scopes into a boolean mask of shape (`batch_size`, `num_variables`), where `num_variables` is an upper bound on the number of variables in the scope of the circuit (see below).
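A minimal sketch of this expansion, assuming scopes are given as plain lists of variable ids (the helper `scopes_to_mask` is hypothetical, not the API introduced by this PR):

```python
import torch

def scopes_to_mask(scopes: list[list[int]], num_variables: int) -> torch.Tensor:
    """Expand per-entry scopes into a dense boolean mask of shape (batch_size, num_variables)."""
    mask = torch.zeros(len(scopes), num_variables, dtype=torch.bool)
    for i, scope in enumerate(scopes):
        # Mark the variables to integrate out for the i-th batch entry.
        mask[i, scope] = True
    return mask

# 1. A different scope per batch entry (entries may cover different numbers of variables).
batch_mask = scopes_to_mask([[0, 2], [1], [0, 1, 3]], num_variables=4)

# 2. A single scope broadcast across the batch (the backward-compatible case).
single_mask = scopes_to_mask([[0, 2]], num_variables=4).expand(3, -1)
```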
## Assumptions
We bound the number of variables by `max(scope)`, i.e. the maximum variable id in the scope, rather than by `len(scope)`. We need this because the actual set of variables may change: some ids may be dropped from the circuit, in which case the ids are no longer contiguous and `len(scope)` is an invalid bound, as highlighted by @loreloc.
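As a small illustration of why `len(scope)` is not a safe bound (the scope values here are made up):

```python
scope = {0, 1, 2, 4}            # variable id 3 was dropped from the circuit
num_variables = max(scope) + 1  # 5: wide enough to index every remaining id
# len(scope) == 4 would be too narrow: variable id 4 would fall out of bounds.
```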
## Future work
### Deal with sparsity
We currently expand the list of scopes into a dense boolean tensor mask. If there is a very large number of variables and the integration mask is sparse, it would make sense to replace the dense implementation with a sparse one, e.g. using PyTorch's sparse COO tensors (`torch.sparse_coo_tensor`).
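As an illustration, a sketch of what the sparse variant could look like, reusing the hypothetical list-of-scopes input from the earlier sketch (this is not implemented in this PR):

```python
import torch

def scopes_to_sparse_mask(scopes: list[list[int]], num_variables: int) -> torch.Tensor:
    """Build the integration mask as a sparse COO tensor instead of a dense grid."""
    # One (row, col) coordinate per (batch entry, variable id) pair in a scope.
    rows = [i for i, scope in enumerate(scopes) for _ in scope]
    cols = [v for scope in scopes for v in scope]
    indices = torch.tensor([rows, cols], dtype=torch.long)
    values = torch.ones(len(cols), dtype=torch.bool)
    # COO storage keeps only the True entries, not the full (batch_size, num_variables) grid.
    return torch.sparse_coo_tensor(indices, values, (len(scopes), num_variables))

sparse_mask = scopes_to_sparse_mask([[0, 2], [1], [0, 1, 3]], num_variables=4)
dense_again = sparse_mask.to_dense()  # matches the dense mask from the earlier sketch
```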