
TSVD() as compression algorithm produces method error #73

Closed
RmMetsch opened this issue Aug 19, 2024 · 7 comments · Fixed by #74
@RmMetsch

Hello,

I'm trying to use HMatrices to compress a matrix using TSVD. I followed the tutorial from the docs page and swapped out the compression algorithm PartialACA() for TSVD(), but I get the following error:

MethodError: no method matching (::TSVD)(::HMatrices.PermutedMatrix{…}, ::ClusterTree{…}, ::ClusterTree{…}, ::HMatrices.ACABuffer{…})

Not sure what to do; any help is appreciated.

using HMatrices, StaticArrays, LinearAlgebra  # LinearAlgebra provides norm, used in getindex below

const Point2D = SVector{2,Float64}
# points on a circle
m = n = 10_000
X = Y = [Point2D(sin(i*2π/n),cos(i*2π/n)) for i in 0:n-1]

struct LaplaceMatrix <: AbstractMatrix{Float64}
    X::Vector{Point2D}
    Y::Vector{Point2D}
end

Base.getindex(K::LaplaceMatrix,i::Int,j::Int) = -1/2π*log(norm(K.X[i] - K.Y[j]) + 1e-10)
Base.size(K::LaplaceMatrix) = length(K.X), length(K.Y)

# create the abstract matrix
K = LaplaceMatrix(X,Y)

Xclt = Yclt = ClusterTree(X)
adm = StrongAdmissibilityStd()
comp = TSVD()

H = assemble_hmatrix(K,Xclt,Yclt;adm,comp,threads=false,distributed=false)
maltezfaria self-assigned this Sep 15, 2024
@maltezfaria
Member

Hi @RmMetsch,

Thanks for pointing out the issue, and sorry for the long delay in responding: I was on vacation and then got super busy with work when I returned two weeks ago.

I "fixed" the issue, and your example should work now. Let me know if that is not the case.

Keep in mind that TSVD is likely to be very slow as a compression method, since it first assembles the dense block, computes its SVD, and then truncates the resulting factorization a posteriori. I don't know what your use case is, but computing an SVD for each low-rank block is unlikely to be a good idea.
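For context, here is a rough sketch of what truncated-SVD compression of a single block amounts to conceptually (plain Julia; the function name and tolerance handling are illustrative, not HMatrices internals):

using LinearAlgebra

# Illustrative sketch: compress a dense block M into a low-rank factorization U * Vt
# by computing its full SVD and dropping singular values below a relative tolerance.
function tsvd_compress(M::AbstractMatrix, rtol::Real)
    F = svd(M)                          # full SVD of the dense block (the expensive step)
    k = count(>(rtol * F.S[1]), F.S)    # keep singular values above rtol * σ₁
    U  = F.U[:, 1:k] .* F.S[1:k]'       # absorb singular values into the left factor
    Vt = F.Vt[1:k, :]
    return U, Vt                        # M ≈ U * Vt with rank k
end

The cost is dominated by assembling the dense block and computing its full SVD, which is why ACA-type methods that only sample rows and columns are usually much cheaper.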

@RmMetsch
Author

Hey @maltezfaria, no worries, we are all busy nowadays :)

I ran ] update HMatrices and the code above, and it still produces the same error :/

I'm using it to compress a mutual inductance matrix of an electrical cable. The geometry is rather fixed, so I only have to compute it once. A colleague of mine insisted I use this method for better accuracy of the compression.

Cheers,
Roel

@maltezfaria
Member

You need to be on the main branch since I have not (yet) tagged a new version with the fix. ] update will only update to a stable version.
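In case it is useful, tracking the development branch from the Pkg REPL would look something like this (assuming the default branch is named main):

pkg> add HMatrices#main
pkg> update

After that, ] status should show HMatrices pointing at the main branch rather than a registered release.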

maltezfaria reopened this Sep 26, 2024
@maltezfaria
Member

New tagged version is up. Running ] update should work now. Let me know if that is not the case.

Regarding your application: why not use the adaptive cross approximation (ACA) compression method in HMatrices (the default)? It is usually much faster than, e.g., an SVD-based approach, at the cost of being less robust...

@RmMetsch
Author

RmMetsch commented Oct 3, 2024

Thanks for updating. I did some testing, and while ACA compression is indeed much faster, the TSVD compression has superior accuracy. As I said, I only have to compute it once, so it's worth the wait.

@maltezfaria
Member

Fair enough, but keep in mind that you can pass a tolerance parameter to all of these methods, so you can probably get good results from ACA by passing a stricter tolerance. But if TSVD suffices for you, great 👍. I just never optimized it since I use it mostly for validation purposes.
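For instance (a sketch; the rtol keyword name follows the compressor constructors in the HMatrices docs, so double-check it against your installed version), a stricter ACA tolerance would be passed as:

comp = PartialACA(; rtol = 1e-10)  # tighter relative tolerance than the default
H = assemble_hmatrix(K, Xclt, Yclt; adm, comp, threads = false, distributed = false)

Tightening the tolerance increases the ranks of the compressed blocks, so it trades memory and assembly time for accuracy.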

@maltezfaria
Member

One last thing in defense of TSVD, I guess: it gives the optimal low-rank approximation (in the Frobenius or spectral norm) and is the only method in HMatrices with that guarantee.
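That optimality is the classical Eckart–Young result: truncating the SVD at rank k minimizes the approximation error over all rank-k matrices, and in the spectral norm that error is exactly the (k+1)-th singular value. A quick numerical check of the general fact (plain LinearAlgebra, independent of the HMatrices API):

using LinearAlgebra

A  = randn(50, 30)
F  = svd(A)
k  = 10
Ak = F.U[:, 1:k] * Diagonal(F.S[1:k]) * F.Vt[1:k, :]  # best rank-k approximation of A
opnorm(A - Ak) ≈ F.S[k+1]                             # spectral-norm error equals σ_{k+1}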
