Home
MTJ supplies much of the functionality required for matrix computations, roughly including:
- Data structures for dense and structured sparse matrices in the following formats:
- Dense, column major.
- Banded matrices, which store only a few diagonals.
- Packed matrices, storing only half the matrix (for triangular or symmetric matrices).
- Tridiagonal and symmetric tridiagonal matrices.
- Transparent support for symmetric and triangular storage.
- Data structures for unstructured sparse matrices in these formats:
- Compressed row or column storage (CRS/CCS).
- Flexible CRS/CCS, using growable sparse vectors.
- Compressed diagonal storage (CDS).
- The dense and structured sparse matrices are built on top of BLAS and LAPACK, and include the following intrinsic operations:
- Matrix/vector multiplication.
- Matrix/matrix multiplication.
- Rank updates by matrices or vectors.
- Direct matrix solvers.
- The unstructured sparse matrices support the same operations as the structured ones, except that they do not have direct solvers. However, their matrix/vector multiplication methods are optimised for use in iterative solvers.
- Matrix decompositions of dense and structured sparse matrices:
- LU and Cholesky.
- Eigenvalue decompositions for unsymmetrical dense matrices.
- Singular value decompositions for unsymmetrical dense matrices.
- Eigenvalue decompositions for symmetrical matrices (tridiagonal, banded, packed and dense).
- Orthogonal matrix decompositions for dense matrices (QR, RQ, LQ, and QL).
- Iterative solvers for unstructured sparse matrices from the Templates project:
- BiConjugate gradients.
- BiConjugate gradients stabilized.
- Conjugate gradients.
- Conjugate gradients squared.
- Chebyshev iteration.
- Generalized minimal residual (GMRES).
- Iterative refinement (Richardson's method).
- Quasi-minimal residual.
- A selection of algebraic preconditioners:
- Diagonal preconditioning.
- Symmetrical successive overrelaxation.
- Incomplete Cholesky.
- Incomplete LU.
- Incomplete LU with fill-in using thresholding.
- Algebraic multigrid by smoothed aggregation.
Using the three methods, set, get and add, the entries of a matrix can easily be set, modified, and retrieved. This works for all matrix formats independently of the underlying storage. All indices are 0-offset (typical for Java and C), and not 1-offset (as in Fortran).
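For example (a minimal sketch; the dimensions and values are arbitrary):

```java
import no.uib.cipr.matrix.DenseMatrix;

// Create a 3x3 dense matrix, initially all zeros
DenseMatrix A = new DenseMatrix(3, 3);

A.set(0, 0, 2.0);        // A(0,0) = 2
A.add(0, 0, 1.0);        // A(0,0) += 1, now 3
double v = A.get(0, 0);  // retrieves 3.0
```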
The storage layout of all the dense and structured sparse matrices follows that of LAPACK; see the user manual. The following are some general usage notes.
If the matrix is created as implicitly symmetrical, e.g. LowerSymmDenseMatrix, then you only have to set entries in the lower triangular part. Entries set in the upper triangular part are ignored, since they are already known by symmetry. Conversely, for an upper symmetrical matrix the lower triangular part is the symmetrical image of the upper triangular part which the user sets.
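A minimal sketch of this behaviour (the values are arbitrary):

```java
import no.uib.cipr.matrix.LowerSymmDenseMatrix;

// A 4x4 symmetric matrix, storing only the lower triangle
LowerSymmDenseMatrix S = new LowerSymmDenseMatrix(4);

S.set(2, 1, 5.0);        // set an entry in the lower triangle
double v = S.get(1, 2);  // returns 5.0 by symmetry
```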
The symmetrical, positive definite matrices cannot enforce positive definiteness beyond just symmetry. Thus it is up to you to ensure that the matrix is indeed positive definite. If it is, the direct solver can take advantage of this and you will get an answer quicker than by using a standard symmetrical or general matrix.
For the band matrices, the number of sub- and super-diagonal bands must be specified when they are constructed, and only entries within the bands can be modified. The rest are implicitly zero.
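For instance, a sketch of a small band matrix with one sub-diagonal and two super-diagonals:

```java
import no.uib.cipr.matrix.BandMatrix;

// A 5x5 band matrix: kl = 1 sub-diagonal, ku = 2 super-diagonals
BandMatrix B = new BandMatrix(5, 1, 2);

B.set(1, 0, 4.0);  // on the sub-diagonal: within the band
B.set(0, 2, 7.0);  // on the second super-diagonal: within the band
// B.set(4, 0, 1.0) would fall outside the band and is not allowed
```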
The format of the unstructured sparse matrices is described in the Templates book. MTJ implements CRS, CCS, and CDS, along with some flexible variants.
When constructing a compressed row or column matrix from scratch, it is necessary to specify the exact sparsity pattern: the column indices on each row (CRS), or the row indices on each column (CCS). If the pattern is not known in advance, you may instead use the flexible variants, which perform internal memory management. Unless you have a very large matrix, the performance difference will not be large.
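For example (a sketch; the sparsity pattern is arbitrary):

```java
import no.uib.cipr.matrix.sparse.CompRowMatrix;
import no.uib.cipr.matrix.sparse.FlexCompRowMatrix;

// Column indices of the non-zeros on each row (CRS)
int[][] nz = {
    { 0, 2 },     // row 0 has entries in columns 0 and 2
    { 1 },        // row 1 has an entry in column 1
    { 0, 1, 2 }   // row 2 has entries in all three columns
};
CompRowMatrix A = new CompRowMatrix(3, 3, nz);

// Flexible variant: no pattern needed, storage grows on demand
FlexCompRowMatrix B = new FlexCompRowMatrix(3, 3);
B.set(2, 0, 1.5);
```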
The compressed diagonal storage allocates new diagonals on demand, and pre-allocation is optional. However, the CDS format is only suitable for topologically structured meshes and similar applications.
Iterators allow you to traverse the entries of a matrix by using a for-each loop:
```java
for (MatrixEntry e : A) {
    // inspect or modify the entry e here
}
```
For matrices with sparsity, traversing the matrix like this is advantageous since entries outside of the matrix structure are omitted. It is thus a convenient way to write generic algorithms that act on matrices:
```java
double sum = 0;
for (MatrixEntry e : A)
    sum += e.get();
```
The matrix entry has methods to retrieve the current row and column indices, as the sketch below shows. Finally, for completeness, vectors also have iterators.
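For example, a sketch that sums the diagonal of A (its trace) using the entry indices:

```java
// Sum the diagonal entries, skipping everything off the diagonal
double trace = 0;
for (MatrixEntry e : A)
    if (e.row() == e.column())
        trace += e.get();
```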
The most common use of matrix factorizations is to solve linear systems. Therefore, all the dense and structured sparse matrices include a solve method which computes the most suitable factorization and applies its inverse to the given right-hand side vector or matrix.
For somewhat better performance, you can use an LU or Cholesky decomposition directly. These allow you to perform the factorization in place, which reduces memory copying.
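A sketch of both approaches, in the same fragment style as the solver example further below (A and b are assumed to be filled in elsewhere; the DenseMatrix(Vector) constructor is used to wrap b as a one-column matrix):

```java
DenseMatrix A;  // square system matrix, filled in elsewhere
DenseVector b;  // right-hand side, filled in elsewhere
DenseVector x = new DenseVector(A.numRows());

// One-shot solve: a suitable factorization is chosen internally
A.solve(b, x);

// Explicit LU factorization, reusable for several right-hand sides.
// Factorization works in place, so pass a copy if A is needed later
DenseLU lu = DenseLU.factorize(new DenseMatrix(A));
DenseMatrix X = lu.solve(new DenseMatrix(b)); // overwrites its argument
```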
Direct solvers and factorizations are not available for the unstructured sparse matrices. For those, you can use the iterative solvers.
Large-scale problems can be difficult to store efficiently in dense or structured sparse matrices, hence it becomes necessary to use an unstructured sparse format. Memory-efficient solvers for such matrices are based on the iterative Krylov subspace methods (conjugate gradients, residual minimization, etc). MTJ includes the most common iterative solvers along with some common preconditioners. Here is a code fragment showing how to use such a solver in conjunction with a preconditioner:
```java
CompRowMatrix A;
DenseVector x, b;

// Allocate storage for the conjugate gradients solver
IterativeSolver solver = new CG(x);

// Create an incomplete Cholesky preconditioner
Preconditioner M = new ICC(A.copy());

// Set up the preconditioner, and attach it to the solver
M.setMatrix(A);
solver.setPreconditioner(M);

// Add a convergence monitor
solver.getIterationMonitor().setIterationReporter(
    new OutputIterationReporter());

// Start the solver, and check for problems
try {
    solver.solve(A, b, x);
} catch (IterativeSolverNotConvergedException e) {
    System.err.println("Iterative solver failed to converge");
}
```
For testing purposes, it is useful to export and import matrices. MTJ supports matrices in coordinate format, and every matrix has a toString method which outputs the matrix in the sparse coordinate format. Only the non-zero entries of the matrix are printed, and indices are 1-offset for compatibility.

Some matrix formats can be automatically constructed from a MatrixVectorReader. These are: DenseMatrix, CompRowMatrix, CompColMatrix, and CompDiagMatrix.
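For example, a sketch that reads a sparse matrix from a file in coordinate format and prints it back out (the file name is hypothetical, and IOException handling is omitted for brevity):

```java
import java.io.FileReader;
import no.uib.cipr.matrix.io.MatrixVectorReader;
import no.uib.cipr.matrix.sparse.CompRowMatrix;

// Read a sparse matrix in coordinate format from disk
MatrixVectorReader reader =
    new MatrixVectorReader(new FileReader("matrix.mtx"));
CompRowMatrix A = new CompRowMatrix(reader);

// Export: toString prints the non-zero entries in coordinate format
System.out.println(A);
```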
MTJ can use a native BLAS instead of JLAPACK. This gives much higher performance on dense matrix operations, sometimes approaching the theoretical peak flop rate. To access the native BLAS, it is necessary to compile the Java Native Interface (JNI) bindings of netlib-java. Note that this should not be done unless profiling shows that the majority of the time is spent performing matrix algebra.