Solvers

FiPy requires one of the PETSc, pyamgx, Pysparse, SciPy, or Trilinos solver suites to be installed in order to solve linear systems. In our experience, FiPy runs most efficiently in serial when Pysparse is the linear solver. PETSc and Trilinos are the most complete of the solvers due to their numerous preconditioning and solver capabilities, and they also allow FiPy to run in parallel. Although less efficient than Pysparse and less capable than PETSc or Trilinos, SciPy is a very popular package, widely available and easy to install. For this reason, SciPy may be the best linear solver choice when first installing and testing FiPy. pyamgx offers the possibility of solving sparse linear systems on the GPU; be aware that both the hardware and software configuration are non-trivial.

FiPy chooses the solver suite based on system availability or based on user-supplied Command-line Flags and Environment Variables. For example, passing --no-pysparse:

$ python -c "from fipy import *; print DefaultSolver" --no-pysparse
<class 'fipy.solvers.trilinos.linearGMRESSolver.LinearGMRESSolver'>

uses a Trilinos solver. Setting FIPY_SOLVERS to scipy:

$ export FIPY_SOLVERS=scipy
$ python -c "from fipy import *; print(DefaultSolver)"
<class 'fipy.solvers.scipy.linearLUSolver.LinearLUSolver'>

uses a SciPy solver. Suite-specific solver classes can also be imported and instantiated, overriding any other directives. For example:

$ python -c "from fipy.solvers.scipy import DefaultSolver; \
>   print DefaultSolver" --no-pysparse
<class 'fipy.solvers.scipy.linearLUSolver.LinearLUSolver'>

uses a SciPy solver regardless of the command line argument. In the absence of Command-line Flags and Environment Variables, FiPy’s order of precedence when choosing the solver suite for generic solvers is PySparse followed by PETSc, Trilinos, SciPy, PyAMG, and pyamgx.
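A suite-specific solver can also be instantiated in a script and passed directly to solve(). A minimal sketch (the mesh, variable, and equation are purely illustrative):

from fipy import CellVariable, Grid1D, DiffusionTerm
from fipy.solvers.scipy import LinearLUSolver

mesh = Grid1D(nx=10)
phi = CellVariable(mesh=mesh, value=0.)
phi.constrain(1., where=mesh.facesLeft)

# An explicitly instantiated suite-specific solver overrides
# FIPY_SOLVERS and any command-line flags for this solve.
DiffusionTerm(coeff=1.).solve(var=phi,
                              solver=LinearLUSolver(tolerance=1e-10))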

PETSc

https://www.mcs.anl.gov/petsc

PETSc (the Portable, Extensible Toolkit for Scientific Computation) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the MPI standard for all message-passing communication (see Solving in Parallel for more details).

Attention

PETSc requires the petsc4py and mpi4py interfaces.

Note

While, for consistency with other solver suites, FiPy does implement some preconditioner objects for PETSc, you can also simply pass one of the PCType strings in the precon= argument when declaring the solver.
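For example, a minimal sketch, assuming the PETSc suite is active (FIPY_SOLVERS=petsc) and using "gamg", one of PETSc's PCType strings:

from fipy import LinearGMRESSolver

# With PETSc active, precon= accepts either a FiPy preconditioner
# object or a bare PETSc PCType string such as "jacobi" or "gamg".
solver = LinearGMRESSolver(precon="gamg", tolerance=1e-10, iterations=1000)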

Pysparse

http://pysparse.sourceforge.net

Pysparse is a fast serial sparse matrix library for Python. It provides several sparse matrix storage formats and conversion methods. It also implements a number of iterative solvers, preconditioners, and interfaces to efficient factorization packages. The only requirement to install and use Pysparse is NumPy.

Warning

Pysparse is archaic and limited to Running under Python 2.

Warning

FiPy requires version 1.0 or higher of Pysparse.

SciPy

http://www.scipy.org/

The scipy.sparse module provides a basic set of serial Krylov solvers, but no preconditioners.

PyAMG

http://code.google.com/p/pyamg/

The PyAMG package provides algebraic multigrid preconditioners that can be used in conjunction with the SciPy solvers.

pyamgx

https://pyamgx.readthedocs.io/

The pyamgx package is a Python interface to the NVIDIA AMGX library. pyamgx can be used to construct complex solvers and preconditioners to solve sparse linear systems on the GPU.

Trilinos

http://trilinos.sandia.gov

Trilinos provides a more complete set of solvers and preconditioners than either Pysparse or SciPy. Trilinos preconditioning allows for iterative solutions to some difficult problems that Pysparse and SciPy cannot solve, and it enables parallel execution of FiPy (see Solving in Parallel for more details).

Attention

Be sure to build or install the PyTrilinos interface to Trilinos.

Attention

FiPy runs more efficiently when Pysparse is installed alongside Trilinos.

Attention

Trilinos is a large software suite with its own set of prerequisites, and can be difficult to set up. It is not necessary for most problems, and is not recommended for a basic install of FiPy.

Attention

Trilinos must be compiled with MPI support for Solving in Parallel.

Tip

Trilinos parallel efficiency is greatly improved by also installing Pysparse. If Pysparse is not installed, be sure to use the --no-pysparse flag.

Note

Trilinos solvers frequently give intermediate output that FiPy cannot suppress. The most commonly encountered messages are

Gen_Prolongator warning : Max eigen <= 0.0

which is not significant to FiPy.

Aztec status AZ_loss: loss of precision

which indicates that there was some difficulty in solving the problem to the requested tolerance due to precision limitations, but usually does not prevent the solver from finding an adequate solution.

Aztec status AZ_ill_cond: GMRES hessenberg ill-conditioned

which indicates that GMRES is having trouble with the problem; if GMRES fails, trying a different solver or preconditioner may give more accurate results.

Aztec status AZ_breakdown: numerical breakdown

which usually indicates a serious problem solving the equation that forced the solver to stop before reaching an adequate solution. A different solver, a different preconditioner, or a less restrictive tolerance may help.

Convergence

Different solver suites take different approaches to testing convergence. We endeavor to harmonize this behavior by allowing the strings in the “criterion” column to be passed as an argument when instantiating a Solver. Convergence is detected if residual < tolerance * scale.
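For example, a sketch of requesting convergence relative to the initial residual (criterion names are those listed in the table below):

from fipy import LinearPCGSolver

# Converge when the 2-norm of the residual falls below tolerance
# times the norm of the initial residual, whichever suite is active.
solver = LinearPCGSolver(tolerance=1e-6, criterion="initial")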

Residual Criteria

| criterion | residual | scale | PETSc [1] | pyamgx [2] | PySparse | SciPy [3] | Trilinos [4] |
|-----------|----------|-------|-----------|------------|----------|-----------|--------------|
| unscaled | \(\Vert\mathsf{L}\vec{x} - \vec{b}\Vert_2\) | \(1\) | [5] | ABSOLUTE | [5] | [5] | AZ_noscaled |
| RHS | \(\Vert\mathsf{L}\vec{x} - \vec{b}\Vert_2\) | \(\Vert\vec{b}\Vert_2\) | KSP_NORM_UNPRECONDITIONED | [5] | cgs, pcg, qmres, or [5] | default | AZ_rhs |
| matrix | \(\Vert\mathsf{L}\vec{x} - \vec{b}\Vert_2\) | \(\Vert\mathsf{L}\Vert_\infty\) | [5] | [5] | [5] | [5] | AZ_Anorm |
| initial | \(\Vert\mathsf{L}\vec{x} - \vec{b}\Vert_2\) | \(\Vert\mathsf{L}\vec{x} - \vec{b}\Vert_2^{(0)}\) | [5] | RELATIVE_INI_CORE | bicgstab, gmres, minres, or [5] | [5] | AZ_r0 |
| solution | \(\Vert\mathsf{L}\vec{x} - \vec{b}\Vert_\infty\) | \(\Vert\mathsf{L}\Vert_\infty \Vert\vec{x}\Vert_1 + \Vert\vec{b}\Vert_\infty\) | | | | | AZ_sol |
| preconditioned | \(\Vert\mathsf{P}^{-1}(\mathsf{L}\vec{x} - \vec{b})\Vert_2\) | \(\Vert\vec{b}\Vert_2\) | KSP_NORM_PRECONDITIONED | | | | |
| natural | \(\sqrt{(\mathsf{L}\vec{x} - \vec{b})^T\mathsf{P}^{-1}(\mathsf{L}\vec{x} - \vec{b})}\) | \(\Vert\vec{b}\Vert_2\) | KSP_NORM_NATURAL | | | | |
| legacy | | | KSP_NORM_DEFAULT (RHS or preconditioned) | initial | RHS or initial | RHS | initial |
| default | | | RHS | RHS | RHS | RHS | RHS |

Note

PyAMG is a set of preconditioners applied on top of SciPy, so it is not explicitly included in these tables.

default

The setting criterion="default" applies the same scaling (RHS) to all solvers. This behavior is new in version 3.4.5+307.g2c7ac213b; prior to that, the default behavior was the same as criterion="legacy".

legacy

The setting criterion="legacy" restores the behavior of FiPy prior to version 3.4.5+307.g2c7ac213b and is equivalent to what the particular suite and solver does if not specifically configured. The legacy row of the table is a best effort at documenting what will happen.

Note

absolute_tolerance

PETSc and SciPy Krylov solvers accept an additional absolute_tolerance parameter, such that convergence is detected if residual < max(tolerance * scale, absolute_tolerance).
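A minimal sketch, assuming the SciPy suite is active:

from fipy import LinearGMRESSolver

# Converge when residual < max(tolerance * scale, absolute_tolerance);
# the absolute floor guards against an unreachably small scaled target.
solver = LinearGMRESSolver(tolerance=1e-6, absolute_tolerance=1e-12)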

divergence_tolerance

PETSc Krylov solvers accept a third divergence_tolerance parameter, such that a divergence is detected if residual > divergence_tolerance * scale. Because of the way the convergence test is coded, if the initial residual is much larger than the norm of the right-hand-side vector, PETSc will abort with KSP_DIVERGED_DTOL without ever trying to solve. If this occurs, either divergence_tolerance should be increased or another convergence criterion should be used.

Note

divergence_tolerance never caused a problem in previous versions of FiPy because, by default, PETSc zeros out the initial guess before trying to solve and then never tests against divergence_tolerance. That produced behavior (number of iterations and ultimate residual) very different from the other solver suites, so FiPy now directs PETSc to use the initial guess.

Reporting

Different solver suites also report different levels of detail about why they succeed or fail. This information is captured as a Convergence or Divergence property of the Solver after calling solve() or sweep().
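For example, a sketch, assuming the status is exposed as the solver's convergence attribute (the mesh and equation are purely illustrative):

from fipy import CellVariable, Grid1D, DiffusionTerm, LinearPCGSolver

mesh = Grid1D(nx=10)
phi = CellVariable(mesh=mesh, value=0.)
phi.constrain(1., where=mesh.facesLeft)

solver = LinearPCGSolver(tolerance=1e-10, iterations=100)
DiffusionTerm(coeff=1.).solve(var=phi, solver=solver)

# After solve() or sweep(), the solver records why it stopped.
print(solver.convergence)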

Convergence Status Codes

| Status | Meaning | PETSc | pyamgx | PySparse | SciPy | Trilinos |
|--------|---------|-------|--------|----------|-------|----------|
| Convergence | Convergence criteria met. | | AMGX_SOLVE_SUCCESS | | 0 | AZ_normal |
| IterationConvergence | Requested iterations complete (and no residual calculated). | KSP_CONVERGED_ITS | | | | |
| AbsoluteToleranceConvergence | Converged, residual is as small as seems reasonable on this machine. | KSP_CONVERGED_ATOL | | 2 | | |
| RHSZeroConvergence | Converged, \(\mathbf{b} = 0\), so the exact solution is \(\mathbf{x} = 0\). | | | 1 | | |
| RelativeToleranceConvergence | Converged, relative error appears to be less than tolerance. | KSP_CONVERGED_RTOL | | 0 | | |
| HappyBreakdownConvergence | "Exact" solution found and more iterations will just make things worse. | KSP_CONVERGED_HAPPY_BREAKDOWN | | | | |
| LossOfAccuracyConvergence | The iterative solver has terminated due to a lack of accuracy in the recursive residual (caused by rounding errors). | | | | | AZ_loss |
| IteratingConvergence | Solve still in progress. | KSP_CONVERGED_ITERATING | | | | |

Divergence Status Codes

| Status | Meaning | PETSc | pyamgx | PySparse | SciPy | Trilinos |
|--------|---------|-------|--------|----------|-------|----------|
| BreakdownDivergence | Illegal input or the iterative solver has broken down. | KSP_DIVERGED_BREAKDOWN | AMGX_SOLVE_FAILED | | <0 | AZ_breakdown |
| IterationDivergence | Maximum number of iterations was reached. | KSP_DIVERGED_ITS | AMGX_SOLVE_DIVERGED | -1 | >0 | AZ_maxits |
| PreconditioningDivergence | The system involving the preconditioner was ill-conditioned. | KSP_DIVERGED_PC_FAILED | | -2 | | |
| IllConditionedPreconditionerDivergence | An inner product of the form \(\mathbf{x}^T \mathsf{P}^{-1} \mathbf{x}\) was not positive, so the preconditioning matrix \(\mathsf{P}\) does not appear to be positive definite. | KSP_DIVERGED_INDEFINITE_PC | | -3 | | |
| IllConditionedDivergence | The matrix \(\mathsf{L}\) appears to be ill-conditioned. | KSP_DIVERGED_INDEFINITE_MAT | | -4 | | AZ_ill_cond |
| StagnatedDivergence | The method stagnated. | | | -5 | | |
| OutOfRangeDivergence | A scalar quantity became too small or too large to continue computing. | KSP_DIVERGED_NANORINF | | -6 | | |
| NullDivergence | Breakdown when solving the Hessenberg system within GMRES. | KSP_DIVERGED_NULL | | | | |
| ToleranceDivergence | The residual norm increased by a factor of divtol. | KSP_DIVERGED_DTOL | | | | |
