Solvers¶
FiPy requires one of the PETSc, pyamgx, Pysparse, SciPy, or Trilinos solver suites to be installed in order to solve linear systems. In our experience, FiPy runs most efficiently in serial when Pysparse is the linear solver. PETSc and Trilinos are the most complete of the suites, offering numerous preconditioners and solvers, and they also allow FiPy to run in parallel. Although less efficient than Pysparse and less capable than PETSc or Trilinos, SciPy is a very popular package that is widely available and easy to install. For this reason, SciPy may be the best linear solver choice when first installing and testing FiPy. pyamgx offers the possibility of solving sparse linear systems on the GPU; be aware that both hardware and software configuration are non-trivial.
FiPy chooses the solver suite based on system availability or based on the user-supplied Command-line Flags and Environment Variables. For example, passing --no-pysparse:

$ python -c "from fipy import *; print(DefaultSolver)" --no-pysparse
<class 'fipy.solvers.trilinos.linearGMRESSolver.LinearGMRESSolver'>
uses a Trilinos solver. Setting FIPY_SOLVERS to scipy:

$ FIPY_SOLVERS=scipy
$ python -c "from fipy import *; print(DefaultSolver)"
<class 'fipy.solvers.scipy.linearLUSolver.LinearLUSolver'>
uses a SciPy solver. Suite-specific solver classes can also be imported and instantiated, overriding any other directives. For example:

$ python -c "from fipy.solvers.scipy import DefaultSolver; \
> print(DefaultSolver)" --no-pysparse
<class 'fipy.solvers.scipy.linearLUSolver.LinearLUSolver'>

uses a SciPy solver regardless of the command-line argument. In the absence of Command-line Flags and Environment Variables, FiPy's order of precedence when choosing the solver suite for generic solvers is Pysparse followed by PETSc, Trilinos, SciPy, PyAMG, and pyamgx.
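The same override can be expressed directly in a script. The following is a minimal sketch, assuming SciPy is installed; the mesh, boundary conditions, and equation are illustrative placeholders, not part of the selection mechanism itself.

>>> from fipy import CellVariable, Grid1D, DiffusionTerm
>>> from fipy.solvers.scipy import LinearLUSolver
>>> mesh = Grid1D(nx=10)
>>> phi = CellVariable(mesh=mesh, value=0.)
>>> phi.constrain(0., mesh.facesLeft)
>>> phi.constrain(1., mesh.facesRight)
>>> # instantiating a suite-specific solver overrides any command-line
>>> # flags or environment variables for this solve
>>> DiffusionTerm(coeff=1.).solve(var=phi, solver=LinearLUSolver())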
PETSc¶
PETSc (the Portable, Extensible Toolkit for Scientific Computation) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the MPI standard for all message-passing communication (see Solving in Parallel for more details).
Note
While, for consistency with the other solver suites, FiPy does implement some preconditioner objects for PETSc, you can also simply pass one of the PCType strings as the precon= argument when declaring the solver.
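For example, a minimal sketch assuming the PETSc suite is installed and active; "jacobi" is one of PETSc's PCType strings and any other valid PCType string could be substituted:

>>> from fipy.solvers.petsc import LinearGMRESSolver
>>> # "jacobi" is a PETSc PCType string; a FiPy preconditioner object
>>> # could be passed instead
>>> solver = LinearGMRESSolver(tolerance=1e-10, iterations=1000, precon="jacobi")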
Pysparse¶
http://pysparse.sourceforge.net
Pysparse is a fast serial sparse matrix library for Python. It provides several sparse matrix storage formats and conversion methods. It also implements a number of iterative solvers, preconditioners, and interfaces to efficient factorization packages. The only requirement to install and use Pysparse is NumPy.
Warning
Pysparse is archaic and limited to Running under Python 2.
SciPy¶
The scipy.sparse module provides a basic set of serial Krylov solvers, but no preconditioners.
PyAMG¶
http://code.google.com/p/pyamg/
The PyAMG package provides algebraic multigrid preconditioners that can be used in conjunction with the SciPy solvers.
pyamgx¶
https://pyamgx.readthedocs.io/
The pyamgx package is a Python interface to the NVIDIA AMGX library. pyamgx can be used to construct complex solvers and preconditioners to solve sparse linear systems on the GPU.
Trilinos¶
Trilinos provides a more complete set of solvers and preconditioners than either Pysparse or SciPy. Trilinos preconditioning allows for iterative solutions to some difficult problems that Pysparse and SciPy cannot solve, and it enables parallel execution of FiPy (see Solving in Parallel for more details).
Attention
Be sure to build or install the PyTrilinos interface to Trilinos.
Attention
Trilinos is a large software suite with its own set of prerequisites, and can be difficult to set up. It is not necessary for most problems, and is not recommended for a basic install of FiPy.
Attention
Trilinos must be compiled with MPI support for Solving in Parallel.
Tip
Trilinos parallel efficiency is greatly improved by also installing Pysparse. If Pysparse is not installed, be sure to use the --no-pysparse flag.
Note
Trilinos solvers frequently give intermediate output that FiPy cannot suppress. The most commonly encountered messages are
Gen_Prolongator warning : Max eigen <= 0.0
which is not significant to FiPy.
Aztec status AZ_loss: loss of precision
which indicates that there was some difficulty in solving the problem to the requested tolerance due to precision limitations, but usually does not prevent the solver from finding an adequate solution.
Aztec status AZ_ill_cond: GMRES hessenberg ill-conditioned
which indicates that GMRES is having trouble with the problem; if GMRES fails, trying a different solver or preconditioner may give more accurate results.
Aztec status AZ_breakdown: numerical breakdown
which usually indicates serious problems solving the equation which forced the solver to stop before reaching an adequate solution. Different solvers, different preconditioners, or a less restrictive tolerance may help.
Convergence¶
Different solver suites take different approaches to testing convergence. We endeavor to harmonize this behavior by allowing the strings in the “criterion” column to be passed as an argument when instantiating a Solver. Convergence is detected if residual < tolerance * scale.
criterion | residual | scale
---|---|---
unscaled | \(\|\mathsf{L}\vec{x} - \vec{b}\|_2\) | \(1\)
RHS | \(\|\mathsf{L}\vec{x} - \vec{b}\|_2\) | \(\|\vec{b}\|_2\)
matrix | \(\|\mathsf{L}\vec{x} - \vec{b}\|_2\) | \(\|\mathsf{L}\|_\infty\)
initial | \(\|\mathsf{L}\vec{x} - \vec{b}\|_2\) | \(\|\mathsf{L}\vec{x} - \vec{b}\|_2^{(0)}\)
solution | \(\|\mathsf{L}\vec{x} - \vec{b}\|_\infty\) | \(\|\mathsf{L}\|_\infty * \|\vec{x}\|_1 + \|\vec{b}\|_\infty\)
preconditioned | \(\left\|\mathsf{P}^{-1}(\mathsf{L}\vec{x} - \vec{b})\right\|_2\) | \(\left\|\vec{b}\right\|_2\)
natural | \(\sqrt{(\mathsf{L}\vec{x} - \vec{b})\mathsf{P}^{-1}(\mathsf{L}\vec{x} - \vec{b})}\) | \(\left\|\vec{b}\right\|_2\)
default | same as RHS | same as RHS
legacy | suite- and solver-dependent | suite- and solver-dependent
Note
PyAMG is a set of preconditioners applied on top of SciPy, so is not explicitly included in these tables.
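As a sketch of how a criterion might be requested, assuming the active suite provides LinearPCGSolver and that the "RHS" string from the table above is desired:

>>> from fipy import LinearPCGSolver
>>> # convergence will be declared when ||L x - b||_2 < tolerance * ||b||_2
>>> solver = LinearPCGSolver(tolerance=1e-10, iterations=1000, criterion="RHS")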
default¶
The setting criterion="default" applies the same scaling (RHS) to all solvers. This behavior is new in version 3.4.5+308.gbf598029e; prior to that, the default behavior was the same as criterion="legacy".
legacy¶
The setting criterion="legacy" restores the behavior of FiPy prior to version 3.4.5+308.gbf598029e and is equivalent to what the particular suite and solver does if not specifically configured. The legacy row of the table is a best effort at documenting what will happen.
Note
- All LU solvers use "initial" scaling.
- Pysparse has two different groups of solvers, with different scaling.
- PETSc accepts KSP_NORM_DEFAULT in order to “use the default for the current KSPType”. Discerning the actual behavior would require burning the code in a bowl of chicken entrails. (It is reasonable to assume KSP_NORM_PRECONDITIONED for left-preconditioned solvers and KSP_NORM_UNPRECONDITIONED otherwise; even the PETSc documentation says that KSP_NORM_NATURAL is “weird”.)
absolute_tolerance¶
PETSc and SciPy Krylov solvers accept an additional absolute_tolerance parameter, such that convergence is detected if residual < max(tolerance * scale, absolute_tolerance).
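A sketch of combining the two tolerances, assuming a PETSc or SciPy Krylov solver such as LinearGMRESSolver is in use (other suites ignore or reject this parameter):

>>> from fipy import LinearGMRESSolver
>>> # iteration stops once residual < max(tolerance * scale, absolute_tolerance)
>>> solver = LinearGMRESSolver(tolerance=1e-10, absolute_tolerance=1e-15)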
divergence_tolerance¶
PETSc Krylov solvers accept a third divergence_tolerance parameter, such that a divergence is detected if residual > divergence_tolerance * scale. Because of the way the convergence test is coded, if the initial residual is much larger than the norm of the right-hand-side vector, PETSc will abort with KSP_DIVERGED_DTOL without ever trying to solve. If this occurs, either divergence_tolerance should be increased or another convergence criterion should be used.
Note
See examples.diffusion.mesh1D, examples.diffusion.steadyState.mesh1D.inputPeriodic, examples.elphf.diffusion.mesh1D, examples.elphf.phaseDiffusion, examples.phase.binary, examples.phase.quaternary, and examples.reactiveWetting.liquidVapor1D for several examples where criterion="initial" is used to address this situation.
Note
divergence_tolerance never caused a problem in previous versions of FiPy because the default behavior of PETSc is to zero out the initial guess before trying to solve and then never do a test against divergence_tolerance. This resulted in behavior (number of iterations and ultimate residual) that was very different from the other solver suites, and so FiPy now directs PETSc to use the initial guess.
Reporting¶
Different solver suites also report different levels of detail about why they succeed or fail. This information is captured as a Convergence or Divergence property of the Solver after calling solve() or sweep().
The reported reasons include:
- Convergence criteria met.
- Requested iterations complete (and no residual calculated).
- Converged, residual is as small as seems reasonable on this machine.
- Converged, \(\mathbf{b} = 0\), so the exact solution is \(\mathbf{x} = 0\).
- Converged, relative error appears to be less than tolerance.
- “Exact” solution found and more iterations will just make things worse.
- The iterative solver has terminated due to a lack of accuracy in the recursive residual (caused by rounding errors).
- Solve still in progress.
- Illegal input or the iterative solver has broken down.
- Maximum number of iterations was reached.
- The system involving the preconditioner was ill-conditioned.
- An inner product of the form \(\mathbf{x}^T \mathsf{P}^{-1} \mathbf{x}\) was not positive, so the preconditioning matrix \(\mathsf{P}\) does not appear to be positive definite.
- The matrix \(\mathsf{L}\) appears to be ill-conditioned.
- The method stagnated.
- A scalar quantity became too small or too large to continue computing.
- Breakdown when solving the Hessenberg system within GMRES.
- The residual norm increased by a factor of divergence_tolerance.
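A minimal sketch of inspecting the reported status after a sweep; the solver.convergence attribute name is an assumption based on the description above, and the transient diffusion problem is only a placeholder.

>>> from fipy import CellVariable, Grid1D, TransientTerm, DiffusionTerm, LinearPCGSolver
>>> mesh = Grid1D(nx=10)
>>> phi = CellVariable(mesh=mesh, value=0.)
>>> phi.constrain(1., mesh.facesRight)
>>> solver = LinearPCGSolver(tolerance=1e-10, iterations=1000)
>>> eq = TransientTerm() == DiffusionTerm(coeff=1.)
>>> res = eq.sweep(var=phi, dt=0.1, solver=solver)
>>> print(solver.convergence)  # Convergence or Divergence object (assumed attribute)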