Backward differentiation formula (BDF) solver for stiff ODEs.
```python
tfp.math.ode.BDF(
    rtol=0.001, atol=1e-06, first_step_size=None, safety_factor=0.9,
    min_step_size_factor=0.1, max_step_size_factor=10.0, max_num_steps=None,
    max_order=bdf_util.MAX_ORDER, max_num_newton_iters=4,
    newton_tol_factor=0.1, newton_step_size_factor=0.5,
    bdf_coefficients=(-0.185, -1.0 / 9.0, -0.0823, -0.0415, 0.0),
    evaluate_jacobian_lazily=False, use_pfor_to_compute_jacobian=True,
    validate_args=False, name='bdf'
)
```
Implements the solver described in [Shampine and Reichelt (1997)], a variable step size, variable order (VSVO) BDF integrator with order varying between 1 and 5.
Each step involves solving the following nonlinear equation by Newton's method:

```
0 = 1/1 * BDF(1, y)[n+1] + ... + 1/k * BDF(k, y)[n+1]
  - h ode_fn(t[n+1], y[n+1])
  - bdf_coefficients[k-1] * (1/1 + ... + 1/k)
    * (y[n+1] - y[n] - BDF(1, y)[n] - ... - BDF(k, y)[n])
```

where `k >= 1` is the current order of the integrator, `h` is the current step size, `bdf_coefficients` is a list of numbers that parameterizes the method, and `BDF(m, y)` is the `m`-th order backward difference of the vector `y`. In particular, `BDF(0, y)[n] = y[n]` and

```
BDF(m + 1, y)[n] = BDF(m, y)[n] - BDF(m, y)[n - 1]
```

for `m >= 0`.
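The backward-difference recursion can be sketched in plain Python (an illustration of the definition above, not the TFP implementation):

```python
def bdf(m, ys, n):
    """m-th order backward difference of the sequence `ys` at index `n`.

    BDF(0, y)[n] = y[n]
    BDF(m + 1, y)[n] = BDF(m, y)[n] - BDF(m, y)[n - 1]
    """
    if m == 0:
        return ys[n]
    return bdf(m - 1, ys, n) - bdf(m - 1, ys, n - 1)

ys = [1.0, 2.0, 4.0, 8.0]
first = bdf(1, ys, 3)   # 8.0 - 4.0 = 4.0
second = bdf(2, ys, 3)  # (8.0 - 4.0) - (4.0 - 2.0) = 2.0
```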
Newton's method can fail because
- the method has exceeded the maximum number of iterations,
- the method is converging too slowly, or
- the method is not expected to converge (the last two conditions are determined by approximating the Lipschitz constant associated with the iteration).
When `evaluate_jacobian_lazily` is `True`, the solver avoids evaluating the
Jacobian of the dynamics function as much as possible. In particular, Newton's
method will try to reuse the Jacobian from a previous integration step. If
Newton's method fails with an out-of-date Jacobian, the Jacobian is
re-evaluated and Newton's method is restarted. If Newton's method fails and
the Jacobian is already up-to-date, then the step size is decreased and
Newton's method is restarted.
Even if Newton's method converges, the solution it generates can still be rejected if it exceeds the specified error tolerance due to truncation error. In this case, the step size is decreased and Newton's method is restarted.
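As a concrete illustration of the Newton iteration described above, here is a minimal sketch (plain NumPy, not the actual TFP implementation) of a single implicit step of order 1 (backward Euler, i.e. the `k = 1` case without the higher-order correction terms):

```python
import numpy as np

def backward_euler_step(ode_fn, jac_fn, t, y, h,
                        atol=1e-6, rtol=1e-3, max_newton_iters=4):
    """One implicit step: solve y_next - y - h * ode_fn(t + h, y_next) = 0."""
    y_next = y.copy()  # initial Newton guess: the current state
    tol = atol + rtol * np.abs(y)
    for _ in range(max_newton_iters):
        residual = y_next - y - h * ode_fn(t + h, y_next)
        # Newton update: solve (I - h * J) dy = -residual.
        newton_matrix = np.eye(len(y)) - h * jac_fn(t + h, y_next)
        dy = np.linalg.solve(newton_matrix, -residual)
        y_next = y_next + dy
        if np.all(np.abs(dy) < tol):  # converged
            return y_next
    raise RuntimeError('Newton failed; the solver would shrink h and retry.')

# Stiff linear test problem dy/dt = -50 * y.
ode_fn = lambda t, y: -50.0 * y
jac_fn = lambda t, y: np.array([[-50.0]])
# Backward Euler gives y1 = y0 / (1 + 50 * h) = 1 / 1.5 here.
y1 = backward_euler_step(ode_fn, jac_fn, t=0.0, y=np.array([1.0]), h=0.01)
```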
[Shampine and Reichelt (1997)]: Lawrence F. Shampine and Mark W. Reichelt. The MATLAB ODE Suite. SIAM Journal on Scientific Computing 18(1):1-22, 1997.
Args:

- `rtol`: Optional float `Tensor` specifying an upper bound on relative error, per element of the dependent variable. The error tolerance for the next step is `tol = atol + rtol * abs(state)` where `state` is the computed state at the current step (see also `atol`). The next step is rejected if it incurs a local truncation error larger than `tol`. Default value: `1e-3`.
- `atol`: Optional float `Tensor` specifying an upper bound on absolute error, per element of the dependent variable (see also `rtol`). Default value: `1e-6`.
- `first_step_size`: Optional scalar float `Tensor` specifying the size of the first step. If unspecified, the size is chosen automatically. Default value: `None`.
- `safety_factor`: Scalar positive float `Tensor`. When Newton's method converges, the solver may choose to update the step size by applying a multiplicative factor to the current step size. This factor is `factor = clamp(factor_unclamped, min_step_size_factor, max_step_size_factor)` where `factor_unclamped = error_ratio**(-1. / (order + 1)) * safety_factor` (see also `min_step_size_factor` and `max_step_size_factor`). A small (respectively, large) value for the safety factor causes the solver to take smaller (respectively, larger) step sizes. A value larger than one, though not explicitly prohibited, is discouraged. Default value: `0.9`.
- `min_step_size_factor`: Scalar float `Tensor` specifying a lower bound on the multiplicative factor by which the step size can change (see also `safety_factor`). Default value: `0.1`.
- `max_step_size_factor`: Scalar float `Tensor` specifying an upper bound on the multiplicative factor by which the step size can change (see also `safety_factor`). Default value: `10.`.
- `max_num_steps`: Optional scalar integer `Tensor` specifying the maximum number of steps allowed (including rejected steps). If unspecified, there is no upper bound on the number of steps. Default value: `None`.
- `max_order`: Scalar integer `Tensor` taking values between 1 and 5 (inclusive) specifying the maximum BDF order. Default value: `bdf_util.MAX_ORDER`.
- `max_num_newton_iters`: Optional scalar integer `Tensor` specifying the maximum number of iterations per invocation of Newton's method. If unspecified, there is no upper bound on the number of iterations. Default value: `4`.
- `newton_tol_factor`: Scalar float `Tensor` used to determine the stopping condition for Newton's method. In particular, Newton's method terminates when the distance to the root is estimated to be less than `newton_tol_factor * norm(atol + rtol * abs(state))` where `state` is the computed state at the current step. Default value: `0.1`.
- `newton_step_size_factor`: Scalar float `Tensor` specifying a multiplicative factor applied to the size of the integration step when Newton's method fails. Default value: `0.5`.
- `bdf_coefficients`: 1-D float `Tensor` with 5 entries that parameterize the solver. The default values are those proposed in [Shampine and Reichelt (1997)]. Default value: `(-0.1850, -1. / 9., -0.0823, -0.0415, 0.)`.
- `evaluate_jacobian_lazily`: Optional boolean specifying whether the Jacobian should be evaluated at each point in time or only as needed (i.e., lazily). Default value: `False`.
- `use_pfor_to_compute_jacobian`: Boolean specifying whether or not to use parallel for in computing the Jacobian when `jacobian_fn` is not specified. Default value: `True`.
- `validate_args`: Whether to validate input with asserts. If `False` and the inputs are invalid, correct behavior is not guaranteed. Default value: `False`.
- `name`: Python `str` name prefixed to Ops created by this function. Default value: `'bdf'`.
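The step-size update rule documented under `safety_factor` can be sketched numerically (plain Python, not the TFP internals; here `error_ratio` stands for the ratio of the estimated truncation error to the tolerance):

```python
def step_size_factor(error_ratio, order, safety_factor=0.9,
                     min_step_size_factor=0.1, max_step_size_factor=10.0):
    """Multiplicative step-size update, clamped to the configured bounds."""
    factor_unclamped = error_ratio ** (-1.0 / (order + 1)) * safety_factor
    return min(max(factor_unclamped, min_step_size_factor),
               max_step_size_factor)

# A step that overshot the tolerance (error_ratio > 1) shrinks the step size:
shrink = step_size_factor(16.0, order=1)  # 0.9 * 16**(-1/2) = 0.225
# A very accurate step (error_ratio << 1) grows it, up to the clamp:
grow = step_size_factor(1e-9, order=1)    # clamped to 10.0
```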
```python
solve(
    ode_fn, initial_time, initial_state, solution_times,
    jacobian_fn=None, jacobian_sparsity=None, batch_ndims=None,
    previous_solver_internal_state=None
)
```
Solves an initial value problem.
An initial value problem consists of a system of ODEs and an initial condition:

```
dy/dt(t) = ode_fn(t, y(t))
y(initial_time) = initial_state
```

Here, `t` (also called time) is a scalar float `Tensor` and `y(t)` (also called the state at time `t`) is an N-D float or complex `Tensor`. For example, the linear system `dy/dt(t) = dot(A, y(t))` is solved below.
```python
t_init, t0, t1 = 0., 0.5, 1.
y_init = tf.constant([1., 1.], dtype=tf.float64)
A = tf.constant([[-1., -2.], [-3., -4.]], dtype=tf.float64)

def ode_fn(t, y):
  return tf.linalg.matvec(A, y)

results = tfp.math.ode.BDF().solve(ode_fn, t_init, y_init,
                                   solution_times=[t0, t1])
y0 = results.states[0]  # == dot(matrix_exp(A * t0), y_init)
y1 = results.states[1]  # == dot(matrix_exp(A * t1), y_init)
```
Passing `solution_times=tfp.math.ode.ChosenBySolver(final_time=1.)` instead yields the state at various times between `t_init` and `final_time` chosen automatically by the solver. In this case, `results.states[i]` is the state at time `results.times[i]`.
The gradient of the result is computed using the adjoint sensitivity method described in [Chen et al. (2018)].
```python
grad = tf.gradients(y1, [y_init])  # == [dot(e, J)]
# J is the Jacobian of y1 with respect to y_init. In this case, J = exp(A * t1).
# e = [1, ..., 1] is the row vector of ones.
```
[Chen et al. (2018)]: Chen, Tian Qi, et al. "Neural ordinary differential equations." Advances in Neural Information Processing Systems. 2018.
Args:

- `ode_fn`: Function of the form `ode_fn(t, y)`. The input `t` is a scalar float `Tensor`. The input `y` and the output are both `Tensor`s with the same shape and `dtype` as `initial_state`.
- `initial_time`: Scalar float `Tensor` specifying the initial time.
- `initial_state`: N-D float or complex `Tensor` specifying the initial state. The `dtype` of `initial_state` must be complex for problems with complex-valued states (even if the initial state is real).
- `solution_times`: 1-D float `Tensor` specifying a list of times. The solver stores the computed state at each of these times in the returned `Results` object. Must satisfy `initial_time <= solution_times` and `solution_times[i] < solution_times[i+1]`. Alternatively, the user can pass `tfp.math.ode.ChosenBySolver(final_time)` where `final_time` is a scalar float `Tensor` satisfying `initial_time < final_time`. Doing so requests that the solver automatically choose suitable times up to and including `final_time` at which to store the computed state.
- `jacobian_fn`: Optional function of the form `jacobian_fn(t, y)`. The input `t` is a scalar float `Tensor`. The input `y` has the same shape and `dtype` as `initial_state`. The output is a 2N-D `Tensor` whose shape is `initial_state.shape + initial_state.shape` and whose `dtype` is the same as `initial_state`. In particular, the `(i1, ..., iN, j1, ..., jN)`-th entry of `jacobian_fn(t, y)` is the derivative of the `(i1, ..., iN)`-th entry of `ode_fn(t, y)` with respect to the `(j1, ..., jN)`-th entry of `y`. If this argument is left unspecified, the solver automatically computes the Jacobian if and when it is needed. Default value: `None`.
- `jacobian_sparsity`: Optional 2N-D boolean `Tensor` whose shape is `initial_state.shape + initial_state.shape` specifying the sparsity pattern of the Jacobian. This argument is ignored if `jacobian_fn` is specified. Default value: `None`.
- `batch_ndims`: Optional nonnegative integer. When specified, the first `batch_ndims` dimensions of `initial_state` are batch dimensions. Default value: `None`.
- `previous_solver_internal_state`: Optional solver-specific argument used to warm-start this invocation of `solve`. Default value: `None`.
Returns: Object of type `Results`.
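To make the `jacobian_fn` shape contract concrete, here is a hypothetical check in plain NumPy (not TFP) for the 2-D linear system used earlier, where the Jacobian of `ode_fn(t, y) = dot(A, y)` is simply the constant matrix `A`:

```python
import numpy as np

A = np.array([[-1., -2.], [-3., -4.]])
initial_state = np.array([1., 1.])  # shape (2,), so N = 1

def ode_fn(t, y):
    return A.dot(y)

def jacobian_fn(t, y):
    # d ode_fn[i] / d y[j] == A[i, j]; the result's shape is
    # initial_state.shape + initial_state.shape == (2, 2).
    return A

jac = jacobian_fn(0., initial_state)
assert jac.shape == initial_state.shape + initial_state.shape
```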