Solve a problem
This manual explains how to use the solve function to solve optimal control problems with OptimalControl.jl. The solve function provides a descriptive mode where you specify strategies using symbolic tokens, with automatic option routing and validation.
For advanced usage, see the See also section at the end of this page.
Quick start
Let us define a basic optimal control problem:
using OptimalControl
t0 = 0
tf = 1
x0 = [-1, 0]
ocp = @def begin
t ∈ [ t0, tf ], time
x = (q, v) ∈ R², state
u ∈ R, control
x(t0) == x0
x(tf) == [0, 0]
ẋ(t) == [v(t), u(t)]
0.5∫( u(t)^2 ) → min
end
The simplest way to solve it is:
using NLPModelsIpopt
sol = solve(ocp)
This is OptimalControl 1.3.3-beta, solving with: collocation → adnlp → ipopt (cpu)
📦 Configuration:
├─ Discretizer: collocation
├─ Modeler: adnlp
└─ Solver: ipopt
This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.
Number of nonzeros in equality constraint Jacobian...: 1754
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 250
Total number of variables............................: 752
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 504
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 5.0000000e-03 1.10e+00 2.03e-14 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 6.0000960e+00 2.22e-16 1.78e-15 -11.0 6.08e+00 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 1
(scaled) (unscaled)
Objective...............: 6.0000960015360247e+00 6.0000960015360247e+00
Dual infeasibility......: 1.7763568394002505e-15 1.7763568394002505e-15
Constraint violation....: 2.2204460492503131e-16 2.2204460492503131e-16
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 1.7763568394002505e-15 1.7763568394002505e-15
Number of objective function evaluations = 2
Number of objective gradient evaluations = 2
Number of equality constraint evaluations = 2
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 2
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 1
Total seconds in IPOPT = 2.977
EXIT: Optimal Solution Found.
This uses the default strategies: collocation discretization, the ADNLP modeler, and the Ipopt solver, all running on CPU.
You must load a solver package (e.g., using NLPModelsIpopt) before calling solve. Otherwise, you'll get:
julia> solve(ocp)
ERROR: ExtensionError. Please make: julia> using NLPModelsIpopt
Display
Control the configuration display with the display option:
# Suppress all output
sol = solve(ocp; display=false)
Initial guess
Provide an initial guess using initial_guess (or the alias init):
# Using the @init macro
init = @init ocp begin
u = 0.5
end
sol = solve(ocp; initial_guess=init, grid_size=50, display=false)
# Or using the alias
sol = solve(ocp; init=init, grid_size=50, display=false)
For more details on initial guess specification, see Set an initial guess.
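A solution object can itself serve as an initial guess, which is convenient for warm-starting a solve on a finer grid from a coarse one. A hedged sketch (assuming, as covered in the initial-guess guide, that solve accepts a previous solution as init):

```julia
# Hedged sketch: warm-start a fine-grid solve from a coarse solution.
using OptimalControl, NLPModelsIpopt

sol_coarse = solve(ocp; grid_size=50, display=false)   # cheap first pass
sol_fine = solve(ocp; init=sol_coarse, grid_size=200,  # refined, warm-started solve
                 display=false)
```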
Available methods
OptimalControl.jl provides multiple solving strategies. To see all available combinations, call:
methods()
(:collocation, :adnlp, :ipopt, :cpu)
(:collocation, :adnlp, :madnlp, :cpu)
(:collocation, :adnlp, :uno, :cpu)
(:collocation, :adnlp, :madncl, :cpu)
(:collocation, :adnlp, :knitro, :cpu)
(:collocation, :exa, :ipopt, :cpu)
(:collocation, :exa, :madnlp, :cpu)
(:collocation, :exa, :uno, :cpu)
(:collocation, :exa, :madncl, :cpu)
(:collocation, :exa, :knitro, :cpu)
(:collocation, :exa, :madnlp, :gpu)
(:collocation, :exa, :madncl, :gpu)
Each method is a quadruplet (discretizer, modeler, solver, parameter):
Discretizer — how to discretize the continuous OCP:
:collocation: collocation method (currently the only option)
Modeler — how to build the NLP model:
:adnlp: uses ADNLPModels.ADNLPModel with automatic differentiation
:exa: uses ExaModels.ExaModel with SIMD optimization (GPU-capable)
Solver — which NLP solver to use:
:ipopt, :madnlp, :uno, :madncl, or :knitro
Parameter — execution backend:
:cpu: CPU execution (default)
:gpu: GPU execution (only for the :exa modeler with the :madnlp or :madncl solvers)
You can inspect which strategies use a given parameter:
describe(:cpu)
CPU (parameter)
├─ id: :cpu
├─ hierarchy: CPU → AbstractStrategyParameter
├─ description: CPU-based computation
│
└─ used by strategies (7):
├─ :adnlp (AbstractNLPModeler) → ADNLP{CPU}
├─ :exa (AbstractNLPModeler) → Exa{CPU}
├─ :ipopt (AbstractNLPSolver) → Ipopt{CPU}
├─ :knitro (AbstractNLPSolver) → Knitro{CPU}
├─ :madncl (AbstractNLPSolver) → MadNCL{CPU}
├─ :madnlp (AbstractNLPSolver) → MadNLP{CPU}
└─ :uno (AbstractNLPSolver) → Uno{CPU}
describe(:gpu)
GPU (parameter)
├─ id: :gpu
├─ hierarchy: GPU → AbstractStrategyParameter
├─ description: GPU-based computation
│
└─ used by strategies (3):
├─ :exa (AbstractNLPModeler) → Exa{GPU}
├─ :madncl (AbstractNLPSolver) → MadNCL{GPU}
└─ :madnlp (AbstractNLPSolver) → MadNLP{GPU}
The order of methods in the list above determines the priority for auto-completion. When you provide a partial description, the first matching method from top to bottom is selected. This is why the first method, (:collocation, :adnlp, :ipopt, :cpu), is the default.
The first method in the list is the default, so:
solve(ocp)
is equivalent to:
solve(ocp, :collocation, :adnlp, :ipopt, :cpu)
Choosing a method
You can specify a complete method description:
using MadNLP
sol = solve(ocp, :collocation, :adnlp, :madnlp, :cpu)
This is OptimalControl 1.3.3-beta, solving with: collocation → adnlp → madnlp (cpu)
📦 Configuration:
├─ Discretizer: collocation
├─ Modeler: adnlp
└─ Solver: madnlp (linear_solver = MumpsSolver [cpu-dependent])
This is MadNLP version v0.9.1, running with MUMPS v5.8.2
Number of nonzeros in constraint Jacobian............: 1754
Number of nonzeros in Lagrangian Hessian.............: 250
Total number of variables............................: 752
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 504
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du inf_compl lg(mu) lg(rg) alpha_pr ir ls
0 5.0000000e-03 1.10e+00 3.44e-16 0.00e+00 -1.0 - 0.00e+00 1 0
1 6.0000960e+00 1.50e-15 7.28e-12 0.00e+00 -1.0 - 1.00e+00 1 1h
Number of Iterations....: 1
(scaled) (unscaled)
Objective...............: 6.0000960015359874e+00 6.0000960015359874e+00
Dual infeasibility......: 7.2759576141834259e-12 7.2759576141834259e-12
Constraint violation....: 1.4988010832439613e-15 1.4988010832439613e-15
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 7.2759576141834259e-12 7.2759576141834259e-12
Number of objective function evaluations = 2
Number of objective gradient evaluations = 2
Number of constraint evaluations = 2
Number of constraint Jacobian evaluations = 2
Number of Lagrangian Hessian evaluations = 1
Number of KKT factorizations = 2
Number of KKT backsolves = 2
Total wall secs in initialization = 0.082 s
Total wall secs in linear solver = 0.001 s
Total wall secs in NLP function evaluations = 0.321 s
Total wall secs in solver (w/o init./fun./lin. alg.) = 0.001 s
Total wall secs = 0.405 s
EXIT: Optimal Solution Found (tol = 1.0e-08).
Or provide a partial description. Missing tokens are auto-completed using the first matching method from methods() (top-to-bottom priority):
# Only specify the solver → defaults to :collocation, :adnlp, :cpu
sol = solve(ocp, :madnlp; print_level=MadNLP.ERROR)
This is OptimalControl 1.3.3-beta, solving with: collocation → adnlp → madnlp (cpu)
📦 Configuration:
├─ Discretizer: collocation
├─ Modeler: adnlp
└─ Solver: madnlp (linear_solver = MumpsSolver [cpu-dependent], print_level = ERROR)
The completion algorithm searches methods() from top to bottom and selects the first quadruplet that matches all provided tokens. For example:
solve(ocp, :madnlp) matches (:collocation, :adnlp, :madnlp, :cpu) (first match with :madnlp)
solve(ocp, :exa) matches (:collocation, :exa, :ipopt, :cpu) (first match with :exa)
solve(ocp, :gpu) matches (:collocation, :exa, :madnlp, :gpu) (first GPU method)
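The first-match rule can be pictured with a small, self-contained sketch. This is illustrative only, not the library's implementation; `available` (a shortened copy of the methods() list) and `complete` are made-up names:

```julia
# Illustrative sketch of token auto-completion (not OptimalControl's code).
available = [
    (:collocation, :adnlp, :ipopt, :cpu),
    (:collocation, :adnlp, :madnlp, :cpu),
    (:collocation, :exa, :ipopt, :cpu),
    (:collocation, :exa, :madnlp, :gpu),
]

# Return the first quadruplet (top to bottom) containing every provided token.
complete(tokens...) = first(m for m in available if all(t -> t in m, tokens))

complete(:madnlp)  # (:collocation, :adnlp, :madnlp, :cpu)
complete()         # (:collocation, :adnlp, :ipopt, :cpu) — the default
```

With no tokens, every method matches, so the very first entry is returned, which is exactly why (:collocation, :adnlp, :ipopt, :cpu) is the default.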
All of these are equivalent (they all complete to :collocation, :adnlp, :ipopt, :cpu):
solve(ocp) # empty → use first method
solve(ocp, :collocation) # specify discretizer
solve(ocp, :adnlp) # specify modeler
solve(ocp, :ipopt) # specify solver
solve(ocp, :cpu) # specify parameter
solve(ocp, :collocation, :adnlp) # specify discretizer + modeler
solve(ocp, :collocation, :ipopt) # specify discretizer + solver
solve(ocp, :collocation, :adnlp, :ipopt, :cpu) # complete description
Solver requirements
Each solver requires its package to be loaded to provide the solver implementation:
- Ipopt: using NLPModelsIpopt
- MadNLP: using MadNLP (CPU) or using MadNLPGPU (GPU)
- Uno: using UnoSolver
- MadNCL: using MadNCL and using MadNLP (requires both)
- Knitro: using NLPModelsKnitro (commercial license required)
For GPU solving with MadNLP or MadNCL, you also need: using CUDA
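Putting these requirements together, a GPU solve could look like the following hedged sketch (it assumes a CUDA-capable device is available; without one, the GPU backend cannot run):

```julia
# Hedged sketch: GPU solving needs the :exa modeler and MadNLP (or MadNCL).
using OptimalControl, MadNLP, MadNLPGPU, CUDA

# :gpu auto-completes to (:collocation, :exa, :madnlp, :gpu).
sol = solve(ocp, :gpu; display=false)
```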
Passing options to strategies
You can pass options as keyword arguments. They are automatically routed to the appropriate strategy:
sol = solve(ocp, :madnlp;
grid_size=100, # → discretizer (Collocation)
max_iter=500, # → solver (MadNLP)
print_level=MadNLP.ERROR # → solver (MadNLP)
)
This is OptimalControl 1.3.3-beta, solving with: collocation → adnlp → madnlp (cpu)
📦 Configuration:
├─ Discretizer: collocation (grid_size = 100)
├─ Modeler: adnlp
└─ Solver: madnlp
linear_solver = MumpsSolver [cpu-dependent], max_iter = 500, print_level = ERROR
The solve function displays the configuration and shows which options were applied:
sol = solve(ocp, :ipopt;
grid_size=50,
scheme=:trapeze,
max_iter=100,
print_level=0
)
This is OptimalControl 1.3.3-beta, solving with: collocation → adnlp → ipopt (cpu)
📦 Configuration:
├─ Discretizer: collocation (grid_size = 50, scheme = trapeze)
├─ Modeler: adnlp
└─ Solver: ipopt (max_iter = 100, print_level = 0)
Notice the 📦 Configuration box showing:
- Discretizer: collocation with grid_size = 50, scheme = trapeze
- Modeler: adnlp (no custom options)
- Solver: ipopt with max_iter = 100, print_level = 0
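Besides grid_size and scheme, the discretizer also accepts an explicit, possibly non-uniform, time_grid (listed among the discretizer options under Strategy options). A hedged sketch:

```julia
# Hedged sketch: explicit non-uniform time grid, denser near t = 0.
grid = [0.0, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
sol = solve(ocp; time_grid=grid, display=false)
```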
Strategy options
Each strategy declares its available options. You can inspect them using describe.
When describe shows (default: NotProvided) for an option, it means OptimalControl does not override the strategy's native default value. For example:
- For Ipopt options with (default: NotProvided), Ipopt's own default values are used
- For MadNLP options with (default: NotProvided), MadNLP's own default values are used
- For other strategies, the same principle applies
Only options with explicit default values (e.g., (default: 100)) are overridden by OptimalControl.
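For instance, dual_inf_tol is listed with (default: NotProvided): unless you pass it yourself, Ipopt's native default applies. A hedged sketch:

```julia
# Hedged sketch: dual_inf_tol is set only because we pass it explicitly;
# omitting it would leave Ipopt's own default untouched.
sol = solve(ocp, :ipopt; dual_inf_tol=1e-6, print_level=0, display=false)
```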
Discretizer options
describe(:collocation)
Collocation (strategy)
├─ id: :collocation
├─ hierarchy: Collocation → AbstractDiscretizer → AbstractStrategy
├─ family: AbstractDiscretizer
│
└─ options (3 options):
├─ grid_size::Int64 (default: 250)
│ description: Number of time steps for the collocation grid
│
├─ scheme::Symbol (default: midpoint)
│ description: Time integration scheme (e.g., :midpoint, :trapeze)
│
└─ time_grid::Any (default: nothing)
description: Explicit time grid (possibly non-uniform) for the collocation
Modeler options
describe(:adnlp)
ADNLP{CPU} (strategy)
├─ id: :adnlp
├─ hierarchy: ADNLP → AbstractNLPModeler → AbstractStrategy
├─ family: AbstractNLPModeler
├─ default parameter: CPU
├─ parameters: CPU
│
└─ options (11 options):
├─ show_time::Bool (default: NotProvided)
│ description: Whether to show timing information while building the ADNLP model
│
├─ backend (adnlp_backend)::Symbol (default: optimized)
│ description: Automatic differentiation backend used by ADNLPModels
│
├─ matrix_free::Bool (default: NotProvided)
│ description: Enable matrix-free mode (avoids explicit Hessian/Jacobian matrices)
│
├─ name::String (default: NotProvided)
│ description: Name of the optimization model for identification
│
├─ gradient_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
│ description: Override backend for gradient computation (advanced users only)
│
├─ hprod_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
│ description: Override backend for Hessian-vector product (advanced users only)
│
├─ jprod_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
│ description: Override backend for Jacobian-vector product (advanced users only)
│
├─ jtprod_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
│ description: Override backend for transpose Jacobian-vector product (advanced users only)
│
├─ jacobian_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
│ description: Override backend for Jacobian matrix computation (advanced users only)
│
├─ hessian_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
│ description: Override backend for Hessian matrix computation (advanced users only)
│
└─ ghjvprod_backend::Union{Nothing, ADNLPModels.ADBackend, Type{<:ADNLPModels.ADBackend}} (default: NotProvided)
description: Override backend for g^T ∇²c(x)v computation (advanced users only)
using CUDA
describe(:exa)
Exa{CPU} (strategy)
├─ id: :exa
├─ hierarchy: Exa → AbstractNLPModeler → AbstractStrategy
├─ family: AbstractNLPModeler
├─ default parameter: CPU
├─ parameters: CPU, GPU
│
├─ common options (1 option):
│ └─ base_type::DataType (default: Float64)
│ description: Base floating-point type used by ExaModels
│
├─ computed options for CPU:
│ └─ backend (exa_backend)::Any (default: nothing [computed])
│ description: Execution backend for ExaModels (CPU, GPU, etc.)
│
└─ computed options for GPU:
└─ backend (exa_backend)::Union{Nothing, KernelAbstractions.Backend} (default: CUDA.CUDAKernels.CUDABackend(false, false) [computed])
description: Execution backend for ExaModels (CPU, GPU, etc.)
Solver options
using NLPModelsIpopt
describe(:ipopt)
Ipopt{CPU} (strategy)
├─ id: :ipopt
├─ hierarchy: Ipopt → AbstractNLPSolver → AbstractStrategy
├─ family: AbstractNLPSolver
├─ default parameter: CPU
├─ parameters: CPU
│
└─ options (29 options):
├─ tol::Real (default: 1.0e-8)
│ description: Desired convergence tolerance (relative). Determines the convergence tolerance for the algorithm. The algorithm terminates successfully, if the (scaled) NLP error becomes smaller than this value, and if the (absolute) criteria according to dual_inf_tol, constr_viol_tol, and compl_inf_tol are met.
│
├─ max_iter (maxiter, max_iterations, maxit)::Integer (default: 1000)
│ description: Maximum number of iterations. The algorithm terminates with a message if the number of iterations exceeded this number.
│
├─ max_wall_time (maxtime, max_time, time_limit)::Real (default: NotProvided)
│ description: Maximum number of walltime clock seconds. A limit on walltime clock seconds that Ipopt can use to solve one problem.
│
├─ max_cpu_time::Real (default: NotProvided)
│ description: Maximum number of CPU seconds. A limit on CPU seconds that Ipopt can use to solve one problem.
│
├─ dual_inf_tol::Real (default: NotProvided)
│ description: Desired threshold for the dual infeasibility. Absolute tolerance on the dual infeasibility. Successful termination requires that the max-norm of the (unscaled) dual infeasibility is less than this threshold.
│
├─ constr_viol_tol::Real (default: NotProvided)
│ description: Desired threshold for the constraint and variable bound violation. Absolute tolerance on the constraint and variable bound violation.
│
├─ acceptable_tol (acc_tol)::Real (default: NotProvided)
│ description: Acceptable convergence tolerance (relative). Determines which (scaled) optimality error is considered close enough.
│
├─ acceptable_iter::Integer (default: NotProvided)
│ description: Number of "acceptable" iterations required to trigger termination. If the algorithm encounters this many consecutive iterations that are acceptable, it terminates.
│
├─ diverging_iterates_tol::Real (default: NotProvided)
│ description: Threshold for maximal value of primal iterates. If any component of the primal iterates exceeds this value (in absolute terms), the optimization is aborted.
│
├─ derivative_test::String (default: NotProvided)
│ description: Enable derivative check. If enabled, performs a finite difference check of the derivatives.
│
├─ derivative_test_tol::Real (default: NotProvided)
│ description: Threshold for identifying incorrect derivatives. If the relative error of the finite difference approximation exceeds this value, an error is reported.
│
├─ derivative_test_print_all::String (default: NotProvided)
│ description: Indicates whether information for all estimated derivatives should be printed.
│
├─ hessian_approximation::String (default: NotProvided)
│ description: Indicates what Hessian information regarding the Lagrangian function is to be used.
│
├─ limited_memory_update_type::String (default: NotProvided)
│ description: Quasi-Newton update method for the limited memory approximation.
│
├─ warm_start_init_point::String (default: NotProvided)
│ description: Indicates whether specific warm start values should be used for the primal and dual variables.
│
├─ warm_start_bound_push::Real (default: NotProvided)
│ description: Indicates how much the primal variables should be pushed inside the bounds for the warm start.
│
├─ warm_start_mult_bound_push::Real (default: NotProvided)
│ description: Indicates how much the dual variables should be pushed inside the bounds for the warm start.
│
├─ mu_strategy::String (default: adaptive)
│ description: Barrier parameter update strategy
│
├─ mu_init::Real (default: NotProvided)
│ description: Initial value for the barrier parameter.
│
├─ mu_max_fact::Real (default: NotProvided)
│ description: Factor for maximal barrier parameter. This factor determines the upper bound on the barrier parameter.
│
├─ mu_max::Real (default: NotProvided)
│ description: Maximal value for barrier parameter. This option overrides the factor setting.
│
├─ mu_min::Real (default: NotProvided)
│ description: Minimal value for barrier parameter.
│
├─ timing_statistics::String (default: NotProvided)
│ description: Indicates whether to measure time spent in components of Ipopt and NLP evaluation. The overall algorithm time is unaffected by this option.
│
├─ linear_solver::String (default: mumps)
│ description: Linear solver used for step computations. Determines which linear algebra package is to be used for the solution of the augmented linear system (for obtaining the search directions).
│
├─ print_level::Integer (default: 5)
│ description: Ipopt output verbosity (0-12)
│
├─ print_timing_statistics::String (default: NotProvided)
│ description: Switch to print timing statistics. If selected, the program will print the time spent for selected tasks. This implies timing_statistics=yes.
│
├─ print_frequency_iter::Integer (default: NotProvided)
│ description: Determines at which iteration frequency the summarizing iteration output line should be printed. Summarizing iteration output is printed every print_frequency_iter iterations, if at least print_frequency_time seconds have passed since last output.
│
├─ print_frequency_time::Real (default: NotProvided)
│ description: Determines at which time frequency the summarizing iteration output line should be printed. Summarizing iteration output is printed if at least print_frequency_time seconds have passed since last output and the iteration number is a multiple of print_frequency_iter.
│
└─ sb::String (default: yes)
description: Suppress Ipopt banner (yes/no)
using MadNLPGPU
describe(:madnlp)
MadNLP{CPU} (strategy)
├─ id: :madnlp
├─ hierarchy: MadNLP → AbstractNLPSolver → AbstractStrategy
├─ family: AbstractNLPSolver
├─ default parameter: CPU
├─ parameters: CPU, GPU
│
├─ common options (22 options):
│ ├─ bound_push::Real (default: NotProvided)
│ │ description: Amount by which the initial point is pushed inside the bounds to ensure strictly interior starting point.
│ │
│ ├─ acceptable_tol (acc_tol)::Real (default: NotProvided)
│ │ description: Relaxed tolerance for acceptable solution. If optimality error stays below this for 'acceptable_iter' iterations, algorithm terminates with SOLVED_TO_ACCEPTABLE_LEVEL.
│ │
│ ├─ bound_fac::Real (default: NotProvided)
│ │ description: Factor to determine how much the initial point is pushed inside the bounds.
│ │
│ ├─ mu_init::Real (default: NotProvided)
│ │ description: Initial value for the barrier parameter mu.
│ │
│ ├─ equality_treatment::Type{<:MadNLP.AbstractEqualityTreatment} (default: NotProvided)
│ │ description: Method to handle equality constraints. Options: MadNLP.EnforceEquality, MadNLP.RelaxEquality.
│ │
│ ├─ nlp_scaling_max_gradient::Real (default: NotProvided)
│ │ description: Maximum allowed gradient value when scaling the NLP problem. Used to prevent excessive scaling.
│ │
│ ├─ hessian_approximation::Union{UnionAll, Type{<:MadNLP.AbstractHessian}} (default: NotProvided)
│ │ description: Hessian approximation method (e.g., MadNLP.ExactHessian, MadNLP.CompactLBFGS, MadNLP.BFGS).
│ │
│ ├─ max_iter (maxiter, max_iterations, maxit)::Integer (default: 1000)
│ │ description: Maximum number of interior-point iterations before termination. Set to 0 to evaluate initial point only.
│ │
│ ├─ tol::Real (default: 1.0e-8)
│ │ description: Convergence tolerance for optimality conditions. The algorithm terminates when optimality error falls below this threshold.
│ │
│ ├─ mu_min::Real (default: NotProvided)
│ │ description: Minimum value for the barrier parameter mu.
│ │
│ ├─ fixed_variable_treatment::Type{<:MadNLP.AbstractFixedVariableTreatment} (default: NotProvided)
│ │ description: Method to handle fixed variables. Options: MadNLP.MakeParameter, MadNLP.RelaxBound, MadNLP.NoFixedVariables.
│ │
│ ├─ max_wall_time (max_time, maxtime, time_limit)::Real (default: NotProvided)
│ │ description: Maximum wall-clock time limit in seconds. Algorithm terminates with MAXIMUM_WALLTIME_EXCEEDED if exceeded.
│ │
│ ├─ diverging_iterates_tol::Real (default: NotProvided)
│ │ description: NLP error threshold above which algorithm is declared diverging. Terminates with DIVERGING_ITERATES status.
│ │
│ ├─ print_level::MadNLP.LogLevels (default: INFO)
│ │ description: Logging verbosity level. Valid values: MadNLP.TRACE, DEBUG, INFO (default), NOTICE, WARN, ERROR.
│ │
│ ├─ nlp_scaling::Bool (default: NotProvided)
│ │ description: Whether to scale the NLP problem. If true, MadNLP automatically scales the objective and constraints.
│ │
│ ├─ constr_mult_init_max::Real (default: NotProvided)
│ │ description: Maximum allowed value for the initial constraint multipliers.
│ │
│ ├─ kkt_system::Union{UnionAll, Type{<:MadNLP.AbstractKKTSystem}} (default: NotProvided)
│ │ description: KKT system solver type (e.g., MadNLP.SparseKKTSystem, MadNLP.DenseKKTSystem).
│ │
│ ├─ tau_min::Real (default: NotProvided)
│ │ description: Lower bound for the fraction-to-the-boundary parameter tau.
│ │
│ ├─ acceptable_iter::Integer (default: NotProvided)
│ │ description: Number of consecutive iterations with acceptable (but not optimal) error required before accepting the solution.
│ │
│ ├─ inertia_correction_method::Type{<:MadNLP.AbstractInertiaCorrector} (default: NotProvided)
│ │ description: Method for assumption of inertia correction (e.g., MadNLP.InertiaAuto, MadNLP.InertiaBased).
│ │
│ ├─ hessian_constant (hessian_cst)::Bool (default: NotProvided)
│ │ description: Whether the Hessian of the Lagrangian is constant (i.e., quadratic objective with linear constraints). Can improve performance.
│ │
│ └─ jacobian_constant (jacobian_cst)::Bool (default: NotProvided)
│ description: Whether the Jacobian of the constraints is constant (i.e., linear constraints). Can improve performance.
│
├─ computed options for CPU:
│ └─ linear_solver::Type{<:MadNLP.AbstractLinearSolver} (default: MumpsSolver [computed])
│ description: Sparse linear solver for the KKT system. Default is MadNLP.MumpsSolver for CPU, MadNLPGPU.CUDSSSolver for GPU. Other options include MadNLP.UmfpackSolver, MadNLP.LDLSolver, MadNLP.CHOLMODSolver.
│
└─ computed options for GPU:
└─ linear_solver::Type{<:MadNLP.AbstractLinearSolver} (default: MadNLPGPU.CUDSSSolver [computed])
description: Sparse linear solver for the KKT system. Default is MadNLP.MumpsSolver for CPU, MadNLPGPU.CUDSSSolver for GPU. Other options include MadNLP.UmfpackSolver, MadNLP.LDLSolver, MadNLP.CHOLMODSolver.
using MadNCL
describe(:madncl)
MadNCL{CPU} (strategy)
├─ id: :madncl
├─ hierarchy: MadNCL → AbstractNLPSolver → AbstractStrategy
├─ family: AbstractNLPSolver
├─ default parameter: CPU
├─ parameters: CPU, GPU
│
├─ common options (23 options):
│ ├─ bound_push::Real (default: NotProvided)
│ │ description: Amount by which the initial point is pushed inside the bounds to ensure strictly interior starting point.
│ │
│ ├─ acceptable_tol (acc_tol)::Real (default: NotProvided)
│ │ description: Relaxed tolerance for acceptable solution. If optimality error stays below this for 'acceptable_iter' iterations, algorithm terminates with SOLVED_TO_ACCEPTABLE_LEVEL.
│ │
│ ├─ bound_fac::Real (default: NotProvided)
│ │ description: Factor to determine how much the initial point is pushed inside the bounds.
│ │
│ ├─ ncl_options::MadNCL.NCLOptions (default: MadNCL.NCLOptions{Float64}(true, true, 0.3, true, 1.0, 1.0e-8, 1.0e-8, 0.0001, 100.0, 1.0e12, 20, 0.1, 1.99, 0.2, 1.0e-9))
│ │ description: Low-level NCLOptions structure controlling the augmented Lagrangian algorithm.
Available fields:
- `verbose` (Bool): Print convergence logs (default: true)
- `scaling` (Bool): Enable scaling (default: false)
- `opt_tol` (Float): Optimality tolerance (default: 1e-8)
- `feas_tol` (Float): Feasibility tolerance (default: 1e-8)
- `rho_init` (Float): Initial Augmented Lagrangian penalty (default: 10.0)
- `max_auglag_iter` (Int): Maximum number of outer iterations (default: 30)
│ │
│ ├─ mu_init::Real (default: NotProvided)
│ │ description: Initial value for the barrier parameter mu.
│ │
│ ├─ equality_treatment::Type{<:MadNLP.AbstractEqualityTreatment} (default: NotProvided)
│ │ description: Method to handle equality constraints. Options: MadNLP.EnforceEquality, MadNLP.RelaxEquality.
│ │
│ ├─ nlp_scaling_max_gradient::Real (default: NotProvided)
│ │ description: Maximum allowed gradient value when scaling the NLP problem. Used to prevent excessive scaling.
│ │
│ ├─ hessian_approximation::Union{UnionAll, Type{<:MadNLP.AbstractHessian}} (default: NotProvided)
│ │ description: Hessian approximation method (e.g., MadNLP.ExactHessian, MadNLP.CompactLBFGS, MadNLP.BFGS).
│ │
│ ├─ max_iter (maxiter, max_iterations, maxit)::Integer (default: 1000)
│ │ description: Maximum number of augmented Lagrangian iterations
│ │
│ ├─ tol::Real (default: 1.0e-8)
│ │ description: Optimality tolerance
│ │
│ ├─ mu_min::Real (default: NotProvided)
│ │ description: Minimum value for the barrier parameter mu.
│ │
│ ├─ fixed_variable_treatment::Type{<:MadNLP.AbstractFixedVariableTreatment} (default: NotProvided)
│ │ description: Method to handle fixed variables. Options: MadNLP.MakeParameter, MadNLP.RelaxBound, MadNLP.NoFixedVariables.
│ │
│ ├─ max_wall_time (max_time, maxtime, time_limit)::Real (default: NotProvided)
│ │ description: Maximum wall-clock time limit in seconds. Algorithm terminates with MAXIMUM_WALLTIME_EXCEEDED if exceeded.
│ │
│ ├─ diverging_iterates_tol::Real (default: NotProvided)
│ │ description: NLP error threshold above which algorithm is declared diverging. Terminates with DIVERGING_ITERATES status.
│ │
│ ├─ print_level::MadNLP.LogLevels (default: INFO)
│ │ description: MadNCL/MadNLP logging level
│ │
│ ├─ nlp_scaling::Bool (default: NotProvided)
│ │ description: Whether to scale the NLP problem. If true, MadNLP automatically scales the objective and constraints.
│ │
│ ├─ constr_mult_init_max::Real (default: NotProvided)
│ │ description: Maximum allowed value for the initial constraint multipliers.
│ │
│ ├─ kkt_system::Union{UnionAll, Type{<:MadNLP.AbstractKKTSystem}} (default: NotProvided)
│ │ description: KKT system solver type (e.g., MadNLP.SparseKKTSystem, MadNLP.DenseKKTSystem).
│ │
│ ├─ tau_min::Real (default: NotProvided)
│ │ description: Lower bound for the fraction-to-the-boundary parameter tau.
│ │
│ ├─ acceptable_iter::Integer (default: NotProvided)
│ │ description: Number of consecutive iterations with acceptable (but not optimal) error required before accepting the solution.
│ │
│ ├─ inertia_correction_method::Type{<:MadNLP.AbstractInertiaCorrector} (default: NotProvided)
│ │ description: Method for assumption of inertia correction (e.g., MadNLP.InertiaAuto, MadNLP.InertiaBased).
│ │
│ ├─ hessian_constant (hessian_cst)::Bool (default: NotProvided)
│ │ description: Whether the Hessian of the Lagrangian is constant (i.e., quadratic objective with linear constraints). Can improve performance.
│ │
│ └─ jacobian_constant (jacobian_cst)::Bool (default: NotProvided)
│ description: Whether the Jacobian of the constraints is constant (i.e., linear constraints). Can improve performance.
│
├─ computed options for CPU:
│ └─ linear_solver::Type{<:MadNLP.AbstractLinearSolver} (default: MumpsSolver [computed])
│ description: Linear solver implementation used inside MadNCL. Default is MadNLP.MumpsSolver for CPU, MadNLPGPU.CUDSSSolver for GPU.
│
└─ computed options for GPU:
└─ linear_solver::Type{<:MadNLP.AbstractLinearSolver} (default: MadNLPGPU.CUDSSSolver [computed])
description: Linear solver implementation used inside MadNCL. Default is MadNLP.MumpsSolver for CPU, MadNLPGPU.CUDSSSolver for GPU.
using UnoSolver
describe(:uno)
Uno{CPU} (strategy)
├─ id: :uno
├─ hierarchy: Uno → AbstractNLPSolver → AbstractStrategy
├─ family: AbstractNLPSolver
├─ default parameter: CPU
├─ parameters: CPU
│
└─ options (17 options):
├─ preset::String (default: ipopt)
│ description: Uno implements presets, that is, combinations of ingredients that correspond to existing solvers. At the moment, the available presets are filtersqp (after the trust-region restoration filter SQP solver filterSQP) and ipopt (after the line-search filter restoration infeasible interior-point solver IPOPT).
│
├─ primal_tolerance::Real (default: 1.0e-8)
│ description: Tolerance on constraint violation. Determines the convergence tolerance for primal feasibility.
│
├─ dual_tolerance::Real (default: 1.0e-8)
│ description: Tolerance on stationarity and complementarity. Determines the convergence tolerance for dual feasibility.
│
├─ loose_primal_tolerance::Real (default: NotProvided)
│ description: Loose tolerance on constraint violation. Used for acceptable termination criteria.
│
├─ loose_dual_tolerance::Real (default: NotProvided)
│ description: Loose tolerance on stationarity and complementarity. Used for acceptable termination criteria.
│
├─ loose_tolerance_iteration_threshold::Integer (default: NotProvided)
│ description: Number of iterations for the loose tolerance to apply. If the algorithm encounters this many consecutive iterations that satisfy loose tolerances, it terminates.
│
├─ max_iterations (maxiter, max_iter, maxit)::Integer (default: 1000)
│ description: Maximum number of outer iterations. The algorithm terminates with a message if the number of iterations exceeded this number.
│
├─ time_limit (max_wall_time, maxtime, max_time)::Real (default: NotProvided)
│ description: Time limit in seconds. A limit on walltime clock seconds that Uno can use to solve one problem.
│
├─ print_solution::Bool (default: false)
│ description: Whether the primal-dual solution is printed at termination.
│
├─ unbounded_objective_threshold::Real (default: NotProvided)
│ description: Objective threshold under which the problem is declared unbounded. If the objective value falls below this threshold, the solver terminates with unbounded status.
│
├─ logger::String (default: INFO)
│ description: Verbosity level of the logger. Controls the amount of output during the solve.
│
├─ progress_norm::String (default: NotProvided)
│ description: Norm used for the progress measures. Determines how progress is measured during the solve.
│
├─ residual_norm::String (default: NotProvided)
│ description: Norm used for the residuals. Determines how residuals are measured for convergence.
│
├─ residual_scaling_threshold::Real (default: NotProvided)
│ description: Scaling factor in stationarity and complementarity residuals. Controls how residuals are scaled for convergence checks.
│
├─ protect_actual_reduction_against_roundoff::Bool (default: NotProvided)
│ description: Whether the actual reduction is slightly modified to account for roundoff errors. Can improve numerical stability.
│
├─ protected_actual_reduction_macheps_coefficient::Real (default: NotProvided)
│ description: Coefficient of the machine epsilon in the protected actual reduction. Only used if protect_actual_reduction_against_roundoff is true.
│
└─ print_subproblem::Bool (default: false)
description: Whether the subproblem is printed in DEBUG mode. Useful for debugging subproblem formulations.
Official documentation
For complete option lists, see the official documentation:
- ADNLP: ADNLPModels documentation
- Exa: ExaModels documentation
- Ipopt: Ipopt options
- MadNLP: MadNLP options
- Uno: Uno documentation
- MadNCL: MadNCL documentation
- Knitro: Knitro options
See also
- Advanced options: option routing, route_to for disambiguation, bypass for unknown options, introspection tools
- Explicit mode: using typed components (Collocation(), Ipopt()) instead of symbols
- GPU solving: using the :gpu parameter or Exa{GPU}() / MadNLP{GPU}() types
- Initial guess: detailed guide on the @init macro
- Solution: working with the returned solution object