## Get a problem
Each problem in the OptimalControlProblems package is modelled both in JuMP and in OptimalControl. To obtain a model, specify either the JuMP or the OptimalControl backend. First, import the package:
```julia
using OptimalControlProblems
```

## Get an OptimalControl model
To get an OptimalControl model, first install OptimalControl and import the package:
```julia
using OptimalControl
```

Then, to obtain the OptimalControl model of the beam problem, run:
```julia
docp = beam(OptimalControlBackend())
nlp = nlp_model(docp)
```

```
ADNLPModel - Model with automatic differentiation backend ADModelBackend{
  ReverseDiffADGradient,
  EmptyADbackend,
  EmptyADbackend,
  EmptyADbackend,
  SparseADJacobian,
  SparseReverseADHessian,
  EmptyADbackend,
}
  Problem name: Generic
   All variables: ████████████████████ 1503     All constraints: ████████████████████ 1004
            free: ██████████████⋅⋅⋅⋅⋅⋅ 1002                free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         low/upp: ███████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 501                low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  fixed: ████████████████████ 1004
          infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
            nnzh: ( 99.96% sparsity)   501              linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
                                                      nonlinear: ████████████████████ 1004
                                                           nnzj: ( 99.73% sparsity)   4004

  Counters:
             obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                   grad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                   cons: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        cons_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                   jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                    jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  jprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              jprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
       jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0             jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
      jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                   hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
```
The nlp model represents the nonlinear programming problem (NLP) obtained after discretising the optimal control problem (OCP). See the Introduction page for details. The model is an ADNLPModels.ADNLPModel, which provides automatic differentiation (AD)-based models that follow the NLPModels.jl API.
You also have access to the DOCP model, which corresponds to the discretised optimal control problem. Roughly speaking, the DOCP model is the union of the NLP and OCP models. For more details, see this tutorial or the documentation of CTDirect.DOCP. To get the OCP model:
```julia
ocp = ocp_model(docp)
```

You can pass any description and keyword arguments of CTDirect.direct_transcription to the beam problem, or to any other problem:
```julia
docp = beam(OptimalControlBackend(), :adnlp; grid_size=100, disc_method=:euler)
```

You can also replace any default parameter value:
```julia
docp = beam(OptimalControlBackend(); parameters=(tf=2, ))
```

To get the list of :beam parameters and their default values, run:
```julia
metadata(:beam)[:parameters]
```

For a description of the Beam problem parameters, check either the Beam page or the code.
## Number of variables, constraints, and nonzeros
The nlp model follows the NLPModels.jl API. See the available Attributes and the associated getter functions (get_X).
To get the number of variables, import the package:
```julia
using NLPModels
```

and then use the associated getter:
```julia
get_nvar(nlp)
```

```
1503
```

To get the number of constraints:
```julia
get_ncon(nlp)
```

```
1004
```

To get the number of nonzeros:
```julia
nnzo = get_nnzo(nlp) # Gradient of the objective
nnzj = get_nnzj(nlp) # Jacobian of the constraints
nnzh = get_nnzh(nlp) # Hessian of the Lagrangian
println("nnzo = ", nnzo)
println("nnzj = ", nnzj)
println("nnzh = ", nnzh)
```

```
nnzo = 1503
nnzj = 4004
nnzh = 501
```

You can also evaluate the objective function, the constraints, and more. See the API page.
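For instance, the objective and the constraints can be evaluated at the initial guess stored in the model. A minimal sketch using standard NLPModels.jl getters and evaluation functions (assuming the nlp model built above):

```julia
using NLPModels

x = get_x0(nlp)   # initial guess stored in the model
f = obj(nlp, x)   # objective value at x
c = cons(nlp, x)  # constraint values at x
g = grad(nlp, x)  # gradient of the objective at x
```

Each call also increments the corresponding counter (obj, cons, grad) shown in the model printout.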
## Number of steps
The (default) number of steps $N$ is stored in the metadata:
```julia
N = metadata(:beam)[:grid_size]
```

```
500
```

Each problem can be parameterised by the number of steps:
```julia
docp = beam(OptimalControlBackend(); grid_size=100)
get_nvar(nlp_model(docp))
```

```
303
```

```julia
docp = beam(OptimalControlBackend(); grid_size=200)
get_nvar(nlp_model(docp))
```

```
603
```

For the beam problem, the NLP has (2 + 1)(N + 1) variables: two state components and one control component, each discretised on the N + 1 grid points.

## Get a JuMP model
To get a JuMP model, install JuMP and import the package:
```julia
using JuMP
```

Then, to obtain the JuMP model of the beam problem, run:
```julia
nlp = beam(JuMPBackend())
```

```
A JuMP Model
├ solver: none
├ objective_sense: MIN_SENSE
│ └ objective_function_type: QuadExpr
├ num_variables: 1503
├ num_constraints: 2006
│ ├ AffExpr in MOI.EqualTo{Float64}: 1004
│ ├ VariableRef in MOI.GreaterThan{Float64}: 501
│ └ VariableRef in MOI.LessThan{Float64}: 501
└ Names registered in the model
  └ :N, :control_components, :costate_components, :dc, :dx₁, :dx₂, :state_components, :time_grid, :u, :variable_components, :x₁, :x₂, :Δt, :∂x₁, :∂x₂
```

For details on how to interact with the JuMP model, see the JuMP documentation. In particular, you can pass any arguments and keyword arguments of JuMP.Model to the beam problem, or to any other problem:
```julia
using Ipopt
nlp = beam(JuMPBackend(), Ipopt.Optimizer; add_bridges=true)
```

- As with OptimalControlBackend, you can replace any default parameter value.
- You can transform the JuMP model into an NLPModelsJuMP.MathOptNLPModel and then use the whole NLPModels.jl API. See this tutorial for more details.
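The second point can be sketched as follows, assuming NLPModelsJuMP is installed: its MathOptNLPModel constructor wraps a JuMP model so that it exposes the NLPModels API.

```julia
using NLPModels, NLPModelsJuMP

jump_model = beam(JuMPBackend())
nlp = MathOptNLPModel(jump_model)  # wrap the JuMP model in the NLPModels API
get_nvar(nlp)                      # NLPModels getters now apply
```

This lets you use the same NLPModels-based tooling on both the OptimalControl and the JuMP versions of a problem.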