Get a problem
Each problem in the OptimalControlProblems package is modelled both in JuMP and in OptimalControl. To obtain a model, you need to specify either the JuMP or the OptimalControl backend.
Get an OptimalControl model
DOCP and NLP Model
To get an OptimalControl model, first install OptimalControl and import the packages:
using OptimalControl
using OptimalControlProblems
Then, to obtain the OptimalControl model of the beam problem, run:
docp = beam(OptimalControlBackend())
nlp = nlp_model(docp)
ADNLPModel - Model with automatic differentiation backend ADModelBackend{
ReverseDiffADGradient,
EmptyADbackend,
EmptyADbackend,
EmptyADbackend,
SparseADJacobian,
SparseReverseADHessian,
EmptyADbackend,
}
Problem name: Generic
All variables: ████████████████████ 1503 All constraints: ████████████████████ 1004
free: ███████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 501 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
low/upp: ██████████████⋅⋅⋅⋅⋅⋅ 1002 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████████████████ 1004
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: ( 99.96% sparsity) 501 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ████████████████████ 1004
nnzj: ( 99.73% sparsity) 4004
Counters:
obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 grad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 cons: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
cons_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
The nlp model represents the nonlinear programming problem (NLP) obtained after discretising the optimal control problem (OCP). See the Introduction page for details. The model is an ADNLPModels.ADNLPModel, an automatic differentiation (AD)-based model that follows the NLPModels.jl API.
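For instance, you can check the type of the returned model directly (a quick sketch; it assumes the ADNLPModels package is available in your environment):
using ADNLPModels
nlp isa ADNLPModels.ADNLPModel  # true: AD-based model following the NLPModels.jl API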
You also have access to the DOCP model, which corresponds to the discretised optimal control problem. Roughly speaking, the DOCP model is the union of the NLP and OCP models. For more details, see this tutorial or the documentation of CTDirect.DOCP. To get the OCP model, run:
ocp = ocp_model(docp)
You can pass any description and kwargs of CTDirect.direct_transcription to the beam problem, or to any other problem:
docp = beam(OptimalControlBackend(), :madnlp; grid_size=100, disc_method=:euler)
Number of variables, constraints, and nonzeros
The nlp model follows the NLPModels.jl API. See the existing Attributes and the available getter functions (get_X) here.
To get the number of variables:
using NLPModels
get_nvar(nlp)
1503
To get the number of constraints:
get_ncon(nlp)
1004
To get the number of nonzeros:
nnzo = get_nnzo(nlp) # Gradient of the objective
nnzj = get_nnzj(nlp) # Jacobian of the constraints
nnzh = get_nnzh(nlp) # Hessian of the Lagrangian
println("nnzo = ", nnzo)
println("nnzj = ", nnzj)
println("nnzh = ", nnzh)
nnzo = 1503
nnzj = 4004
nnzh = 501
You can also evaluate the objective function, the constraints, and more. See the API page.
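For example, here is a minimal sketch that evaluates the model at its initial guess (obj and cons are part of the NLPModels.jl API, and nlp.meta.x0 is the starting point stored in the model):
using NLPModels
x0 = nlp.meta.x0   # initial guess stored in the model
obj(nlp, x0)       # objective value at x0
cons(nlp, x0)      # constraint values at x0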
Number of steps
The number of steps $N$ is stored in the metadata:
metadata[:beam][:N]
500
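The same metadata is available for every problem. For instance, here is a sketch listing the default number of steps of each problem, assuming metadata behaves like a dictionary keyed by problem name (as in the indexing above) and that every entry carries an :N field:
for (name, meta) in pairs(metadata)
    println(name, ": N = ", meta[:N])
end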
Each problem can be parameterised by the number of steps:
docp = beam(OptimalControlBackend(); N=100)
nlp = nlp_model(docp)
get_nvar(nlp)
303
docp = beam(OptimalControlBackend(); N=200)
nlp = nlp_model(docp)
get_nvar(nlp)
603
Get a JuMP model
To get a JuMP model, first install JuMP and import the packages:
using JuMP
using OptimalControlProblems
Then, to obtain the JuMP model of the beam problem, run:
nlp = beam(JuMPBackend())
A JuMP Model
├ solver: none
├ objective_sense: MIN_SENSE
│ └ objective_function_type: QuadExpr
├ num_variables: 1503
├ num_constraints: 3008
│ ├ AffExpr in MOI.EqualTo{Float64}: 1004
│ ├ VariableRef in MOI.GreaterThan{Float64}: 1002
│ └ VariableRef in MOI.LessThan{Float64}: 1002
└ Names registered in the model
└ :u, :x1, :x2, :∂x1, :∂x2
For details on how to interact with the JuMP model, see the JuMP documentation. In particular, you can pass any arguments and keyword arguments of JuMP.Model to the beam problem, or to any other problem:
using Ipopt
nlp = beam(JuMPBackend(), Ipopt.Optimizer; add_bridges=true)
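Since an optimizer is attached, the model can then be solved with the usual JuMP workflow (a short sketch; optimize! and objective_value are standard JuMP functions):
optimize!(nlp)        # solve the beam problem with Ipopt
objective_value(nlp)  # objective value at the solution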
You can transform a JuMP model into a MathOptNLPModel and then use the full NLPModels.jl API. See this tutorial for more details.
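A minimal sketch, assuming the NLPModelsJuMP.jl package (which provides the MathOptNLPModel constructor) is installed:
using NLPModelsJuMP
using NLPModels

jump_model = beam(JuMPBackend())   # JuMP model of the beam problem
nlp = MathOptNLPModel(jump_model)  # wrap it as an NLPModels.jl model
get_nvar(nlp), get_ncon(nlp)       # query it through the NLPModels.jl API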