The optimal control problem object: structure and usage
In this manual, we'll first recall the main functionalities you can use when working with an optimal control problem (OCP). This includes essential operations like:
- Solving an OCP: How to find the optimal solution for your defined problem.
- Computing flows from an OCP: Understanding the dynamics and trajectories derived from the optimal solution.
- Printing an OCP: How to display a summary of your problem's definition.
After covering these core functionalities, we'll delve into the structure of an OCP. Since an OCP is stored as an OptimalControl.Model struct, we'll first explain how to access its underlying attributes, such as the problem's dynamics, costs, and constraints. Following this, we'll shift our focus to the simple properties inherent to an OCP, learning how to determine aspects like whether the problem:
- Is autonomous: do its dynamics and Lagrange cost avoid any explicit dependence on time?
- Has a fixed or free initial/final time: is the duration of the control problem predetermined or not?
Main functionalities
Let's define a basic optimal control problem.
using OptimalControl
t0 = 0
tf = 1
x0 = [-1, 0]
ocp = @def begin
t ∈ [ t0, tf ], time
x = (q, v) ∈ R², state
u ∈ R, control
x(t0) == x0
x(tf) == [0, 0]
ẋ(t) == [v(t), u(t)]
0.5∫( u(t)^2 ) → min
end
To print it, simply:
ocp
Abstract definition:
t ∈ [t0, tf], time
x = ((q, v) ∈ R², state)
u ∈ R, control
x(t0) == x0
x(tf) == [0, 0]
ẋ(t) == [v(t), u(t)]
0.5 * ∫(u(t) ^ 2) → min
The (autonomous) optimal control problem is of the form:
minimize J(x, u) = ∫ f⁰(x(t), u(t)) dt, over [0, 1]
subject to
ẋ(t) = f(x(t), u(t)), t in [0, 1] a.e.,
ϕ₋ ≤ ϕ(x(0), x(1)) ≤ ϕ₊,
where x(t) = (q(t), v(t)) ∈ R² and u(t) ∈ R.
We can now solve the problem (for more details, visit the solve manual):
using NLPModelsIpopt
solve(ocp)
▫ This is OptimalControl 1.3.3-beta, solving with: collocation → adnlp → ipopt (cpu)
📦 Configuration:
├─ Discretizer: collocation
├─ Modeler: adnlp
└─ Solver: ipopt
▫ This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.
Number of nonzeros in equality constraint Jacobian...: 1754
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 250
Total number of variables............................: 752
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 504
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 5.0000000e-03 1.10e+00 2.03e-14 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 6.0000960e+00 2.22e-16 1.78e-15 -11.0 6.08e+00 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 1
(scaled) (unscaled)
Objective...............: 6.0000960015360247e+00 6.0000960015360247e+00
Dual infeasibility......: 1.7763568394002505e-15 1.7763568394002505e-15
Constraint violation....: 2.2204460492503131e-16 2.2204460492503131e-16
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 1.7763568394002505e-15 1.7763568394002505e-15
Number of objective function evaluations = 2
Number of objective gradient evaluations = 2
Number of equality constraint evaluations = 2
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 2
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 1
Total seconds in IPOPT = 2.899
EXIT: Optimal Solution Found.
You can also compute flows (for more details, see the flow manual) from the optimal control problem, providing a control law in feedback form. The pseudo-Hamiltonian of this problem is
\[ H(x, p, u) = p_q\, v + p_v\, u + p^0 \frac{u^2}{2},\]
where $p^0 = -1$ since we are in the normal case. From the Pontryagin maximum principle, the maximising control is given in feedback form by
\[u(x, p) = p_v\]
since $\partial^2_{uu} H = p^0 = - 1 < 0$.
u = (x, p) -> p[2] # control law in feedback form
using OrdinaryDiffEq # needed to import numerical integrators
f = Flow(ocp, u) # compute the Hamiltonian flow function
p0 = [12, 6] # initial covector solution
xf, pf = f(t0, x0, p0, tf) # flow from (x0, p0) at time t0 to tf
xf # should be (0, 0)
2-element Vector{Float64}:
 -1.6443649131320877e-15
  6.0194942550307896e-15
A more advanced feature allows for the discretization of the optimal control problem. From the discretized version, you can obtain a Nonlinear Programming (NLP) problem and solve it using any appropriate NLP solver. For more details, visit the NLP manipulation tutorial.
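As a rough sketch of that workflow: the entry points below (`direct_transcription`, `get_nlp`) are taken from the NLP manipulation tutorial and should be treated as assumptions here, since the exact API may vary across versions.

```julia
using OptimalControl

# a minimal sketch, assuming the `direct_transcription` entry point
# documented in the NLP manipulation tutorial (names may differ):
docp = direct_transcription(ocp)  # discretize the OCP

# the underlying NLPModels problem (accessor name assumed from the tutorial)
# can then be handed to any compatible NLP solver instead of calling solve(ocp)
nlp = get_nlp(docp)
```

See the NLP manipulation tutorial for the authoritative API and solver options.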
Model struct
The optimal control problem ocp is an OptimalControl.Model struct.
CTModels.OCP.Model — Type
struct Model{TD<:CTModels.OCP.TimeDependence, TimesModelType<:CTModels.OCP.AbstractTimesModel, StateModelType<:CTModels.OCP.AbstractStateModel, ControlModelType<:CTModels.OCP.AbstractControlModel, VariableModelType<:CTModels.OCP.AbstractVariableModel, DynamicsModelType<:Function, ObjectiveModelType<:CTModels.OCP.AbstractObjectiveModel, ConstraintsModelType<:CTModels.OCP.AbstractConstraintsModel, BuildExaModelType<:Union{Nothing, Function}} <: CTModels.OCP.AbstractModel
Immutable optimal control problem model containing all problem components.
A Model is created from a PreModel once all required fields have been set. It is parameterised by the time dependence type (Autonomous or NonAutonomous) and the types of all its components.
Fields
- times::TimesModelType: Initial and final time specification.
- state::StateModelType: State variable structure (name, components).
- control::ControlModelType: Control variable structure (name, components).
- variable::VariableModelType: Optimisation variable structure (may be empty).
- dynamics::DynamicsModelType: System dynamics function (t, x, u, v) -> ẋ.
- objective::ObjectiveModelType: Cost functional (Mayer, Lagrange, or Bolza).
- constraints::ConstraintsModelType: All problem constraints.
- definition::Expr: Original symbolic definition of the problem.
- build_examodel::BuildExaModelType: Optional ExaModels builder function.
Example
julia> using CTModels
julia> # Models are typically created via the @def macro or PreModel
julia> ocp = CTModels.Model # Type reference
Each field can be accessed directly (ocp.times, etc.) or by a getter:
For instance, we can retrieve the times and definition values.
times(ocp) # returns the TimesModel struct containing time information
CTModels.OCP.TimesModel{CTModels.OCP.FixedTimeModel{Int64}, CTModels.OCP.FixedTimeModel{Int64}}(CTModels.OCP.FixedTimeModel{Int64}(0, "0"), CTModels.OCP.FixedTimeModel{Int64}(1, "1"), "t")
definition(ocp)
quote
#= manual-model.md:36 =#
(t ∈ [t0, tf], time)
#= manual-model.md:37 =#
x = ((q, v) ∈ R², state)
#= manual-model.md:38 =#
(u ∈ R, control)
#= manual-model.md:39 =#
x(t0) == x0
#= manual-model.md:40 =#
x(tf) == [0, 0]
#= manual-model.md:41 =#
ẋ(t) == [v(t), u(t)]
#= manual-model.md:42 =#
0.5 * ∫(u(t) ^ 2) → min
end
API Reference by Component
This section provides a comprehensive reference of all methods available for inspecting and querying optimal control problems. Methods are organized by component for easy navigation.
To illustrate the various methods, we define a more complex optimal control problem with free final time, variables, and various types of constraints:
ocp = @def begin
v = (w, tf) ∈ R², variable
s ∈ [0, tf], time
q = (x, y) ∈ R², state
u ∈ R, control
0 ≤ tf ≤ 2, (1)
u(s) ≥ 0, (cons_u)
x(s) + u(s) ≤ 10, (cons_mixed)
w == 0
x(0) == -1
y(0) - tf == 0, (cons_bound)
q(tf) == [0, 0]
q̇(s) == [y(s)+w, u(s)]
0.5∫( u(s)^2 ) → min
end
Times
The time component defines the temporal domain of the optimal control problem.
Times model
Get the times model:
times(ocp) # returns the TimesModel struct containing time information
CTModels.OCP.TimesModel{CTModels.OCP.FixedTimeModel{Int64}, CTModels.OCP.FreeTimeModel}(CTModels.OCP.FixedTimeModel{Int64}(0, "0"), CTModels.OCP.FreeTimeModel(2, "tf"), "s")
You can also access initial and final times separately:
initial_time(ocp) # returns the initial time value
0
For the final time, if it is free (part of the variable), you need to provide the variable value:
v = [1, 2] # example variable values: w=1, tf=2
final_time(ocp, v) # returns tf value from variable
2
If you try to get the final time without providing the variable when it's free, an error occurs:
julia> final_time(ocp) # error: tf is free, need variable
ERROR: Control Toolbox Error ❌ Error: CTBase.Exceptions.PreconditionError, Cannot get final time with this function ❓ Reason: This model type does not support direct final time access 📂 Context: final_time on AbstractModel 💡 Suggestion: Use final_time(ocp) on a Model with FixedTimeModel or use final_time(ocp, variable) for variable final time
Time variable names
Get the names of the time variable and time bounds:
time_name(ocp) # returns "s" (the time variable name in this OCP)
"s"
initial_time_name(ocp) # returns "0" (initial time is fixed at 0)
"0"
final_time_name(ocp) # returns "tf" (final time is a variable)
"tf"
Time fixedness predicates
Check whether initial or final times are fixed or free:
has_fixed_initial_time(ocp) # true if t0 is fixed
true
has_free_initial_time(ocp) # true if t0 is free (part of variable)
false
Alternative methods with the is_* prefix are also available and equivalent:
- is_initial_time_fixed(ocp) ≡ has_fixed_initial_time(ocp)
- is_initial_time_free(ocp) ≡ has_free_initial_time(ocp)
- is_final_time_fixed(ocp) ≡ has_fixed_final_time(ocp)
- is_final_time_free(ocp) ≡ has_free_final_time(ocp)
Similarly for final time:
has_fixed_final_time(ocp) # false (tf is free in this OCP)
false
has_free_final_time(ocp) # true (tf is part of variable v)
true
Autonomy
Check if the dynamics and Lagrange cost are autonomous (time-independent):
is_autonomous(ocp) # false if dynamics or cost depend on time
true
For more details on autonomy, see the Time dependence section below.
Summary table
| Method | Returns | Description |
|---|---|---|
times(ocp) | TimesModel | Times model struct (initial and final time specification) |
initial_time(ocp) | Float64 | Initial time t0 |
final_time(ocp) | Float64 | Final time tf (error if free) |
final_time(ocp, v) | Float64 | Final time tf from variable v |
time_name(ocp) | String | Time variable name |
initial_time_name(ocp) | String | Initial time name or value |
final_time_name(ocp) | String | Final time name or value |
has_fixed_initial_time(ocp) | Bool | True if t0 is fixed |
has_free_initial_time(ocp) | Bool | True if t0 is free |
has_fixed_final_time(ocp) | Bool | True if tf is fixed |
has_free_final_time(ocp) | Bool | True if tf is free |
is_autonomous(ocp) | Bool | True if time-independent |
State
The state component represents the state variables of the optimal control problem.
State component information
Get the name, dimension, and component names of the state:
state_name(ocp) # returns "q" (the state variable name)
"q"
state_dimension(ocp) # returns 2 (dimension of state)
2
state_components(ocp) # returns ["x", "y"] (component names)
2-element Vector{String}:
 "x"
 "y"
The component names are used when plotting the solution. See the plot manual.
State box constraints
Get the box constraints on the state (lower and upper bounds):
state_constraints_box(ocp) # returns box constraints if any
(Float64[], Int64[], Float64[], Symbol[])
The returned tuple has the structure (lb, indices, ub, labels) where:
- lb: vector of lower bounds
- indices: vector of component indices (1-based)
- ub: vector of upper bounds
- labels: vector of constraint labels
Get the dimension of state box constraints:
dim_state_constraints_box(ocp) # returns number of box constraints on state
0
Summary table
| Method | Returns | Description |
|---|---|---|
state_name(ocp) | String | State variable name |
state_dimension(ocp) | Int | State dimension |
state_components(ocp) | Vector{String} | State component names |
state_constraints_box(ocp) | Box constraints | State box constraints |
dim_state_constraints_box(ocp) | Int | Number of state box constraints |
Control
The control component represents the control variables of the optimal control problem.
Control component information
Get the name, dimension, and component names of the control:
control_name(ocp) # returns "u" (the control variable name)
"u"
control_dimension(ocp) # returns 1 (dimension of control)
1
control_components(ocp) # returns ["u"] (component names)
1-element Vector{String}:
 "u"
Control box constraints
Get the box constraints on the control:
control_constraints_box(ocp) # returns box constraints if any
([0.0], [1], [Inf], [:cons_u])
The returned tuple has the structure (lb, indices, ub, labels) where:
- lb: vector of lower bounds
- indices: vector of component indices (1-based)
- ub: vector of upper bounds
- labels: vector of constraint labels
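The tuple can be destructured directly. For the example problem above (the values in the comment are those of the output just shown):

```julia
# destructure the box-constraint tuple of the example problem above
lb, indices, ub, labels = control_constraints_box(ocp)
# the constraint u(s) ≥ 0 (labelled :cons_u) bounds control component 1:
# lb == [0.0], indices == [1], ub == [Inf], labels == [:cons_u]
```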
Get the dimension of control box constraints:
dim_control_constraints_box(ocp) # returns number of box constraints on control
1
Summary table
| Method | Returns | Description |
|---|---|---|
control_name(ocp) | String | Control variable name |
control_dimension(ocp) | Int | Control dimension |
control_components(ocp) | Vector{String} | Control component names |
control_constraints_box(ocp) | Box constraints | Control box constraints |
dim_control_constraints_box(ocp) | Int | Number of control box constraints |
Variable
The variable component represents the optimization variables (parameters) of the optimal control problem.
Variable component information
Get the name, dimension, and component names of the variable:
variable_name(ocp) # returns "v" (the variable name)
"v"
variable_dimension(ocp) # returns 2 (dimension of variable)
2
variable_components(ocp) # returns ["w", "tf"] (component names)
2-element Vector{String}:
 "w"
 "tf"
Variable box constraints
Get the box constraints on the variable:
variable_constraints_box(ocp) # returns box constraints if any
([0.0, 0.0], [2, 1], [2.0, 0.0], [:eq1, Symbol("label##1874")])
The returned tuple has the structure (lb, indices, ub, labels) where:
- lb: vector of lower bounds
- indices: vector of component indices (1-based)
- ub: vector of upper bounds
- labels: vector of constraint labels
Get the dimension of variable box constraints:
dim_variable_constraints_box(ocp) # returns number of box constraints on variable
2
Summary table
| Method | Returns | Description |
|---|---|---|
variable_name(ocp) | String | Variable name |
variable_dimension(ocp) | Int | Variable dimension |
variable_components(ocp) | Vector{String} | Variable component names |
variable_constraints_box(ocp) | Box constraints | Variable box constraints |
dim_variable_constraints_box(ocp) | Int | Number of variable box constraints |
Dynamics
The dynamics component defines the differential equations governing the state evolution.
Dynamics function
The dynamics are stored as an in-place function of the form f!(dx, t, x, u, v):
f! = dynamics(ocp)
s = 0.5 # time
q = [0.0, 1.0] # state
u = 2.0 # control
v = [1.0, 2.0] # variable
dq = similar(q)
f!(dq, s, q, u, v)
dq # returns the derivative q̇
2-element Vector{Float64}:
 2.0
 2.0
The first argument dx is mutated upon call and contains the state derivative. The other arguments are:
- t: time
- x: state
- u: control
- v: variable
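If an allocating (out-of-place) version is more convenient, the in-place function can be wrapped. This is a small convenience sketch, not part of the API:

```julia
f! = dynamics(ocp)  # in-place dynamics f!(dx, t, x, u, v)

# out-of-place wrapper: allocates dx, fills it via f!, and returns it
f = (t, x, u, v) -> (dx = similar(x); f!(dx, t, x, u, v); dx)

f(0.5, [0.0, 1.0], 2.0, [1.0, 2.0])  # == [2.0, 2.0], as in the example above
```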
Summary table
| Method | Returns | Description |
|---|---|---|
dynamics(ocp) | Function | In-place dynamics function f!(dx, t, x, u, v) |
Objective
The objective component defines the cost function to minimize or maximize.
Criterion
The criterion indicates whether the problem is a minimization or maximization:
criterion(ocp) # returns :min or :max
:min
Objective form
The objective function can be in Mayer form, Lagrange form, or Bolza form (combination of both):
- Mayer: $g(x(t_0), x(t_f), v) \to \min$
- Lagrange: $\int_{t_0}^{t_f} f^0(t, x(t), u(t), v)\, \mathrm{d}t \to \min$
- Bolza: $g(x(t_0), x(t_f), v) + \int_{t_0}^{t_f} f^0(t, x(t), u(t), v)\, \mathrm{d}t \to \min$
Check which form is present:
has_mayer_cost(ocp) # true if Mayer cost exists
false
has_lagrange_cost(ocp) # true if Lagrange cost exists
true
Alternative methods are also available:
- is_mayer_cost_defined(ocp) ≡ has_mayer_cost(ocp)
- is_lagrange_cost_defined(ocp) ≡ has_lagrange_cost(ocp)
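Since mayer(ocp) throws an error when the problem has no Mayer cost, a defensive pattern is to query the predicates before accessing the cost functions; a minimal sketch:

```julia
# guard the accessors with the predicates to avoid a PreconditionError
if has_mayer_cost(ocp)
    g = mayer(ocp)       # g(x0, xf, v)
end
if has_lagrange_cost(ocp)
    f⁰ = lagrange(ocp)   # f⁰(t, x, u, v)
end
```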
Mayer cost
Get the Mayer cost function with signature g(x0, xf, v):
julia> g = mayer(ocp) # error if no Mayer cost
ERROR: Control Toolbox Error ❌ Error: CTBase.Exceptions.PreconditionError, Cannot access Mayer cost ❓ Reason: This OCP has no Mayer objective defined 📂 Context: mayer accessor 💡 Suggestion: Define a Mayer objective using objective!(ocp, :min/:max, mayer=...) before accessing it
Lagrange cost
Get the Lagrange cost function with signature f⁰(t, x, u, v):
f⁰ = lagrange(ocp)
s = 0.5
q = [0.0, 1.0]
u = 2.0
v = [1.0, 2.0]
f⁰(s, q, u, v) # returns the integrand value
2.0
Summary table
| Method | Returns | Description |
|---|---|---|
criterion(ocp) | Symbol | :min or :max |
has_mayer_cost(ocp) | Bool | True if Mayer cost exists |
has_lagrange_cost(ocp) | Bool | True if Lagrange cost exists |
mayer(ocp) | Function | Mayer cost function g(x0, xf, v) |
lagrange(ocp) | Function | Lagrange cost function f⁰(t, x, u, v) |
Constraints
The constraints component defines the constraints on the optimal control problem.
Individual constraints
Retrieve a specific constraint by its label using the constraint function. It returns a tuple (type, f, lb, ub):
(type, f, lb, ub) = constraint(ocp, :eq1)
println("type: ", type)
x0 = [0, 1]
xf = [2, 3]
v = [1, 4]
println("val: ", f(x0, xf, v))
println("lb: ", lb)
println("ub: ", ub)
type: variable
val: 4
lb: 0.0
ub: 2.0
The function signature depends on the constraint type:
- For :boundary and :variable constraints: f(x0, xf, v)
- For other constraints (:control, :state, :mixed): f(t, x, u, v)
Examples of different constraint types:
(type, f, lb, ub) = constraint(ocp, :cons_bound)
println("type: ", type)
println("val: ", f(x0, xf, v))
type: boundary
val: -3.0
(type, f, lb, ub) = constraint(ocp, :cons_u)
println("type: ", type)
s = 0.5
q = [1.0, 2.0]
u = 3.0
println("val: ", f(s, q, u, v))
type: control
val: 3.0
(type, f, lb, ub) = constraint(ocp, :cons_mixed)
println("type: ", type)
println("val: ", f(s, q, u, v))
type: path
val: 4.0
All constraints
Get all constraints as a collection:
constraints(ocp) # returns all constraints
CTModels.OCP.ConstraintsModel{Tuple{Vector{Float64}, Main.var"#fun##1870#100", Vector{Float64}, Vector{Symbol}}, Tuple{Vector{Float64}, CTModels.OCP.var"#boundary_cons_nl!#build##3"{Int64, Vector{Int64}, Tuple{Main.var"#fun##1878#101", Main.var"#fun##1883#102", Main.var"#fun##1889#103"}}, Vector{Float64}, Vector{Symbol}}, Tuple{Vector{Float64}, Vector{Int64}, Vector{Float64}, Vector{Symbol}}, Tuple{Vector{Float64}, Vector{Int64}, Vector{Float64}, Vector{Symbol}}, Tuple{Vector{Float64}, Vector{Int64}, Vector{Float64}, Vector{Symbol}}}(([-Inf], Main.var"#fun##1870#100"(), [10.0], [:cons_mixed]), ([-1.0, 0.0, 0.0, 0.0], CTModels.OCP.var"#boundary_cons_nl!#build##3"{Int64, Vector{Int64}, Tuple{Main.var"#fun##1878#101", Main.var"#fun##1883#102", Main.var"#fun##1889#103"}}(3, [1, 1, 2], (Main.var"#fun##1878#101"(), Main.var"#fun##1883#102"(), Main.var"#fun##1889#103"())), [-1.0, 0.0, 0.0, 0.0], [Symbol("label##1876"), :cons_bound, Symbol("label##1887"), Symbol("label##1887")]), (Float64[], Int64[], Float64[], Symbol[]), ([0.0], [1], [Inf], [:cons_u]), ([0.0, 0.0], [2, 1], [2.0, 0.0], [:eq1, Symbol("label##1874")]))
Nonlinear constraints
Get nonlinear path and boundary constraints:
path_constraints_nl(ocp) # returns nonlinear path constraints
([-Inf], Main.var"#fun##1870#100"(), [10.0], [:cons_mixed])
boundary_constraints_nl(ocp) # returns nonlinear boundary constraints
([-1.0, 0.0, 0.0, 0.0], CTModels.OCP.var"#boundary_cons_nl!#build##3"{Int64, Vector{Int64}, Tuple{Main.var"#fun##1878#101", Main.var"#fun##1883#102", Main.var"#fun##1889#103"}}(3, [1, 1, 2], (Main.var"#fun##1878#101"(), Main.var"#fun##1883#102"(), Main.var"#fun##1889#103"())), [-1.0, 0.0, 0.0, 0.0], [Symbol("label##1876"), :cons_bound, Symbol("label##1887"), Symbol("label##1887")])
The returned tuples have the structure (lb, f!, ub, labels) where:
- lb: vector of lower bounds
- f!: constraint function (in-place)
- ub: vector of upper bounds
- labels: vector of constraint labels
The constraint functions have the following signatures:
- Path constraints: f!(val, t, x, u, v) where val is mutated
- Boundary constraints: f!(val, x0, xf, v) where val is mutated
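For instance, the path-constraint function of the example problem can be evaluated in place; the values below follow the earlier constraint examples (x = 1, u = 3 for the :cons_mixed constraint):

```julia
# evaluate the nonlinear path constraints of the example problem in place
lb, f!, ub, labels = path_constraints_nl(ocp)

val = zeros(length(lb))  # one slot per scalar path constraint
s, q, u, v = 0.5, [1.0, 2.0], 3.0, [1.0, 2.0]
f!(val, s, q, u, v)      # mutates val
# here val == [4.0] (x + u with x = 1, u = 3), and lb ≤ val ≤ ub reads
# [-Inf] ≤ [4.0] ≤ [10.0]
```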
Get the dimensions of nonlinear constraints:
dim_path_constraints_nl(ocp) # number of nonlinear path constraints
1
dim_boundary_constraints_nl(ocp) # number of nonlinear boundary constraints
4
To get the dual variable (or Lagrange multiplier) associated with a constraint, use the dual method on a solution.
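A sketch of that workflow; the dual(sol, ocp, label) signature is an assumption here, so check the solve manual for the exact API:

```julia
using NLPModelsIpopt

# duals are queried on a solution, not on the model, so solve first
sol = solve(ocp)
μ = dual(sol, ocp, :cons_u)  # multiplier of the u(s) ≥ 0 constraint (assumed signature)
```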
Summary table
| Method | Returns | Description |
|---|---|---|
constraint(ocp, label) | (Symbol, Function, Real, Real) | Get constraint by label |
constraints(ocp) | Collection | All constraints |
path_constraints_nl(ocp) | Constraints | Nonlinear path constraints |
boundary_constraints_nl(ocp) | Constraints | Nonlinear boundary constraints |
dim_path_constraints_nl(ocp) | Int | Number of nonlinear path constraints |
dim_boundary_constraints_nl(ocp) | Int | Number of nonlinear boundary constraints |
Other accessors
Problem definition
Get the problem definition as a Julia expression (the Expr stored in the definition field):
definition(ocp) # returns the OCP definition
quote
#= manual-model.md:131 =#
v = ((w, tf) ∈ R², variable)
#= manual-model.md:132 =#
(s ∈ [0, tf], time)
#= manual-model.md:133 =#
q = ((x, y) ∈ R², state)
#= manual-model.md:134 =#
(u ∈ R, control)
#= manual-model.md:135 =#
(0 ≤ tf ≤ 2, 1)
#= manual-model.md:136 =#
(u(s) ≥ 0, cons_u)
#= manual-model.md:137 =#
(x(s) + u(s) ≤ 10, cons_mixed)
#= manual-model.md:138 =#
w == 0
#= manual-model.md:139 =#
x(0) == -1
#= manual-model.md:140 =#
(y(0) - tf == 0, cons_bound)
#= manual-model.md:141 =#
q(tf) == [0, 0]
#= manual-model.md:142 =#
q̇(s) == [y(s) + w, u(s)]
#= manual-model.md:143 =#
0.5 * ∫(u(s) ^ 2) → min
end
Time dependence
Optimal control problems can be autonomous or non-autonomous. In an autonomous problem, neither the dynamics nor the Lagrange cost explicitly depends on the time variable.
The following problem is autonomous.
ocp = @def begin
t ∈ [ 0, 1 ], time
x ∈ R, state
u ∈ R, control
ẋ(t) == u(t) # no explicit dependence on t
x(1) + 0.5∫( u(t)^2 ) → min # no explicit dependence on t
end
is_autonomous(ocp)
true
The following problem is non-autonomous since the dynamics depends on t.
ocp = @def begin
t ∈ [ 0, 1 ], time
x ∈ R, state
u ∈ R, control
ẋ(t) == u(t) + t # explicit dependence on t
x(1) + 0.5∫( u(t)^2 ) → min
end
is_autonomous(ocp)
false
Finally, this last problem is non-autonomous because the Lagrange part of the cost depends on t.
ocp = @def begin
t ∈ [ 0, 1 ], time
x ∈ R, state
u ∈ R, control
ẋ(t) == u(t)
x(1) + 0.5∫( t + u(t)^2 ) → min # explicit dependence on t
end
is_autonomous(ocp)
false