Examples
An optimal control problem can be described as minimising the cost functional
\[g(t_0, x(t_0), t_f, x(t_f)) + \int_{t_0}^{t_f} f^{0}(t, x(t), u(t))~\mathrm{d}t\]
where the state $x$ and the control $u$ are functions subject, for $t \in [t_0, t_f]$, to the differential constraint
\[ \dot{x}(t) = f(t, x(t), u(t))\]
and other constraints such as
\[\begin{array}{llcll} \xi_l &\le& \xi(t, u(t)) &\le& \xi_u, \\ \eta_l &\le& \eta(t, x(t)) &\le& \eta_u, \\ \psi_l &\le& \psi(t, x(t), u(t)) &\le& \psi_u, \\ \phi_l &\le& \phi(t_0, x(t_0), t_f, x(t_f)) &\le& \phi_u. \end{array}\]
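As a concrete instance, the double-integrator example defined below fits this template with state $x = (r, v)$, no Mayer term $g$, and the boundary conditions playing the role of $\phi$:
\[\min~\frac{1}{2}\int_{0}^{1} u(t)^2~\mathrm{d}t \quad\text{subject to}\quad \dot{r}(t) = v(t),\quad \dot{v}(t) = u(t),\]
\[r(0) = -1,\quad v(0) = 0,\quad r(1) = 0,\quad v(1) = 0.\]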
Let us define the following optimal control problem, a minimum-energy double integrator.
using OptimalControl
ocp = Model()
state!(ocp, 2, "x", ["r", "v"]) # dimension of the state with the names of the components
control!(ocp, 1) # dimension of the control
time!(ocp, t0=0, tf=1, name="s") # initial and final time, with the name of the time variable
constraint!(ocp, :initial, lb=[-1, 0], ub=[-1, 0]) # x(0) = (-1, 0): equal bounds encode an equality constraint
constraint!(ocp, :final  , lb=[ 0, 0], ub=[ 0, 0]) # x(1) = ( 0, 0)
A = [ 0 1
0 0 ]
B = [ 0
1 ]
dynamics!(ocp, (x, u) -> A*x + B*u)
objective!(ocp, :lagrange, (x, u) -> 0.5u^2)
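For this particular problem, the minimum-energy control is a standard closed-form result: $u(t) = 6 - 12t$, with cost $J = 6$. The following plain-Julia snippet (independent of OptimalControl, included only as a sanity check of the formulation above) verifies that this control steers the state from $(-1, 0)$ to $(0, 0)$:

```julia
# Closed-form candidate for the minimum-energy double integrator:
# u(t) = 6 - 12t drives (r, v) from (-1, 0) at t = 0 to (0, 0) at t = 1.
u(t) = 6 - 12t

# Integrating ṙ = v, v̇ = u analytically gives:
v(t) = 6t - 6t^2            # v(t) = ∫₀ᵗ u(s) ds
r(t) = -1 + 3t^2 - 2t^3     # r(t) = -1 + ∫₀ᵗ v(s) ds

# Boundary conditions of the example
@assert v(0) == 0 && r(0) == -1
@assert v(1) == 0 && r(1) == 0

# Cost ∫₀¹ ½ u(t)² dt = 6, checked with a midpoint Riemann sum
N = 10^6
J = sum(0.5 * u((k - 0.5)/N)^2 for k in 1:N) / N
@assert isapprox(J, 6; atol=1e-6)
```

A direct solver applied to the model above should recover this control up to discretisation error.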
Then, we can print the form of this optimal control problem:
ocp
The (autonomous) optimal control problem is of the form:
minimize J(x, u) = ∫ f⁰(x(s), u(s)) ds, over [0, 1]
subject to
ẋ(s) = f(x(s), u(s)), s in [0, 1] a.e.,
ϕl ≤ ϕ(x(0), x(1)) ≤ ϕu,
where x(s) = (r(s), v(s)) ∈ R² and u(s) ∈ R.
Declarations (* required):
╭────────┬────────┬──────────┬──────────┬───────────┬────────────┬─────────────╮
│ times* │ state* │ control* │ variable │ dynamics* │ objective* │ constraints │
├────────┼────────┼──────────┼──────────┼───────────┼────────────┼─────────────┤
│ V │ V │ V │ X │ V │ V │ V │
╰────────┴────────┴──────────┴──────────┴───────────┴────────────┴─────────────╯
You can also define the optimal control problem in an abstract form:
using OptimalControl
A = [ 0 1
      0 0 ]
B = [ 0
      1 ]
ocp = @def begin
    t ∈ [ 0, 1 ], time
    x ∈ R^2, state
    u ∈ R, control
    x(0) == [ -1, 0 ], (1)
    x(1) == [ 0, 0 ]
    ẋ(t) == A * x(t) + B * u(t)
    ∫( 0.5u(t)^2 ) → min
end
Then, you can print this optimal control problem:
ocp
The (autonomous) optimal control problem is given by:
t ∈ [0, 1], time
x ∈ R ^ 2, state
u ∈ R, control
x(0) == [-1, 0], 1
x(1) == [0, 0]
ẋ(t) == A * x(t) + B * u(t)
∫(0.5 * u(t) ^ 2) → min
The (autonomous) optimal control problem is of the form:
minimize J(x, u) = ∫ f⁰(x(t), u(t)) dt, over [0, 1]
subject to
ẋ(t) = f(x(t), u(t)), t in [0, 1] a.e.,
ϕl ≤ ϕ(x(0), x(1)) ≤ ϕu,
where x(t) ∈ R² and u(t) ∈ R.
Declarations (* required):
╭────────┬────────┬──────────┬──────────┬───────────┬────────────┬─────────────╮
│ times* │ state* │ control* │ variable │ dynamics* │ objective* │ constraints │
├────────┼────────┼──────────┼──────────┼───────────┼────────────┼─────────────┤
│ V │ V │ V │ X │ V │ V │ V │
╰────────┴────────┴──────────┴──────────┴───────────┴────────────┴─────────────╯