# Double integrator: energy minimisation
Let us consider a wagon moving along a rail, whose acceleration can be controlled by a force $u$. We denote by $x = (q, v)$ the state of the wagon, where $q$ is the position and $v$ the velocity.
We assume that the mass is constant and equal to one, and that there is no friction. The dynamics are given by
\[ \dot q(t) = v(t), \quad \dot v(t) = u(t),\quad u(t) \in \mathbb{R},\]
which is simply the double integrator system. Let us consider a transfer starting at time $t_0 = 0$ and ending at time $t_f = 1$, for which we want to minimise the transfer energy
\[ \frac{1}{2}\int_{0}^{1} u^2(t) \, \mathrm{d}t\]
starting from $x(0) = (-1, 0)$ and aiming to reach the target $x(1) = (0, 0)$.
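As a quick sanity check of the model before introducing any solver, here is a minimal, package-free RK4 simulation of the double integrator (a sketch added for illustration, not part of the tutorial's toolchain). Under the constant control $u \equiv 1$ and from $x(0) = (-1, 0)$, the exact solution is $q(t) = -1 + t^2/2$ and $v(t) = t$, so the final state should be $(-0.5, 1)$.

```julia
# double integrator dynamics: ẋ = (v, u(t))
dynamics(x, u, t) = [x[2], u(t)]

# classical fourth-order Runge–Kutta integration over [t0, tf]
function rk4(x0, u, t0, tf; N = 100)
    x = copy(x0)
    h = (tf - t0) / N
    for i in 0:N-1
        t  = t0 + i*h
        k1 = dynamics(x,            u, t)
        k2 = dynamics(x + h/2 * k1, u, t + h/2)
        k3 = dynamics(x + h/2 * k2, u, t + h/2)
        k4 = dynamics(x + h * k3,   u, t + h)
        x  = x + h/6 * (k1 + 2k2 + 2k3 + k4)
    end
    return x
end

xT = rk4([-1.0, 0.0], t -> 1.0, 0.0, 1.0)
println(xT)   # ≈ [-0.5, 1.0]
```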
First, we need to import the OptimalControl.jl package to define the optimal control problem, NLPModelsIpopt.jl to solve it, and Plots.jl to visualise the solution.
```julia
using OptimalControl
using NLPModelsIpopt
using Plots
```

## Optimal control problem
Let us define the problem with the @def macro:
```julia
t0 = 0; tf = 1; x0 = [-1, 0]; xf = [0, 0]

ocp = @def begin
    t ∈ [t0, tf], time
    x = (q, v) ∈ R², state
    u ∈ R, control

    x(t0) == x0
    x(tf) == xf

    ẋ(t) == [v(t), u(t)]

    0.5∫( u(t)^2 ) → min
end
nothing # hide
```

## Mathematical formulation
\[ \begin{aligned} & \text{Minimise} && \frac{1}{2}\int_0^1 u^2(t) \,\mathrm{d}t \\ & \text{subject to} \\ & && \dot{x}(t) = [v(t), u(t)], \\[1.0em] & && x(0) = (-1,0), \\[0.5em] & && x(1) = (0,0). \end{aligned}\]
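Since the problem is linear-quadratic, Pontryagin's maximum principle gives a closed-form solution, which is handy as a reference when checking the numerical results (this short derivation is a sketch added for reference). The pseudo-Hamiltonian is $H(x, p, u) = p_q v + p_v u - u^2/2$; maximisation in $u$ gives $u = p_v$, and the adjoint equations $\dot p_q = 0$, $\dot p_v = -p_q$ make the control affine in time. Imposing the boundary conditions yields

\[ u^*(t) = 6 - 12t, \qquad q^*(t) = -1 + 3t^2 - 2t^3, \qquad v^*(t) = 6t - 6t^2, \]

so that the initial costate is $p(0) = (12, 6)$ and the optimal cost is $\frac{1}{2}\int_0^1 (6-12t)^2\,\mathrm{d}t = 6$.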
For a comprehensive introduction to the syntax used above to define the optimal control problem, see this abstract syntax tutorial. In particular, non-Unicode alternatives are available for derivatives, integrals, etc.
## Solve and plot

### Direct method
We can solve it simply with:
```julia
direct_sol = solve(ocp)
nothing # hide
```

And plot the solution with:

```julia
plot(direct_sol)
```

The `solve` function has options; see the solve tutorial. You can customise the plot; see the plot tutorial.
### Indirect method
The first solution was obtained using the so-called direct method.[1] Another approach is to use an indirect simple shooting method. We begin by importing the necessary packages.
```julia
using OrdinaryDiffEq # Ordinary Differential Equations (ODE) solver
using NonlinearSolve # Nonlinear Equations (NLE) solver
```

To define the shooting function, we must provide the maximising control in feedback form:
```julia
# maximising control: H(x, p, u) = p₁v + p₂u - u²/2 is maximised at u = p₂
u(x, p) = p[2]

# Hamiltonian flow
f = Flow(ocp, u)

# state projection, p being the costate
π((x, p)) = x

# shooting function
S(p0) = π( f(t0, x0, p0, tf) ) - xf
nothing # hide
```

We are now ready to solve the shooting equations.
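Note that for this linear-quadratic problem the shooting function can also be written by hand, which gives a package-free way to check the setup (a sketch added for illustration, with hypothetical names, not part of the tutorial's code): with $p_0 = (p_{q0}, p_{v0})$ the maximising control is $u(t) = p_{v0} - p_{q0}\,t$, and integrating the dynamics from $x_0$ over $[0, 1]$ gives the final state in closed form.

```julia
x0, xf = [-1.0, 0.0], [0.0, 0.0]

# closed-form shooting function: u(t) = pv0 - pq0*t, so integrating
# q̇ = v, v̇ = u from x0 over [0, 1] gives q(1) and v(1) explicitly
function shoot(p0)
    pq0, pv0 = p0
    v1 = x0[2] + pv0 - pq0/2            # v(1) = v0 + pv0 - pq0/2
    q1 = x0[1] + x0[2] + pv0/2 - pq0/6  # q(1) = q0 + v0 + pv0/2 - pq0/6
    return [q1, v1] - xf                # S(p0) = x(1) - xf
end

println(shoot([12.0, 6.0]))   # ≈ [0.0, 0.0]: p0 = (12, 6) solves the shooting equations
```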
```julia
# auxiliary in-place NLE function
nle!(s, p0, _) = s[:] = S(p0)

# initial guess for the Newton solver, from the direct solution
t = time_grid(direct_sol) # the time grid as a vector
p = costate(direct_sol)   # the costate as a function of time
p0_guess = p(t0)          # initial costate

# NLE problem with initial guess
prob = NonlinearProblem(nle!, p0_guess)

# resolution of S(p0) = 0
shooting_sol = solve(prob; show_trace = Val(true))
p0_sol = shooting_sol.u   # costate solution

# print the costate solution and the shooting function evaluation
println("\ncostate: p0 = ", p0_sol)
println("shoot: S(p0) = ", S(p0_sol), "\n")
```

To plot the solution obtained by the indirect method, we need to build the solution of the optimal control problem. This is done using the costate solution and the flow function.
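As an aside, on this simple problem the whole indirect method fits in a few lines of plain Julia: $S$ is affine in $p_0$, so a hand-rolled Newton iteration with a finite-difference Jacobian (a package-free stand-in for NonlinearSolve.jl, sketched here with hypothetical names and the closed-form final state of the Hamiltonian system) converges in essentially one step.

```julia
using LinearAlgebra

x0, xf = [-1.0, 0.0], [0.0, 0.0]

# closed-form final state under the maximising control u(t) = p0[2] - p0[1]*t
S(p0) = [ x0[1] + x0[2] + p0[2]/2 - p0[1]/6,   # q(1)
          x0[2] + p0[2] - p0[1]/2 ] - xf       # v(1)

# Newton iteration with a forward-difference Jacobian
function newton(S, p0; tol = 1e-10, maxit = 20)
    for _ in 1:maxit
        s = S(p0)
        maximum(abs.(s)) < tol && break
        n, h = length(p0), 1e-6
        J = zeros(n, n)
        for j in 1:n
            e = zeros(n); e[j] = h
            J[:, j] = (S(p0 + e) - s) / h      # j-th Jacobian column
        end
        p0 = p0 - J \ s                        # Newton step
    end
    return p0
end

p0_sol = newton(S, [0.0, 0.0])
println(p0_sol)   # ≈ [12.0, 6.0]
```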
```julia
indirect_sol = f((t0, tf), x0, p0_sol; saveat=range(t0, tf, 100))
plot(indirect_sol)
```

- You can use MINPACK.jl instead of NonlinearSolve.jl.
- For more details about the flow construction, visit the Compute flows from optimal control problems page.
- In this example, the shooting method is initialised with the costate obtained from the direct solve, which is generally a good way to warm-start an indirect method. See the Goddard tutorial for a more involved application of this technique.
- For a version with a state constraint on the velocity, see the State constraint example.
- [1] J. T. Betts. *Practical Methods for Optimal Control Using Nonlinear Programming*. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001.