Glider

Description of the problem

The hang glider problem is a classical benchmark in optimal control. It consists of steering a hang glider from an initial horizontal position and altitude to a target altitude while maximising the horizontal distance travelled. The glider dynamics incorporate lift, drag, gravity, and the effect of a thermal updraft. The control variable is the lift coefficient $c_L$, which modulates the aerodynamic lift and influences the trajectory through the thermal region.

Mathematical formulation

\[\begin{aligned} \min_{x, y, v_x, v_y, c_L, t_f} \quad & -x(t_f) \\[1em] \text{s.t.} \quad & \dot{x} = v_x, \quad \dot{y} = v_y, \\[0.25em] & \dot{v}_x = -\frac{L \, w + D \, v_x}{m \, v}, \quad \dot{v}_y = \frac{L \, v_x - D \, w}{m \, v} - g, \\[1em] & c_L^{\min} \le c_L(t) \le c_L^{\max}, \quad x(t) \ge 0, \quad v_x(t) \ge 0, \\[1em] & (x, y, v_x, v_y)|_{t=0} = (x_0, y_0, v_{x0}, v_{y0}), \quad (y, v_x, v_y)|_{t=t_f} = (y_f, v_{xf}, v_{yf}), \end{aligned}\]

with

\[v = \sqrt{v_x^2 + w^2}, \quad w = v_y - U_\text{updraft}(x), \quad L = \frac{1}{2} \rho S c_L v^2, \quad D = \frac{1}{2} \rho S (c_0 + c_1 c_L^2) v^2,\]

and

\[U_\text{updraft}(x) = u_c\, (1 - r) e^{-r}, \quad r = \left( \frac{x}{r_0} - 2.5 \right)^2,\]

where $m, g, S, \rho, c_0, c_1, u_c, r_0$ are constants describing the glider and the thermal properties.
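
For readers who prefer code, the right-hand side above translates into a few lines of Julia. The snippet below is only an illustrative sketch of the formulas, not the implementation shipped with the package; the function name `glider_rhs` and the parameter container `p` are hypothetical.

function glider_rhs(state, cL, p)
    # p is a hypothetical NamedTuple with fields m, g, S, ρ, c0, c1, u_c, r_0
    x, y, vx, vy = state

    # thermal updraft and relative vertical velocity
    r = (x / p.r_0 - 2.5)^2
    U = p.u_c * (1 - r) * exp(-r)
    w = vy - U
    v = sqrt(vx^2 + w^2)

    # aerodynamic lift and drag
    L = 0.5 * p.ρ * p.S * cL * v^2
    D = 0.5 * p.ρ * p.S * (p.c0 + p.c1 * cL^2) * v^2

    # equations of motion
    dx  = vx
    dy  = vy
    dvx = -(L * w + D * vx) / (p.m * v)
    dvy =  (L * vx - D * w) / (p.m * v) - p.g

    return (dx, dy, dvx, dvy)
end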


System parameters

| Parameter                    | Symbol     | Value     | Unit / Description |
| ---------------------------- | ---------- | --------- | ------------------ |
| Initial horizontal position  | $x_0$      | 0         | m                  |
| Initial altitude             | $y_0$      | 1000      | m                  |
| Final altitude               | $y_f$      | 900       | m                  |
| Initial horizontal velocity  | $v_{x0}$   | 13.23     | m/s                |
| Final horizontal velocity    | $v_{xf}$   | 13.23     | m/s                |
| Initial vertical velocity    | $v_{y0}$   | -1.288    | m/s                |
| Final vertical velocity      | $v_{yf}$   | -1.288    | m/s                |
| Lift coefficient bounds      | $c_L$      | [0, 1.4]  | control input      |
| Final time                   | $t_f$      | free      | s                  |

Qualitative behaviour

  • The optimal trajectory exploits the thermal updraft to maximise horizontal distance.
  • The lift coefficient $c_L$ balances altitude loss and horizontal progression.
  • The dynamics are nonlinear due to the coupling of lift, drag, and relative velocity in the thermal.
  • Horizontal velocity is maintained positive, and the trajectory respects the control and state constraints.

Characteristics

  • Nonlinear four-dimensional dynamics with one control input.
  • Free final time.
  • Mixed state and control constraints.
  • Widely used as a benchmark for trajectory optimisation and direct transcription methods in nonlinear optimal control.

References

  • Dolan, E. D., Moré, J. J., & Munson, T. S. (2004). Benchmarking Optimization Software with COPS 3.0. Argonne National Laboratory. Includes the hang glider problem in the COPS 3.0 benchmark collection, with problem formulation and solver comparisons.

  • PSOPT Example: Hang Glider Problem. psopt.net/list-of-examples Demonstrates a practical implementation of the hang glider optimal control problem in PSOPT.

  • PSOPT GitHub Repository: Hang Glider Example. github.com/PSOPT/psopt/blob/master/examples/glider/glider.cxx Source code example for testing direct transcription and NLP solvers with the hang glider problem.

Numerical set-up

In this section, we prepare the numerical environment required to study the problem. We begin by importing the relevant Julia packages and then initialise the data frames that will store the results of our simulations and computations. These structures provide the foundation for solving the problem and for comparing the different solution strategies in a consistent way.

using OptimalControlProblems    # to access the glider model
using OptimalControl            # to import the OptimalControl model
using NLPModelsIpopt            # to solve the model with Ipopt
import DataFrames: DataFrame    # to store data
using NLPModels                 # to retrieve data from the NLP solution
using Plots                     # to plot the trajectories
using Plots.PlotMeasures        # for leftmargin, bottommargin
using JuMP                      # to import the JuMP model
using Ipopt                     # to solve the JuMP model with Ipopt
using Printf                    # to print

data_pb = DataFrame(            # to store data about the problem
    Problem=Symbol[],
    Grid_Size=Int[],
    Variables=Int[],
    Constraints=Int[],
)

data_re = DataFrame(            # to store data about the resolutions
    Model=Symbol[],
    Flag=Any[],
    Iterations=Int[],
    Objective=Float64[],
)

Metadata

The default number of time steps is:

metadata(:glider)[:grid_size]
500

The default values of the parameters are:

metadata(:glider)[:parameters]
Parameter = Value
------------------
    t0 =  0.0000e+00
  x_t0 =  0.0000e+00
  y_t0 =  1.0000e+03
  y_tf =  9.0000e+02
 vx_t0 =  1.3230e+01
 vx_tf =  1.3230e+01
 vy_t0 = -1.2880e+00
 vy_tf = -1.2880e+00
   u_c =  2.5000e+00
  r_t0 =  1.0000e+02
     m =  1.0000e+02
     g =  9.8100e+00
    c0 =  3.4000e-02
    c1 =  6.9662e-02
     S =  1.4000e+01
     ρ =  1.1300e+00
cL_min =  0.0000e+00
cL_max =  1.4000e+00
  tf_l =  0.0000e+00
   x_l =  0.0000e+00
  vx_l =  0.0000e+00

Initial guess

Before solving the problem, it is often useful to inspect the initial guess (sometimes called the first iterate). This guess is obtained by running the NLP solver with max_iter = 0, which evaluates the problem formulation without performing any optimisation steps.

We plot the resulting trajectories for both the OptimalControl and JuMP models. Since both backends represent the same mathematical problem, their initial guesses should coincide, providing a useful consistency check before moving on to the optimised solution.

Code to plot the initial guess
Click to unfold and see the code for plotting the initial guess.
function plot_initial_guess(problem)

    # -----------------------------
    # Build OptimalControl problem
    # -----------------------------
    docp = eval(problem)(OptimalControlBackend())
    nlp_oc = nlp_model(docp)
    ocp_oc = ocp_model(docp)

    # Solve NLP with zero iterations (initial guess)
    nlp_oc_sol = NLPModelsIpopt.ipopt(nlp_oc; max_iter=0)

    # Build OptimalControl solution
    ocp_sol = build_ocp_solution(docp, nlp_oc_sol)

    # get dimensions
    n = state_dimension(ocp_oc)
    m = control_dimension(ocp_oc)

    # -----------------------------
    # Plot OptimalControl solution
    # -----------------------------
    plt = plot(
        ocp_sol;
        state_style=(color=1,),
        costate_style=(color=1, legend=:none),
        control_style=(color=1, legend=:none),
        path_style=(color=1, legend=:none),
        dual_style=(color=1, legend=:none),
        size=(816, 220*(n+m)),
        label="OptimalControl",
        leftmargin=20mm,
    )

    # Hide legend for additional state plots
    for i in 2:n
        plot!(plt[i]; legend=:none)
    end

    # -----------------------------
    # Build JuMP model
    # -----------------------------
    nlp_jp = eval(problem)(JuMPBackend())

    # Solve NLP with zero iterations (initial guess)
    set_optimizer(nlp_jp, Ipopt.Optimizer)
    set_optimizer_attribute(nlp_jp, "max_iter", 0)
    optimize!(nlp_jp)

    # Extract trajectories
    t_grid = time_grid(nlp_jp)
    x_fun = state(nlp_jp)
    u_fun = control(nlp_jp)
    p_fun = costate(nlp_jp)

    # -----------------------------
    # Plot JuMP solution on top
    # -----------------------------
    # States
    for i in 1:n
        label = i == 1 ? "JuMP" : :none
        plot!(plt[i], t_grid, t -> x_fun(t)[i]; color=2, linestyle=:dash, label=label)
    end

    # Costates
    for i in 1:n
        plot!(plt[n+i], t_grid, t -> -p_fun(t)[i]; color=2, linestyle=:dash, label=:none)
    end

    # Controls
    for i in 1:m
        plot!(plt[2*n+i], t_grid, t -> u_fun(t)[i]; color=2, linestyle=:dash, label=:none)
    end

    return plt
end
plot_initial_guess(:glider)
[Figure: initial guess for the states, costates, and control; OptimalControl in solid lines, JuMP overlaid in dashed lines]

Solving the problem

To solve an optimal control problem, we can rely on two complementary formulations: the OptimalControl backend, which works directly with the discretised control problem, and the JuMP backend, which leverages JuMP’s flexible modelling framework.

Both approaches generate equivalent NLPs that can be solved with Ipopt, and comparing them ensures consistency between the two formulations.

Before solving, we can inspect the discretisation details of the problem. The table below reports the number of grid points, decision variables, and constraints associated with the chosen formulation.

push!(data_pb,(
    Problem=:glider,
    Grid_Size=metadata(:glider)[:grid_size],
    Variables=get_nvar(nlp_model(glider(OptimalControlBackend()))),
    Constraints=get_ncon(nlp_model(glider(OptimalControlBackend()))),
))
1×4 DataFrame
 Row │ Problem  Grid_Size  Variables  Constraints
     │ Symbol   Int64      Int64      Int64
─────┼───────────────────────────────────────────
   1 │ glider         500       2506         2007
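
These figures are consistent with a direct transcription that keeps one value of each state and of the control at every grid node, plus the free final time, and enforces the dynamics with one equality constraint per state and per step. As a sanity check under that assumption,

\[\underbrace{501 \times (4 + 1)}_{\text{states and control at the nodes}} + \underbrace{1}_{t_f} = 2506 \ \text{variables}, \qquad \underbrace{4 \times 500}_{\text{dynamics}} + \underbrace{7}_{\text{boundary conditions}} = 2007 \ \text{equality constraints},\]

where the 7 boundary conditions are the four initial and three final conditions of the formulation.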

OptimalControl model

We first solve the problem using the OptimalControl backend. The process begins by importing the problem definition and constructing the associated nonlinear programming (NLP) model. This NLP is then passed to the Ipopt solver, with standard options for tolerance and barrier parameter strategy.

# import DOCP model
docp = glider(OptimalControlBackend())

# get NLP model
nlp_oc = nlp_model(docp)

# solve
nlp_oc_sol = NLPModelsIpopt.ipopt(
    nlp_oc;
    print_level=4,
    tol=1e-8,
    mu_strategy="adaptive",
    sb="yes",
)
Total number of variables............................:     2506
                     variables with only lower bounds:     1003
                variables with lower and upper bounds:      501
                     variables with only upper bounds:        0
Total number of equality constraints.................:     2007
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

MUMPS returned INFO(1) = -9 and requires more memory, reallocating.  Attempt 1
  Increasing icntl[13] from 1000 to 2000.

Number of Iterations....: 861

                                   (scaled)                 (unscaled)
Objective...............:  -1.2479784300232518e+03    1.2479784300232518e+03
Dual infeasibility......:   5.9265211740624020e-10    5.9265211740624020e-10
Constraint violation....:   1.0853185017367650e-10    1.0853185017367650e-10
Variable bound violation:   1.3987538460824567e-08    1.3987538460824567e-08
Complementarity.........:   4.9348463188155563e-09    4.9348463188155563e-09
Overall NLP error.......:   4.9348463188155563e-09    4.9348463188155563e-09


Number of objective function evaluations             = 877
Number of objective gradient evaluations             = 861
Number of equality constraint evaluations            = 877
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 864
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 861
Total seconds in IPOPT                               = 26.132

EXIT: Optimal Solution Found.

JuMP model

We now repeat the procedure using the JuMP backend. Here, the problem is reformulated as a JuMP model, which offers a flexible and widely used framework for nonlinear optimisation in Julia. The solver settings are chosen to mirror those used previously, so that the results can be compared on an equal footing.

# import model
nlp_jp = glider(JuMPBackend())

# solve with Ipopt
set_optimizer(nlp_jp, Ipopt.Optimizer)
set_optimizer_attribute(nlp_jp, "print_level", 4)
set_optimizer_attribute(nlp_jp, "tol", 1e-8)
set_optimizer_attribute(nlp_jp, "mu_strategy", "adaptive")
set_optimizer_attribute(nlp_jp, "linear_solver", "mumps")
set_optimizer_attribute(nlp_jp, "sb", "yes")
optimize!(nlp_jp)
Total number of variables............................:     2506
                     variables with only lower bounds:     1003
                variables with lower and upper bounds:      501
                     variables with only upper bounds:        0
Total number of equality constraints.................:     2007
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

MUMPS returned INFO(1) = -9 and requires more memory, reallocating.  Attempt 1
  Increasing icntl[13] from 1000 to 2000.

Number of Iterations....: 714

                                   (scaled)                 (unscaled)
Objective...............:  -1.2479784300208219e+03    1.2479784300208219e+03
Dual infeasibility......:   7.3523642768177161e-12    7.3523642768177161e-12
Constraint violation....:   1.3464784842653899e-12    1.3464784842653899e-12
Variable bound violation:   1.3987538460824567e-08    1.3987538460824567e-08
Complementarity.........:   7.1075365774773016e-11    7.1075365774773016e-11
Overall NLP error.......:   7.1075365774773016e-11    7.1075365774773016e-11


Number of objective function evaluations             = 738
Number of objective gradient evaluations             = 714
Number of equality constraint evaluations            = 738
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 717
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 714
Total seconds in IPOPT                               = 27.956

EXIT: Optimal Solution Found.

Numerical comparisons

In this section, we examine the results of the problem resolutions. We extract the solver status (flag), the number of iterations, and the objective value for each model. This provides a first overview of how each approach performs and sets the stage for a more detailed comparison of the solution trajectories.

# from OptimalControl model
push!(data_re,(
    Model=:OptimalControl,
    Flag=nlp_oc_sol.status,
    Iterations=nlp_oc_sol.iter,
    Objective=nlp_oc_sol.objective,
))

# from JuMP model
push!(data_re,(
    Model=:JuMP,
    Flag=termination_status(nlp_jp),
    Iterations=barrier_iterations(nlp_jp),
    Objective=objective_value(nlp_jp),
))
2×4 DataFrame
 Row │ Model           Flag            Iterations  Objective
     │ Symbol          Any             Int64       Float64
─────┼──────────────────────────────────────────────────────
   1 │ OptimalControl  first_order            861    1247.98
   2 │ JuMP            LOCALLY_SOLVED         714    1247.98

We compare the solutions obtained from the OptimalControl and JuMP models by examining the number of iterations required for convergence, the $L^2$-norms of the differences in states, controls, and additional variables, and the corresponding objective values. Both absolute and relative errors are reported, providing a clear quantitative measure of the agreement between the two approaches.
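
Concretely, the discrete $L^2$-norm used in the comparison is a trapezoidal approximation on the time grid $t_0 < \dots < t_N$, and the relative error normalises the absolute difference by the mean of the two norms, which is exactly what the `L2_norm` helper in the folded code below computes:

\[\|e\|_{L^2} \approx \sqrt{\sum_{i=0}^{N-1} \frac{e_i^2 + e_{i+1}^2}{2}\,(t_{i+1} - t_i)}, \qquad \text{rel. error} = \frac{\|x_\text{OC} - x_\text{JuMP}\|_{L^2}}{\tfrac{1}{2}\left(\|x_\text{OC}\|_{L^2} + \|x_\text{JuMP}\|_{L^2}\right)},\]

where $e_i$ denotes the pointwise difference between the two solutions at $t_i$.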

Code to print the numerical comparisons
Click to unfold and get the code of the numerical comparisons.
function L2_norm(T, X)
    # T and X are assumed to be one-dimensional vectors of the same length
    s = 0.0
    for i in 1:(length(T) - 1)
        s += 0.5 * (X[i]^2 + X[i + 1]^2) * (T[i + 1]-T[i])
    end
    return √(s)
end

function print_numerical_comparisons(problem, docp, nlp_oc_sol, nlp_jp)

    # get relevant data from OptimalControl model
    ocp_sol = build_ocp_solution(docp, nlp_oc_sol)
    t_oc = time_grid(ocp_sol)
    x_oc = state(ocp_sol).(t_oc)
    u_oc = control(ocp_sol).(t_oc)
    v_oc = variable(ocp_sol)
    o_oc = objective(ocp_sol)
    i_oc = iterations(ocp_sol)

    # get relevant data from JuMP model
    t_jp = time_grid(nlp_jp)
    x_jp = state(nlp_jp).(t_jp)
    u_jp = control(nlp_jp).(t_jp)
    o_jp = objective(nlp_jp)
    v_jp = variable(nlp_jp)
    i_jp = iterations(nlp_jp)

    x_vars = state_components(nlp_jp)
    u_vars = control_components(nlp_jp)
    v_vars = variable_components(nlp_jp)

    println("┌─ ", string(problem))
    println("│")
    println("├─  Number of Iterations")
    @printf("│     OptimalControl : %d   JuMP : %d\n", i_oc, i_jp)

    # States
    println("├─  States (L2 Norms)")
    for i in eachindex(x_vars)
        xi_oc = [x_oc[k][i] for k in eachindex(t_oc)]
        xi_jp = [x_jp[k][i] for k in eachindex(t_jp)]
        L2_ae = L2_norm(t_oc, xi_oc - xi_jp)
        L2_re = L2_ae / (0.5 * (L2_norm(t_oc, xi_oc) + L2_norm(t_oc, xi_jp)))
        @printf("│     %-6s Abs: %.3e   Rel: %.3e\n", x_vars[i], L2_ae, L2_re)
    end

    # Controls
    println("├─  Controls (L2 Norms)")
    for i in eachindex(u_vars)
        ui_oc = [u_oc[k][i] for k in eachindex(t_oc)]
        ui_jp = [u_jp[k][i] for k in eachindex(t_jp)]
        L2_ae = L2_norm(t_oc, ui_oc - ui_jp)
        L2_re = L2_ae / (0.5 * (L2_norm(t_oc, ui_oc) + L2_norm(t_oc, ui_jp)))
        @printf("│     %-6s Abs: %.3e   Rel: %.3e\n", u_vars[i], L2_ae, L2_re)
    end

    # Variables
    if !isnothing(v_vars)
        println("├─  Variables")
        for i in eachindex(v_vars)
            vi_oc = v_oc[i]
            vi_jp = v_jp[i]
            vi_ae = abs(vi_oc - vi_jp)
            vi_re = vi_ae / (0.5 * (abs(vi_oc) + abs(vi_jp)))
            @printf("│     %-6s Abs: %.3e   Rel: %.3e\n", v_vars[i], vi_ae, vi_re)
        end
    end

    # Objective
    o_ae = abs(o_oc - o_jp)
    o_re = o_ae / (0.5 * (abs(o_oc) + abs(o_jp)))
    println("├─  Objective")
    @printf("│            Abs: %.3e   Rel: %.3e\n", o_ae, o_re)
    println("└─")
    return nothing
end
print_numerical_comparisons(:glider, docp, nlp_oc_sol, nlp_jp)
┌─ glider
│
├─  Number of Iterations
│     OptimalControl : 861   JuMP : 714
├─  States (L2 Norms)
│     x      Abs: 2.077e-06   Rel: 2.960e-10
│     y      Abs: 7.731e-07   Rel: 8.128e-11
│     vx     Abs: 6.903e-07   Rel: 5.454e-09
│     vy     Abs: 7.184e-07   Rel: 5.486e-08
├─  Controls (L2 Norms)
│     cL     Abs: 3.378e-07   Rel: 4.102e-08
├─  Variables
│     tf     Abs: 9.837e-08   Rel: 9.995e-10
├─  Objective
│            Abs: 2.430e-09   Rel: 1.947e-12
└─

Plotting the solutions

In this section, we visualise the trajectories of the states, costates, and controls obtained from both the OptimalControl and JuMP solutions. The plots provide an intuitive way to compare the two approaches and to observe how the constraints and the optimal control influence the system dynamics.

For each variable, the OptimalControl solution is shown in solid lines, while the JuMP solution is overlaid using dashed lines. Since both models represent the same mathematical problem, their trajectories should closely coincide, highlighting the consistency between the two formulations.

# build an ocp solution to use the plot from OptimalControl package
ocp_sol = build_ocp_solution(docp, nlp_oc_sol)

# dimensions
n = state_dimension(ocp_sol)
m = control_dimension(ocp_sol)

# from OptimalControl solution
plt = plot(
    ocp_sol;
    color=1,
    size=(816, 240*(n+m)),
    label="OptimalControl",
    leftmargin=20mm,
)
for i in 2:length(plt)
    plot!(plt[i]; legend=:none)
end

# from JuMP solution
t = time_grid(nlp_jp)     # t0, ..., tN = tf
x = state(nlp_jp)         # function of time
u = control(nlp_jp)       # function of time
p = costate(nlp_jp)       # function of time

for i in 1:n # state
    label = i == 1 ? "JuMP" : :none
    plot!(plt[i], t, t -> x(t)[i]; color=2, linestyle=:dash, label=label)
end

for i in 1:n # costate
    plot!(plt[n+i], t, t -> -p(t)[i]; color=2, linestyle=:dash, label=:none)
end

for i in 1:m # control
    plot!(plt[2n+i], t, t -> u(t)[i]; color=2, linestyle=:dash, label=:none)
end
[Figure: optimal solution for the states, costates, and control; OptimalControl in solid lines, JuMP overlaid in dashed lines]