Beam
Description of the problem
The clamped beam problem is a classical benchmark in constrained optimal control. It models the deflection of a flexible beam that is fixed (clamped) at both ends and subject to an external force. Originating from the Euler–Bernoulli beam model (Bryson et al. 1963), it is widely used in benchmark suites. Both the state trajectory $x(\cdot)$ and the control $u(\cdot)$ are decision variables, and the aim is to minimise the control effort under boundary conditions and a state inequality constraint on the beam deflection.
Mathematical formulation
The problem can be stated as
\[\begin{aligned} \min_{x,u} \quad & J(x,u) = \int_0^1 u^2(t) \,\mathrm{d}t \\[1em] \text{s.t.} \quad & \dot{x}_1(t) = x_2(t), \quad \dot{x}_2(t) = u(t), \\[0.5em] & x_1(0) = 0, \; x_1(1) = 0, \; x_2(0) = 1, \; x_2(1) = -1, \\[0.5em] & 0 \le x_1(t) \le a. \end{aligned}\]
where $a$ denotes the maximal admissible deflection (e.g. $a=0.1$ in the current implementation).
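To make the formulation concrete, here is a sketch of how this problem could be written with OptimalControl's @def macro, using $a = 0.1$. This block is illustrative only (the exact DSL syntax may vary with the OptimalControl version); the model actually solved below is the one shipped with OptimalControlProblems.
using OptimalControl

@def ocp begin
    t ∈ [0, 1], time            # fixed time horizon
    x ∈ R², state               # x₁: deflection, x₂: slope
    u ∈ R, control              # control effort
    x(0) == [0, 1]              # initial conditions
    x(1) == [0, -1]             # final conditions
    0 ≤ x₁(t) ≤ 0.1             # deflection bound (a = 0.1)
    ẋ(t) == [x₂(t), u(t)]       # double-integrator dynamics
    ∫( u(t)^2 ) → min           # minimise the control effort
end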
Qualitative behaviour
This problem involves a second-order state constraint: the control appears only after differentiating the constrained state $x_1$ twice along the dynamics, so that $u$ drives the curvature of $x_1$:
\[\dot{x}_1(t) = x_2(t), \quad \ddot{x}_1(t) = u(t).\]
Along a boundary arc, where $x_1$ remains constant, both derivatives vanish. Consequently, the control along such an arc is zero:
\[u(t) = 0.\]
In optimal control terminology, a touch point refers to the case where the state hits the bound at a single instant, while a boundary arc designates an interval where the state remains on the bound.
The qualitative behaviour of the optimal trajectory depends on the parameter $a$:
Case 1: $a \ge 1/4$. The inequality constraint is inactive. The optimal solution is simply the unconstrained trajectory $x_1(t) = t\,(1-t)$, $x_2(t) = 1 - 2t$, $u(t) = -2$, whose maximal deflection is $1/4$, attained at $t = 1/2$.
Case 2: $1/6 \le a \le 1/4$. The trajectory touches the boundary exactly once (a touch point) at $t=1/2$. There is no interval of constraint activation, but the single contact reflects the influence of the state bound.
Case 3: $a < 1/6$. A boundary arc appears: the state constraint is active over a finite interval. Along this interval, the control vanishes: $u(t) = 0$.
These cases illustrate how second-order state inequality constraints can yield unconstrained solutions, isolated touch points, or extended boundary arcs, thereby shaping both the trajectory and the control.
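For reference, the boundary-arc case admits a classical closed-form solution; the expressions below restate the standard result from the literature (Bryson et al. 1963) for $a < 1/6$ without derivation. The constraint is active on $[3a,\, 1-3a]$ and, on the first arc $0 \le t \le 3a$,
\[x_1(t) = a\left[1 - \Bigl(1 - \tfrac{t}{3a}\Bigr)^{3}\right], \qquad x_2(t) = \Bigl(1 - \tfrac{t}{3a}\Bigr)^{2}, \qquad u(t) = -\frac{2}{3a}\Bigl(1 - \tfrac{t}{3a}\Bigr).\]
By symmetry, the mirrored expressions hold on $[1-3a, 1]$, with $u \equiv 0$ along the boundary arc itself. The resulting cost is $J = 8/(9a)$, i.e. $J \approx 8.889$ for $a = 0.1$, which can be compared with the numerical objective obtained below.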
Characteristics
- Linear–quadratic dynamics and cost.
- State inequality constraint induces either unconstrained motion, touch points, or boundary arcs depending on the parameter $a$.
- Serves as a benchmark for optimal control methods handling state constraints.
References
Bryson, A.E., Denham, W.F., & Dreyfus, S.E. (1963). Optimal programming problems with inequality constraints I: Necessary conditions for extremal solutions. AIAA Journal. https://doi.org/10.2514/3.2107. This seminal work develops the theoretical foundation for optimal control problems with inequality constraints, including the derivation of necessary conditions for extremal solutions. It provides the basis for analysing state-constrained control problems like the clamped beam.
BOCOP examples: Clamped beam problem (Examples-BOCOP.pdf). This example illustrates the practical implementation of the clamped beam problem in BOCOP, providing a benchmark for testing optimal control methods under state inequality constraints.
Numerical set-up
In this section, we prepare the numerical environment required to study the problem. We begin by importing the relevant Julia packages and then initialise the data frames that will store the results of our simulations and computations. These structures provide the foundation for solving the problem and for comparing the different solution strategies in a consistent way.
using OptimalControlProblems # to access the Beam model
using OptimalControl # to import the OptimalControl model
using NLPModelsIpopt # to solve the model with Ipopt
import DataFrames: DataFrame # to store data
using NLPModels # to retrieve data from the NLP solution
using Plots # to plot the trajectories
using Plots.PlotMeasures # for leftmargin, bottommargin
using JuMP # to import the JuMP model
using Ipopt # to solve the JuMP model with Ipopt
using Printf # to print
data_pb = DataFrame( # to store data about the problem
Problem=Symbol[],
Grid_Size=Int[],
Variables=Int[],
Constraints=Int[],
)
data_re = DataFrame( # to store data about the resolutions
Model=Symbol[],
Flag=Any[],
Iterations=Int[],
Objective=Float64[],
)
Metadata
The default number of time steps is:
metadata(:beam)[:grid_size]
500
The default values of the parameters are:
metadata(:beam)[:parameters]
Parameter = Value
------------------
t0 = 0.0000e+00
tf = 1.0000e+00
x₁_l = 0.0000e+00
x₁_u = 1.0000e-01
x₁_t0 = 0.0000e+00
x₂_t0 = 1.0000e+00
x₁_tf = 0.0000e+00
x₂_tf = -1.0000e+00
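To inspect every metadata entry at once rather than querying keys individually, one can simply iterate over the metadata container; a minimal sketch, assuming it behaves like a Dict or NamedTuple:
md = metadata(:beam)             # all metadata for the beam problem
for (key, value) in pairs(md)    # pairs works for Dict- or NamedTuple-like containers
    println(key, " => ", value)
end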
Initial guess
Before solving the problem, it is often useful to inspect the initial guess (sometimes called the first iterate). This guess is obtained by running the NLP solver with max_iter = 0, which evaluates the problem formulation without performing any optimisation steps.
We plot the resulting trajectories for both the OptimalControl and JuMP models. Since both backends represent the same mathematical problem, their initial guesses should coincide, providing a useful consistency check before moving on to the optimised solution.
Click to unfold and see the code for plotting the initial guess.
function plot_initial_guess(problem)
# -----------------------------
# Build OptimalControl problem
# -----------------------------
docp = eval(problem)(OptimalControlBackend())
nlp_oc = nlp_model(docp)
ocp_oc = ocp_model(docp)
# Solve NLP with zero iterations (initial guess)
nlp_oc_sol = NLPModelsIpopt.ipopt(nlp_oc; max_iter=0)
# Build OptimalControl solution
ocp_sol = build_ocp_solution(docp, nlp_oc_sol)
# get dimensions
n = state_dimension(ocp_oc)
m = control_dimension(ocp_oc)
# -----------------------------
# Plot OptimalControl solution
# -----------------------------
plt = plot(
ocp_sol;
state_style=(color=1,),
costate_style=(color=1, legend=:none),
control_style=(color=1, legend=:none),
path_style=(color=1, legend=:none),
dual_style=(color=1, legend=:none),
size=(816, 220*(n+m)),
label="OptimalControl",
leftmargin=5mm,
)
# Hide legend for additional state plots
for i in 2:n
plot!(plt[i]; legend=:none)
end
# -----------------------------
# Build JuMP model
# -----------------------------
nlp_jp = eval(problem)(JuMPBackend())
# Solve NLP with zero iterations (initial guess)
set_optimizer(nlp_jp, Ipopt.Optimizer)
set_optimizer_attribute(nlp_jp, "max_iter", 0)
optimize!(nlp_jp)
# Extract trajectories
t_grid = time_grid(nlp_jp)
x_fun = state(nlp_jp)
u_fun = control(nlp_jp)
p_fun = costate(nlp_jp)
# -----------------------------
# Plot JuMP solution on top
# -----------------------------
# States
for i in 1:n
label = i == 1 ? "JuMP" : :none
plot!(plt[i], t_grid, t -> x_fun(t)[i]; color=2, linestyle=:dash, label=label)
end
# Costates
for i in 1:n
plot!(plt[n+i], t_grid, t -> -p_fun(t)[i]; color=2, linestyle=:dash, label=:none)
end
# Controls
for i in 1:m
plot!(plt[2*n+i], t_grid, t -> u_fun(t)[i]; color=2, linestyle=:dash, label=:none)
end
return plt
end
plot_initial_guess(:beam)
Solving the problem
To solve an optimal control problem, we can rely on two complementary formulations: the OptimalControl backend, which works directly with the discretised control problem, and the JuMP backend, which leverages JuMP's flexible modelling framework. Both approaches generate equivalent NLPs that can be solved with Ipopt, and comparing them ensures consistency between the two formulations.
Before solving, we can inspect the discretisation details of the problem. The table below reports the number of grid points, decision variables, and constraints associated with the chosen formulation.
push!(data_pb,(
Problem=:beam,
Grid_Size=metadata(:beam)[:grid_size],
Variables=get_nvar(nlp_model(beam(OptimalControlBackend()))),
Constraints=get_ncon(nlp_model(beam(OptimalControlBackend()))),
))
Row | Problem | Grid_Size | Variables | Constraints
--- | --- | --- | --- | ---
1 | beam | 500 | 1503 | 1004
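These counts are consistent with the discretisation. With N = 500 time steps there are 501 grid points; assuming the two states and the control are discretised at every grid point, this gives $3 \times 501 = 1503$ decision variables. The dynamics contribute $2 \times 500 = 1000$ equality constraints, and the four boundary conditions bring the total to $1004$; the deflection bounds $0 \le x_1(t) \le a$ are treated as simple variable bounds, which is why Ipopt reports 501 variables with lower and upper bounds below.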
OptimalControl model
We first solve the problem using the OptimalControl backend. The process begins by importing the problem definition and constructing the associated nonlinear programming (NLP) model. This NLP is then passed to the Ipopt solver, with standard options for tolerance and barrier parameter strategy.
# import DOCP model
docp = beam(OptimalControlBackend())
# get NLP model
nlp_oc = nlp_model(docp)
# solve
nlp_oc_sol = NLPModelsIpopt.ipopt(
nlp_oc;
print_level=4,
tol=1e-8,
mu_strategy="adaptive",
sb="yes",
)
Total number of variables............................: 1503
variables with only lower bounds: 0
variables with lower and upper bounds: 501
variables with only upper bounds: 0
Total number of equality constraints.................: 1004
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
Number of Iterations....: 18
(scaled) (unscaled)
Objective...............: 8.8892818085812575e+00 8.8892818085812575e+00
Dual infeasibility......: 6.9988215544086416e-15 6.9988215544086416e-15
Constraint violation....: 1.1102230246251565e-16 1.1102230246251565e-16
Variable bound violation: 9.9998418573443715e-09 9.9998418573443715e-09
Complementarity.........: 3.8120412893005582e-09 3.8120412893005582e-09
Overall NLP error.......: 3.8120412893005582e-09 3.8120412893005582e-09
Number of objective function evaluations = 19
Number of objective gradient evaluations = 19
Number of equality constraint evaluations = 19
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 19
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 18
Total seconds in IPOPT = 1.186
EXIT: Optimal Solution Found.
JuMP model
We now repeat the procedure using the JuMP backend. Here, the problem is reformulated as a JuMP model, which offers a flexible and widely used framework for nonlinear optimisation in Julia. The solver settings are chosen to mirror those used previously, so that the results can be compared on an equal footing.
# import model
nlp_jp = beam(JuMPBackend())
# solve with Ipopt
set_optimizer(nlp_jp, Ipopt.Optimizer)
set_optimizer_attribute(nlp_jp, "print_level", 4)
set_optimizer_attribute(nlp_jp, "tol", 1e-8)
set_optimizer_attribute(nlp_jp, "mu_strategy", "adaptive")
set_optimizer_attribute(nlp_jp, "linear_solver", "mumps")
set_optimizer_attribute(nlp_jp, "sb", "yes")
optimize!(nlp_jp)
Total number of variables............................: 1503
variables with only lower bounds: 0
variables with lower and upper bounds: 501
variables with only upper bounds: 0
Total number of equality constraints.................: 1004
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
Number of Iterations....: 18
(scaled) (unscaled)
Objective...............: 8.8892818101688942e+00 8.8892818101688942e+00
Dual infeasibility......: 7.1054273576010019e-15 7.1054273576010019e-15
Constraint violation....: 9.4542429440735987e-17 9.4542429440735987e-17
Variable bound violation: 9.9994253710544712e-09 9.9994253710544712e-09
Complementarity.........: 3.8341725376273833e-09 3.8341725376273833e-09
Overall NLP error.......: 3.8341725376273833e-09 3.8341725376273833e-09
Number of objective function evaluations = 19
Number of objective gradient evaluations = 19
Number of equality constraint evaluations = 19
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 1
Total seconds in IPOPT = 0.040
EXIT: Optimal Solution Found.
Numerical comparisons
In this section, we examine the results of the problem resolutions. We extract the solver status (flag), the number of iterations, and the objective value for each model. This provides a first overview of how each approach performs and sets the stage for a more detailed comparison of the solution trajectories.
# from OptimalControl model
push!(data_re,(
Model=:OptimalControl,
Flag=nlp_oc_sol.status,
Iterations=nlp_oc_sol.iter,
Objective=nlp_oc_sol.objective,
))
# from JuMP model
push!(data_re,(
Model=:JuMP,
Flag=termination_status(nlp_jp),
Iterations=barrier_iterations(nlp_jp),
Objective=objective_value(nlp_jp),
))
Row | Model | Flag | Iterations | Objective
--- | --- | --- | --- | ---
1 | OptimalControl | first_order | 18 | 8.88928
2 | JuMP | LOCALLY_SOLVED | 18 | 8.88928
We compare the solutions obtained from the OptimalControl and JuMP models by examining the number of iterations required for convergence, the $L^2$-norms of the differences in states, controls, and additional variables, and the corresponding objective values. Both absolute and relative errors are reported, providing a clear quantitative measure of the agreement between the two approaches.
Click to unfold and get the code of the numerical comparisons.
function L2_norm(T, X)
# trapezoidal approximation of the L² norm; T (time grid) and X (sampled signal) are assumed one-dimensional
s = 0.0
for i in 1:(length(T) - 1)
s += 0.5 * (X[i]^2 + X[i + 1]^2) * (T[i + 1]-T[i])
end
return √(s)
end
function print_numerical_comparisons(problem, docp, nlp_oc_sol, nlp_jp)
# get relevant data from OptimalControl model
ocp_sol = build_ocp_solution(docp, nlp_oc_sol)
t_oc = time_grid(ocp_sol)
x_oc = state(ocp_sol).(t_oc)
u_oc = control(ocp_sol).(t_oc)
v_oc = variable(ocp_sol)
o_oc = objective(ocp_sol)
i_oc = iterations(ocp_sol)
# get relevant data from JuMP model
t_jp = time_grid(nlp_jp)
x_jp = state(nlp_jp).(t_jp)
u_jp = control(nlp_jp).(t_jp)
o_jp = objective(nlp_jp)
v_jp = variable(nlp_jp)
i_jp = iterations(nlp_jp)
x_vars = state_components(nlp_jp)
u_vars = control_components(nlp_jp)
v_vars = variable_components(nlp_jp)
println("┌─ ", string(problem))
println("│")
println("├─ Number of Iterations")
@printf("│ OptimalControl : %d JuMP : %d\n", i_oc, i_jp)
# States
println("├─ States (L2 Norms)")
for i in eachindex(x_vars)
xi_oc = [x_oc[k][i] for k in eachindex(t_oc)]
xi_jp = [x_jp[k][i] for k in eachindex(t_jp)]
L2_ae = L2_norm(t_oc, xi_oc - xi_jp)
L2_re = L2_ae / (0.5 * (L2_norm(t_oc, xi_oc) + L2_norm(t_oc, xi_jp)))
@printf("│ %-6s Abs: %.3e Rel: %.3e\n", x_vars[i], L2_ae, L2_re)
end
# Controls
println("├─ Controls (L2 Norms)")
for i in eachindex(u_vars)
ui_oc = [u_oc[k][i] for k in eachindex(t_oc)]
ui_jp = [u_jp[k][i] for k in eachindex(t_jp)]
L2_ae = L2_norm(t_oc, ui_oc - ui_jp)
L2_re = L2_ae / (0.5 * (L2_norm(t_oc, ui_oc) + L2_norm(t_oc, ui_jp)))
@printf("│ %-6s Abs: %.3e Rel: %.3e\n", u_vars[i], L2_ae, L2_re)
end
# Variables
if !isnothing(v_vars)
println("├─ Variables")
for i in eachindex(v_vars)
vi_oc = v_oc[i]
vi_jp = v_jp[i]
vi_ae = abs(vi_oc - vi_jp)
vi_re = vi_ae / (0.5 * (abs(vi_oc) + abs(vi_jp)))
@printf("│ %-6s Abs: %.3e Rel: %.3e\n", v_vars[i], vi_ae, vi_re)
end
end
# Objective
o_ae = abs(o_oc - o_jp)
o_re = o_ae / (0.5 * (abs(o_oc) + abs(o_jp)))
println("├─ Objective")
@printf("│ Abs: %.3e Rel: %.3e\n", o_ae, o_re)
println("└─")
return nothing
end
print_numerical_comparisons(:beam, docp, nlp_oc_sol, nlp_jp)
┌─ beam
│
├─ Number of Iterations
│ OptimalControl : 18 JuMP : 18
├─ States (L2 Norms)
│ x₁ Abs: 1.186e-08 Rel: 1.337e-07
│ x₂ Abs: 9.991e-08 Rel: 2.884e-07
├─ Controls (L2 Norms)
│ u Abs: 1.760e-06 Rel: 5.904e-07
├─ Variables
├─ Objective
│ Abs: 1.588e-09 Rel: 1.786e-10
└─
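As a final sanity check, we can compare the numerical objective with the analytical value $J = 8/(9a)$ recalled in the qualitative analysis above (here $a = 0.1$); a minimal sketch, reusing the Ipopt statistics already computed:
a = 0.1                          # maximal deflection used in this benchmark
J_exact = 8 / (9a)               # analytical objective for the boundary-arc case, ≈ 8.8889
J_num = nlp_oc_sol.objective     # objective returned by Ipopt on the OptimalControl model
@printf("J_num = %.6f  J_exact = %.6f  rel. error = %.2e\n", J_num, J_exact, abs(J_num - J_exact) / J_exact)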
Plotting the solutions
In this section, we visualise the trajectories of the states, costates, and controls obtained from both the OptimalControl and JuMP solutions. The plots provide an intuitive way to compare the two approaches and to observe how the constraints and the optimal control influence the system dynamics.
For each variable, the OptimalControl solution is shown in solid lines, while the JuMP solution is overlaid using dashed lines. Since both models represent the same mathematical problem, their trajectories should closely coincide, highlighting the consistency between the two formulations.
# build an ocp solution to use the plot from OptimalControl package
ocp_sol = build_ocp_solution(docp, nlp_oc_sol)
# dimensions
n = state_dimension(ocp_sol)
m = control_dimension(ocp_sol)
# from OptimalControl solution
plt = plot(
ocp_sol;
color=1,
size=(816, 240*(n+m)),
label="OptimalControl",
leftmargin=5mm,
)
for i in 2:length(plt)
plot!(plt[i]; legend=:none)
end
# from JuMP solution
t = time_grid(nlp_jp) # t0, ..., tN = tf
x = state(nlp_jp) # function of time
u = control(nlp_jp) # function of time
p = costate(nlp_jp) # function of time
for i in 1:n # state
label = i == 1 ? "JuMP" : :none
plot!(plt[i], t, t -> x(t)[i]; color=2, linestyle=:dash, label=label)
end
for i in 1:n # costate
plot!(plt[n+i], t, t -> -p(t)[i]; color=2, linestyle=:dash, label=:none)
end
for i in 1:m # control
plot!(plt[2n+i], t, t -> u(t)[i]; color=2, linestyle=:dash, label=:none)
end
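Since the state constraint is active on an inner arc, it can also be instructive to overlay the deflection bound $a = 0.1$ on the corresponding panel; a minimal sketch, assuming the first subplot displays $x_1$:
# overlay the deflection bound x₁ ≤ 0.1 on the first panel, using the same call pattern as above
plot!(plt[1], t, t -> 0.1; color=:black, linestyle=:dot, label="bound a")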