Private API

This page lists the non-exported (internal) symbols of CTBenchmarks.

Access these symbols with:

import CTBenchmarks
CTBenchmarks.<NAME>

ComboPerformance

CTBenchmarks.ComboPerformance - Type
ComboPerformance

Performance metrics for a single solver combination.

Fields

  • combo::String: Solver combination label (e.g., "(exa, ipopt)")
  • robustness::Float64: Percentage of instances solved (0-100)
  • efficiency::Float64: Percentage of instances where this combo was fastest (0-100)
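
Example

A sketch, assuming the default positional constructor (field values below are illustrative):

julia> using CTBenchmarks

julia> perf = CTBenchmarks.ComboPerformance("(exa, ipopt)", 95.0, 60.0);

julia> perf.robustness
95.0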

ITERATION

CTBenchmarks.ITERATION - Constant
ITERATION::Base.RefValue{Int}

Internal counter used to track how many times the JuMP solve loop has been executed, in order to adjust the solver print level after the first iteration.
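
Example

Since ITERATION is a Base.RefValue{Int}, it is read and updated with []-indexing (a sketch of typical usage):

julia> CTBenchmarks.ITERATION[] = 0
0

julia> CTBenchmarks.ITERATION[] += 1
1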

PerformanceProfile

CTBenchmarks.PerformanceProfile - Type
PerformanceProfile{M}

Immutable structure containing all data needed to plot and analyze a performance profile, together with the configuration that was used to build it.

Type parameter

  • M: Metric type used in the underlying profile (e.g., Float64 for CPU time).

Fields

  • bench_id::String: Benchmark identifier
  • df_instances::DataFrame: All (problem, grid_size) instances attempted
  • df_successful::DataFrame: Successful runs with aggregated metric and ratios
  • combos::Vector{String}: List of solver labels (typically "(model, solver)")
  • total_problems::Int: Total number of instances (N in Dolan–Moré)
  • min_ratio::Float64: Minimum performance ratio across all combos
  • max_ratio::Float64: Maximum performance ratio across all combos
  • config::PerformanceProfileConfig{M}: Configuration used to construct this profile
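
Example

A sketch of field access, assuming pp is a previously constructed PerformanceProfile:

julia> pp.total_problems             # total number of instances (N in Dolan–Moré)

julia> pp.max_ratio / pp.min_ratio   # spread of the performance ratios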

PerformanceProfilePlotConfig

CTBenchmarks.PerformanceProfilePlotConfig - Type
PerformanceProfilePlotConfig

Configuration for performance profile plot styling.

Fields

  • size::Tuple{Int,Int}: Plot size (width, height)
  • xlabel::String: X-axis label
  • ylabel::String: Y-axis label
  • title_font::Plots.Font: Font settings for the title
  • label_font::Plots.Font: Font settings for labels
  • linewidth::Float64: Width of the profile lines
  • markersize::Int: Size of the markers
  • framestyle::Symbol: Plot frame style
  • legend_position::Symbol: Legend position

ProfileAnalysis

CTBenchmarks.ProfileAnalysis - Type
ProfileAnalysis

Complete analysis results for a performance profile.

Fields

  • bench_id::String: Benchmark identifier
  • stats::ProfileStats: Statistical summary
  • performances::Vector{ComboPerformance}: Performance metrics for each combo
  • most_robust::Vector{String}: Combo(s) with highest robustness
  • most_efficient::Vector{String}: Combo(s) with highest efficiency

ProfileStats

CTBenchmarks.ProfileStats - Type
ProfileStats

Statistical summary of a performance profile dataset.

Fields

  • n_problems::Int: Number of unique problems
  • n_instances::Int: Total number of instances (problem × grid_size combinations)
  • n_combos::Int: Number of solver combinations
  • n_successful_runs::Int: Number of successful runs across all combos
  • n_successful_instances::Int: Number of instances with at least one successful run
  • unsuccessful_instances::Vector{Tuple}: List of instances that failed for all combos
  • instance_cols::Vector{Symbol}: Instance column names
  • solver_cols::Vector{Symbol}: Solver column names
  • criterion_name::String: Name of the performance criterion

_add_combo_series!

CTBenchmarks._add_combo_series! - Function
_add_combo_series!(plt, x, y, label, color, marker, cfg)

Add a single solver combination series (line + markers) to the plot.

_add_reference_lines!

_aggregate_metrics

CTBenchmarks._aggregate_metrics - Function
_aggregate_metrics(df, cfg) -> DataFrame

Aggregate metrics when multiple runs exist for the same instance/solver combination.

_compute_curve_points

CTBenchmarks._compute_curve_points - Function
_compute_curve_points(ratios, total_problems) -> (Vector{Float64}, Vector{Float64})

Compute the step function (x, y) points for the performance profile.
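
The underlying Dolan–Moré step function can be sketched as follows (an illustrative reimplementation, not the internal code):

# Sketch: cumulative fraction of instances solved within each performance ratio.
# Assumes `ratios` holds the Dolan–Moré ratios of one solver combination.
function curve_points_sketch(ratios::Vector{Float64}, total_problems::Int)
    r = sort(ratios)
    x = Float64[]
    y = Float64[]
    for (k, ρ) in enumerate(r)
        push!(x, ρ)                    # step location (performance ratio)
        push!(y, k / total_problems)   # fraction of instances solved so far
    end
    return x, y
end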

_compute_dolan_more_ratios

_compute_profile_metadata

CTBenchmarks._compute_profile_metadata - Function
_compute_profile_metadata(df, cfg) -> (Vector{String}, Float64, Float64)

Generate solver combination labels and compute min/max ratio bounds.

_extract_benchmark_metrics

_filter_benchmark_data

CTBenchmarks._filter_benchmark_data - Function
_filter_benchmark_data(df, cfg, allowed_combos) -> DataFrame

Filter benchmark rows based on configuration criteria and allowed combinations.

_format_analysis_markdown

CTBenchmarks._format_analysis_markdown - Function
_format_analysis_markdown(analysis::ProfileAnalysis) -> String

Format a ProfileAnalysis as a Markdown string. (Internal helper)

Arguments

  • analysis::ProfileAnalysis: Structured analysis results

Returns

  • String: Markdown-formatted analysis report

_init_profile_plot

_marker_indices_for_curve

CTBenchmarks._marker_indices_for_curve - Function
_marker_indices_for_curve(ratios; M = 6)

Compute marker positions for a performance profile curve.

Places M markers uniformly in log2 space between the first and last ratio, then snaps to the nearest available grid points.
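
A possible implementation of this placement strategy (an illustrative sketch, not the internal code; assumes ratios is sorted):

# Sketch: place M markers uniformly in log2 space, then snap each one
# to the nearest available ratio on the curve.
function marker_indices_sketch(ratios::Vector{Float64}; M::Int = 6)
    lo, hi = log2(first(ratios)), log2(last(ratios))
    targets = exp2.(range(lo, hi; length = M))
    return unique(map(t -> argmin(abs.(ratios .- t)), targets))
end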

_nearest_index

_plot_font_settings

CTBenchmarks._plot_font_settings - Function
_plot_font_settings()

Return font settings for plot titles and axis labels.

Returns

  • Tuple{Plots.Font, Plots.Font}: tuple (title_font, label_font).

_validate_benchmark_df

CTBenchmarks._validate_benchmark_df - Function
_validate_benchmark_df(df::DataFrame, cfg::PerformanceProfileConfig)

Check that the benchmark DataFrame contains all required columns. (Internal helper)

benchmark_data

CTBenchmarks.benchmark_data - Function
benchmark_data(;
    problems,
    solver_models,
    grid_sizes,
    disc_methods,
    tol,
    ipopt_mu_strategy,
    print_trace,
    max_iter,
    max_wall_time
)

Run benchmarks on optimal control problems and return results as a DataFrame.

For each combination of problem, solver, model, and grid size, this function:

  1. Sets up and solves the optimization problem
  2. Captures timing and memory statistics using @btimed or CUDA.@timed
  3. Extracts solver statistics (objective value, iterations)
  4. Stores all data in a DataFrame row

Arguments

  • problems: Vector of problem names (Symbols)
  • solver_models: Vector of Pairs mapping solver => models (e.g., [:ipopt => [:jump, :adnlp], :madnlp => [:exa, :exa_gpu]])
  • grid_sizes: Vector of grid sizes (Int)
  • disc_methods: Vector of discretization methods (Symbols)
  • tol: Solver tolerance (Float64)
  • ipopt_mu_strategy: Mu strategy for Ipopt (String)
  • print_trace: Boolean - whether to print solver output (for debugging)
  • max_iter: Maximum number of iterations (Int)
  • max_wall_time: Maximum wall time in seconds (Float64)

Returns

A DataFrame with columns:

  • problem: Symbol - problem name
  • solver: Symbol - solver used (:ipopt or :madnlp)
  • model: Symbol - model type (:jump, :adnlp, :exa, or :exa_gpu)
  • disc_method: Symbol - discretization method
  • grid_size: Int - number of grid points
  • tol: Float64 - solver tolerance
  • mu_strategy: Union{String, Missing} - mu strategy for Ipopt (missing for MadNLP)
  • max_iter: Int - maximum number of iterations
  • max_wall_time: Float64 - maximum wall time in seconds
  • benchmark: NamedTuple - full benchmark object from @btimed or CUDA.@timed
  • objective: Union{Float64, Missing} - objective function value (missing if failed)
  • iterations: Union{Int, Missing} - number of solver iterations (missing if failed)
  • status: Any - termination status (type depends on solver/model)
  • success: Bool - whether the solve succeeded
  • criterion: Union{String, Missing} - optimization sense ("min" or "max", missing if failed)
  • solution: Any - underlying solution object (JuMP model or OptimalControl solution)
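
Example

A hypothetical invocation (all argument values below are illustrative):

julia> using CTBenchmarks

julia> df = CTBenchmarks.benchmark_data(;
           problems = [:beam, :chain],
           solver_models = [:ipopt => [:jump, :adnlp], :madnlp => [:exa]],
           grid_sizes = [100, 200],
           disc_methods = [:trapeze],
           tol = 1e-8,
           ipopt_mu_strategy = "adaptive",
           print_trace = false,
           max_iter = 1000,
           max_wall_time = 300.0,
       );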

build_payload

CTBenchmarks.build_payload - Function
build_payload(
    results::DataFrames.DataFrame,
    meta::Dict,
    config::Dict
) -> Dict

Combine benchmark results, metadata, and configuration into a JSON-friendly payload.

The results DataFrame is converted to a vector of dictionaries (one per row) for easy JSON serialisation and reconstruction. Solutions are extracted and kept in memory (not serialised to JSON) for later plot generation.

Arguments

  • results::DataFrame: Benchmark results table produced by benchmark_data
  • meta::Dict: Environment metadata produced by generate_metadata
  • config::Dict: Configuration describing the benchmark run (problems, solvers, grids, etc.)

Returns

  • Dict: Payload with three keys:
    • "metadata" – merged metadata and configuration
    • "results" – vector of row dictionaries obtained from results
    • "solutions" – vector of solution objects (kept in memory only)

Example

julia> using CTBenchmarks

julia> payload = CTBenchmarks.build_payload(results, meta, config)
Dict{String, Any} with 3 entries:
  "metadata"  => Dict{String, Any}(...)
  "results"   => Vector{Dict}(...)
  "solutions" => Any[...]

compute_profile_stats

CTBenchmarks.compute_profile_stats - Function
compute_profile_stats(pp::PerformanceProfile) -> ProfileAnalysis

Compute statistical analysis of a performance profile.

This function extracts and calculates all performance metrics without any formatting. It returns structured data that can be used for different presentation formats (Markdown, JSON, etc.).

Arguments

  • pp::PerformanceProfile: Pre-computed performance profile data

Returns

  • ProfileAnalysis: Structured analysis results
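
Example

A sketch, assuming pp is a previously built PerformanceProfile:

julia> analysis = CTBenchmarks.compute_profile_stats(pp);

julia> analysis.most_robust   # combo(s) with the highest robustness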

costate_multiplier

CTBenchmarks.costate_multiplier - Function
costate_multiplier(criterion) -> Int64

Determine the sign used to plot costates based on the optimization criterion.

For maximisation problems, costates are plotted with a positive sign. For minimisation problems (the default), costates are plotted with a negative sign so that their visual behaviour matches the usual optimal control conventions.

Arguments

  • criterion: Optimization criterion (:min, :max, or missing).

Returns

  • Int: +1 if the problem is a maximisation, -1 otherwise.

Example

julia> using CTBenchmarks

julia> CTBenchmarks.costate_multiplier(:min)
-1

julia> CTBenchmarks.costate_multiplier(:max)
1

create_jump_layout

CTBenchmarks.create_jump_layout - Function
create_jump_layout(
    n::Int64,
    m::Int64,
    problem::Symbol,
    grid_size::Int64,
    state_labels::Vector{<:AbstractString},
    control_labels::Vector{<:AbstractString}
) -> Any

Create a nested plot layout for JuMP solutions.

Generates a multi-panel layout with states and costates in two columns, and controls spanning the full width below. This layout makes it easy to visually compare multiple solutions overlaid on the same plots.

Arguments

  • n::Int: Number of states
  • m::Int: Number of controls
  • problem::Symbol: Problem name (for plot styling)
  • grid_size::Int: Grid size (used for sizing calculations)
  • state_labels::Vector{<:AbstractString}: Labels for state components
  • control_labels::Vector{<:AbstractString}: Labels for control components

Returns

  • Plots.Plot: Nested plot layout with (n + n + m) accessible subplots

Layout Structure

  • Left column: State trajectories (n subplots)
  • Right column: Costate trajectories (n subplots)
  • Bottom: Control trajectories (m subplots, full width)

Details

Subplots are accessed linearly:

  • plt[1:n] = states
  • plt[n+1:2n] = costates
  • plt[2n+1:2n+m] = controls

Example

julia> using CTBenchmarks

julia> state_labels = ["x₁", "x₂", "x₃"]

julia> control_labels = ["u₁", "u₂"]

julia> plt = CTBenchmarks.create_jump_layout(3, 2, :beam, 100, state_labels, control_labels)

default_plot_config

filter_models_for_backend

CTBenchmarks.filter_models_for_backend - Function
filter_models_for_backend(
    models::Vector{Symbol},
    disc_method::Symbol
) -> Vector{Symbol}

Filter solver models depending on backend availability and discretization support.

  • GPU models (ending with _gpu) are kept only if CUDA is available.
  • JuMP models are kept only when disc_method == :trapeze.

Arguments

  • models::Vector{Symbol}: Candidate model types (e.g. [:jump, :adnlp, :exa, :exa_gpu])
  • disc_method::Symbol: Discretization method (:trapeze or :midpoint)

Returns

  • Vector{Symbol}: Filtered list of models that are compatible with the current backend configuration.

Example

julia> using CTBenchmarks

julia> CTBenchmarks.filter_models_for_backend([:jump, :exa, :exa_gpu], :trapeze)
3-element Vector{Symbol}:
 :jump
 :exa
 :exa_gpu

format_solution_label

CTBenchmarks.format_solution_label - Function
format_solution_label(
    model::Symbol,
    solver::Symbol,
    success::Bool
) -> String

Format a short label for use in plot legends, combining success status with the model and solver names.

The label starts with a tick or cross depending on whether the solution was successful, followed by model-solver.

Arguments

  • model::Symbol: Model name (e.g. :jump, :adnlp, :exa)
  • solver::Symbol: Solver name (e.g. :ipopt, :madnlp)
  • success::Bool: Whether the solve succeeded (true) or failed (false)

Returns

  • String: A label such as "✓ jump-ipopt" or "✗ exa-madnlp"

Example

julia> using CTBenchmarks

julia> CTBenchmarks.format_solution_label(:jump, :ipopt, true)
"✓ jump-ipopt"

julia> CTBenchmarks.format_solution_label(:exa, :madnlp, false)
"✗ exa-madnlp"

generate_metadata

CTBenchmarks.generate_metadata - Function
generate_metadata() -> Dict{String, String}

Collect metadata about the current Julia environment for benchmark reproducibility.

The returned dictionary includes a timestamp, Julia version, OS and machine information, as well as textual snapshots of the package environment.

Returns

  • Dict{String,String}: Dictionary with keys
    • "timestamp": Current time in UTC (ISO8601-like formatting)
    • "julia_version": Julia version string
    • "os": Kernel/OS identifier
    • "machine": Hostname of the current machine
    • "pkg_status": Output of Pkg.status() with ANSI colours
    • "versioninfo": Output of versioninfo() with ANSI colours
    • "pkg_manifest": Output of Pkg.status(mode=PKGMODE_MANIFEST) with ANSI colours

Example

julia> using CTBenchmarks

julia> meta = CTBenchmarks.generate_metadata()
Dict{String, String} with 7 entries:
  "timestamp"     => "2025-11-15 18:30:00 UTC"
  "julia_version" => "1.10.0"
  "os"            => "Linux"
  ⋮

get_color

CTBenchmarks.get_color - Function
get_color(
    model::Union{String, Symbol},
    solver::Union{String, Symbol},
    idx::Int64
) -> Symbol

Return a consistent color for a given (model, solver) pair.

This function ensures visual consistency across plots by assigning fixed colors to known (model, solver) combinations. For unknown combinations, it cycles through a default palette based on the provided index.

Fixed Mappings

  • (adnlp, ipopt) → :blue
  • (exa, ipopt) → :red
  • (adnlp, madnlp) → :green
  • (exa, madnlp) → :orange
  • (jump, ipopt) → :purple
  • (jump, madnlp) → :brown
  • (exa_gpu, madnlp) → :cyan

Arguments

  • model::Union{Symbol,String}: Model name (case-insensitive)
  • solver::Union{Symbol,String}: Solver name (case-insensitive)
  • idx::Int: Index for palette fallback (used if pair not in fixed mappings)

Returns

  • Symbol: Color symbol suitable for Plots.jl (e.g., :blue, :red)

Example

julia> using CTBenchmarks

julia> CTBenchmarks.get_color(:adnlp, :ipopt, 1)
:blue

julia> CTBenchmarks.get_color(:unknown, :solver, 2)
:red

get_color(params::Vector, idx::Int) -> Symbol

Variadic version of get_color that handles any number of solver parameters. Uses the first two parameters for color mapping, falling back to palette if not in fixed mappings.

Arguments

  • params::Vector: Vector of solver parameter values (e.g., [model, solver] or [disc_method, solver, ...])
  • idx::Int: Index for palette fallback

Returns

  • Symbol: Color symbol suitable for Plots.jl

get_config

CTBenchmarks.get_config - Function
get_config(registry::PerformanceProfileRegistry, name::AbstractString) -> PerformanceProfileConfig

Retrieve a registered performance profile configuration by name.

Arguments

  • registry: The registry to search.
  • name: Name of the configuration.

Throws

  • KeyError if the name is not found in the registry.

get_dimensions

CTBenchmarks.get_dimensions - Function
get_dimensions(
    group::DataFrames.SubDataFrame
) -> Tuple{Any, Any}

Get state and control dimensions from the first available solution in a group.

Extracts the problem dimensions (number of states and controls) by examining the first solution in the group. Works with both OptimalControl.Solution and JuMP.Model objects.

Arguments

  • group::SubDataFrame: DataFrame subset with solution rows

Returns

  • Tuple{Int, Int}: (n, m) where n = number of states, m = number of controls

Example

julia> using CTBenchmarks

julia> n, m = CTBenchmarks.get_dimensions(group)
(3, 2)

get_left_margin

CTBenchmarks.get_left_margin - Function
get_left_margin(problem::Symbol) -> Measures.AbsoluteLength

Get the left margin for plots based on the problem.

Different problems may require different margins to accommodate axis labels and titles. The beam problem uses a smaller margin (5mm) while other problems use 20mm.

Arguments

  • problem::Symbol: Problem name (e.g., :beam, :shuttle)

Returns

  • Plots.Measure: Left margin in millimeters (5mm or 20mm)

Example

julia> using CTBenchmarks

julia> CTBenchmarks.get_left_margin(:beam)
5 mm

julia> CTBenchmarks.get_left_margin(:shuttle)
20 mm

get_marker_indices

CTBenchmarks.get_marker_indices - Function
get_marker_indices(
    idx::Int64,
    card_g::Int64,
    grid_size::Int64,
    marker_interval::Int64
) -> StepRange{Int64, Int64}

Calculate marker indices with offset to avoid superposition between curves.

When multiple curves are overlaid on the same plot, markers can overlap and obscure the visualization. This function staggers the marker positions across curves by applying an offset based on the curve index.

Arguments

  • idx::Int: Curve index (1-based)
  • card_g::Int: Total number of curves
  • grid_size::Int: Number of grid points on the curve
  • marker_interval::Int: Base spacing between markers

Returns

  • StepRange{Int}: Range of indices (start:step:stop) for marker placement

Details

For curve idx out of card_g curves, the first marker is offset by:

offset = (idx - 1) * marker_interval / card_g

Example

julia> using CTBenchmarks

julia> CTBenchmarks.get_marker_indices(1, 3, 100, 20)
1:20:101

julia> CTBenchmarks.get_marker_indices(2, 3, 100, 20)
8:20:101

get_marker_style

CTBenchmarks.get_marker_style - Function
get_marker_style(
    model::Union{String, Symbol},
    solver::Union{String, Symbol},
    idx::Int64,
    grid_size::Int64
) -> Tuple{Symbol, Int64}

Get marker shape and spacing for a given (model, solver) pair.

This function provides consistent marker styles for known (model, solver) combinations and automatically calculates appropriate marker spacing based on grid size to avoid visual clutter while maintaining visibility.

Fixed Mappings

  • (adnlp, ipopt) → :circle
  • (exa, ipopt) → :square
  • (adnlp, madnlp) → :diamond
  • (exa, madnlp) → :utriangle
  • (jump, ipopt) → :dtriangle
  • (jump, madnlp) → :star5
  • (exa_gpu, madnlp) → :hexagon

Arguments

  • model::Union{Symbol,String}: Model name (case-insensitive)
  • solver::Union{Symbol,String}: Solver name (case-insensitive)
  • idx::Int: Index for marker fallback (used if pair not in fixed mappings)
  • grid_size::Int: Number of grid points on the curve

Returns

  • Tuple{Symbol, Int}: (marker_shape, marker_interval) where:
    • marker_shape: Symbol for marker type (e.g., :circle, :square)
    • marker_interval: Spacing between markers (calculated as max(1, grid_size ÷ 6))

Example

julia> using CTBenchmarks

julia> CTBenchmarks.get_marker_style(:adnlp, :ipopt, 1, 200)
(:circle, 33)

julia> CTBenchmarks.get_marker_style(:unknown, :solver, 2, 100)
(:square, 16)

get_marker_style(params::Vector, idx::Int) -> Symbol

Variadic version of get_marker_style that handles any number of solver parameters. Uses the first two parameters for marker mapping.

Arguments

  • params::Vector: Vector of solver parameter values
  • idx::Int: Index for marker fallback

Returns

  • Symbol: Marker shape symbol

get_marker_style(params::Vector, idx::Int, grid_size::Int) -> Tuple{Symbol, Int}

Variadic version with grid_size for marker interval calculation.

get_solution_dimensions

CTBenchmarks.get_solution_dimensions - Function
get_solution_dimensions(
    solution::CTModels.Solution
) -> Tuple{Int64, Int64}

Extract state and control dimensions from an OptimalControl solution.

Arguments

  • solution::OptimalControl.Solution: OptimalControl solution object

Returns

  • Tuple{Int, Int}: (n, m) where n = number of states, m = number of controls

get_solution_dimensions(
    solution::JuMP.Model
) -> Tuple{Any, Any}

Extract state and control dimensions from a JuMP model solution.

Arguments

  • solution::JuMP.Model: JuMP model solution object

Returns

  • Tuple{Int, Int}: (n, m) where n = number of states, m = number of controls

is_cuda_on

CTBenchmarks.is_cuda_on - Function
is_cuda_on() -> Bool

Check whether CUDA is available and functional on this machine.

This function is used to decide whether GPU-based models (those whose name ends with _gpu) can be run in the benchmark suite.

Returns

  • Bool: true if CUDA is functional, false otherwise.

Example

julia> using CTBenchmarks

julia> CTBenchmarks.is_cuda_on()
false

list_profiles

CTBenchmarks.list_profiles - Function
list_profiles(registry::PerformanceProfileRegistry) -> Vector{String}

Return a list of all registered profile names.

plot_jump_group

CTBenchmarks.plot_jump_group - Function
plot_jump_group(
    jump_rows::DataFrames.SubDataFrame,
    plt,
    color_idx::Int64,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64
) -> Tuple{Any, Int64}
plot_jump_group(
    jump_rows::DataFrames.SubDataFrame,
    plt,
    color_idx::Int64,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    card_g_override::Union{Nothing, Int64}
) -> Tuple{Any, Int64}

Plot all JuMP solutions in a group with consistent styling.

This function creates the plot layout if plt is nothing, then adds all JuMP solutions from the group. JuMP solutions require special layout handling compared to OptimalControl solutions.

Arguments

  • jump_rows::SubDataFrame: Rows containing JuMP solutions
  • plt: Existing plot (or nothing to create new)
  • color_idx::Int: Current color index for consistent styling
  • problem::Symbol: Problem name
  • grid_size::Int: Grid size
  • n::Int: Number of states
  • m::Int: Number of controls
  • card_g_override::Union{Int,Nothing}: Override for total number of curves (for marker offset)

Returns

  • Tuple{Plots.Plot, Int}: Updated plot and next color index

plot_jump_solution

CTBenchmarks.plot_jump_solution - Function
plot_jump_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    criterion
) -> Any
plot_jump_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    criterion,
    marker
) -> Any
plot_jump_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    criterion,
    marker,
    marker_interval
) -> Any
plot_jump_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    criterion,
    marker,
    marker_interval,
    idx::Int64
) -> Any
plot_jump_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    criterion,
    marker,
    marker_interval,
    idx::Int64,
    card_g::Int64
) -> Any

Create a new multi-panel plot for a single JuMP solution.

Generates a comprehensive visualization with state, costate, and control trajectories from a JuMP model, with spaced markers and legend entry indicating success status.

Arguments

  • solution: JuMP.Model object
  • model::Symbol: Model name (for legend)
  • solver::Symbol: Solver name (for legend)
  • success::Bool: Whether the solution converged successfully
  • color: Color symbol (from get_color)
  • problem::Symbol: Problem name (for plot styling)
  • grid_size::Int: Grid size
  • n::Int: Number of states
  • m::Int: Number of controls
  • criterion: Optimization criterion (:min or :max, affects costate sign)
  • marker: Marker shape symbol (default: :circle)
  • marker_interval::Int: Spacing between markers (default: 10)
  • idx::Int: Curve index for marker offset (default: 1)
  • card_g::Int: Total number of curves for marker offset (default: 1)

Returns

  • Plots.Plot: Multi-panel plot with (n + n + m) subplots

plot_jump_solution!

CTBenchmarks.plot_jump_solution! - Function
plot_jump_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    criterion
) -> Any
plot_jump_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    criterion,
    marker
) -> Any
plot_jump_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    criterion,
    marker,
    marker_interval
) -> Any
plot_jump_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    criterion,
    marker,
    marker_interval,
    idx::Int64
) -> Any
plot_jump_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    criterion,
    marker,
    marker_interval,
    idx::Int64,
    card_g::Int64
) -> Any

Add a JuMP solution to an existing multi-panel plot.

Appends state, costate, and control trajectories from a JuMP model to existing subplots with spaced markers and consistent styling. Updates the legend with success status.

Arguments

  • plt: Existing Plots.Plot to modify
  • solution: JuMP.Model object
  • model::Symbol: Model name (for legend)
  • solver::Symbol: Solver name (for legend)
  • success::Bool: Whether the solution converged successfully
  • color: Color symbol (from get_color)
  • n::Int: Number of states
  • m::Int: Number of controls
  • criterion: Optimization criterion (:min or :max, affects costate sign)
  • marker: Marker shape symbol (default: :none)
  • marker_interval::Int: Spacing between markers (default: 10)
  • idx::Int: Curve index for marker offset (default: 1)
  • card_g::Int: Total number of curves for marker offset (default: 1)

Returns

  • Plots.Plot: Modified plot with new solution added

Note

Even with nested layout, subplots are accessed linearly:

  • plt[1:n] = states
  • plt[n+1:2n] = costates
  • plt[2n+1:2n+m] = controls
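
For instance, with n = 2 states and m = 1 control, the costate panels are plt[3] and plt[4] and the control panel is plt[5]. A sketch, assuming Plots.jl subplot indexing (t and λ1 are hypothetical data):

julia> plot!(plt[3], t, λ1)   # overlay a curve on the first costate panel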

plot_ocp_group

CTBenchmarks.plot_ocp_group - Function
plot_ocp_group(
    ocp_rows::DataFrames.SubDataFrame,
    plt,
    color_idx::Int64,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64
) -> Tuple{Any, Int64}
plot_ocp_group(
    ocp_rows::DataFrames.SubDataFrame,
    plt,
    color_idx::Int64,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    card_g_override::Union{Nothing, Int64}
) -> Tuple{Any, Int64}

Plot all OptimalControl solutions in a group with consistent styling.

This function creates the base plot if plt is nothing, then adds all OptimalControl solutions from the group with consistent colors and markers. It manages color indexing across multiple groups to ensure visual consistency.

Arguments

  • ocp_rows::SubDataFrame: Rows containing OptimalControl solutions
  • plt: Existing plot (or nothing to create new)
  • color_idx::Int: Current color index for consistent styling
  • problem::Symbol: Problem name
  • grid_size::Int: Grid size
  • n::Int: Number of states
  • m::Int: Number of controls
  • card_g_override::Union{Int,Nothing}: Override for total number of curves (for marker offset)

Returns

  • Tuple{Plots.Plot, Int}: Updated plot and next color index

plot_ocp_solution

CTBenchmarks.plot_ocp_solution - Function
plot_ocp_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    marker,
    marker_interval
) -> Any
plot_ocp_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    marker,
    marker_interval,
    idx::Int64
) -> Any
plot_ocp_solution(
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    problem::Symbol,
    grid_size::Int64,
    n::Int64,
    m::Int64,
    marker,
    marker_interval,
    idx::Int64,
    card_g::Int64
) -> Any

Create a new multi-panel plot for a single OptimalControl solution.

Generates a comprehensive visualization with state, costate, and control trajectories, with spaced markers for improved visibility and a legend entry indicating success status.

Arguments

  • solution: OptimalControl.Solution object
  • model::Symbol: Model name (for legend)
  • solver::Symbol: Solver name (for legend)
  • success::Bool: Whether the solution converged successfully
  • color: Color symbol (from get_color)
  • problem::Symbol: Problem name (for plot styling)
  • grid_size::Int: Grid size
  • n::Int: Number of states
  • m::Int: Number of controls
  • marker: Marker shape symbol (from get_marker_style)
  • marker_interval::Int: Spacing between markers
  • idx::Int: Curve index for marker offset (default: 1)
  • card_g::Int: Total number of curves for marker offset (default: 1)

Returns

  • Plots.Plot: Multi-panel plot with (n + n + m) subplots

plot_ocp_solution!

CTBenchmarks.plot_ocp_solution! - Function
plot_ocp_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    marker,
    marker_interval
) -> Any
plot_ocp_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    marker,
    marker_interval,
    idx::Int64
) -> Any
plot_ocp_solution!(
    plt,
    solution,
    model::Symbol,
    solver::Symbol,
    success::Bool,
    color,
    n::Int64,
    m::Int64,
    marker,
    marker_interval,
    idx::Int64,
    card_g::Int64
) -> Any

Add an OptimalControl solution to an existing multi-panel plot.

Appends state, costate, and control trajectories to existing subplots with spaced markers and consistent styling. Updates the legend with success status.

Arguments

  • plt: Existing Plots.Plot to modify
  • solution: OptimalControl.Solution object
  • model::Symbol: Model name (for legend)
  • solver::Symbol: Solver name (for legend)
  • success::Bool: Whether the solution converged successfully
  • color: Color symbol (from get_color)
  • n::Int: Number of states
  • m::Int: Number of controls
  • marker: Marker shape symbol (from get_marker_style)
  • marker_interval::Int: Spacing between markers
  • idx::Int: Curve index for marker offset (default: 1)
  • card_g::Int: Total number of curves for marker offset (default: 1)

Returns

  • Plots.Plot: Modified plot with new solution added

plot_solution_comparison

CTBenchmarks.plot_solution_comparison - Function
plot_solution_comparison(
    group::DataFrames.SubDataFrame,
    problem::Symbol,
    grid_size::Int64
) -> Any

Create a comprehensive comparison plot for all solutions in a group.

This function orchestrates the plotting of all OptimalControl and JuMP solutions for a given problem and grid size, arranging them in a multi-panel layout with consistent styling.

Arguments

  • group::SubDataFrame: DataFrame subset with rows for the same (problem, grid_size)
  • problem::Symbol: Problem name (used for plot styling, e.g., left margin)
  • grid_size::Int: Grid size (used for marker spacing calculations)

Returns

  • Plots.Plot: Multi-panel plot with states, costates, and controls

Layout

  • Top panels: State trajectories (n columns)
  • Middle panels: Costate trajectories (n columns)
  • Bottom panels: Control trajectories (m columns, full width)

Strategy

  1. OptimalControl solutions plotted first (simple overlay with plot!)
  2. JuMP solutions plotted last (for proper subplot layout)
  3. All solutions use consistent colors and markers via get_color and get_marker_style
  4. Success/failure indicators (✓/✗) shown in legend

prettymemory

CTBenchmarks.prettymemory - Function
prettymemory(b) -> String

Format a memory footprint b, given in bytes, into a human-readable string using binary prefixes (bytes, KiB, MiB, GiB). Values of one KiB or more are shown with two decimal places.

The function uses standard binary units (1024 bytes = 1 KiB) and automatically selects the most appropriate unit based on the magnitude of the input value.

Arguments

  • b::Integer: Memory size in bytes (must be non-negative)

Returns

  • String: Formatted memory string with two decimal places and unit suffix

Example

julia> using CTBenchmarks

julia> CTBenchmarks.prettymemory(512)
"512 bytes"

julia> CTBenchmarks.prettymemory(1048576)
"1.00 MiB"

julia> CTBenchmarks.prettymemory(2147483648)
"2.00 GiB"

prettytime

CTBenchmarks.prettytime - Function
prettytime(t) -> String

Format a duration t expressed in seconds into a human-readable string with three decimal places and adaptive units (ns, μs, ms, s).

The function automatically selects the most appropriate unit based on the magnitude of the input value, ensuring readable output across a wide range of timescales.

Arguments

  • t::Real: Duration in seconds (can be positive or negative)

Returns

  • String: Formatted time string with three decimal places and unit suffix

Example

julia> using CTBenchmarks

julia> CTBenchmarks.prettytime(0.001234)
"1.234 ms"

julia> CTBenchmarks.prettytime(1.5)
"1.500 s "

julia> CTBenchmarks.prettytime(5.6e-7)
"560.000 ns"

print_benchmark_line

CTBenchmarks.print_benchmark_line - Function
print_benchmark_line(model::Symbol, stats::NamedTuple)

Print a colored, formatted line summarizing benchmark statistics for model.

This function formats and displays benchmark results in a human-readable table row, including execution time, memory usage, solver objective value, iteration count, and success status. It automatically detects and handles both CPU benchmarks (from @btimed) and GPU benchmarks (from CUDA.@timed).

Arguments

  • model::Symbol: Name of the model being benchmarked (e.g., :jump, :adnlp)
  • stats::NamedTuple: Statistics with the following fields:
    • benchmark: Timing and memory data (Dict or NamedTuple) with fields:
      • :time: Execution time in seconds
      • :bytes or :cpu_bytes, :gpu_bytes: Memory allocation
    • objective: Solver objective value (or missing)
    • iterations: Number of solver iterations (or missing)
    • success: Boolean indicating successful completion
    • criterion: Optimization criterion (e.g., :min, :max) or missing
    • status: Error message (used when benchmark is missing)

Output

Prints a colored, formatted line to stdout with:

  • Success indicator (✓ in green or ✗ in red)
  • Model name in magenta
  • Formatted execution time
  • Iteration count
  • Objective value in scientific notation
  • Criterion type
  • Memory usage (CPU and/or GPU)

Example

julia> using CTBenchmarks

julia> stats = (
           benchmark = (time = 0.123, bytes = 1048576),
           objective = 42.5,
           iterations = 100,
           success = true,
           criterion = :min
       )

julia> CTBenchmarks.print_benchmark_line(:jump, stats)
  ✓ | jump     | time:      0.123 s  | iters:   100 | obj: 4.250000e+01 (min) | CPU:       1.00 MiB

register!

CTBenchmarks.register! - Function
register!(registry::PerformanceProfileRegistry, name::AbstractString, config::PerformanceProfileConfig)

Register a performance profile configuration under a given name.

Arguments

  • registry: The registry to add the configuration to.
  • name: Name to associate with the configuration.
  • config: The performance profile configuration.
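
Example

A hypothetical round trip, assuming an initially empty registry::PerformanceProfileRegistry and a config::PerformanceProfileConfig constructed elsewhere:

julia> CTBenchmarks.register!(registry, "cpu_time", config);

julia> CTBenchmarks.list_profiles(registry)
1-element Vector{String}:
 "cpu_time"

julia> cfg = CTBenchmarks.get_config(registry, "cpu_time");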

save_json

CTBenchmarks.save_json - Function
save_json(payload::Dict, filepath::AbstractString) -> Int64

Save a JSON payload to a file. Creates the parent directory if needed and uses pretty printing for readability.

The payload is typically produced by build_payload. The "solutions" entry is excluded from serialisation so that the JSON contains only metadata and results.

Arguments

  • payload::Dict: Benchmark results with metadata
  • filepath::AbstractString: Full path to the output JSON file (including filename)

Returns

  • Int: Number of bytes written; the JSON file is created as a side effect.

Example

julia> using CTBenchmarks

julia> payload = CTBenchmarks.build_payload(results, meta, config)

julia> CTBenchmarks.save_json(payload, "benchmarks.json")

set_print_level

CTBenchmarks.set_print_level - Function
set_print_level(
    solver::Symbol,
    print_trace::Bool
) -> Union{Int64, MadNLP.LogLevels}

Set print level based on solver and print_trace flag.

For Ipopt, this returns an integer verbosity level. For MadNLP, it returns a MadNLP.LogLevels value. The flag print_trace is typically propagated from high-level benchmarking options.

Arguments

  • solver::Symbol: Solver name (:ipopt or :madnlp)
  • print_trace::Bool: Whether detailed solver output should be printed

Returns

  • Int or MadNLP.LogLevels: Print level appropriate for the chosen solver

Example

julia> using CTBenchmarks

julia> CTBenchmarks.set_print_level(:ipopt, true)
5

julia> CTBenchmarks.set_print_level(:madnlp, false)
MadNLP.ERROR

solve_and_extract_data

CTBenchmarks.solve_and_extract_data - Function
solve_and_extract_data(
    problem::Symbol,
    solver::Symbol,
    model::Symbol,
    grid_size::Int64,
    disc_method::Symbol,
    tol::Float64,
    mu_strategy::Union{Missing, String},
    print_trace::Bool,
    max_iter::Int64,
    max_wall_time::Float64
) -> NamedTuple{(:benchmark, :objective, :iterations, :status, :success, :criterion, :solution)}

Solve an optimal control problem and extract performance and solver statistics.

This internal helper function orchestrates the solve process for different model types (JuMP, adnlp, exa, exa_gpu) and captures timing, memory, and solver statistics. It handles error cases gracefully by returning missing values instead of propagating exceptions.

Arguments

  • problem::Symbol: problem name (e.g., :beam, :chain)
  • solver::Symbol: solver to use (:ipopt or :madnlp)
  • model::Symbol: model type (:jump, :adnlp, :exa, or :exa_gpu)
  • grid_size::Int: number of grid points
  • disc_method::Symbol: discretization method (:trapeze or :midpoint)
  • tol::Float64: solver tolerance
  • mu_strategy::Union{String, Missing}: mu strategy for Ipopt (missing for MadNLP)
  • print_trace::Bool: whether to emit detailed solver output
  • max_iter::Int: maximum number of iterations
  • max_wall_time::Float64: maximum wall time in seconds

Returns

A NamedTuple with fields:

  • benchmark: full benchmark object from @btimed (CPU) or CUDA.@timed (GPU)
  • objective::Union{Float64, Missing}: objective function value (missing if failed)
  • iterations::Union{Int, Missing}: number of solver iterations (missing if failed)
  • status::Any: termination status (type depends on solver/model)
  • success::Bool: whether the solve succeeded
  • criterion::Union{String, Missing}: optimization sense ("min" or "max", missing if failed)
  • solution::Union{Any, Missing}: the solution object (JuMP model or OCP solution, missing if failed)

Details

Model-specific logic:

  • JuMP (:jump): Uses @btimed for CPU benchmarking, requires :trapeze discretization
  • GPU (:exa_gpu): Uses CUDA.@timed for GPU benchmarking, requires MadNLP solver and functional CUDA
  • OptimalControl (:adnlp, :exa): Uses @btimed for CPU benchmarking with OptimalControl backend

Solver configuration:

  • Ipopt: Configured with MUMPS linear solver, mu strategy, and second-order barrier
  • MadNLP: Configured with MUMPS linear solver

Print level adjustment: The solver print level is reduced after the first iteration to avoid excessive output during benchmarking (controlled by the ITERATION counter).

Error handling: If any solve fails, returns a NamedTuple with success=false and missing values for objective, iterations, and solution, allowing batch processing to continue.

Throws

  • AssertionError: If GPU model is used without MadNLP, without functional CUDA, if JuMP model uses non-trapeze discretization, or if Ipopt is used without mu_strategy.
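
Example

A hypothetical call (argument values are illustrative):

julia> result = CTBenchmarks.solve_and_extract_data(
           :beam, :ipopt, :jump, 100, :trapeze,
           1e-8, "adaptive", false, 1000, 300.0
       );

julia> result.success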

strip_benchmark_value

CTBenchmarks.strip_benchmark_value - Function
strip_benchmark_value(bench) -> NamedTuple

Remove the value field from benchmark outputs (NamedTuple or Dict) to ensure JSON-serializable data while preserving all other statistics.

The value field typically contains the actual return value from the benchmarked code, which may not be JSON-serializable. This function strips it out while keeping timing, memory allocation, and other benchmark statistics intact.

Arguments

  • bench: Benchmark output (NamedTuple, Dict, or other type)

Returns

  • Same type as input, with value field removed (if present)

Details

Three methods are provided:

  • Default: Returns input unchanged (for types without a value field)
  • NamedTuple: Reconstructs NamedTuple without the :value key
  • Dict: Creates new Dict excluding both :value and "value" keys

Example

julia> using CTBenchmarks

julia> bench_nt = (time=0.001, alloc=1024, value=42)
(time = 0.001, alloc = 1024, value = 42)

julia> CTBenchmarks.strip_benchmark_value(bench_nt)
(time = 0.001, alloc = 1024)

julia> bench_dict = Dict("time" => 0.001, "value" => 42)
Dict{String, Float64} with 2 entries:
  "time"  => 0.001
  "value" => 42.0

julia> CTBenchmarks.strip_benchmark_value(bench_dict)
Dict{String, Float64} with 1 entry:
  "time" => 0.001