• Stars: 129
• Rank: 270,979 (Top 6%)
• Language: Julia
• License: MIT License
• Created: over 10 years ago
• Updated: 3 months ago


Repository Details

Julia interface for the CPLEX optimization software

CPLEX.jl


CPLEX.jl is a wrapper for the IBM® ILOG® CPLEX® Optimization Studio.

CPLEX.jl has two components:

• a thin wrapper around the complete C API
• an interface to MathOptInterface

The C API can be accessed via CPLEX.CPXxx functions, where the names and arguments are identical to the C API. See the CPLEX documentation for details.
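For example, the number of columns in a model can be queried through the raw C API. This is a minimal sketch, assuming a direct-mode JuMP model so that backend(model) exposes the env and lp fields also used in the Benders example later in this README:

using JuMP, CPLEX

# A direct-mode model keeps a single underlying CPLEX problem object.
model = direct_model(CPLEX.Optimizer())
@variable(model, x[1:3] >= 0)
cplex = backend(model)
# CPXgetnumcols mirrors the C routine of the same name: it returns the number
# of columns (variables) in the CPLEX problem object. Here it returns 3.
ncols = CPLEX.CPXgetnumcols(cplex.env, cplex.lp)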

Affiliation

This wrapper is maintained by the JuMP community and is not officially supported by IBM. However, we thank IBM for providing us with a CPLEX license to test CPLEX.jl on GitHub. If you are a commercial customer interested in official support for CPLEX in Julia, let them know.

License

CPLEX.jl is licensed under the MIT License.

The underlying solver is a closed-source commercial product for which you must purchase a license.

Free CPLEX licenses are available for academics and students.

Installation

CPLEX.jl requires CPLEX version 12.10, 20.1, or 22.1.

First, obtain a license for CPLEX and install the CPLEX solver, following the instructions on IBM's website.

Once installed, set the CPLEX_STUDIO_BINARIES environment variable as appropriate and run Pkg.add("CPLEX"). For example:

# On Windows, this might be:
ENV["CPLEX_STUDIO_BINARIES"] = "C:\\Program Files\\CPLEX_Studio1210\\cplex\\bin\\x86-64_win\\"
# On OSX, this might be:
ENV["CPLEX_STUDIO_BINARIES"] = "/Applications/CPLEX_Studio1210/cplex/bin/x86-64_osx/"
# On Unix, this might be:
ENV["CPLEX_STUDIO_BINARIES"] = "/opt/CPLEX_Studio1210/cplex/bin/x86-64_linux/"

import Pkg
Pkg.add("CPLEX")

Note: the exact path may differ. Check which folder you installed CPLEX in, and update the path accordingly.
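As a quick sanity check before adding the package, you can confirm that the directory you set actually exists (a sketch; adapt it to your install):

# The directory set in CPLEX_STUDIO_BINARIES should exist and contain the
# CPLEX binaries before running Pkg.add("CPLEX").
isdir(ENV["CPLEX_STUDIO_BINARIES"])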

Use with JuMP

Use CPLEX.jl with JuMP as follows:

using JuMP, CPLEX
model = Model(CPLEX.Optimizer)
set_attribute(model, "CPX_PARAM_EPINT", 1e-8)
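A fuller, self-contained sketch (the linear program below is illustrative and not taken from the CPLEX documentation) might look like:

using JuMP, CPLEX

model = Model(CPLEX.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@variable(model, 0 <= y <= 3)
@objective(model, Min, 12x + 20y)
@constraint(model, c1, 6x + 8y >= 100)
@constraint(model, c2, 7x + 12y >= 120)
optimize!(model)
# Query the solution once the solve has finished.
println("objective = ", objective_value(model))
println("x = ", value(x), ", y = ", value(y))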

MathOptInterface API

The CPLEX optimizer supports a subset of MathOptInterface objective functions, variable types, constraint types, and model attributes; see the CPLEX.jl documentation for the complete lists.
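Support for a particular function-set pair can also be checked programmatically through MathOptInterface. The pair below is only an example:

using JuMP, CPLEX

optimizer = CPLEX.Optimizer()
# Returns true if CPLEX.jl accepts scalar affine less-than constraints.
MOI.supports_constraint(
    optimizer,
    MOI.ScalarAffineFunction{Float64},
    MOI.LessThan{Float64},
)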

Options

Options match those of the C API in the CPLEX documentation.
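For example, to set a time limit and turn on screen output, pass the parameter names from the CPLEX documentation as raw attributes (a sketch; any documented parameter can be set the same way):

using JuMP, CPLEX

model = Model(CPLEX.Optimizer)
# CPXPARAM_TimeLimit is a double-valued parameter (seconds);
# CPXPARAM_ScreenOutput is an integer on/off switch.
set_attribute(model, "CPXPARAM_TimeLimit", 60.0)
set_attribute(model, "CPXPARAM_ScreenOutput", 1)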

Callbacks

CPLEX.jl provides a solver-specific callback to CPLEX:

using JuMP, CPLEX, Test

model = direct_model(CPLEX.Optimizer())
set_silent(model)

# This is very, very important!!! Only use callbacks in single-threaded mode.
MOI.set(model, MOI.NumberOfThreads(), 1)

@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Clong[]
function my_callback_function(cb_data::CPLEX.CallbackContext, context_id::Clong)
    # You can reference variables outside the function as normal
    push!(cb_calls, context_id)
    # You can select where the callback is run
    if context_id != CPX_CALLBACKCONTEXT_CANDIDATE
        return
    end
    ispoint_p = Ref{Cint}()
    ret = CPXcallbackcandidateispoint(cb_data, ispoint_p)
    if ret != 0 || ispoint_p[] == 0
        return  # No candidate point available or error
    end
    # You can query CALLBACKINFO items
    valueP = Ref{Cdouble}()
    ret = CPXcallbackgetinfodbl(cb_data, CPXCALLBACKINFO_BEST_BND, valueP)
    @info "Best bound is currently: $(valueP[])"
    # As well as any other C API
    x_p = Vector{Cdouble}(undef, 2)
    obj_p = Ref{Cdouble}()
    ret = CPXcallbackgetincumbent(cb_data, x_p, 0, 1, obj_p)
    if ret == 0
        @info "Objective incumbent is: $(obj_p[])"
        @info "Incumbent solution is: $(x_p)"
        # Use CPLEX.column to map between variable references and the 1-based
        # column.
        x_col = CPLEX.column(cb_data, index(x))
        @info "x = $(x_p[x_col])"
    else
        # Unable to query incumbent.
    end

    # Before querying `callback_value`, you must call:
    CPLEX.load_callback_variable_primal(cb_data, context_id)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
end
MOI.set(model, CPLEX.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2

Annotations for automatic Benders' decomposition

Here is an example of using the annotation feature for automatic Benders' decomposition:

using JuMP, CPLEX

function add_annotation(
    model::JuMP.Model,
    variable_classification::Dict;
    all_variables::Bool = true,
)
    num_variables = sum(length(it) for it in values(variable_classification))
    if all_variables
        @assert num_variables == JuMP.num_variables(model)
    end
    indices, annotations = CPXINT[], CPXLONG[]
    for (key, value) in variable_classification
        for variable_ref in value
            push!(indices, variable_ref.index.value - 1)
            push!(annotations, CPX_BENDERS_MASTERVALUE + key)
        end
    end
    cplex = backend(model)
    index_p = Ref{CPXINT}()
    CPXnewlongannotation(
        cplex.env,
        cplex.lp,
        CPX_BENDERS_ANNOTATION,
        CPX_BENDERS_MASTERVALUE,
    )
    CPXgetlongannotationindex(
        cplex.env,
        cplex.lp,
        CPX_BENDERS_ANNOTATION,
        index_p,
    )
    CPXsetlongannotations(
        cplex.env,
        cplex.lp,
        index_p[],
        CPX_ANNOTATIONOBJ_COL,
        length(indices),
        indices,
        annotations,
    )
    return
end

# Problem

function illustrate_full_annotation()
    c_1, c_2 = [1, 4], [2, 3]
    dim_x, dim_y = length(c_1), length(c_2)
    b = [-2; -3]
    A_1, A_2 = [1 -3; -1 -3], [1 -2; -1 -1]
    model = JuMP.direct_model(CPLEX.Optimizer())
    set_optimizer_attribute(model, "CPXPARAM_Benders_Strategy", 1)
    @variable(model, x[1:dim_x] >= 0, Bin)
    @variable(model, y[1:dim_y] >= 0)
    variable_classification = Dict(0 => [x[1], x[2]], 1 => [y[1], y[2]])
    @constraint(model, A_2 * y + A_1 * x .<= b)
    @objective(model, Min, c_1' * x + c_2' * y)
    add_annotation(model, variable_classification)
    optimize!(model)
    x_optimal = value.(x)
    y_optimal = value.(y)
    println("x: $(x_optimal), y: $(y_optimal)")
end

function illustrate_partial_annotation()
    c_1, c_2 = [1, 4], [2, 3]
    dim_x, dim_y = length(c_1), length(c_2)
    b = [-2; -3]
    A_1, A_2 = [1 -3; -1 -3], [1 -2; -1 -1]
    model = JuMP.direct_model(CPLEX.Optimizer())
    # Note that the "CPXPARAM_Benders_Strategy" has to be set to 2 if partial
    # annotation is provided. If "CPXPARAM_Benders_Strategy" is set to 1, then
    # the following error will be thrown:
    # `CPLEX Error  2002: Invalid Benders decomposition.`
    set_optimizer_attribute(model, "CPXPARAM_Benders_Strategy", 2)
    @variable(model, x[1:dim_x] >= 0, Bin)
    @variable(model, y[1:dim_y] >= 0)
    variable_classification = Dict(0 => [x[1]], 1 => [y[1], y[2]])
    @constraint(model, A_2 * y + A_1 * x .<= b)
    @objective(model, Min, c_1' * x + c_2' * y)
    add_annotation(model, variable_classification; all_variables = false)
    optimize!(model)
    x_optimal = value.(x)
    y_optimal = value.(y)
    println("x: $(x_optimal), y: $(y_optimal)")
end
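To run the two examples above:

# Solve the same problem first with a full annotation, then with a partial one.
illustrate_full_annotation()
illustrate_partial_annotation()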

More Repositories

1. JuMP.jl (Julia, 2,125 stars): Modeling language for Mathematical Optimization (linear, mixed-integer, conic, semidefinite, nonlinear)
2. Convex.jl (Julia, 538 stars): A Julia package for disciplined convex programming
3. MathOptInterface.jl (Julia, 361 stars): An abstraction layer for mathematical optimization solvers
4. Gurobi.jl (Julia, 211 stars): A Julia interface to the Gurobi Optimizer
5. Ipopt.jl (Julia, 143 stars): Julia interface to the Ipopt nonlinear solver
6. JuMPTutorials.jl (Jupyter Notebook, 137 stars): Tutorials on using JuMP for mathematical optimization in Julia
7. Hypatia.jl (Julia, 129 stars): Interior point solver for general convex conic optimization problems
8. Pajarito.jl (Julia, 127 stars): A solver for mixed-integer convex optimization
9. DiffOpt.jl (Julia, 116 stars): Differentiating convex optimization programs w.r.t. program parameters
10. SumOfSquares.jl (Julia, 113 stars): Sum of Squares Programming for Julia
11. GLPK.jl (Julia, 101 stars): GLPK wrapper module for Julia
12. HiGHS.jl (Julia, 95 stars): Julia wrapper for the HiGHS solver
13. Dualization.jl (Julia, 90 stars): Automatic dualization feature for MathOptInterface.jl
14. SCS.jl (Julia, 81 stars): Julia wrapper for SCS (https://github.com/cvxgrp/scs)
15. Cbc.jl (Julia, 80 stars): Julia wrapper for the Cbc solver
16. KNITRO.jl (Julia, 72 stars): Julia interface to the Artelys Knitro solver
17. AmplNLWriter.jl (Julia, 64 stars): Julia interface to AMPL-enabled solvers
18. Xpress.jl (Julia, 62 stars): A Julia interface for the FICO Xpress optimization suite
19. Pavito.jl (Julia, 59 stars): A gradient-based outer approximation solver for convex mixed-integer nonlinear programming (MINLP)
20. MultiObjectiveAlgorithms.jl (Julia, 56 stars)
21. Clp.jl (Julia, 51 stars): Interface to the Coin-OR Linear Programming solver (CLP)
22. MutableArithmetics.jl (TeX, 49 stars): Interface for arithmetics on mutable types in Julia
23. PolyJuMP.jl (Julia, 41 stars): A JuMP extension for Polynomial Optimization
24. ECOS.jl (Julia, 40 stars): Julia wrapper for the ECOS conic optimization solver
25. ParametricOptInterface.jl (Julia, 33 stars): Extension for dealing with parameters
26. MosekTools.jl (Julia, 29 stars): The MathOptInterface.jl implementation for the MOSEK solver
27. CSDP.jl (Julia, 21 stars): Julia wrapper for CSDP (https://projects.coin-or.org/Csdp/)
28. benchmarks (Julia, 19 stars): A repository for long-term benchmarking of JuMP performance
29. MathOptFormat (Python, 18 stars): Specification and description of the MathOptFormat file format
30. BARON.jl (Julia, 18 stars): Julia wrapper for the BARON mixed-integer nonlinear programming solver
31. MiniZinc.jl (Julia, 15 stars)
32. MINLPTests.jl (Julia, 12 stars): Unit and integration tests for JuMP NLP and MINLP solvers
33. jump-dev.github.io (Jupyter Notebook, 11 stars): Source for jump.dev
34. SDPA.jl (Julia, 11 stars): Julia wrapper for SDPA (http://sdpa.sourceforge.net/)
35. MatrixOptInterface.jl (Julia, 11 stars): An interface to pass matrix form problems
36. SDPNAL.jl (Julia, 10 stars): Julia wrapper for SDPNAL+ (https://blog.nus.edu.sg/mattohkc/softwares/sdpnalplus/)
37. ComplexOptInterface.jl (Julia, 8 stars): Extension of MathOptInterface to complex sets
38. SDPLR.jl (Julia, 7 stars): Julia wrapper for SDPLR
39. Penopt.jl (Julia, 7 stars): Julia wrapper for Penopt (http://www.penopt.com/)
40. SolverTests (7 stars): Test that all solvers pass the tests before a new MOI release
41. SeDuMi.jl (Julia, 6 stars): Julia wrapper for SeDuMi (http://sedumi.ie.lehigh.edu/)
42. SDPT3.jl (Julia, 5 stars): Julia wrapper for SDPT3 (https://blog.nus.edu.sg/mattohkc/softwares/sdpt3/)
43. DSDP.jl (Julia, 4 stars): Julia wrapper for the DSDP semidefinite programming solver
44. GSOC2021 (3 stars): GSOC2021 information for JuMP
45. MOIPaperBenchmarks (Julia, 1 star)
46. GSOC2022 (1 star): GSOC2022 information for JuMP
47. HiGHSBuilder (Julia, 1 star)
48. GSOC2020 (1 star): GSOC2020 information for JuMP
49. JuMPPaperBenchmarks (Julia, 1 star): Benchmarks for a paper on JuMP 1.0
50. GSOC (1 star)