
Gurobi.jl


Gurobi.jl is a wrapper for the Gurobi Optimizer.

It has two components:

  1. a thin wrapper around the complete C API
  2. an interface to MathOptInterface

Affiliation

This wrapper is maintained by the JuMP community and is not officially supported by Gurobi. However, we thank Gurobi for providing us with a license to test Gurobi.jl on GitHub. If you are a commercial customer interested in official support for Gurobi in Julia, let them know.

License

Gurobi.jl is licensed under the MIT License.

The underlying solver is a closed-source commercial product for which you must obtain a license.

Free Gurobi licenses are available for academics and students.

Installation

First, obtain a license for Gurobi and install the Gurobi solver.

Then, set the GUROBI_HOME environment variable as appropriate and run Pkg.add("Gurobi"):

# On Windows, this might be
ENV["GUROBI_HOME"] = "C:\\Program Files\\gurobi1000\\win64"
# ... or perhaps ...
ENV["GUROBI_HOME"] = "C:\\gurobi1000\\win64"
# On Mac, this might be
ENV["GUROBI_HOME"] = "/Library/gurobi1000/mac64"

import Pkg
Pkg.add("Gurobi")

Note: your path may differ. Check which folder you installed Gurobi in, and update the path accordingly.

By default, building Gurobi.jl will fail if the Gurobi library is not found. This may not be desirable in certain cases, for example when part of a package's test suite uses Gurobi as an optional test dependency, but Gurobi cannot be installed on a CI server running the test suite. To support this use case, the GUROBI_JL_SKIP_LIB_CHECK environment variable may be set (to any value) to make Gurobi.jl installable (but not usable).
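For example, a CI script might set the variable before installing. This is a sketch: any value works, and the resulting installation is not usable until Gurobi itself is available.

```julia
# Sketch for a CI setup: skip the library check so `Pkg.add` succeeds
# even though no Gurobi installation is present on the machine.
ENV["GUROBI_JL_SKIP_LIB_CHECK"] = "true"
import Pkg
Pkg.add("Gurobi")
```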

Use with JuMP

To use Gurobi with JuMP, use Gurobi.Optimizer:

using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
set_attribute(model, "TimeLimit", 100)
set_attribute(model, "Presolve", 0)

MathOptInterface API

The Gurobi optimizer supports the following objective functions, variable types, constraint types, and model attributes.

Options

See the Gurobi Documentation for a list and description of allowable parameters.
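Parameters can also be attached when the optimizer is constructed, using JuMP's `optimizer_with_attributes`. This sketch is equivalent to calling `set_attribute` after construction:

```julia
using JuMP, Gurobi

# The parameters are applied to every model created from this optimizer.
model = Model(optimizer_with_attributes(
    Gurobi.Optimizer,
    "TimeLimit" => 100,
    "Presolve" => 0,
))
```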

C API

The C API can be accessed via Gurobi.GRBxx functions, where the names and arguments are identical to the C API.

See the Gurobi documentation for details.
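As an illustration, a raw C-API call sequence might look like the following. This is a sketch, assuming a valid Gurobi license; error-code handling is omitted for brevity.

```julia
using Gurobi

# Create an environment and an empty model directly through the C API.
# Each GRBxx call returns a nonzero error code on failure.
env_p = Ref{Ptr{Cvoid}}(C_NULL)
GRBloadenv(env_p, "")        # start an environment (obtains a license token)
model_p = Ref{Ptr{Cvoid}}(C_NULL)
GRBnewmodel(env_p[], model_p, "sketch", 0, C_NULL, C_NULL, C_NULL, C_NULL, C_NULL)
GRBfreemodel(model_p[])      # release the model ...
GRBfreeenv(env_p[])          # ... and the environment/license token
```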

Reusing the same Gurobi environment for multiple solves

When using this package via other packages such as JuMP.jl, the default behavior is to obtain a new Gurobi license token every time a model is created. If you are using Gurobi in a setting where the number of concurrent Gurobi uses is limited (for example, "Single-Use" or "Floating-Use" licenses), you might instead prefer to obtain a single license token that is shared by all models that your program solves.

You can do this by passing a Gurobi.Env() object as the first parameter to Gurobi.Optimizer. For example:

using JuMP, Gurobi
const GRB_ENV = Gurobi.Env()

model_1 = Model(() -> Gurobi.Optimizer(GRB_ENV))

# The solvers can have different options too
model_2 = direct_model(Gurobi.Optimizer(GRB_ENV))
set_attribute(model_2, "OutputFlag", 0)

Accessing Gurobi-specific attributes

Get and set Gurobi-specific variable, constraint, and model attributes as follows:

using JuMP, Gurobi
model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
@constraint(model, c, 2x >= 1)
@objective(model, Min, x)
MOI.set(model, Gurobi.ConstraintAttribute("Lazy"), c, 2)
optimize!(model)
MOI.get(model, Gurobi.VariableAttribute("LB"), x)  # Returns 0.0
MOI.get(model, Gurobi.ModelAttribute("NumConstrs")) # Returns 1

A complete list of supported Gurobi attributes can be found in their online documentation.

Callbacks

Here is an example using Gurobi's solver-specific callbacks.

using JuMP, Gurobi, Test

model = direct_model(Gurobi.Optimizer())
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Cint[]
function my_callback_function(cb_data, cb_where::Cint)
    # You can reference variables outside the function as normal
    push!(cb_calls, cb_where)
    # You can select where the callback is run
    if cb_where != GRB_CB_MIPSOL && cb_where != GRB_CB_MIPNODE
        return
    end
    # You can query a callback attribute using GRBcbget
    if cb_where == GRB_CB_MIPNODE
        resultP = Ref{Cint}()
        GRBcbget(cb_data, cb_where, GRB_CB_MIPNODE_STATUS, resultP)
        if resultP[] != GRB_OPTIMAL
            return  # Solution is something other than optimal.
        end
    end
    # Before querying `callback_value`, you must call:
    Gurobi.load_callback_variable_primal(cb_data, cb_where)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
    if rand() < 0.1
        # You can terminate the callback as follows:
        GRBterminate(backend(model))
    end
    return
end
# You _must_ set this parameter if using lazy constraints.
MOI.set(model, MOI.RawOptimizerAttribute("LazyConstraints"), 1)
MOI.set(model, Gurobi.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2

See the Gurobi documentation for other information that can be queried with GRBcbget.

Common Performance Pitfall with JuMP

Gurobi's API works differently from most solvers. Changes to the model are not applied immediately; instead, they are placed in an internal buffer (making modifications appear instantaneous) until a call to GRBupdatemodel, where the work is actually done.

This leads to a common performance pitfall that has the following message as its main symptom:

Warning: excessive time spent in model updates. Consider calling update less frequently.

This often means the JuMP program was structured in such a way that Gurobi.jl ends up calling GRBupdatemodel in each iteration of a loop.

Usually, it is possible (and easy) to restructure the JuMP program so that it stays solver-agnostic and achieves close-to-ideal performance with Gurobi.

To guide such restructuring, it helps to keep the following in mind:

  1. GRBupdatemodel is called only if changes have been made since the last GRBupdatemodel (that is, if the internal buffer is not empty).
  2. GRBupdatemodel is called when JuMP.optimize! is called, but this is often not the source of the problem.
  3. GRBupdatemodel may be called when any model attribute is queried, even if that specific attribute was not changed. This is often the source of the problem.

The worst-case scenario is, therefore, a loop of modify-query-modify-query, even if what is being modified and what is being queried are two completely distinct things.

As an example, instead of:

model = Model(Gurobi.Optimizer)
@variable(model, x[1:100] >= 0)
for i in 1:100
    set_upper_bound(x[i], i)
    # `GRBupdatemodel` called on each iteration of this loop.
    println(lower_bound(x[i]))
end

do

model = Model(Gurobi.Optimizer)
@variable(model, x[1:100] >= 0)
# All modifications are done before any queries.
for i in 1:100
    set_upper_bound(x[i], i)
end
for i in 1:100
    # Only the first `lower_bound` query may trigger a `GRBupdatemodel`.
    println(lower_bound(x[i]))
end

Common errors

Using Gurobi v9.0 and getting an error like Q not PSD?

You need to set the NonConvex parameter:

model = Model(Gurobi.Optimizer)
set_attribute(model, "NonConvex", 2)

Gurobi Error 1009: Version number is XX.X, license is for version XX.X

Make sure that your license is correct for your Gurobi version. See the Gurobi documentation for details.

Once you are sure that the license and Gurobi versions match, re-install Gurobi.jl by running:

import Pkg
Pkg.build("Gurobi")
