• Stars: 154
  • Rank: 242,095 (Top 5%)
  • Language: Julia
  • License: Other
  • Created: over 6 years ago
  • Updated: 5 months ago

Repository Details

Interior-point solver in pure Julia

Tulip

Tulip is an open-source interior-point solver for linear optimization, written in pure Julia. It implements the homogeneous primal-dual interior-point algorithm with multiple centrality corrections, and therefore handles unbounded and infeasible problems. Tulip's main feature is that its algorithmic framework is disentangled from linear algebra implementations, which makes it possible to seamlessly integrate specialized routines for structured problems.

License

Tulip is licensed under the MPL 2.0 license.

Installation

Install Tulip using the Julia package manager:

import Pkg
Pkg.add("Tulip")

Usage

The recommended way of using Tulip is through JuMP or MathOptInterface (MOI).

The low-level interface is still under development and is likely to change in the future. The MOI interface is more stable.

Using with JuMP

Tulip follows the syntax convention PackageName.Optimizer:

using JuMP
import Tulip
model = Model(Tulip.Optimizer)

Linear objectives, linear constraints and lower/upper bounds on variables are supported.
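
As a minimal sketch (the variables, coefficients, and bounds below are illustrative, not taken from the Tulip documentation), a small LP can be built and solved through JuMP as follows:

using JuMP
import Tulip

model = Model(Tulip.Optimizer)
@variable(model, x >= 0)                 # variable with a lower bound
@variable(model, y >= 0)
@constraint(model, x + 2y <= 3)          # linear constraint
@objective(model, Max, x + y)            # linear objective
optimize!(model)
objective_value(model), value(x), value(y)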

Using with MOI

The type Tulip.Optimizer is parametrized by the model's arithmetic, for example, Float64 or BigFloat. This makes it possible to solve problems in higher numerical precision. See the documentation for more details.

import MathOptInterface as MOI
import Tulip
model = Tulip.Optimizer{Float64}()   # Create a model in Float64 precision
model = Tulip.Optimizer()            # Defaults to the above call
model = Tulip.Optimizer{BigFloat}()  # Create a model in BigFloat precision
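
For illustration only, here is a sketch of building and solving a one-variable LP in BigFloat precision directly through MOI. The problem data is made up, and it assumes Tulip.Optimizer accepts incremental model building; if not, wrap it in an MOI.Utilities.CachingOptimizer.

import MathOptInterface as MOI
import Tulip

model = Tulip.Optimizer{BigFloat}()

# Illustrative one-variable LP, stated in BigFloat:
#   min 2x   s.t.   x >= 1
x = MOI.add_variable(model)
MOI.add_constraint(model, x, MOI.GreaterThan(big"1.0"))
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(big"2.0", x)], big"0.0")
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
MOI.optimize!(model)
MOI.get(model, MOI.ObjectiveValue())   # returned as a BigFloat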

Solver parameters

See the documentation for a full list of parameters.

To set parameters in JuMP, use:

using JuMP, Tulip
model = Model(Tulip.Optimizer)
set_attribute(model, "IPM_IterationsLimit", 200)
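
Parameters can also be attached when the optimizer is created, for example with JuMP's optimizer_with_attributes (a sketch using the same parameter as above):

using JuMP, Tulip
model = Model(optimizer_with_attributes(Tulip.Optimizer, "IPM_IterationsLimit" => 200))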

To set parameters in MathOptInterface, use:

using Tulip
import MathOptInterface as MOI
model = Tulip.Optimizer{Float64}()
MOI.set(model, MOI.RawOptimizerAttribute("IPM_IterationsLimit"), 200)

To set parameters in the Tulip API, use:

using Tulip
model = Tulip.Model{Float64}()
Tulip.set_parameter(model, "IPM_IterationsLimit", 200)

Command-line executable

See app building instructions.

Citing Tulip.jl

If you use Tulip in your work, we kindly ask that you cite the following reference (preprint available here).

@Article{Tulip.jl,
  author   = {Tanneau, Mathieu and Anjos, Miguel F. and Lodi, Andrea},
  journal  = {Mathematical Programming Computation},
  title    = {Design and implementation of a modular interior-point solver for linear optimization},
  year     = {2021},
  issn     = {1867-2957},
  month    = feb,
  doi      = {10.1007/s12532-020-00200-8},
  language = {en},
  url      = {https://doi.org/10.1007/s12532-020-00200-8},
  urldate  = {2021-03-07},
}

More Repositories

1. learn2branch: Exact Combinatorial Optimization with Graph Convolutional Neural Networks (NeurIPS 2019). Python, 344 stars.
2. ecole: Extensible Combinatorial Optimization Learning Environments. C++, 316 stars.
3. ml4co-competition: Machine Learning for Combinatorial Optimization - NeurIPS'21 competition. Python, 125 stars.
4. branch-search-trees: Parameterizing Branch-and-Bound Search Trees to Learn Branching Policies (AAAI 2021). Python, 64 stars.
5. learn2branch-ecole: Reimplementation of "Exact Combinatorial Optimization with Graph Convolutional Neural Networks" (NeurIPS 2019). Python, 32 stars.
6. learn2comparenodes: Learning to Compare Nodes in Branch and Bound with Graph Neural Networks (NeurIPS 2022). Jupyter Notebook, 18 stars.
7. sparse-gcn: Sparse graph attention. Python, 17 stars.
8. Bliss: Fork of Bliss. C++, 12 stars.
9. ZERO: A modular C++ library interfacing Mathematical Programming and Game Theory. C++, 9 stars.
10. singularity-conda: A Singularity recipe that properly initializes a conda environment. Shell, 8 stars.
11. EPECInstances: An instance generator for NASPs (Nash Games among Stackelberg Players). Python, 8 stars.
12. PySVMRank: Python API for SVMrank (http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html). Python, 7 stars.
13. nectar: Codebase for "A learning-based algorithm to quickly compute good primal solutions for Stochastic Integer Programs". Python, 4 stars.
14. tipsntricks: Tips, tricks, and setup information at the chair. HTML, 3 stars.
15. milp-outcome: Jupyter Notebook, 3 stars.
16. miqp-clf2lin: Python, 3 stars.
17. ml4co-competition-hidden: Machine Learning for Combinatorial Optimization - NeurIPS'21 competition (hidden code). Python, 3 stars.
18. CNG-Instances: Instances and results for the Critical Node Game. 2 stars.
19. ecole-paper: Compare Ecole and Gasse et al. 2019 implementations. Jupyter Notebook, 1 star.
20. amcts-cplex: Asynchronous Monte-Carlo Tree Search (AMCTS) implementation for learning to branch in CPLEX. Jupyter Notebook, 1 star.
21. GraphRL: Python, 1 star.
22. IrratDCM: "On the estimation of discrete choice models to capture irrational customer behaviors" by Jena et al. (2021). 1 star.