# DC3: A learning method for optimization with hard constraints
This repository is by Priya L. Donti, David Rolnick, and J. Zico Kolter and contains the PyTorch source code to reproduce the experiments in our paper "DC3: A learning method for optimization with hard constraints."
If you find this repository helpful in your publications, please consider citing our paper.
```
@inproceedings{donti2021dc3,
  title={DC3: A learning method for optimization with hard constraints},
  author={Donti, Priya and Rolnick, David and Kolter, J Zico},
  booktitle={International Conference on Learning Representations},
  year={2021}
}
```
## Introduction
Large optimization problems with hard constraints arise in many settings, yet classical solvers are often prohibitively slow, motivating the use of deep networks as cheap "approximate solvers." Unfortunately, naive deep learning approaches typically cannot enforce the hard constraints of such problems, leading to infeasible solutions. In this work, we present Deep Constraint Completion and Correction (DC3), an algorithm to address this challenge. Specifically, this method enforces feasibility via a differentiable procedure, which implicitly completes partial solutions to satisfy equality constraints and unrolls gradient-based corrections to satisfy inequality constraints. We demonstrate the effectiveness of DC3 in both synthetic optimization tasks and the real-world setting of AC optimal power flow, where hard constraints encode the physics of the electrical grid. In both cases, DC3 achieves near-optimal objective values while preserving feasibility.
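To make the correction step concrete, below is a minimal PyTorch sketch of an unrolled gradient-based inequality correction in the spirit of the procedure described above. The names (`correction`, `g_ineq`) and the hyperparameters are illustrative assumptions, not the repository's actual interface; in DC3 itself, the gradient steps are applied to the partial variables and differentiated through the completion.

```python
import torch

def correction(y, g_ineq, n_steps=10, lr=1e-4):
    """Unrolled gradient-based correction: nudge a candidate solution y
    toward the feasible region of inequality constraints g(y) <= 0.

    y:      candidate solutions, shape (batch, n_vars), part of the autograd graph
    g_ineq: callable returning inequality constraint values, shape (batch, n_ineq)
    """
    for _ in range(n_steps):
        # Penalize only violated constraints: sum of max(g(y), 0)^2
        violation = torch.clamp(g_ineq(y), min=0).pow(2).sum()
        # create_graph=True keeps the correction differentiable, so training
        # gradients can flow back through the unrolled steps
        (grad,) = torch.autograd.grad(violation, y, create_graph=True)
        y = y - lr * grad
    return y

# Example: a batch of 8 candidates with 5 variables and illustrative
# box constraints y <= 1, i.e. g(y) = y - 1 <= 0
y0 = torch.randn(8, 5, requires_grad=True)
y_corrected = correction(y0, lambda y: y - 1.0)
```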
## Dependencies
- Python 3.x
- PyTorch >= 1.8
- numpy/scipy/pandas
- osqp: State-of-the-art QP solver
- qpth: Differentiable QP solver for PyTorch
- ipopt: Interior point solver
- pypower: Power flow and optimal power flow solvers
- argparse: Input argument parsing
- pickle: Object serialization
- hashlib: Hash functions (used to generate folder names)
- setproctitle: Set process titles
- waitGPU (optional): Intelligently set `CUDA_VISIBLE_DEVICES`
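For convenience, the non-standard-library dependencies above can typically be installed from PyPI along the following lines (the exact package names and versions are assumptions and may differ from the tested setup; `argparse`, `pickle`, and `hashlib` ship with the Python standard library and need no installation):

```bash
pip install torch numpy scipy pandas osqp qpth ipopt pypower setproctitle waitGPU
```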
## Instructions
### Dataset generation

Datasets for the experiments presented in our paper are available in the `datasets` folder. These datasets can be generated by running the Python script `make_dataset.py` within each subfolder (`simple`, `nonconvex`, and `acopf`) corresponding to the different problem types we test.
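For example, to regenerate the convex problem datasets (assuming the subfolders live under `datasets/` as described above):

```bash
cd datasets/simple
python make_dataset.py
```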
### Running experiments

Our method and baselines can be run using the following Python files:
- `method.py`: Our method (DC3)
- `baseline_nn.py`: Simple deep learning baseline (NN)
- `baseline_eq_nn.py`: Supervised deep learning baseline with completion (Eq. NN)
- `baseline_opt.py`: Traditional optimizers (Optimizer)
See each file for relevant flags to set the problem type and method parameters. Notably:
- `--probType`: Problem setting to test (`simple`, `nonconvex`, or `acopf57`)
- `--simpleVar`, `--simpleIneq`, `--simpleEq`, `--simpleEx`: If the problem setting is `simple`, the number of decision variables, inequalities, equalities, and datapoints, respectively.
- `--nonconvexVar`, `--nonconvexIneq`, `--nonconvexEq`, `--nonconvexEx`: If the problem setting is `nonconvex`, the number of decision variables, inequalities, equalities, and datapoints, respectively.
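As an illustration, a DC3 run on the convex problem setting might look like the following; the flag values here are placeholder examples, not necessarily the settings used in the paper:

```bash
python method.py --probType simple --simpleVar 100 --simpleIneq 50 --simpleEq 50 --simpleEx 10000
```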
### Reproducing paper experiments

You can reproduce the experiments run in our paper (including baselines and ablations) via the bash script `run_expers.sh`. For instance, the following commands can be used to run these experiments, 8 jobs at a time:

```bash
bash run_expers.sh > commands
cat commands | xargs -n1 -P8 -I{} /bin/sh -c "{}"
```
The script `load_results.py` can be run to aggregate these results (both while experiments are running, and after they are done). In particular, this script outputs a summary of results across different replicates of the same experiment (`results_summary.dict`) and information on how many jobs of each type are running or done (`exper_status.dict`).
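As the `pickle` dependency above suggests, the `.dict` output files are pickled Python objects, so they can be inspected directly; a minimal sketch, assuming both files sit in the working directory (their key structure depends on which experiments were run):

```python
import pickle

# Load the aggregated results and job-status summaries written by load_results.py
with open("results_summary.dict", "rb") as f:
    results_summary = pickle.load(f)
with open("exper_status.dict", "rb") as f:
    exper_status = pickle.load(f)

# Inspect the top-level structure (keys depend on the experiments run)
print(len(results_summary))
print(exper_status)
```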
### Generating tables

Tables can be generated via the Jupyter notebook `ResultsViz.ipynb`. This notebook expects the dictionary `results_summary.dict` as input; the version of this dictionary generated while running the experiments in the paper is available in this repository.