Strong mixed-integer programming formulations for trained neural networks

September 11th, 12:00-1:00 pm in DCH 3092
Speaker: Joey Huchette (CAAM)

Please indicate interest, especially if you want lunch, here.
Abstract:

We present mixed-integer programming (MIP) formulations for high-dimensional piecewise linear functions that correspond to trained neural networks. These formulations can be used for a number of important tasks, such as: 1) verifying that an image classification network is robust to adversarial inputs, 2) designing DNA sequences that exhibit desirable therapeutic properties, 3) producing good candidate policies as a subroutine in deep reinforcement learning algorithms, and 4) solving decision problems with machine learning models embedded inside (i.e., the “predict, then optimize” paradigm). We provide formulations for networks with many of the most popular nonlinear operations (e.g., ReLU and max pooling) that are strictly stronger than other approaches from the literature. We corroborate this computationally on image classification verification tasks, where we show that our formulations are able to solve verification problems to optimality in orders of magnitude less time than existing methods.
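To make the setting concrete, the sketch below encodes a single trained ReLU unit y = max(0, w·x + b) as a MIP using the standard big-M formulation, which is the common baseline that stronger formulations of the kind described in the abstract aim to improve upon. This is a minimal illustration, not the speaker's formulation; the weights, bias, input box bounds, and the use of gurobipy as the modeling layer are all assumptions made for the example.

```python
# Minimal sketch: one trained ReLU unit y = max(0, w.x + b) encoded via the
# standard big-M MIP formulation. Weights, bounds, and the choice of gurobipy
# are illustrative assumptions, not the formulations from the talk.
import gurobipy as gp
from gurobipy import GRB

w = [1.0, -2.0, 0.5]   # assumed trained weights
b = 0.25               # assumed trained bias

m = gp.Model("relu_unit")
x = m.addVars(len(w), lb=-1.0, ub=1.0, name="x")  # inputs restricted to the box [-1, 1]^3
y = m.addVar(lb=0.0, name="y")                    # post-activation output, y >= 0
z = m.addVar(vtype=GRB.BINARY, name="z")          # z = 1 iff the unit is "active"

# Pre-activation bounds L <= w.x + b <= U implied by the input box.
L = b - sum(abs(wi) for wi in w)
U = b + sum(abs(wi) for wi in w)

pre = gp.quicksum(w[i] * x[i] for i in range(len(w))) + b
m.addConstr(y >= pre)                 # y is at least the pre-activation
m.addConstr(y <= pre - L * (1 - z))   # when z = 1, forces y = pre (active case)
m.addConstr(y <= U * z)               # when z = 0, forces y = 0 (inactive case)

# Example query: find an input in the box that maximizes the unit's output.
m.setObjective(y, GRB.MAXIMIZE)
m.optimize()
if m.SolCount > 0:
    print("max output:", y.X)
```

A full network is modeled by chaining such blocks layer by layer, with one binary variable per ReLU; the quality of the bounds L and U, and of the formulation itself, largely determines how quickly verification-style queries like the one above can be solved.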

