Optimizing Tensor Network Contraction

October 23rd, 12:00–1:00 pm in DCH 3092
Speaker: Jeffrey Dudek (COMP)

Please indicate interest, especially if you want lunch, here.
Abstract:

Significant time, effort, and funding have been poured into algorithmic and hardware optimizations for machine learning, especially for neural network training and inference. Can we leverage this work to solve other AI problems as well? In this talk, I will introduce tensor networks and show how they can be used to apply these hardware optimizations to a broad class of problems in AI, including model counting, network reliability, and probabilistic inference. The central algorithmic challenge for this approach is choosing the order in which to process tensors within the network; I will show how this order can be optimized using state-of-the-art tools for graph reasoning. Finally, I will present empirical evidence that this approach is competitive with existing tools in AI for model counting and probabilistic inference.
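To see why contraction order matters, here is a minimal NumPy sketch (an illustration, not material from the talk): two orderings of the same three-tensor network yield identical results, but a poorly chosen order creates a much larger intermediate tensor. The shapes and tensor names are invented for the example.

```python
import numpy as np

# A tiny tensor network of three tensors sharing indices i-j-k-l.
# (Shapes are illustrative; real instances from model counting
# can involve hundreds of tensors.)
rng = np.random.default_rng(0)
A = rng.random((2, 50))    # indices i, j
B = rng.random((50, 50))   # indices j, k
C = rng.random((50, 2))    # indices k, l

# Good order: contract along shared indices. The largest
# intermediate here has only 2 * 50 = 100 entries.
AB = np.einsum('ij,jk->ik', A, B)
result_good = np.einsum('ik,kl->il', AB, C)

# Bad order: contract A with C first. They share no index, so this
# is an outer product with 2 * 50 * 50 * 2 = 10,000 entries.
AC = np.einsum('ij,kl->ijkl', A, C)
result_bad = np.einsum('ijkl,jk->il', AC, B)

# Both orders compute the same final tensor.
assert np.allclose(result_good, result_bad)
print(AB.size, AC.size)  # 100 vs 10000 intermediate entries
```

On large networks this gap grows exponentially with the treewidth of the network's structure graph, which is why the talk frames order-finding as a graph-reasoning problem.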

