Pretrained Parameter Configurator for Large Neighborhood Search to Solve Weighted Constraint Satisfaction Problems

Weighted constraint satisfaction problems (WCSPs) are one of the most important constraint programming models, in which the goal is to find a cost-minimal solution. Tree-based Large Neighborhood Search (T-LNS) is an important local-search-based incomplete algorithm for solving WCSPs. Currently, when solving unseen problem instances, the parameter of T-LNS (i.e., the destroy rate $t$) is obtained either by trying different values or by adopting a value that has been shown to perform well on a known problem set. However, the value of the destroy rate $t$ that yields the best performance for T-LNS varies across problem instances. As a result, tuning the parameter in such a hand-crafted way can be tedious and may hinder the performance of T-LNS. Therefore, to further stabilize and improve the performance of T-LNS on WCSP instances, we propose to build a pretrained algorithm configurator that recommends a suitable value of $t$ for T-LNS based on the problem instance to be solved, via supervised learning. More specifically, to achieve instance-specific parameter prediction, we first encode information such as the size and structure of a WCSP instance into a feature vector and then leverage well-established machine learning models to build our first pretrained algorithm configurator. Then, to encode a WCSP instance more comprehensively, we represent it as a directed tripartite graph, which can capture the high-dimensional cost values of its constraint functions, and use Graph Attention Networks (GATs) to learn embeddings of this tripartite graph to build our second pretrained algorithm configurator. Finally, experimental results show that our proposed algorithm configurators can effectively recommend suitable parameters for T-LNS on a wide range of problem instances, yielding better performance than competing approaches on different benchmark problems.
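
To make the feature-vector-based configurator concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how such a configurator could be built with scikit-learn: hand-crafted size and structure features of a WCSP instance are mapped, via a random forest regressor, to the destroy rate that worked best on training instances. The instance attributes, the particular feature statistics, and the helper names (wcsp_features, recommend_destroy_rate) are assumptions made for illustration only.

```python
# Minimal sketch of an instance-specific configurator for the destroy rate t of
# T-LNS. Feature choices, instance attributes, and training data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def wcsp_features(instance):
    """Encode a WCSP instance as a fixed-length feature vector.

    `instance` is assumed to expose its variables, domains, and cost functions
    (with dense cost tables); the chosen statistics capture size and structure.
    """
    n_vars = len(instance.variables)
    n_cons = len(instance.cost_functions)
    domain_sizes = [len(d) for d in instance.domains]
    costs = np.concatenate([f.costs.ravel() for f in instance.cost_functions])
    return np.array([
        n_vars,
        n_cons,
        2.0 * n_cons / (n_vars * (n_vars - 1)),   # constraint density
        np.mean(domain_sizes), np.max(domain_sizes),
        np.mean(costs), np.std(costs), np.max(costs),
    ])

def train_configurator(train_instances, best_destroy_rates):
    """Offline: fit the configurator on instances labeled with the destroy rate
    that performed best for T-LNS (e.g., found by grid search per instance)."""
    X = np.stack([wcsp_features(p) for p in train_instances])
    y = np.asarray(best_destroy_rates)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model

def recommend_destroy_rate(model, instance):
    """Online: predict a suitable t for an unseen instance before running T-LNS."""
    t = float(model.predict(wcsp_features(instance)[None, :])[0])
    return float(np.clip(t, 0.05, 0.95))  # keep t within a sensible range
```

The second configurator replaces the hand-crafted feature vector with a GAT embedding of the directed tripartite graph representation, but the offline training and online recommendation workflow stays the same.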
