CSPLIB: A Benchmark Library for Constraints
Constraint satisfaction algorithms are often benchmarked on hard, random problems. There are, however, many reasons for wanting a larger class of problems in our benchmark suites. For example, we may wish to benchmark algorithms on more realistic problems, to run competitions, or to study the impact of modelling and problem reformulation. Whilst there are many other constructive benefits of a benchmark library, there are also several potential pitfalls. For example, if the library is small, we run the risk of over-fitting our algorithms to it. Even if the library is large, certain problem features may be rare or absent. An ideal benchmark library should be easy to find and easy to use. It should contain as diverse and large a set of problems as possible. It should be easy to extend, and be as comprehensive and up to date as possible. It should also be independent of any particular constraint solver, and contain neither just hard nor just easy problems.
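As background for the first sentence: the "hard, random problems" referred to are typically instances of the classic random binary CSP model ⟨n, d, p1, p2⟩, where p1 fixes the proportion of variable pairs that are constrained (density) and p2 the proportion of value combinations each constraint forbids (tightness). The sketch below is a minimal illustration of such a generator together with a naive backtracking solver; the function names, parameter values, and solver are assumptions for illustration only, and are not part of CSPLib or the paper.

```python
import itertools
import random

def random_binary_csp(n, d, p1, p2, seed=0):
    """Illustrative random binary CSP generator (model-B style, an assumption):
    n variables with domains {0..d-1}; p1 of all variable pairs are constrained;
    each constraint forbids p2 of the d*d value pairs."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    constrained = rng.sample(pairs, round(p1 * len(pairs)))
    all_tuples = list(itertools.product(range(d), repeat=2))
    constraints = {}
    for pair in constrained:
        # Store each constraint as its set of forbidden (nogood) value pairs.
        constraints[pair] = set(rng.sample(all_tuples, round(p2 * d * d)))
    return constraints

def solve(n, d, constraints, assignment=None):
    """Naive chronological backtracking in fixed variable order.
    Returns one satisfying assignment, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    if len(assignment) == n:
        return dict(assignment)
    var = len(assignment)  # next unassigned variable
    for val in range(d):
        consistent = True
        for other, oval in assignment.items():
            # Constraints are keyed (i, j) with i < j; other < var always here.
            nogoods = constraints.get((other, var))
            if nogoods is not None and (oval, val) in nogoods:
                consistent = False
                break
        if consistent:
            assignment[var] = val
            result = solve(n, d, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None

# Example instance (parameter values chosen arbitrarily for illustration).
csp = random_binary_csp(n=10, d=5, p1=0.5, p2=0.4, seed=42)
print("satisfiable" if solve(10, 5, csp) else "unsatisfiable")
```

Varying p2 for fixed n, d, and p1 moves such instances across the satisfiability phase transition, which is where the hardest random problems cluster; this is the regime the abstract's opening sentence alludes to.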