ModRef 2022 is the 21st in a series of workshops on Constraint Modelling and Reformulation. It is organized as part of FLoC 2022, the Federated Logic Conference, and will be held on July 31st, preceding CP 2022, the 28th International Conference on Principles and Practice of Constraint Programming.
Recent years have witnessed significant research devoted to modelling and solving problems with constraints. The importance of modelling and model reformulation is widely recognized, and there have been developments in systematic and automated ways of improving aspects of both. Tools and techniques that can target multiple kinds of solvers have also been developed.
The key goals of this workshop are to deepen the understanding of constraint modelling and to automate aspects of modelling and model reformulation, in order to extend the reach of constraint solvers on difficult problems and to ease the task of modelling. We solicit original papers that contribute to either or both of these goals. Workshop topics include:
The detailed program can also be found on EasyChair.
| Time | Event |
|---|---|
| Session 1: 9:00 - 10:30 | Invited Talk: Constraint modelling and solving: Learning from observing people – Ruth Hoffmann (slides) |
| | Paper: Solving XCSP3 constraint problems using tools from software verification – Martin Mariusz Lester (paper) (slides) |
| | Paper: Constraint-based Part-of-Speech Tagging – Neng-Fa Zhou (paper) (slides) |
| Break: 10:30 - 11:00 | |
| Session 2: 11:00 - 12:30 | Paper: A portfolio-based analysis method for competition results – Nguyen Dang (paper) (slides) |
| | Paper: Efficiently Explaining CSPs with Unsatisfiable Subset Optimization – Emilio Gamba, Bart Bogaerts and Tias Guns (paper) (slides) |
| | Paper: Automatic Generation of Dominance Breaking Nogoods for Constraint Optimization – Jimmy H. M. Lee and Allen Z. Zhong (paper) (slides) |
| Lunch: 12:30 - 14:00 | |
| Session 3: 14:00 - 15:30 | Invited Talk: A Constraint-Based Tool for Generating Benchmark Instances – Nguyen Dang (slides) |
| | Modelling competition |
| Break: 15:30 - 16:00 | |
| Session 4: 16:00 - 17:15 | Modelling competition |
| | 17:00: Competition results |
Research in constraint programming typically focuses on problem-solving efficiency. However, the way users conceptualise problems and communicate with constraint programming tools is often sidelined. How humans think about constraint problems can be important for the development of efficient tools and representations that are useful to a broader audience. Although many visual programming languages exist as an alternative to textual representations of procedural languages, the visual encoding of problem specifications has not received much attention. Visualization could thus provide an alternative way to model and understand such problems. We present an initial step towards a better understanding of the human side of the constraint modelling and solving process. We conducted a study that catalogs how people approach three parts of constraint programming: (1) the problem representation, (2) the problem solving, and (3) the programming of a solver. To our knowledge, this is the first human-centred study addressing how people approach constraint modelling and solving. We studied three groups with different expertise: non-computer scientists, computer scientists, and constraint programmers. We analyzed their marks on paper (e.g., arrows), gestures (e.g., pointing), their mappings to problem concepts (e.g., containers, sets), and any strategies and explanations they provided. We will discuss the results and the future research this study will hopefully inspire.
Benchmarking is fundamental for assessing the relative performance of alternative solving approaches. For an informative benchmark we often need a sufficient quantity of instances with different levels of difficulty and the ability to explore subsets of the instance space to detect performance discrimination among solvers. In this talk, I will present AutoIG, an automated tool for generating benchmark instances for constraint solvers. AutoIG supports generating two types of instances: graded instances (i.e., solvable at a certain difficulty level by a given solver) and discriminating instances (i.e., favouring one solver over another). The usefulness of the tool for benchmarking is demonstrated via an application to five problems taken from the MiniZinc Challenges. Our experiments show that the large number of instances found by AutoIG can provide more detailed insights into the performance of the solvers than a ranking alone. Cases where a solver is weak or even faulty can be detected, providing valuable information to solver developers. Moreover, discriminating instances can reveal parts of the instance space where a generally weak solver actually performs well relative to others, and could therefore be useful as part of an algorithm portfolio.
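To make the distinction between graded and discriminating instances concrete, here is a minimal sketch of how an instance could be labelled from the measured runtimes of two solvers. This is not part of AutoIG and does not reflect how the tool is implemented; the function name, thresholds, and labels are assumptions chosen purely for illustration.

```python
# Illustrative sketch only -- not AutoIG's actual implementation.
# It labels a single instance as "graded" (solved within a target difficulty
# window by a solver) and/or "discriminating" (strongly favouring one solver
# over the other), given hypothetical runtimes measured elsewhere.

def classify_instance(runtime_a, runtime_b, time_limit=300.0,
                      graded_window=(10.0, 300.0), discrimination_ratio=10.0):
    """Return a set of labels for an instance given two solver runtimes (seconds).

    runtime_a / runtime_b: wall-clock times; None means the solver timed out.
    graded_window: an instance is 'graded' for a solver if its runtime falls
                   inside this window (hard enough to be interesting, but solved).
    discrimination_ratio: an instance 'discriminates' if one solver is at least
                          this many times faster than the other.
    """
    labels = set()
    lo, hi = graded_window

    # Graded: the instance sits in the desired difficulty window for a solver.
    for label, t in (("graded_for_A", runtime_a), ("graded_for_B", runtime_b)):
        if t is not None and lo <= t <= hi:
            labels.add(label)

    solved_a = runtime_a is not None and runtime_a <= time_limit
    solved_b = runtime_b is not None and runtime_b <= time_limit

    # Discriminating: one solver solves it while the other times out or is
    # much slower.
    if solved_a and (not solved_b or runtime_b / runtime_a >= discrimination_ratio):
        labels.add("discriminates_in_favour_of_A")
    if solved_b and (not solved_a or runtime_a / runtime_b >= discrimination_ratio):
        labels.add("discriminates_in_favour_of_B")
    return labels


if __name__ == "__main__":
    # Example: solver A solves in 42s, solver B times out.
    # Expected labels: graded_for_A, discriminates_in_favour_of_A
    print(classify_instance(42.0, None))
```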
This year, ModRef will host a modelling competition. Teams of up to three members will compete to be the quickest to find the best solutions for different instances of the described problems. Teams can compete either on site at FLoC 2022 or remotely; we will set up a chat server to communicate with the teams competing remotely. The problem descriptions will be available from approximately 14:45 (UTC+3), and solutions can be submitted until 17:00 (UTC+3), after which the winners will be announced.
The official rules of the competition can be found here.
Registration: Google Form, or in-person at FLoC 2022
Rank | Score | Team Name | Members |
---|---|---|---|
1 | 203 | Zayenz | Mikael Zayenz Lagerkvist |
2 | 166 | Picat | undisclosed |
3 | 138 | Crispy | undisclosed |
4 | 102 | Helmut | Helmut Simonis |
5 | 62 | University of Reading | undisclosed |
6 | 18 | Quokka | undisclosed |
Event | Date |
---|---|
Paper Abstract Submission | |
Paper Full Submission | |
Notification of acceptance/rejection | |
Camera ready version | |
Workshop day | July 31st, 2022 |
For questions about the workshop, please contact the chairs, Dr. Jip J. Dekker and Dr. Guido Tack: modref@a4cp.org
This year, ModRef will again accept paper submissions. In addition to presentations of research results, we especially welcome submissions of novel (ongoing) work, recent breakthroughs, future directions, and descriptions of interesting aspects of existing systems.
There are two types of paper submissions: extended abstracts (at most two pages) and full papers (at most fifteen pages). References are not part of the page limit. Papers are submitted through EasyChair as a PDF file following the LIPIcs guidelines.
We also accept (and encourage) non-traditional electronic submissions, such as interactive works/tool demonstrations. In this case, please contact the chairs to discuss the suitability of your submission for ModRef.
All submissions will be reviewed, and those that are well written and make a worthwhile contribution to the topic of the workshop will be accepted for publication in the workshop proceedings. The proceedings will be available electronically at CP 2022. Each accepted contribution will be allocated a time slot for presentation at the workshop, and at least one author of each accepted paper must attend the workshop and present the work. Please note that every workshop participant needs to register for the workshop.
Name | Affiliation |
---|---|
Jip J. Dekker (Chair) | Monash University |
Guido Tack (Chair) | Monash University |
Emir Demirović | TU Delft |
María Andreína Francisco Rodríguez | Uppsala University |
Jimmy H.M. Lee | Chinese University of Hong Kong |
Kevin Leo | Monash University |
The ModRef workshop series has been running for 21 years and has hosted many interesting presentations.