Computer Science / Computer Networks and Communications

Distributed Constraint Optimization Problems and Algorithms

Description

This cluster of papers focuses on distributed constraint optimization problems and algorithms, including topics such as random satisfiability, local search, spatial reasoning, algorithm selection, phase transitions, soft constraints, temporal reasoning, and graph coloring.

Keywords

Constraint Optimization; Distributed Algorithms; Random Satisfiability; Local Search; Spatial Reasoning; Algorithm Selection; Phase Transitions; Soft Constraints; Temporal Reasoning; Graph Coloring

The basic stochastic approximation algorithms introduced by Robbins and Monro and by Kiefer and Wolfowitz in the early 1950s have been the subject of an enormous literature, both theoretical and applied.
Abstract It is often assumed that the semantics of temporal expressions is directly related to the linear time concept familiar from high-school physics—that is, to a model based on the number line. However, there are good reasons for suspecting that such a conception is not the one that our linguistic categories are most directly related to. When-clauses provide an example of the mismatch between linguistic temporal categories and a semantics based on such an assumption.
We study the satisfiability of random Boolean expressions built from many clauses with K variables per clause (K-satisfiability). Expressions with a ratio alpha of clauses to variables less than a threshold alpha_c are almost always satisfiable, whereas those with a ratio above this threshold are almost always unsatisfiable. We show the existence of an intermediate phase below alpha_c, where the proliferation of metastable states is responsible for the onset of complexity in search algorithms. We introduce a class of optimization algorithms that can deal with these metastable states; one such algorithm has been tested successfully on the largest existing benchmark of K-satisfiability.
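To make the clause-to-variable ratio concrete, the sketch below (an illustration of the random K-SAT model only, not the message-passing algorithm the paper introduces; the instance sizes and trial counts are arbitrary) generates random 3-SAT formulas at a given ratio alpha and estimates the fraction that are satisfiable by brute force. Even at small sizes the drop near the K=3 threshold around alpha ≈ 4.27 is visible.

```python
import itertools
import random

def random_ksat(n_vars, n_clauses, k=3, rng=random):
    """Random k-SAT: each clause picks k distinct variables and random signs."""
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def is_satisfiable(n_vars, clauses):
    """Brute-force satisfiability check; only feasible for small n_vars."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses):
            return True
    return False

def sat_fraction(n_vars, alpha, trials=50):
    """Estimate P(satisfiable) for random 3-SAT at clause/variable ratio alpha."""
    m = round(alpha * n_vars)
    return sum(is_satisfiable(n_vars, random_ksat(n_vars, m)) for _ in range(trials)) / trials

for alpha in (3.0, 4.0, 4.27, 5.0, 6.0):
    print(f"alpha = {alpha:4.2f}   estimated P(sat) = {sat_fraction(10, alpha):.2f}")
```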
Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
We introduce a greedy local search procedure called GSAT for solving propositional satisfiability problems. Our experiments show that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution. We also show that GSAT can solve structured satisfiability problems quickly. In particular, we solve encodings of graph coloring problems, N-queens, and Boolean induction. General application strategies and limitations of the approach are also discussed. GSAT is best viewed as a model-finding procedure. Its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.
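A minimal sketch of the greedy flip loop described above (not the authors' implementation; the clause encoding, restart limits, and tie-breaking are assumptions):

```python
import random

def num_satisfied(clauses, assign):
    """Count clauses with at least one true literal under the assignment."""
    return sum(any(assign[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses)

def gsat(n_vars, clauses, max_tries=10, max_flips=1000, rng=random):
    """Greedy local search: repeatedly flip the variable that satisfies the most clauses."""
    for _ in range(max_tries):
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, assign) == len(clauses):
                return assign                          # model found
            best_var, best_score = None, -1
            for v in range(1, n_vars + 1):             # score every possible flip
                assign[v] = not assign[v]
                score = num_satisfied(clauses, assign)
                assign[v] = not assign[v]
                if score > best_score:
                    best_var, best_score = v, score
            assign[best_var] = not assign[best_var]    # take the greediest flip (sideways moves allowed)
        # otherwise restart from a fresh random assignment
    return None

print(gsat(3, [[1, 2], [-1, 3], [-2, -3]]))
```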
Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
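The class of strategies the paper formalizes is what is now called A*: expand nodes in order of f(n) = g(n) + h(n), where g is the cost reached so far and h is a heuristic estimate of the remaining cost; if h never overestimates, the first expansion of the goal is optimal. A minimal sketch (the graph encoding and heuristic are assumptions):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (successor, edge_cost); h(n) is an admissible estimate of cost-to-go."""
    frontier = [(h(start), 0, start)]              # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                               # optimal cost when h is admissible
        if g > best_g.get(node, float("inf")):
            continue                               # stale queue entry, skip
        for succ, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g + h(succ), new_g, succ))
    return None                                    # goal unreachable
```

On a grid, for example, neighbors can enumerate the four adjacent cells with unit cost and h can be the Manhattan distance to the goal, which never overestimates and therefore preserves optimality.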
Approximation algorithms have developed in response to the impossibility of solving a great variety of important optimization problems. Too frequently, when attempting to get a solution for a problem, one is confronted with the fact that the problem is NP-hard. This, in the words of Garey and Johnson, means "I can't find an efficient algorithm, but neither can all of these famous people." While this is a significant theoretical step, it hardly qualifies as a cheering piece of news. If the optimal solution is unattainable then it is reasonable to sacrifice optimality and settle for a "good" feasible solution that can be computed efficiently. Of course, we would like to sacrifice as little optimality as possible, while gaining as much as possible in efficiency. Trading-off optimality in favor of tractability is the paradigm of approximation algorithms. The main themes of this book revolve around the design of such algorithms and the "closeness" to optimum that is achievable in polynomial time. To evaluate the limits of approximability, it is important to derive lower bounds or inapproximability results. In some cases, approximation algorithms must satisfy additional structural requirements such as being on-line, or working within limited space. This book reviews the design techniques for such algorithms and the developments in this area since its inception about three decades ago.
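A classic instance of this trade-off, included here purely as an illustration of the paradigm rather than an excerpt from the book, is the maximal-matching 2-approximation for minimum vertex cover: it runs in linear time and the cover it returns is at most twice the optimum.

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of every edge not yet covered (a maximal matching).
    Any optimal cover must pick at least one endpoint of each matched edge,
    so the returned cover is at most twice the minimum size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))   # {1, 2, 3, 4}; the optimum {2, 4} has size 2
```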
A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization. The algorithm is essentially an iterative random search procedure with adaptive moves along the coordinate directions. It permits uphill moves under the control of a probabilistic criterion, thus tending to avoid the first local minima encountered. The algorithm has been tested against the Nelder and Mead simplex method and against a version of Adaptive Random Search. The test functions were Rosenbrock valleys and multiminima functions in 2, 4, and 10 dimensions. The new method proved to be more reliable than the others, being always able to find the optimum, or at least a point very close to it. It is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.
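A minimal sketch of the acceptance rule the abstract describes (uphill moves allowed with a temperature-controlled probability), applied to a toy two-dimensional Rosenbrock valley; the step size, cooling schedule, and iteration counts are arbitrary choices, not the paper's tuned procedure.

```python
import math
import random

def anneal(f, x0, step=1.0, t0=1.0, cooling=0.95, iters_per_temp=100, t_min=1e-3, rng=random):
    """Simulated annealing with coordinate-wise random moves and Metropolis acceptance."""
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            i = rng.randrange(len(x))                 # move along one coordinate direction
            cand = list(x)
            cand[i] += rng.uniform(-step, step)
            fc = f(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):   # uphill moves allowed
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= cooling                                  # geometric cooling schedule
    return best, fbest

rosenbrock = lambda p: 100 * (p[1] - p[0] ** 2) ** 2 + (1 - p[0]) ** 2
print(anneal(rosenbrock, [-1.2, 1.0]))
```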
A large class of problems can be formulated in terms of the assignment of labels to objects. Frequently, processes are needed which reduce ambiguity and noise, and select the best label among several possible choices. Relaxation labeling processes are just such a class of algorithms. They are based on the parallel use of local constraints between labels. This paper develops a theory to characterize the goal of relaxation labeling. The theory is founded on a definition of consistency in labelings, extending the notion of constraint satisfaction. In certain restricted circumstances, an explicit functional exists that can be maximized to guide the search for consistent labelings. This functional is used to derive a new relaxation labeling operator. When the restrictions are not satisfied, the theory relies on variational calculus. It is shown that the problem of finding consistent labelings is equivalent to solving a variational inequality. A procedure nearly identical to the relaxation operator derived under restricted circumstances serves in the more general setting. Further, a local convergence result is established for this operator. The standard relaxation labeling formulas are shown to approximate our new operator, which leads us to conjecture that successful applications of the standard methods are explainable by the theory developed here. Observations about convergence and generalizations to higher order compatibility relations are described.
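For reference, a common form of the standard relaxation labeling update the abstract refers to is sketched below (the array shapes and the normalization by the number of objects are assumptions): each label probability is boosted in proportion to a compatibility-weighted support term and then renormalized per object.

```python
import numpy as np

def relaxation_step(p, r):
    """One iteration of a standard relaxation labeling update.
    p: (n_objects, n_labels) label probabilities, each row summing to 1.
    r: (n_objects, n_labels, n_objects, n_labels) compatibility coefficients in [-1, 1]."""
    n_objects = p.shape[0]
    support = np.einsum("iljm,jm->il", r, p) / n_objects   # q_i(l): averaged support from all objects
    unnormalized = p * (1.0 + support)                     # boost labels with positive support
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)
```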
In recent years research in the planning community has moved increasingly towards application of planners to realistic problems involving both time and many types of resources. For example, interest in planning demonstrated by the space research community has inspired work in observation scheduling, planetary rover exploration and spacecraft control domains. Other temporal and resource-intensive domains including logistics planning, plant control and manufacturing have also helped to focus the community on the modelling and reasoning issues that must be confronted to make planning technology meet the challenges of application. The International Planning Competitions have acted as an important motivating force behind the progress that has been made in planning since 1998. The third competition (held in 2002) set the planning community the challenge of handling time and numeric resources. This necessitated the development of a modelling language capable of expressing temporal and numeric properties of planning domains. In this paper we describe the language, PDDL2.1, that was used in the competition. We describe the syntax of the language, its formal semantics and the validation of concurrent plans. We observe that PDDL2.1 has considerable modelling power --- exceeding the capabilities of current planning technology --- and presents a number of important challenges to the research community.
The goal of the research described in this paper is to develop an application-independent presentation tool that automatically designs effective graphical presentations (such as bar charts, scatter plots, and connected graphs) of relational information. Two problems are raised by this goal: The codification of graphic design criteria in a form that can be used by the presentation tool, and the generation of a wide variety of designs so that the presentation tool can accommodate a wide variety of information. The approach described in this paper is based on the view that graphical presentations are sentences of graphical languages. The graphic design issues are codified as expressiveness and effectiveness criteria for graphical languages. Expressiveness criteria determine whether a graphical language can express the desired information. Effectiveness criteria determine whether a graphical language exploits the capabilities of the output medium and the human visual system. A wide variety of designs can be systematically generated by using a composition algebra that composes a small set of primitive graphical languages. Artificial intelligence techniques are used to implement a prototype presentation tool called APT (A Presentation Tool), which is based on the composition algebra and the graphic design criteria.
Abstract Practical needs in geographic information systems (GIS) have led to the investigation of formal and sound methods of describing spatial relations. After an introduction to the basic ideas and notions of topology, a novel theory of topological spatial relations between sets is developed in which the relations are defined in terms of the intersections of the boundaries and interiors of two sets. By considering empty and non-empty as the values of the intersections, a total of sixteen topological spatial relations is described, each of which can be realized in R^2. This set is reduced to nine relations if the sets are restricted to spatial regions, a fairly broad class of subsets of a connected topological space with an application to GIS. It is shown that these relations correspond to some of the standard set theoretical and topological spatial relations between sets such as equality, disjointness and containment in the interior.
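As a practical illustration of intersection-based relation testing (this uses shapely's DE-9IM predicate, which also records exterior intersections, rather than the paper's exact formalism; the library choice and the example geometries are assumptions):

```python
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])    # strictly inside a

# relate() returns the DE-9IM string: row-major intersections of the
# interior/boundary/exterior of a with the interior/boundary/exterior of b.
matrix = a.relate(b)
print(matrix)                                     # e.g. '212FF1FF2' when a contains b

# The interior/boundary entries correspond to the intersection tests discussed above.
for name, pos in [("int∩int", 0), ("int∩bnd", 1), ("bnd∩int", 3), ("bnd∩bnd", 4)]:
    print(name, "non-empty" if matrix[pos] != "F" else "empty")
```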
Boolean Satisfiability is probably the most studied of combinatorial optimization/search problems. Significant effort has been devoted to trying to provide practical solutions to this problem for problem instances encountered in a range of applications in Electronic Design Automation (EDA), as well as in Artificial Intelligence (AI). This study has culminated in the development of several SAT packages, both proprietary and in the public domain (e.g. GRASP, SATO) which find significant use in both research and industry. Most existing complete solvers are variants of the Davis-Putnam (DP) search algorithm. In this paper we describe the development of a new complete solver, Chaff, which achieves significant performance gains through careful engineering of all aspects of the search - especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low overhead decision strategy. Chaff has been able to obtain one to two orders of magnitude performance improvement on difficult SAT benchmarks in comparison with other solvers (DP or otherwise), including GRASP and SATO.
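For context, the sketch below shows a naive unit-propagation (BCP) loop; Chaff's contribution is a far more efficient two-watched-literal implementation of this same operation, which is not reproduced here.

```python
def unit_propagate(clauses, assignment):
    """Naive BCP: repeatedly assign literals forced by unit clauses.
    assignment maps variable -> bool; returns (assignment, conflict_flag)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    satisfied |= assignment[var] == want
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return assignment, True        # conflict: clause falsified
            if len(unassigned) == 1:           # unit clause forces a value
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment, False

print(unit_propagate([[1, 2], [-1], [-2, 3]], {}))   # forces x1=False, x2=True, x3=True
```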
It has been widely observed that there is no single "dominant" SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use so-called empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.
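The per-instance selection idea can be sketched as follows, assuming a table of instance features and per-solver runtimes is available; the feature vectors, the solver names, and the use of scikit-learn random forests are illustrative assumptions, not SATzilla's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_portfolio(X, runtimes):
    """Train one empirical hardness model per solver (predicting log runtime from features)."""
    models = {}
    for solver, times in runtimes.items():
        models[solver] = RandomForestRegressor(n_estimators=100).fit(X, np.log(times))
    return models

def select_solver(models, features):
    """Pick the solver with the lowest predicted runtime for this instance."""
    predictions = {s: m.predict([features])[0] for s, m in models.items()}
    return min(predictions, key=predictions.get)

# Toy usage with random data standing in for real instance features and runtimes.
rng = np.random.default_rng(0)
X = rng.random((50, 8))
runtimes = {"solver_a": rng.uniform(1, 100, 50), "solver_b": rng.uniform(1, 100, 50)}
models = build_portfolio(X, runtimes)
print(select_solver(models, X[0]))
```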
A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, and the satisfiability problem. A number of different approaches have been developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a combination of these two techniques. This article overviews many of these approaches in a tutorial fashion.
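A minimal backtracking search of the kind the survey describes is sketched below (constraint propagation is omitted for brevity); the map-coloring instance at the end is an illustrative example, not taken from the article.

```python
def consistent(var, value, assignment, constraints):
    """Check the binary constraints between var=value and already-assigned variables."""
    for (x, y), pred in constraints.items():
        if x == var and y in assignment and not pred(value, assignment[y]):
            return False
        if y == var and x in assignment and not pred(assignment[x], value):
            return False
    return True

def backtrack(domains, constraints, assignment=None):
    """Depth-first search assigning one variable at a time, undoing dead-end choices."""
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None

# Map coloring of a small graph: adjacent regions must receive different colors.
neq = lambda a, b: a != b
domains = {v: ["red", "green", "blue"] for v in "ABCD"}
constraints = {("A", "B"): neq, ("B", "C"): neq, ("C", "D"): neq, ("D", "A"): neq, ("A", "C"): neq}
print(backtrack(domains, constraints))
```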
This is the second in a series of three papers that empirically examine the competitiveness of simulated annealing in certain well-studied domains of combinatorial optimization. Simulated annealing is a randomized technique proposed by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi for improving local optimization algorithms. Here we report on experiments at adapting simulated annealing to graph coloring and number partitioning, two problems for which local optimization had not previously been thought suitable. For graph coloring, we report on three simulated annealing schemes, all of which can dominate traditional techniques for certain types of graphs, at least when large amounts of computing time are available. For number partitioning, simulated annealing is not competitive with the differencing algorithm of N. Karmarkar and R. M. Karp, except on relatively small instances. Moreover, if running time is taken into account, natural annealing schemes cannot even outperform multiple random runs of the local optimization algorithms on which they are based, in sharp contrast to the observed performance of annealing on other problems.
The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms. We describe an automatic framework for this algorithm configuration problem. More formally, we provide methods for optimizing a target algorithm’s performance on a given class of problem instances by varying a set of ordinal and/or categorical parameters. We review a family of local-search-based algorithm configuration procedures and present novel techniques for accelerating them by adaptively limiting the time spent for evaluating individual configurations. We describe the results of a comprehensive experimental evaluation of our methods, based on the configuration of prominent complete and incomplete algorithms for SAT. We also present what is, to our knowledge, the first published work on automatically configuring the CPLEX mixed integer programming solver. All the algorithms we considered had default parameter settings that were manually identified with considerable effort. Nevertheless, using our automated algorithm configuration procedures, we achieved substantial and consistent performance improvements.
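The core loop can be illustrated as a simple one-exchange local search over configurations with adaptive capping, where a candidate's evaluation is cut off as soon as it has consumed more time than the incumbent needed. The toy runtime model and parameter space below are hypothetical stand-ins for launching a real target algorithm; this is not the paper's procedure in detail.

```python
import random

def target_runtime(config, instance, rng):
    """Toy stand-in for running a solver: runtime depends on the parameters plus noise.
    In practice this would launch the real target algorithm under a time limit."""
    base = abs(config["noise"] - 0.3) * 10 + abs(config["restarts"] - 100) / 50
    return base + instance + rng.random()

def cost_with_capping(config, instances, incumbent_cost, rng):
    """Adaptive capping: stop evaluating once the incumbent's total time is exceeded."""
    total = 0.0
    for inst in instances:
        total += target_runtime(config, inst, rng)
        if total >= incumbent_cost:
            return float("inf")          # cannot beat the incumbent; cut the evaluation short
    return total

def configure(param_space, instances, iters=200, seed=0):
    rng = random.Random(seed)
    incumbent = {k: rng.choice(v) for k, v in param_space.items()}
    inc_cost = sum(target_runtime(incumbent, i, rng) for i in instances)
    for _ in range(iters):
        candidate = dict(incumbent)
        key = rng.choice(list(param_space))          # one-exchange neighbourhood move
        candidate[key] = rng.choice(param_space[key])
        cost = cost_with_capping(candidate, instances, inc_cost, rng)
        if cost < inc_cost:
            incumbent, inc_cost = candidate, cost
    return incumbent, inc_cost

space = {"noise": [0.1, 0.2, 0.3, 0.5], "restarts": [10, 50, 100, 500]}
print(configure(space, instances=[0.1, 0.2, 0.4]))
```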
This paper presents the fundamental principles underlying tabu search as a strategy for combinatorial optimization problems. Tabu search has achieved impressive practical successes in applications ranging from scheduling and computer channel balancing to cluster analysis and space planning, and more recently has demonstrated its value in treating classical problems such as the traveling salesman and graph coloring problems. Nevertheless, the approach is still in its infancy, and a good deal remains to be discovered about its most effective forms of implementation and about the range of problems for which it is best suited. This paper undertakes to present the major ideas and findings to date, and to indicate challenges for future research. Part I of this study indicates the basic principles, ranging from the short-term memory process at the core of the search to the intermediate and long term memory processes for intensifying and diversifying the search. Included are illustrative data structures for implementing the tabu conditions (and associated aspiration criteria) that underlie these processes. Part I concludes with a discussion of probabilistic tabu search and a summary of computational experience for a variety of applications. Part II of this study (to appear in a subsequent issue) examines more advanced considerations, applying the basic ideas to special settings and outlining a dynamic move structure to insure finiteness. Part II also describes tabu search methods for solving mixed integer programming problems and gives a brief summary of additional practical experience, including the use of tabu search to guide other types of processes, such as those of neural networks.
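A minimal sketch of the short-term memory mechanism described above, applied to a generic bit-flip neighbourhood (the tenure, the aspiration rule, and the toy objective are illustrative choices):

```python
def tabu_search(objective, x0, n_iters=500, tenure=7):
    """Minimize objective over binary vectors using bit-flip moves and a tabu list."""
    x = list(x0)
    best, best_val = list(x), objective(x)
    tabu_until = {}                                   # bit index -> iteration until which it stays tabu
    for it in range(n_iters):
        candidates = []
        for i in range(len(x)):
            y = list(x)
            y[i] = 1 - y[i]
            val = objective(y)
            is_tabu = tabu_until.get(i, -1) > it
            if not is_tabu or val < best_val:         # aspiration: a tabu move is allowed if it is a new best
                candidates.append((val, i, y))
        if not candidates:
            continue
        val, i, y = min(candidates)                   # best admissible neighbour, even if worse than x
        x = y
        tabu_until[i] = it + tenure                   # forbid reversing this move for a while
        if val < best_val:
            best, best_val = list(y), val
    return best, best_val

# Toy usage: minimize the number of bits that disagree with a hidden target pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0]
objective = lambda v: sum(a != b for a, b in zip(v, target))
print(tabu_search(objective, [0] * 8))
```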
We report results from large-scale experiments in satisfiability testing. As has been observed by others, testing the satisfiability of random formulas often appears surprisingly easy. Here we show that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult. Our results provide a benchmark for the evaluation of satisfiability-testing procedures.
Efficient query evaluation in databases and solving constraint satisfaction problems (CSPs) are crucial for improving performance in many real-world applications, from large-scale data management to decision-making systems. These problems naturally admit hypergraph representations, and are efficiently solvable using hypertree decomposition techniques, when the decomposition width is small. However, these techniques require finding small-width decompositions efficiently. This remains a significant challenge despite research from both the database and theory communities. In this work we present Ralph (Randomized Approximation using Linear Programming for Hypertree-Decompositions), a fast algorithm to compute low width fractional and generalized hypertree decompositions for input hypergraphs, as well as lower bounds for these widths. We build on the recent breakthrough by Korchemna et al. [FOCS 2024], which introduced the first polynomial time approximation algorithm for fractional (generalized) hypertree width. Our approach combines this theoretical advancement with practical heuristic improvements utilizing (mixed-integer) linear programs. Along the way, we present new algorithms with strong theoretical guarantees. Through empirical evaluation on the nearly 3700 instances of HyperBench, a well established benchmark suite for hypertree decompositions, we find near optimal decompositions for all previously solved instances and low width decompositions for all 500 previously unsolved instances, effectively pushing state-of-the-art.
ABSTRACT We propose a variant of the shortest path problem where the order in which vertices occur in the path is subject to precedence constraints. Precedence constraints are given as vertex pairs indicating that one vertex is the predecessor of another. A feasible (not necessarily simple) path may visit a vertex only upon having covered all its predecessors. The problem generalizes the graphic TSP Path, which makes it APX-hard. We propose a dynamic program and identify input classes for which the dynamic program yields an optimal solution in polynomial time. We also explore the limits of efficient solvability by proving that the problem remains hard even when significantly restricting the structure of the graph or the structure of the precedence constraints: Surprisingly, the problem remains hard even when restricted to spiders.
Nilsu Atılgan, Nazbanou Nozari | Journal of Experimental Psychology: Learning, Memory, and Cognition
Possible spoken and written sequences of a language are determined by phonotactic and orthotactic rules, respectively. Adult speakers can learn both simple new phonotactic rules (e.g., "/k/ is always an onset in a syllable") and more complex second-order rules (e.g., "/k/ is an onset only if the vowel is /æ/, but a coda if the vowel is /ɪ/"). However, the learning timeline for more complex rules is less consistent across populations and languages. In this article, we investigate the learning of parallel orthotactic rules in typing. We first show that adults quickly learn new first-order constraints in typing similar to those in speaking (Experiment 1). Next, we show that they also learn second-order rules, with a timeline similar to learning such phonotactic rules in speaking (Experiment 2). We further find that the second-order constraint is learned for the coda, but not the onset, suggesting that learning new rules of sequencing is carried out by a chaining-type mechanism. Finally, we show that while phonology clearly influences orthography, orthotactic learning cannot be reduced to phonotactic learning (Experiment 3). Collectively, these data support strong similarities between the statistical learning of orthotactic and phonotactic constraints, pointing to the domain generality of the incremental learning principles across different modalities of language production.
Timo Kötzing, Aishwarya Radhakrishnan | 2022 IEEE Congress on Evolutionary Computation (CEC)
Marcel Jackson | International Journal of Algebra and Computation
In response to the challenges of sustainable agricultural development posed by population growth and food demand, this study focuses on a rural area in North China, utilizing mathematical models and optimization algorithms for planting decision optimization. After preprocessing and visualizing cultivation data, two optimization models were developed: The first model maximizes crop profits by treating excess production as unsold inventory constraints while considering planting quantities, land suitability, and crop rotation restrictions, resulting in an optimal planting scheme for 2024-2030 with a total seven-year revenue of 40.126 billion yuan. The second model incorporates a 50% price reduction strategy for excess production, solved using a simulated annealing algorithm, achieving an optimized plan with a total revenue of 41.834 billion yuan over seven years. Comparative analysis demonstrates that the model employing the price reduction strategy exhibits greater practical applicability, robustness, and utility.
We consider the set of solutions to M random polynomial equations whose N variables are restricted to the (N-1)-sphere. Each equation has independent Gaussian coefficients and a target value V_0. When solutions exist, they form a manifold. We compute the average Euler characteristic of this manifold in the limit of large N, and find different behavior depending on the target value V_0, the ratio α = M/N, and the variances of the coefficients. We divide this behavior into five phases with different implications for the topology of the solution manifold. When M = 1 there is a correspondence between this problem and level sets of the energy in the spherical spin glasses. We conjecture that the transition energy dividing two of the topological phases corresponds to the energy asymptotically reached by gradient descent from a random initial condition, possibly resolving an open problem in out-of-equilibrium dynamics. However, the quality of the available data leaves the question open for now.
Abstract In this article, we show that for a typical non-uniformly expanding unimodal map, the unique maximising measure of a generic Lipschitz function is supported on a periodic orbit.
Abstract The author’s dissertation, entitled “Scalable SAT Solving and its Application”, advances the efficient resolution of instances of the propositional satisfiability (SAT) problem, one of the prototypical “hard problems” of computer science with many scientific and industrial real-world applications. A particular focus is put on exploiting massively parallel computational environments, such as high-performance computing (HPC) systems or cloud computing. The dissertation has resulted in world-leading solutions for scalable automated reasoning and in a number of awards from the SAT community, and has most recently been acknowledged with a GI Dissertation Award. The article at hand summarizes the topic, approaches, and central results of the dissertation, estimates the work’s long-term impact and its role for future research, and closes with some personal notes.
Abstract This article presents a new typology of temporal relations suited for archaeological use. It discusses the properties and advantages of the proposed system and compares it with three other typologies of temporal relations: Allen's relations, Holst's relation, and the CIDOC‐CRM. It is argued that a more detailed typology of temporal relations in archaeology than currently available is called for, such as the one proposed in this paper. A final synoptic table is provided to help users navigate among the different typologies.
We present ‘NeuralConstraints,’ a suite of computer-assisted composition tools that integrates a feedforward neural network as a rule within a constraint-based composition framework. ‘NeuralConstraints’ combines the predictive generative abilities of neural networks trained on symbolic musical data with an advanced backtracking constraint algorithm. It provides a user-friendly interface for exploring symbolic neural generation, while offering a higher level of creative control compared to conventional neural generative processes, leveraged by the constraint solver. This article outlines the technical implementation of the core functionalities of ‘NeuralConstraints’ and illustrates their application through specific tests and examples of use.
We propose a new step-wise approach to proving observational equivalence, and in particular reasoning about fragility of observational equivalence. Our approach is based on what we call local reasoning. The local reasoning exploits the graphical concept of neighbourhood, and it extracts a new, formal, concept of robustness as a key sufficient condition of observational equivalence. Moreover, our proof methodology is capable of proving a generalised notion of observational equivalence. The generalised notion can be quantified over syntactically restricted contexts instead of all contexts, and also quantitatively constrained in terms of the number of reduction steps. The operational machinery we use is given by a hypergraph-rewriting abstract machine inspired by Girard's Geometry of Interaction. The behaviour of language features, including function abstraction and application, is provided by hypergraph-rewriting rules. We demonstrate our proof methodology using the call-by-value lambda-calculus equipped with (higher-order) state.
To address the issue that existing solution algorithms for Continuous Distributed Constraint Optimization Problems (C-DCOPs) are prone to getting trapped in local optima and failing to escape, this paper proposes a C-DCOP algorithm based on the historical feature distribution of agents (HFDA). In HFDA, agents' historical values are utilized to detect when the algorithm falls into a local optimum. Upon detection, the historical features of neighboring agents are constructed based on constraint functions. By leveraging the distribution of these features, a probabilistic constraint selection mechanism is introduced to explore new solutions that cannot be obtained under full constraints, thereby guiding the algorithm towards optimization and overcoming local optima or stagnation. The probabilistic neglect of neighbors based on their feature distribution increases the chances of finding better solutions while effectively suppressing noise. Comparative experiments on two benchmark problems demonstrate that HFDA achieves faster convergence and higher solution quality compared to state-of-the-art C-DCOP algorithms.
A simple hypergraph H with vertex set X and edge set E is representable by Von Neumann–Morgenstern (VNM)-stable sets—or VNM—if there exists an irreflexive simple digraph D with vertex set … A simple hypergraph H with vertex set X and edge set E is representable by Von Neumann–Morgenstern (VNM)-stable sets—or VNM—if there exists an irreflexive simple digraph D with vertex set X such that each edge of H is a VNM-stable set of D. It is shown that a simple hypergraph H is VNM if and only if each edge of H is a maximal clique of the conjugation graph of H. A related algorithm that identifies finite VNM hypergraphs is also provided.
Although state-of-the-art (SOTA) SAT solvers based on conflict-driven clause learning (CDCL) have achieved remarkable engineering success, their sequential nature limits the parallelism that may be extracted for acceleration on platforms … Although state-of-the-art (SOTA) SAT solvers based on conflict-driven clause learning (CDCL) have achieved remarkable engineering success, their sequential nature limits the parallelism that may be extracted for acceleration on platforms such as the graphics processing unit (GPU). In this work, we propose FastFourierSAT, a highly parallel hybrid SAT solver based on gradient-driven continuous local search (CLS). This is achieved by a parallel algorithm inspired by the fast Fourier transform (FFT)-based convolution for computing the elementary symmetric polynomials (ESPs), which is the major computational task in previous CLS methods. The complexity of our algorithm matches the best previous result. Furthermore, the substantial parallelism inherent in our algorithm can leverage the GPU for acceleration, demonstrating significant improvement over the previous CLS approaches. FastFourierSAT is compared with a wide set of SOTA parallel SAT solvers on extensive benchmarks including combinatorial and industrial problems. Results show that FastFourierSAT computes the gradient 100+ times faster than previous prototypes on CPU. Moreover, FastFourierSAT solves most instances and demonstrates promising performance on larger-size instances.
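The elementary symmetric polynomials the abstract mentions can be computed as the coefficients of the product over i of (1 + x_i t); the sketch below does this with iterated convolution in NumPy (a plain O(n^2) illustration of the idea; the paper's contribution is the FFT-based, GPU-parallel version, which is not reproduced here).

```python
import numpy as np

def elementary_symmetric(xs):
    """Coefficients [e_0, e_1, ..., e_n] of prod_i (1 + x_i * t),
    i.e. the elementary symmetric polynomials of xs."""
    coeffs = np.array([1.0])                      # the constant polynomial 1
    for x in xs:
        coeffs = np.convolve(coeffs, [1.0, x])    # multiply by (1 + x * t)
    return coeffs

print(elementary_symmetric([1.0, 2.0, 3.0]))      # [1, 6, 11, 6]: e0=1, e1=6, e2=11, e3=6
```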
Lagrangian decomposition (LD) is a relaxation method that provides a dual bound for constrained optimization problems by decomposing them into more manageable sub-problems. This bound can be used in branch-and-bound algorithms to prune the search space effectively. In brief, a vector of Lagrangian multipliers is associated with each sub-problem, and an iterative procedure (e.g., a sub-gradient optimization) adjusts these multipliers to find the tightest bound. Initially applied to integer programming, Lagrangian decomposition also had success in constraint programming due to its versatility and the fact that global constraints provide natural sub-problems. However, the non-linear and combinatorial nature of sub-problems in constraint programming makes it computationally intensive to optimize the Lagrangian multipliers with sub-gradient methods at each node of the tree search. This currently limits the practicality of LD as a general bounding mechanism for constraint programming. To address this challenge, we propose a self-supervised learning approach that leverages neural networks to generate multipliers directly, yielding tight bounds. This approach significantly reduces the number of sub-gradient optimization steps required, enhancing the pruning efficiency and reducing the execution time of constraint programming solvers. This contribution is one of the few that leverage learning to enhance bounding mechanisms on the dual side, a critical element in the design of combinatorial solvers. This work presents a generic method for learning valid dual bounds in constraint programming. We validate our approach on two challenging combinatorial problems: The multi-dimensional knapsack problem and the shift scheduling problem. The results show that our approach can solve more instances than the standard application of LD to constraint programming, reduce execution time by more than half, and has promising generalization ability through fine-tuning.
One of the major techniques to tackle temporal planning problems is heuristic search augmented with a symbolic representation of time in the states. Augmenting the problem with composite actions (macro-actions) is a simple and powerful approach to create "shortcuts" in the search space, at the cost of augmenting the branching factor of the problem and thus the expansion time of a heuristic search planner. Hence, it is of paramount importance to select the right macro-actions and minimize the number of such actions to optimize the planner performance. In this paper, we first discuss a simple, yet powerful, model similar to macro-actions for the case of temporal planning, and we call these macro-events. Then, we present a novel ranking function to extract and select a suitable set of macro-events from a dataset of valid plans. In our ranking approach, we consider an estimation of the hypothetical search space for a blind search including a candidate set of macro-events under four different exploitation schemata. Finally, we experimentally demonstrate that the proposed approach yields a substantial performance improvement for a state-of-the-art temporal planner.
Majority illusion is a phenomenon in social networks wherein the decision by the majority of the network is not the same as one's personal social circle's majority, leading to an incorrect perception of the majority in a large network. We present polynomial-time algorithms which completely eliminate majority illusion by altering as few connections in the network as possible. Eliminating majority illusion ensures each neighbourhood in the network has at least a 1/2-fraction of the majority winner. This result is surprising as partially eliminating majority illusion is NP-hard. We generalize the majority illusion problem to an arbitrary fraction p and show that the problem of ensuring all neighbourhoods in the network contain at least a p-fraction of nodes consistent with a given preference is NP-hard, for nearly all values of p.
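For concreteness, the following sketch (an illustration of the phenomenon itself, not the paper's elimination algorithm) checks which nodes of a small graph experience majority illusion with respect to a binary label, i.e. whose neighbourhood majority differs from the global majority.

```python
def global_majority(labels):
    """Return the strictly more common label (1 or 0), or None on a tie."""
    ones = sum(labels.values())
    n = len(labels)
    return 1 if ones * 2 > n else 0 if ones * 2 < n else None

def nodes_under_illusion(adj, labels):
    """adj: node -> set of neighbours; labels: node -> 0/1.
    A node is under majority illusion if its neighbourhood majority
    differs from the global majority."""
    g = global_majority(labels)
    illusion = []
    for v, nbrs in adj.items():
        if not nbrs or g is None:
            continue
        ones = sum(labels[u] for u in nbrs)
        local = 1 if ones * 2 > len(nbrs) else 0 if ones * 2 < len(nbrs) else None
        if local is not None and local != g:
            illusion.append(v)
    return illusion

# Star-like example: node 0 is the only 1-labelled node but is everyone's neighbour.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
labels = {0: 1, 1: 0, 2: 0, 3: 0}
print(nodes_under_illusion(adj, labels))   # [1, 2, 3]: each sees only the 1-labelled hub
```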
In recent years, semantic segmentation has flourished in various applications. However, the high computational cost remains a significant challenge that hinders its further adoption. The filter pruning method for structured network slimming offers a direct and effective solution for the reduction of segmentation networks. Nevertheless, we argue that most existing pruning methods, originally designed for image classification, overlook the fact that segmentation is a location-sensitive task, which consequently leads to their suboptimal performance when applied to segmentation networks. To address this issue, this paper proposes a novel approach, denoted as Spatial-aware Information Redundancy Filter Pruning (SIRFP), which aims to reduce feature redundancy between channels. First, we formulate the pruning process as a maximum edge weight clique problem (MEWCP) in graph theory, thereby minimizing the redundancy among the remaining features after pruning. Within this framework, we introduce a spatial-aware redundancy metric based on feature maps, thus endowing the pruning process with location sensitivity to better adapt to pruning segmentation networks. Additionally, based on the MEWCP, we propose a low computational complexity greedy strategy to solve this NP-hard problem, making it feasible and efficient for structured pruning. To validate the effectiveness of our method, we conducted extensive comparative experiments on various challenging datasets. The results demonstrate the superior performance of SIRFP for semantic segmentation tasks.
For many real-world problems, users are often interested not only in finding a single solution but in obtaining a sufficiently diverse collection of solutions. In this work, we consider the Diverse SAT problem, aiming to find a set of diverse satisfying assignments for a given propositional formula. We propose a novel and effective local search algorithm, DiverSAT, to solve the problem. To cope with diversity, we introduce three heuristics and a perturbation strategy based on some relevant information. We conduct extensive experiments on a large number of public benchmarks, collected from semiformal hardware verification, logistics planning, and other domains. The results show that DiverSAT outperforms the existing algorithms on most of these benchmarks.
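One common way to quantify the diversity such a solver optimizes (used here purely as an illustration; the paper's exact objective may differ) is the average pairwise Hamming distance between the satisfying assignments found.

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions at which two assignments differ."""
    return sum(x != y for x, y in zip(a, b))

def average_pairwise_diversity(assignments):
    """Mean Hamming distance over all pairs of assignments (lists of 0/1 values)."""
    pairs = list(combinations(assignments, 2))
    if not pairs:
        return 0.0
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

models = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]]
print(average_pairwise_diversity(models))   # (2 + 2 + 4) / 3 ≈ 2.67
```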
Constraint Acquisition (CA) aims to widen the use of constraint programming by assisting users in the modeling process. However, most CA methods suffer from a significant drawback: they learn a single set of individual constraints for a specific problem instance, but cannot generalize these constraints to the parameterized constraint specifications of the problem. In this paper, we address this limitation by proposing GenCon, a novel approach to learn parameterized constraint models capable of modeling varying instances of the same problem. To achieve this generalization, we make use of statistical learning techniques at the level of individual constraints. Specifically, we propose to train a classifier to predict, for any possible constraint and parameterization, whether the constraint belongs to the problem. We then show how, for some classes of classifiers, we can extract decision rules to construct interpretable constraint specifications. This enables the generation of ground constraints for any parameter instantiation. Additionally, we present a generate-and-test approach that can be used with any classifier, to generate the ground constraints on the fly. Our empirical results demonstrate that our approach achieves high accuracy and is robust to noise in the input instances.