Detailed schedule
Thursday 30 January:
Thursday 11:15-12:30 TA-1: COMEX - Optimization 1 Room Vesale 023 - Chair: M. Schyns
- p-center problem with failure foresight and capacity constraints
Antonio M. Rodriguez-Chia (Université Libre de Bruxelles (ULB)) Co-authors: I. Espejo (U. de Cádiz); A. Marín (U. de Murcia) Abstract: This paper deals with a generalized capacitated p-center problem that takes into account the possibility that a center might suffer a disruption and become unavailable for serving any demand, assuming that every site will then be covered by its closest available center. The problem is of interest when locating emergency centers while, at the same time, taking precautions against emergencies that can cause failure of the center itself. We consider different formulations for the problem, and extensive computational tests are reported, showing the potential and limits of each formulation on several types of instances. Finally, a preprocessing phase for fixing variables has been developed, and different families of valid inequalities have been proposed to strengthen the most promising formulations, in some cases yielding much better resolution times.
- Improved Results for the Traveling Umpire Problem
Sam Van Malderen (KU Leuven, Department of Computer Science, CODeS & iMinds-ITEC) Co-authors: Tony Wauters, Greet Vanden Berghe Abstract: We investigate the Traveling Umpire Problem (TUP). The TUP is the academic version of the real-world major league baseball umpire scheduling problem. It defines the way in which umpires should be assigned to the games in a fixed Traveling Tournament Problem schedule. The TUP turns out to be very challenging to solve, even for small problem sizes. The current study results in a decrease in optimality gap by improving lower bounds as well as upper bounds for many benchmark instances.
- Integrated task scheduling and personnel rostering problems
Pieter Smet (KU Leuven, campus Gent) Co-authors: Greet Vanden Berghe Abstract: The present contribution discusses three related problems that combine task scheduling and personnel rostering. Common elements in the problem descriptions include qualification requirements and the need to assign all tasks. However, the scope of the decisions to be made differs. We first consider shift assignments to be fixed, then we consider only a single isolated day, and finally we consider a planning period of multiple days. Efficient algorithms for all problems are presented and evaluated.
Thursday 11:15-12:30 TA-2: Software and Implementation Room Vesale 020 - Chair: M. Mezmaz
- A high-level, modular and declarative modeling framework for routing problems
Christophe Ponsard (CETIC) Co-authors: Renaud De Landtsheer, Yoann Guyot Abstract:
Nowadays, the technology and algorithms for solving routing problems are becoming widespread.
This paper proposes an approach for modeling routing problems that aims at cutting the cost required for developing a solver dedicated to specific routing problems. Our approach also ensures that the resulting models include the minimal number of modeling artifacts such as variables, constraints, and invariants, all of which require processing power that should be spared.
Our approach to modeling routing problems relies on the trait mechanism of the Scala programming language, which supports partial implementation of interfaces; a Scala class can also inherit from multiple traits. We have developed a library of traits, each of which captures one aspect of a routing problem (e.g. node predecessors, capacities, time), bundling the necessary variables, invariants, and constraints. Any combination of routing problem features can be represented by assembling them on top of a base routing class. Our library is integrated in the CBLS engine of the OscaR open source optimisation framework.
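As an illustration of this compositional idea (not the OscaR/CBLS API itself), a Python-mixin analogue of the Scala-trait assembly could look as follows; all class and attribute names are invented:

```python
# Illustrative sketch: a Python-mixin analogue of the Scala-trait composition
# described above. Names and data structures are invented for illustration.

class BaseRouting:
    """Base model: a set of nodes and a successor structure for the vehicles."""
    def __init__(self, n_nodes, n_vehicles):
        self.n_nodes = n_nodes
        self.n_vehicles = n_vehicles
        self.successor = list(range(n_nodes))  # placeholder decision variables

class CapacityTrait:
    """Adds per-vehicle capacity data (one aspect of the routing problem)."""
    def set_capacities(self, demand, capacity):
        self.demand, self.capacity = demand, capacity

class TimeWindowTrait:
    """Adds earliest/latest service times per node (another aspect)."""
    def set_time_windows(self, earliest, latest):
        self.earliest, self.latest = earliest, latest

# A concrete model is assembled by stacking only the traits the problem needs
# on top of the base routing class.
class CVRPTWModel(CapacityTrait, TimeWindowTrait, BaseRouting):
    pass

model = CVRPTWModel(n_nodes=10, n_vehicles=2)
model.set_capacities(demand=[1] * 10, capacity=[5, 5])
model.set_time_windows(earliest=[0] * 10, latest=[100] * 10)
```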
- Cloud optimization with OptaPlanner
Geoffrey De Smet (Red Hat) Abstract: Each process in the cloud requires an amount of CPU power, RAM memory, network bandwidth, hard disk space, etc. Multiple processes can share the same physical machine, as long as that machine has enough resources (CPU, RAM, etc.) to accommodate all those processes. Which processes do we put on the same machine? How do we organize our machines more efficiently? How do we run a better service with less hardware?
- Using factorial number system to solve permutation optimization problems
Mohand Mezmaz (University of Mons) Co-authors: R. Leroy, M. Mezmaz, N. Melab and D. Tuyttens Abstract: Our paper proposes a pioneering multi-core parallel branch-and-bound (B&B) algorithm using the factorial number system. This special numbering system is a mixed radix numeral system adapted to numbering permutations. To the best of our knowledge, all the other parallel B&B algorithms published in the literature are based on pools to store subproblems. Our new parallel factoradic-based B&B approach aims, on the one hand, to reduce the size of the memory used to store B&B pools and, on the other hand, to accelerate the management of these pools. To validate our new factoradic-based B&B, this original method is compared with a typical multi-core pool-based B&B.
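For illustration, a minimal Python sketch of the factorial number system (Lehmer-code) encoding of permutations, which is the numbering idea this approach builds on; this is not the authors' multi-core implementation:

```python
from math import factorial

def index_to_permutation(index, elements):
    """Decode a factoradic index (0 .. n!-1) into the corresponding permutation
    of `elements` in lexicographic order (Lehmer-code decoding)."""
    elements = list(elements)
    perm = []
    for i in range(len(elements) - 1, -1, -1):
        digit, index = divmod(index, factorial(i))
        perm.append(elements.pop(digit))
    return perm

def permutation_to_index(perm):
    """Encode a permutation as its factoradic rank; a set of subproblems can
    thus be stored as an integer interval instead of an explicit pool."""
    perm = list(perm)
    index = 0
    for i, item in enumerate(perm):
        smaller = sum(1 for later in perm[i + 1:] if later < item)
        index += smaller * factorial(len(perm) - 1 - i)
    return index

# Round-trip check on a permutation of 5 elements.
assert permutation_to_index(index_to_permutation(100, range(5))) == 100
```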
Thursday 11:15-12:30 TA-3: COMEX - Smart mobility Room Vesale 025 - Chair: A. Caris
- The City Bike Repositioning Problem
Darpana Dilip (University of Antwerp) Co-authors: Prof. dr. Kenneth Sörensen Abstract: “Where's my bike?” is the question asked by community websites like www.wheresmyvillo.be, demonstrating that the success of city bike sharing systems (BSSs) crucially depends on the availability — in each rental station and at all times — of both some bikes and some free parking spaces. BSS operators attempt to achieve this by employing a fleet of light repositioning vehicles to move bikes between stations. Although some approaches in the literature exist to support an automated planning of the repositioning activities, they all fall short of delivering a comprehensive solution for this challenging problem. We argue that the main reason for this is that all current approaches attempt to combine two distinct problems that are best handled separately.
- Exact and meta-heuristic algorithm for the multi-depot heterogeneous dial-a-ride problem
Kris Braekers (Universiteit Hasselt) Co-authors: An Caris, Gerrit K. Janssens Abstract: Dial-a-ride problems are concerned with the design of efficient vehicle routes for transporting individual persons from specific origin to specific destination locations. Most research on these problems has concentrated on problems with a single user type and a homogeneous fleet of vehicles located at a single depot. In this paper, a general dial-a-ride problem with heterogeneous users, heterogeneous vehicles and multiple depots is introduced (MD-H-DARP). To solve the problem, an existing branch-and-cut algorithm is adapted and a new deterministic annealing meta-heuristic algorithm is proposed.
- Operational Effects of Variations in Service Level Criteria for the Dial-a-Ride Problem
Yves Molenbruch (Hasselt University) Co-authors: An Caris, Kris Braekers Abstract: A dial-a-ride system is an application of demand-dependent transportation, often focusing on people with specific needs. The service provider attempts to create efficient vehicle routes to satisfy all requests, taking into account specific service level criteria. This study quantifies the operational effect of varying these criteria. Its magnitude and pattern are analyzed, both for the global dataset and for specific parts of it, created on the basis of users' and the service provider's characteristics.
Thursday 11:15-12:30 TA-4: Systems Room Pentagone 0A11 - Chair: P. Kunsch
- ILS for water distribution network optimisation
Annelies De Corte (University of Antwerp) Co-authors: Kenneth Sörensen Abstract: The water distribution network design optimisation problem is a mixed-integer, non-linear combinatorial optimisation problem which is shown to be NP-hard. Therefore, metaheuristic techniques have been widely applied to find satisfying solutions in reasonable time. An iterated local search algorithm has been developed, which is able to find the reported optimal solutions for the available benchmark networks.
- Effects of Carbon Footprint Reduction on Supply Chain Network Design
Jean-sébastien Tancrez (Louvain School of Management, Université catholique de Louvain) Co-authors: Joana M. Comas-Marti, Ralf W. Seifert Abstract: In this work, our goal is to investigate how companies should modify their supply chain network, cost effectively, when aiming at reducing their carbon footprint. For this, we propose a supply chain network design model that simultaneously considers the emissions and costs related to both facility location and transport mode decisions, while taking into account the innovative or functional nature of products through the explicit consideration of demand uncertainty and inventory costs. The model is illustrated using numerical experiments and managerial insights are derived.
- A system-dynamics analysis of the energy transition in Belgium after Fukushima
Pierre L. Kunsch (ULB) Co-authors: Jean Friesewinkel Abstract: The nuclear phase-out law was passed in 2003 in Belgium to close down nuclear power plants (NPPs) after 40 operating years. In 2012, NPPs produced about 50% of the domestic electricity. Green parties argue, without any quantitative elements, that the phase-out contributes to stimulating renewable energy sources (RES) and to promoting the rational use of energy (RUE). A system-dynamics model brings rational elements to the passionate debate, which is more pertinent than ever after the Fukushima disaster in March 2011. It appears that delaying the nuclear phase-out by 20 years would be favourable to both RES and RUE, as otherwise natural gas would rapidly get the lion's share of electricity production in Belgium, creating new unwanted dependencies regarding security of supply and reliance on gas suppliers, and increasing the use of non-renewable, CO2-emitting fossil fuels.
Thursday 14:00-15:40 TB-1: Data Analysis 1 Room Vesale 023 - Chair: X.Siebert
- Early detection of university students in potential difficulty
Anne-Sophie Hoffait (HEC - University of Liege) Co-authors: M. Schyns Abstract: Access to the Belgian higher education system is easier and cheaper than in most foreign countries. Moreover, the quality of our universities is acknowledged and the degrees they deliver are widely considered a must.
As a consequence, lots of candidates apply.
However, while access is relatively open, it is not a free lunch and a large proportion of candidates do not satisfy the requirements at the end of the first year. This has a cost for the students (fees, one year lost, frustration...), but also for the collectivity (the universities are funded by the government) and for the universities themselves (lots of resources are needed).
It is therefore important to try to identify as early as possible the students that could potentially be in difficulty during the first year. The Universities might then take appropriate measures to alleviate the problem.
There are indeed various reasons that could explain a failure, and some of them can be dealt with: a wrong orientation (e.g. due to a lack of correct information), a misinterpretation of the requirements and expectations of a university degree, a huge difficulty in making the transition from secondary school to the higher education system, an insufficient mastery of some prerequisite concepts... The University of Liège, as other universities, has already taken different initiatives. But if we were able to identify those students early on with a high probability, the universities could develop adapted methods to attack the problem with more emphasis where it is more needed and when it is still possible.
Our contribution is multiple. First, the aim is to develop a decision tool able to identify the students who have a high probability of facing difficulties if nothing is done to help them. For that, we consider three standard data mining methods: logistic regression, artificial neural networks and decision trees, and we focus on early detection, i.e. before students start at the university.
Second, we adapt these three methods as well as the classification framework in order to increase the probability of correct identification of the students. In our approach, we do not restrict the classification to two extreme classes, e.g. failure or success, but we create subcategories for different levels of confidence: high risk of failure, risk of failure, expected success or high probability of success. The algorithms are modified accordingly, giving more weight to the classes that really matter.
Note that this approach remains valid for any other classification problems for which the focus is on some extreme classes; e.g. fraud detection, credit default...
Finally, simulations are conducted to measure the performance of the three methods, with and without the suggested adaptation. We check whether the factors of success/failure we can identify are similar to those reported in the literature.
We also perform a "what-if" sensitivity analysis. The goal is to measure in more depth the impact of some factors and of some solutions, e.g., a complementary training or a reorientation.
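As a purely illustrative sketch of the idea of weighting the classes that matter (not the authors' code; the features, class labels and weights below are invented), a weighted multi-class logistic regression could be set up as follows:

```python
# Toy sketch: multi-class classification where the extreme classes receive
# larger weights. Data, labels and weights are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))        # e.g. grades, age, study track, ...
y = rng.integers(0, 4, size=1000)     # 0: high risk ... 3: high prob. of success

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Give more weight to the classes that really matter (the extreme ones).
clf = LogisticRegression(max_iter=1000,
                         class_weight={0: 4.0, 1: 2.0, 2: 1.0, 3: 1.0})
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))      # accuracy on the (random) toy data
```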
- MediaCycle: A tool for interactive browsing of multimedia libraries
Xavier Siebert (Faculté Polytechnique de Mons) Co-authors: Thierry Ravet, Christian Frisson, Stéphane Dupont Abstract: This paper presents a software tool for browsing and organizing multimedia databases (sounds, images, videos, ...). Media are analyzed and transformed into a vector of features in a high-dimensional space. Data mining techniques are then used to group similar media, whose visualization consists in a two-dimensional mapping that aims at preserving the similarity relationships that have been detected. We show how nonlinear dimensionality reduction approaches can be used to achieve this task.
- First species counterpoint music generation with VNS and vertical viewpoints
Dorien Herremans (Universiteit Antwerpen) Co-authors: Kenneth Sörensen, Darrell Conklin Abstract: In a previous paper, a VNS algorithm was developed that can generate counterpoint music based on an objective function manually coded from music theory. In this research, machine learning is used to automatically generate the objective function for a VNS that generates first species counterpoint. The VNS converges to a good solution within very little computing time. It appears to be a valid and flexible sampling method and was successfully combined with the vertical viewpoints method.
- A Choquet integral for the comparison of decisional maps
Valerie Brison (UMONS) Co-authors: Marc Pirlot Abstract: We consider maps, representing a region under study, and divided into geographic units. Each unit is assessed on an ordinal scale describing, e.g., its degree of suitability for some usage, for instance housing, or its state of degradation with respect to sustainable development criteria. After a while, for example after the application of some policy aiming at improving the situation, the state of the units has evolved. For some units, the state has improved, and for some others, it has deteriorated. The main question we want to address is whether the global state of the region has improved. To answer this, we proposed in a previous work several models to represent a decision maker's preferences on a set of maps. One of these makes it possible to take into account the contiguity of the units on a map. This model is based on the Choquet integral. In this work, we propose an axiomatic characterization of the preference relations on maps that can be represented by such a model.
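For illustration, a minimal Python implementation of the discrete Choquet integral with respect to a capacity; the toy capacity values below are assumptions, not those of the proposed model:

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of `values` (criterion -> score) with respect
    to `capacity`, a set function mapping frozensets of criteria to [0, 1],
    with capacity(empty set) = 0 and capacity(all criteria) = 1."""
    items = sorted(values, key=values.get)      # criteria by increasing score
    total, previous = 0.0, 0.0
    for k, item in enumerate(items):
        coalition = frozenset(items[k:])        # criteria scoring at least as high
        total += (values[item] - previous) * capacity[coalition]
        previous = values[item]
    return total

# Toy capacity on two units 'a' and 'b' (values invented for illustration).
capacity = {frozenset(): 0.0, frozenset('a'): 0.3,
            frozenset('b'): 0.3, frozenset('ab'): 1.0}
print(choquet_integral({'a': 0.2, 'b': 0.8}, capacity))   # -> 0.38
```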
Thursday 14:00-15:40 TB-2: Multiple Objectives Room Vesale 020 - Chair: Y. de Smet
- Motivations for the development of a multi-objective algorithm configurator
Nguyen Thi Thanh Dang (KU Leuven KULAK) Co-authors: Patrick De Causmaecker Abstract: In the single-objective automated algorithm configuration problem, given an algorithm with a set of parameters that need to be configured and a distribution of problem instances, the automated algorithm configurator tries to search for a good parameter configuration based on a pre-defined performance measure. In this paper, we point out two motivations for the development of a multi-objective algorithm configurator, in which more than one performance measure is considered at the same time. The first motivation is a parameter configuration case study for a deterministic single machine scheduling algorithm with two performance measures: minimization of the average running time and maximization of the total number of optimal solutions. The second one is the configuration problem for non-exact multi-objective optimization algorithms. In addition, a discussion of a solution approach for the first motivating problem is also presented.
- Multi-Objective Optimization for the Design of 3D-Stacked Integrated Circuits
Nguyen Anh Vu Doan (Université libre de Bruxelles) Co-authors: Dragomir Milojevic, Frederic Robert, Yves De Smet Abstract: In order to constantly improve the performance of integrated circuits, manufacturers strive to follow Moore's law. However, it will probably be impossible to follow this trend in the future. To overcome this problem, 3D-stacked integrated circuits have been proposed. This technology brings numerous advantages, but such circuits are more complex to design. We show in this work that multi-objective optimization can be of help when designing such 3D circuits.
- Design safer and greener road projects by using a multiobjective optimization approach
Renaud Sarrazin (University of Brussels) Co-authors: Yves De Smet Abstract: Until recently, the design of road projects was mainly motivated by economic aspects, despite the collective nature of road infrastructure and its impact on the environment. However, several measures have been taken over the past decades to embed road design in a more integrated and sustainable approach. In this research, we address the design of safer and greener road projects as a combinatorial and multicriteria problem. We generate the solutions of the problem using a multiobjective optimization approach. Finally, we analyse the nature of the Pareto frontier.
- Deconstructing Multi-Objective Evolutionary Algorithms
Manuel López-Ibáñez (Université Libre de Bruxelles) Co-authors: Leonardo C. T. Bezerra and Thomas Stützle Abstract: Many studies in the literature have applied multi-objective evolutionary algorithms (MOEAs) to multi-objective combinatorial optimization problems. Few of them analyze the actual contribution of the basic algorithmic components of MOEAs. These components include the underlying EA structure, the fitness and diversity operators, and their policy for maintaining the population. In this paper, we compare seven MOEAs from the literature on three bi-objective and one tri-objective variants of the permutation flowshop problem. The overall best and worst performing MOEAs are then used for a path-relinking-based iterative analysis, where each of the main components of these algorithms is analyzed to determine its contribution to the algorithms' performance. Results show that some components are not effective, while others work well only when used simultaneously.
Thursday 14:00-15:40 TB-3: Logistics Room Vesale 025 - Chair: D. De Wolf
- A relax-and-fix heuristic for the three-dimensional bin-packing model with transportation constraints
Célia Paquay (HEC - University of Liege) Co-authors: M. Schyns and S. Limbourg Abstract: Our problem is to pack a set of boxes into a selection of containers of various shapes. This problem belongs to the family of cutting and packing problems, which represent a key topic in operations research. We can label it as a three-dimensional Multiple Bin Size Bin Packing Problem (MBSBPP) using the typology defined by [5]. Indeed, this is an input minimisation problem for which the dimensions of all objects are fixed, the small items being strongly heterogeneous and the assortment of large objects, i.e. the containers, weakly heterogeneous.
Since our problem is a packing problem in three dimensions, it also belongs to the family of Container Loading Problems according to the definition given in [1].
In this work, we extend the definition of the MBSBPP to include situations in which the large objects may be truncated parallelepipeds. This is of particular importance in the field of air transportation. In this context, containers are called unit load devices (ULDs). A ULD is an assembly of components consisting of a container or of a pallet covered with a net, so as to provide standardised size units for individual pieces of baggage or cargo, and to allow for rapid loading and unloading ([4]). ULDs may have specific shapes to fit inside aircraft.
Our first aim was to provide a mathematical linear model for this problem ([2]). This model takes into account the following set of constraints: the geometric constraints (the boxes lie entirely and without overlap inside the containers), the container weight limit (the total weight of the contents of a container cannot exceed a defined capacity), the orientation constraints (boxes can rotate inside the containers), the load stability, the load-bearing strength or fragility of items, and the weight distribution within a container. As announced, we also take the specific shape of the containers into account.
This model has been implemented in Java using the CPLEX library and tested on small instances. It includes a 3D interface that allows us to display the results from different angles and to zoom in and out.
Our next step is to develop a constructive heuristic in order to start with a good initial solution. We chose to try the relax-and-fix method developed in [3]; a generic sketch of this scheme is given after the references below.
References
[1] Bortfeldt, A. and G. Wäscher (2013). Constraints in container loading - A state-of-the-art review. European Journal of Operational Research 229, 1-20.
[2] Paquay, C., Schyns, M., and Limbourg, S. A Mixed Integer Programming formulation for the three-dimensional bin packing problem deriving from an air cargo application. Submitted to International Transactions in Operational Research.
[3] Pochet, Y. and L. A. Wolsey (2006). Production Planning by Mixed Integer Programming. Springer Series in Operations Research and Financial Engineering.
[4] Limbourg, S., Schyns, M., and Laporte, G. (2011). Automatic Aircraft Cargo Load Planning. Journal of the Operational Research Society (0), 113.
[5] Wäscher, G., H. Haußner, and H. Schumann (2007). An improved typology of cutting and packing problems. European Journal of Operational Research 183, 1109-1130.
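A generic sketch of a relax-and-fix scheme (illustration only; `solve` below is a hypothetical placeholder for a MIP solver call, not the authors' Java/CPLEX implementation):

```python
# Generic relax-and-fix sketch. The partition of the variables into blocks and
# the `solve` interface are invented placeholders for illustration.

def relax_and_fix(blocks, solve):
    """`blocks` is an ordered partition of the integer variables (e.g. one block
    per container or per group of boxes). At iteration k, the variables of
    block k are kept integer, earlier blocks are fixed at the values already
    found, and later blocks are relaxed to be continuous."""
    fixed = {}                                  # variable -> value fixed so far
    for k, block in enumerate(blocks):
        solution = solve(integer=set(block),
                         relaxed=set().union(*blocks[k + 1:]),
                         fixed=dict(fixed))
        fixed.update({var: solution[var] for var in block})
    return fixed                                # a (hopefully good) feasible start
```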
- A Heuristic Approach to Minimize the Number of Shifts in Container Ship Stowage Plans
Jan Christiaens (KU Leuven) Co-authors: J. Verstichel, G. Vanden Berghe Abstract: The container ship stowage planning problem was introduced to the research group by the container transport company Euroports. The goal of this so-called Master Bay Plan Problem (MBPP) is to minimize the total stowage time needed to transport a given set of cargo. We developed a model (Sliced Bar Packing Model) and a heuristic to produce such stowage plans. Results show that the SBPM heuristic outperforms the Suspensory Heuristic (SH) introduced by Avriel et al.
- Time Development of new Hydrogen Transmission Pipeline Networks for France
Daniel De Wolf (Université du Littoral Côte d'Opale) Co-authors: Jean ANDRE, Stephane AURAY, Mohamed-Mahmoud MEMMAH Abstract: The development of a hydrogen economy will require a transportation infrastructure to deliver hydrogen from production sites to end users. In the long term, the least-cost infrastructure for transmission and distribution is expected to be a pipeline network. In the short term, other transportation modes could be more competitive than pipelines. In this paper, we deal with the temporal deployment of a new hydrogen transportation infrastructure. Starting from the hydrogen pipeline transmission network, we propose a backward heuristic approach for the deployment over time of the designed networks by considering alternative transportation modes. This approach is then illustrated on a regional hydrogen transportation case for France.
- Solution methods for the cycle trip planning problem
Cédric Verbeeck (Ghent University) Co-authors: P. Vansteenwegen, E.-H. Aghezzaf Abstract: We present a novel approach to path finding in a directed graph, namely finding the route with the highest score, starting at one of the possible starting locations and with a length between a minimum and maximum distance. This setting is motivated by the problem that a recreational cyclist faces when searching the nicest route of a certain length. Our contributions are a clear definition and a mathematical model of the cycle trip planning problem (CTPP), a branch-and-cut approach for small instances and a metaheuristic that solves CTPP instances to near optimality in on average 0.6 seconds of computation time.
Thursday 14:00-15:40 TB-4: COMEX - Applications to Economy Room Pentagone 0A11 - Chair: W. Brauers
- Heuristics for Varian's Index
Bart Smeulders (KU Leuven) Co-authors: Frits C.R. Spieksma Abstract: Varian's index (VI) is a measure of the amount of irrationality displayed in purchasing decisions. In this paper, we look at the problem of computing Varian's index. The problem is known to be NP-hard, and previous work has focused on heuristics. We propose several new heuristics, partially inspired by the close link with the feedback arc set and feedback vertex set problems, which improve on known algorithms.
- Investment in Shares by MULTIMOORA Multiple Objectives Optimization
Willem K. M. Brauers (Vilnius Gediminas Technical University and University of Antwerp, Faculty of Applied Economics) Co-authors: Romualdas Ginevicius (Faculty of Business Management, Dept. of Economics and Management of Enterprises, Vilnius Gediminas Technical University) Abstract:
Shareholders participate in the capital of a company. In a company quoted on a stock exchange, this ownership is rather passive, as the shareholders are excluded from any form of management, with the exception of the reference shareholders.
Investments in stocks from a selection of companies with different activities or different locations will diminish the risk, after the saying: "you must not put all your eggs in one basket". In addition, the advice of experts will help too. This advice may take different forms, such as:
credit rating agencies like Moody's, Standard & Poor's and Fitch; a general theory about investment in stocks such as the Buffett philosophy; the advice of one or several experts based on their personal experience, company balance sheet analyses, interviews with managers or sampling. Why not on the basis of Multiple Objectives Optimization, which would be new, and preferably for leading world indices such as the Dow Jones Industrial (New York), FTSE100 (London) or Nikkei225 (Tokyo)?
Taking into consideration for instance 6 objectives would lead to a matrix of 600 elements for the FTSE100 London and for Nikkei225 Tokyo even more. Therefore a MOO approach which can handle large matrices has to be chosen. MULTIMOORA responds to this condition and, even more, is composed of three different methods which can control each other and is based on dimensionless measures, excluding the difficult problem of normalization. In order to summarize the 3 outcomes a Theory of Domination is applied.
To make the application simpler, the Belgian Bel20 Index was used, but with 8 objectives specific for Belgium. With the three bank shares excluded, due to the instability of banks at the moment, the matrix involved still counted 102 elements. As a measure of importance, a 9th objective was added in the form of the opinion of the analysts for each share, which expresses a general opinion on each share.
The analysts' opinions lead to the following advice:
1 – 1.49: buy
1.5 – 2.49: increase your stock
2.5 – 3.49: hold
3.5 – 4.49: decrease your stock
4.5 – 5: sell.
General remarks: the application concerns the past. Pure extrapolation makes no sense for such a fluctuating market. Many other factors have to be taken into consideration. Regular revisions are needed. Companies with an effective management are assumed. Finally, there are the Unknown Unknowns, or may we say the Economics of Uncertainty? For instance, a combination of earthquake, tsunami and atomic plant disasters is certainly fatal for insurance companies.
Concerning the Belgian BEL20 Index, changes in composition are typical, due to severe regulations and the rather small size of the Belgian companies. In addition, since the birth of NYSE Euronext, grouping the exchanges of New York, Brussels, Paris, Amsterdam and Lisbon, the importance of the Brussels exchange has diminished considerably. Therefore this investigation is just an exercise in how to conduct similar studies. It would be preferable that a larger group than our two researchers repeat the application for a larger stock exchange index such as those of Tokyo, London or New York.
- Estimating collaborative profits under varying partner characteristics and strategies
Christine Vanovermeire (University of Antwerp) Co-authors: Daniel Palhazi Cuervo; Kenneth Sörensen Abstract: Horizontal logistic collaboration leads to large profits when the right coalition is formed. Most previous research, however, explains the profit differences using market conditions at the coalition level. We show that the level of profit is highly dependent on finding a correct combination of partners. By estimating a model using the average order size, number of orders and maximal delay of each partner separately, as well as the interaction effects of these parameters, the number of profitable situations that can be identified increases significantly. Finally, it is demonstrated that cost allocation plays an important role when selecting a partner to collaborate with.
- Gain sharing in a collaborative selective vehicle routing problem
Christof Defryn (University of Antwerp) Co-authors: Kenneth Sörensen Abstract: The increasingly popular concept of horizontal collaboration in logistics creates the need for an alternative approach that describes the collaborative context of the underlying vehicle routing problems. Three subproblems are identified by the authors: the positioning and strategic behaviour of the individual partners with respect to the group, the solving of the vehicle routing problem, and gain sharing. Omitting one of these aspects, or neglecting the existing interdependencies, results in an incomplete representation of the collaborative environment. The behaviour of two cost allocation approaches (the Shapley value and a problem-specific weighted allocation) is studied for different scenarios of the selective vehicle routing problem (also called the team orienteering problem).
Thursday 14:00-15:40 TB-5: Networks Room Pentagone 0A07 - Chair: B. Fortz
- A Branch-and-Price Algorithm for the Network Pricing Problem with Connected Toll Arcs
Alessia Violin (Université Libre de Bruxelles) Co-authors: Bernard Fortz, Martine Labbé Abstract: We propose a branch-and-price approach to solve a pricing problem on a network with connected toll arcs. The model is obtained with a Dantzig-Wolfe reformulation, and a column generation algorithm is used to solve the linear relaxation. Various techniques have been considered to improve the efficiency of the solving algorithm, such as stabilisation and different branching strategies. Numerical results show the effectiveness of this approach.
- Maximizing User Benefits by Changes in Incomplete Networks
Corrinne Luteyn (KU Leuven) Co-authors: Pieter Vansteenwegen Abstract: In this research a number of Vehicle Routing Problems, in which only a subset of the customers has a demand, are considered in an incomplete network. We have investigated what would be the best improvement of this incomplete network, such that the total travel time of the vehicles in these routing problems is minimized. Both a Mixed Integer Programming formulation and a heuristic are presented to determine this best improvement.
- Models for traffic engineering with multiple spanning tree protocols
Martim Moniz (Université Libre de Bruxelles) Co-authors: Bernard Fortz; Luís Gouveia Abstract: With the increasing demand for Internet and cloud computing services, the need for large scale data centers has become paramount. This emphasizes the critical necessity of improving the performance of these telecommunication networks. In these data centers, switched Ethernet networks are becoming popular, as they are more effective in the management of traffic.
Loops in the network's topology can result in broadcast radiation. As such, Ethernet networks only activate, at a given time, a cycle-free subset of the existing links. To ensure this, these networks implement the IEEE 802.1d standard, which defines the topology of the network as a spanning tree. One of the drawbacks of this protocol is that the network only ends up using a small number of the existing links (#nodes-1).
To overcome this drawback, Ethernet networks began using the Multiple Spanning Tree Protocol, which maintains a set of spanning trees that are used for routing the demands in the network. Each spanning tree is allocated to a pre-defined set of demands. This is highly advantageous for the traffic performance of Ethernet networks, as the traffic can be spread over a larger number of links.
We present two mixed integer programming models for the Traffic Engineering problem of optimally designing a network implementing the Multiple Spanning Tree Protocol, such that link utilization is minimized. Although some variants of this problem have been treated in the literature, this is the first approach that focuses on using exact methods.
Both formulations are based on models that have been previously used for the design of single spanning trees. The Multi-commodity Flow Formulation (MFF) makes use of flows to design the spanning trees, while the Rooted Directed Formulation (RDF) uses variables to orient the edges into arborescences rooted at each node of a VLAN.
We present tests that compare the two formulations in terms of linear relaxation strength and computing time. We also propose a binary search algorithm that has proven to be efficient in obtaining quasi-optimal solutions for this problem.
- Optimal capacitated ring trees
Alessandro Hill (University of Antwerp) Co-authors: Stefan Voss Abstract: We study a new network design model based on ring trees, which are the union of trees and 1-trees. The objective is the minimization of edge costs while two customer types have to be connected to a depot, ensuring single and double node connectivity, respectively, by using optional Steiner nodes. Bounds on the number of customers per ring tree and on the number of ring trees apply. We present computational results based on two mathematical formulations and a local search heuristic for various reliability scenario test instances.
Thursday 16:10-17:25 TC-1: Mixed-integer nonlinear programming Room Vesale 023 - Chair: Y. Crama
- Quadratic reformulations of nonlinear binary optimization problems
Yves Crama (HEC - University of Liege) Co-authors: M. Anthony, E. Boros, A. Gruber Abstract: We consider the problem of minimizing a pseudo-Boolean function f(x), i.e., a real-valued function of 0-1 variables. Several authors have recently proposed to reduce this problem to the quadratic case by expressing f(x) as min { g(x,y) : y in {0,1}^m }, where g is a quadratic function of x and of additional binary variables y. We establish lower and upper bounds on the number of additional y-variables needed in such a reformulation, both for the general case and for the special case of symmetric functions.
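For illustration, two standard quadratization devices from the literature (not necessarily the constructions behind the bounds of this talk): the Rosenberg penalty for substituting a product of two variables, and the single-extra-variable quadratization of a negative monomial.

```latex
% Rosenberg substitution: for binary x_1, x_2, y,
\[
  x_1 x_2 - 2x_1 y - 2x_2 y + 3y \;\ge\; 0,
  \qquad \text{with equality iff } y = x_1 x_2,
\]
% so adding this expression with a large positive penalty weight forces
% y = x_1 x_2 at any minimizer, and x_1 x_2 can then be replaced by y.
% Quadratization of a negative monomial with one extra variable:
\[
  -\prod_{i=1}^{k} x_i \;=\; \min_{y\in\{0,1\}} \; y\Bigl(k-1-\sum_{i=1}^{k} x_i\Bigr).
\]
```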
- A Conic Optimization Approach to SKU Rationalization
Tanguy Kegelart (Université catholique de Louvain, Louvain School of Management) Co-authors: Mathieu Van Vyve Abstract: Expanding the variety and number of offered products is attractive for a firm to fit customer needs. Nevertheless, the greater complexity and the proliferation of stock-keeping units (SKUs) without substantial differentiation may not substantially improve customer satisfaction while raising costs.
Based on the principle that the product-line size involves operational implications and particularly manufacturing and holding costs, this paper develops a mixed-integer nonlinear mathematical program (MINLP) to support efficient product portfolio reductions.
Basically, we balance the demand contraction due to customer dissatisfaction against the elimination of fixed costs and the risk-pooling effects. An off-the-shelf Mixed-Integer Quadratic Problem (MIQP) solver provides optimal solutions to the proposed conic quadratic reformulation, and a real-life industrial case illustrates the program and the efficiency of the algorithm.
Findings show that mathematical programming, subject to various assumptions and estimations, is efficient to rationalize portfolios of up to at least 400 SKUs.
This paper also opens up further research on algorithmic improvements, on different ways of modelling the mentioned observations, and on the inclusion of other product range effects on the global profit.
- A hybrid piecewise-linear/Newton's method with corrected Jacobian matrix for multi-period water production and distribution in large networks
Derek Verleye (Ghent University) Co-authors: Derek Verleye Abstract: The typical underlying optimisation model for the planning problem of water production and distribution operations is an MINLP, which is very hard to solve in practice. The nonlinearities are imposed by hydraulic pressure losses in pipes and power output in pumps. Additionally, switching the pumps on/off and allowing reverse flow at water towers introduce binary variables into the model. Here, we use a PWL method to find fast (slightly inaccurate) solutions, which are then improved using a corrected Newton's method.
Thursday 16:10-17:25 TC-2: Decision Analysis 1 Room Vesale 020 - Chair: S. Eppe
- Learning a majority rule model from large sets of assignment examples
Olivier Sobrie (University of Mons / Ecole Centrale Paris) Co-authors: Olivier Sobrie Abstract: In decision analysis, sorting procedures aim at assigning alternatives defined on multiple criteria to categories. We consider the MR-Sort procedure, a simpler version of ELECTRE Tri. Eliciting explicitly the parameters of such a procedure is not easy for a decision maker. This is why several studies have been carried out to learn the parameters of such a model on the basis of assignment examples. To our knowledge, none of these algorithms is designed to deal with large sets of assignment examples. We address this issue by proposing a new metaheuristic. Our research includes experimental results on artificial and real data sets.
- Understanding people’s preferences: an integrated algorithm for the optimal design of discrete choice experiments with partial profiles
Daniel Palhazi Cuervo (University of Antwerp) Co-authors: Peter Goos, Roselinde Kessels and Kenneth Sörensen Abstract: Discrete choice experiments are conducted to identify the attributes that drive people’s preferences when choosing between competing options of products or services. A large number of attributes might increase the complexity of the choice task, and might have a detrimental effect on the quality of the results obtained from the choice experiment. In order to reduce the cognitive effort required by the experiment, researchers resort to experimental designs where the levels of some attributes are held constant in a choice set. These designs are called partial profile designs. In this paper, we propose an integrated algorithm for the generation of optimal designs for discrete choice experiments with partial profiles. This algorithm optimizes the set of constant attributes and the levels of the varying attributes simultaneously. An extensive computational experiment shows that the designs produced by the integrated algorithm outperform those obtained by existing algorithms, and match the optimal designs for a number of benchmark instances. We also evaluate the performance of the algorithm under varying experimental conditions and analyse the differential impact on the structure of the designs generated.
- An empirical distribution-based approximation of PROMETHEE II's net flow scores
Stefan Eppe (Université libre de Bruxelles) Co-authors: Yves De Smet Abstract: PROMETHEE II is a well-known outranking method. However, as it relies on the pairwise comparison of all alternatives, the overall computational load to determine a ranking increases quadratically with the instance size. We propose an empirical distribution-based approximation model that reaches a significantly higher approximation quality than a recently published approach, at the relatively small initial cost of computing the empirical distribution function of the evaluations on each criterion.
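For reference, the standard PROMETHEE II net flow score that the approximation targets, where pi(a,b) denotes the aggregated preference index of a over b:

```latex
% Net flow score of an alternative a in a set A of n alternatives:
\[
  \phi(a) \;=\; \frac{1}{n-1}\sum_{b\in A\setminus\{a\}}
                \bigl(\pi(a,b)-\pi(b,a)\bigr),
\]
% hence the O(n^2) pairwise comparisons whose cost the approximation avoids.
```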
Thursday 16:10-17:25 TC-3: Routing Room Vesale 025 - Chair: K. Sörensen
- An iterative approach to generate k dissimilar VRP solutions
Luca Talarico (University of Antwerp) Co-authors: K. Sörensen, J. Springael Abstract: In this work we define a new problem, the aim of which is to find a set of k dissimilar alternative solutions for a vehicle routing problem. This problem has several practical applications in the cash in transit sector and in the transportation of hazardous materials. A min-max mathematical formulation is developed that minimizes the cost of the worst solution. A distance measure is defined based on the edges shared between pairs of alternative solutions. An effective iterative algorithm is presented and tested using large and medium size instances for the vehicle routing problem.
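As an illustration of an edge-based dissimilarity between two VRP solutions (a sketch of the general idea; the exact measure used in the paper may differ):

```python
def edge_set(solution):
    """Undirected edges used by a VRP solution given as a list of routes,
    each route being a list of customer indices; 0 denotes the depot."""
    edges = set()
    for route in solution:
        tour = [0] + list(route) + [0]
        for u, v in zip(tour, tour[1:]):
            edges.add(frozenset((u, v)))
    return edges

def dissimilarity(sol_a, sol_b):
    """Share of edges that are NOT common to both solutions
    (a Jaccard-style distance over edge sets)."""
    ea, eb = edge_set(sol_a), edge_set(sol_b)
    return 1.0 - len(ea & eb) / len(ea | eb)

# Two toy solutions with two routes each; the second reverses part of a route.
print(dissimilarity([[1, 2, 3], [4, 5]], [[1, 3, 2], [4, 5]]))
```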
- Improved Model for the Maximum Covering and Patrol Routing Problem
Reginald Dewil (KU Leuven) Co-authors: P. Vansteenwegen, D. Cattrysse, D. Van Oudheusden Abstract: This work shows how the maximum covering and patrol routing problem (MCPRP) can be modelled as a minimum cost network flow problem (MCNFP). Using the MCNFP model, all available benchmark instances of the MCPRP are solved to optimality in less than 0.4s per instance. Several practical additions to the MCPRP, such as different start and end locations of patrol cars and overlapping shift durations can be modelled by a multi-commodity minimum cost network flow model.
- The impact of preprocessing for solving the train routing problem
Thijs Dewilde (KU Leuven) Co-authors: S. Burggraeve, T. Ameye, P. Sels, D. Cattrysse, P. Vansteenwegen Abstract: When a train approaches a station, a route that traverses the switch zone and leads towards a platform is determined. Once this route is set for that particular train, it becomes unchangeable and the only option to avoid accidents is to let that train wait until its whole route is cleared. For large and complex stations where many train lines merge and intersect, blocked routes are one of the main causes of delay. To reduce this delay, one should carefully consider the routing of trains through station areas when constructing a railway timetable. However, even for medium-sized stations with a relatively simple infrastructure layout, the number of possible routes for each train can be very high.
Thursday 16:10-17:25 TC-4: Graphs Room Pentagone 0A11 - Chair: H. Mélot
- Number of non-equivalent colorings of a graph
Hadrien Mélot (Université de Mons, Département d'Informatique, Faculté des Sciences) Co-authors: A. Hertz Abstract: We study the number of non-equivalent ways of properly coloring a given graph G. We show some similarities and differences between this graph invariant and the well known chromatic polynomial. Relations with Stirling numbers of the second kind and with Bell numbers are also given. We then determine the value of this invariant for some classes of graphs. We finally prove some bounds on the number of non-equivalent colorings of graphs with fixed maximum degree, but leave other bounds as open problems.
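As an illustration of the kind of relation involved (a classical fact, not a result specific to this talk): for the edgeless graph on n vertices, every partition of the vertex set into non-empty colour classes is a proper colouring, so the number of non-equivalent colourings equals the Bell number.

```latex
% S(n,k) denotes the Stirling numbers of the second kind, B_n the Bell numbers:
\[
  \sum_{k=1}^{n} S(n,k) \;=\; B_n .
\]
```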
- Price on symmetrisation and average distance
Romain Absil (UMons) Abstract: The price of symmetrisation is a concept recently introduced by Absil and Mélot that aims to compare fundamental differences (gap and quotient) between directed and undirected graphs. It focuses on the following question, called the price of symmetrisation problem: "What is the maximum difference between the value of an invariant computed on a directed graph G and its value computed on the underlying undirected graph of G?"
We present some partial results regarding the price of symmetrisation and the average distance, proving some classes of directed graphs non-optimal. We also illustrate some graph transformations that strictly increase the price of symmetrisation in directed graphs.
- Solving the Elementary Longest Path as a Mixed Integer Program
Quoc Trung Bui (Université catholique de Louvain) Co-authors: Yves Deville, Pham Quang Dung Abstract: The Elementary Longest Path Problem (ELPP) consists of determining the longest path in a given graph such that every node appears at most once. The problem arises in a variety of contexts, e.g., information retrieval, high-performance printed circuit board design, multi-robot patrolling. This problem is known to be NP-hard.
In the literature, an exact algorithm for the ELPP is proposed in [7], where a Constraint Programming model is described together with intelligent search strategies to solve the ELPP to optimality. One can easily show that, by adding some artificial nodes, the ELPP and the Elementary Shortest Path (between two specified nodes) Problem (ESPP) are equivalent. Two distinct mathematical formulations for the ESPP have been proposed in [2].
In this work, we propose a branch-and-cut algorithm based on the classical formulation in [2]. First, we show a transformation from the ESPP to the Asymmetric Traveling Salesman Problem (ATSP) and vice versa. Then we adapt the valid inequalities of the ATSP (the Dk inequalities, Tk inequalities, and 2-matching inequalities) for the ESPP. Simple valid inequalities called maximum outflow inequalities for the ESPP are also proposed. We additionally developed a simple and efficient algorithm for the separation of sub-tour elimination constraints. If many inequalities are generated in the separation step and are all added to the model, the size of the model increases and the computation time for the linear relaxation may become costly. We therefore propose an inequality filter to keep only the best constraints generated in the separation step. The filter is based on the idea of classes of inequalities and on a violation threshold.
In the experimental part of this work, we compare our branch-and-cut algorithm with two other existing algorithms: the Constraint Programming algorithm from [7], and the branch-and-cut algorithm from [2] adapted to the ESPP. The computational results show that our branch-and-cut algorithm is faster than the one adapted from [2] and strongly outperforms the algorithm based on Constraint Programming.
Thursday 16:10-17:25 TC-5: Scheduling Room Pentagone 0A07 - Chair: S. Hanafi
- Dynamic Programming for Scheduling Locks in Sequence
Ward Passchyn (KU Leuven) Co-authors: F.C.R. Spieksma Abstract: Goods transport over inland waterways is a promising mode of transportation. Locks, which are needed to regulate the water level, constitute bottlenecks along many canals and inland waterways. In an attempt to optimize the performance of this transport mode, we investigate the scheduling aspect of operating these locks. We consider the setting where multiple locks are arranged in series (as in a canal). Our aim is to minimize the total waiting time of ships travelling through the canal.
- A Combinatorial Benders' decomposition for the lock scheduling problem
Jannes Verstichel (KU Leuven campus Gent) Co-authors: Joris Kinable, Patrick De Causmaecker, Greet Vanden Berghe Abstract: Ships must often pass one or more locks when entering or leaving a tide independent port or when travelling on a network of waterways. These locks control the flow and the level of inland waterways, or provide a constant water level for ships while loading or unloading at the docks.
We consider locks with a single chamber or several (possibly different) parallel chambers, which can transfer one or more ships in a single operation. The resulting lock scheduling problem consists of three strongly interconnected sub problems: scheduling the lockages, assigning ships to chambers, and positioning the ships inside the chambers. By combining the first two problems into a master problem and using the packing problem as a sub problem, a decomposition is achieved for which an efficient Combinatorial Benders approach has been developed. The master problem is solved first, thereby sequencing the ships into a number of lockages. Next, the feasibility of each lockage is verified by solving the corresponding packing sub problem, possibly returning a number of combinatorial inequalities (cuts) to the master problem.
Experiments on a large test set show that this decomposition method strongly outperforms an existing monolithic approach, especially for instances with a complex packing sub-problem. New optimal results for instances with up to 90 ships are generated in less than 12 hours, while a heuristic version of the algorithm generates (near-)optimal results for instances with up to 50 ships in less than 10 minutes.
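A skeleton of the master/sub-problem loop described above (illustration only; solve_master, packs and no_good_cut are hypothetical placeholders, not the authors' implementation):

```python
# Combinatorial Benders skeleton: sequence ships into lockages (master), check
# each lockage with the packing sub-problem, and cut off infeasible lockages.

def combinatorial_benders(ships, chambers, solve_master, packs, no_good_cut):
    """Iterate until the master solution is packing-feasible:
    solve_master(ships, chambers, cuts) returns a schedule as a list of lockages,
    packs(lockage) checks the packing sub-problem, and no_good_cut(lockage)
    builds a combinatorial inequality excluding that ship-to-chamber grouping."""
    cuts = []
    while True:
        schedule = solve_master(ships, chambers, cuts)
        infeasible = [lockage for lockage in schedule if not packs(lockage)]
        if not infeasible:
            return schedule                      # every lockage fits: done
        cuts.extend(no_good_cut(lockage) for lockage in infeasible)
```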
- A dynamic programming algorithm for a robotic cell problem with batch transfer
Nacira Chikhi (USTHB / AMCD&RO UVHC / LAMIH) Co-authors: M. Abbas, A. Bekrar, R. Benmansour and S. Hanafi Abstract: In this paper, we study a robotic cell problem, more precisely a robotic flow shop problem with two dedicated machines at the first stage and a common machine at the second stage. There are two types of jobs, where each job has to be processed on one dedicated machine at the first stage and then on the common machine. The jobs are transported from the first stage to the second one, and the robot can carry at most c jobs in one shipment. There are several applications of this problem in flexible manufacturing systems. The main contribution of this study is to propose exact and approximate methods for the robotic cell problem. The objective is to find a joint schedule of production and transportation such that the makespan is minimized. We prove the NP-hardness of a special case of the general problem. We also provide polynomial time algorithms for some particular problems where the processing times are identical on the first stage. We then develop a dynamic programming algorithm to solve the general case optimally. The experiments show the efficiency of our algorithms.
Friday 9:00-10:15 FA-1: Queuing Room Vesale 023 - Chair: S. Wittevrongel
- Diffusion approximation of the virtual delay in two queueing systems coupled by their service process
Ben Lauwens (Koninklijke Militaire School) Abstract: In this paper the Fokker-Planck equation, also known as the Kolmogorov forward equation, of the two-dimensional elementary return process on the positive quarter-plane is rigorously derived using the semigroup formulation of continuous-time Markov processes. The result is used to model the transient behaviour of the virtual delay in two queueing systems coupled by their service processes. The main advantage of this modelling technique is the inclusion of the behaviour at the boundaries of the domain: classic diffusion models based on heavy-traffic scenarios fail to model the probability of an empty queue, whereas the proposed one includes the exact behaviour.
- Analysis of a discrete-time queue with constant service capacity
Sabine Wittevrongel (Ghent University) Co-authors: H. Bruneel, W. Rogiest, J. Walraevens and S. Wittevrongel Abstract: We study a single-server discrete-time queueing model with general independent arrivals and a two-component service process. Each customer represents a service demand: a random, arbitrarily distributed number of work units. The server disposes of a constant service capacity, i.e., a fixed number of work units that can be executed per time slot. We obtain closed-form results for the probability generating functions of the unfinished work in the system, the queueing delay of an arbitrary customer and, in a number of special cases, the number of customers in the system.
- A discrete-time queueing model with constant service times and blocking
Willem Mélange (Ghent University) Co-authors: Herwig Bruneel, Dieter Claeys, Joris Walraevens Abstract: We analyze a discrete-time queueing model where two types of customers, each having their own dedicated server, are accommodated in one single FCFS queue. Service times are deterministically equal to s time slots each. The types of consecutive arriving customers may be nonindependent. As a result, customers may (or may not) have the tendency to cluster according to their types, which may result in more (or less) blocking of one type by the opposite type. The paper reveals the impact of this blocking phenomenon on the performance measures.
Friday 9:00-10:15 FA-2: Decision Analysis 2 Room Vesale 020 - Chair: R. Bisdorff
- On weak ordering by choosing from valued outranking relations
Raymond Bisdorff (University of Luxembourg) Abstract: Polarised outrankings with considerable performance differences (weighted majority margins with vetoes and counter-vetoes) appear as weakly complete (and reflexive) relations (Bisdorff 2013). Constructing weak orderings (rankings with potential ties) from such outranking relations hence consists in computing a transitive closure of the given outranking relation. Determining optimal transitive closures is a computationally difficult problem (Hudry 2013). However, global scoring methods based on average ranks (like Borda scores) or average net flows, as in the PROMETHEE approach, may easily deliver such a heuristic closure. Now, such weak orderings may also result from the iterated application of a certain choice procedure, an approach called "ranking-by-choosing" (Bouyssou 2004). In this contribution we present such a new ranking-by-choosing approach based on the Rubis best choice method (Bisdorff 2008).
- An interpretable axiomatization of the h-index
Thierry Marchant (Ghent University) Co-authors: Denis Bouyssou Abstract: In the last ten years, many bibliometric indices have been proposed for comparing and/or evaluating scientists. Among these indices, the Hirsch index (or h-index) is probably the most popular one. Since it is not clear which index is best, some researchers have tried to enrich the debate by analyzing various indices from an axiomatic perspective. This stream of research has delivered various axiomatizations of the h-index. They pave the way towards a better understanding of the h-index, but they are not completely satisfactory. That is why we propose a new axiomatization.
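For reference, the h-index itself is easily computed (a minimal sketch of the indicator being axiomatized, not part of the axiomatization):

```python
def h_index(citations):
    """h-index: the largest h such that at least h of the papers have
    at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with at least 4 citations
assert h_index([25, 8, 5, 3, 3]) == 3
```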
- Multicriteria clustering: a summary of recent investigations
Yves De Smet (Université Libre de Bruxelles) Abstract: The application of traditional clustering techniques to multicriteria problems is not appropriate. Indeed, these techniques are often based on a distance measure that is, by definition, symmetric. On the contrary, the relations that allow the pairwise comparison of alternatives are most of the time asymmetric. Therefore, new methods have to be developed to take this distinctive feature into account. The goal of this presentation is to summarize three recent techniques (based on binary or valued preference matrices) for totally or partially ordered clustering.
Friday 9:00-10:15 FA-3: COMEX - Optimization 2 Room Vesale 025 - Chair: M. Labbé
- Competitive analysis of on-line algorithms: applications to Operations Research
Claudio Telha Cornejo (Université catholique de Louvain) Co-authors: Mathieu Van Vyve Abstract: On-line algorithms are procedures designed to operate over streams of requests. When new requests arrive, they must take action to satisfy them immediately. These actions often turn out to be non-optimal because of the uncertainty about future requests.
On-line algorithms became an important algorithmic paradigm in computer science, because of the intrinsic serial nature of several computational processes, and the need for quick procedures that permanently operate over them. For optimization problems, researchers developed a formal framework for on-line algorithms, with the notion of competitive analysis as indicator of performance of these algorithms.
In this talk, we motivate the competitive analysis of on-line algorithms from the perspective of operational research, by describing some applications to capacity acquisition, maintenance scheduling and lot-sizing problems.
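For reference, the standard notion used above (a generic statement, not specific to this talk): an on-line algorithm ALG for a minimization problem is $c$-competitive if, for every request sequence $\sigma$,
\[ \mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + \alpha, \]
where $\mathrm{OPT}(\sigma)$ is the cost of an optimal off-line solution that knows the entire sequence in advance and $\alpha$ is a constant independent of $\sigma$; the competitive ratio of ALG is the smallest such $c$.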
- Novel Formulations for Stackelberg Security Games
Carlos Casorrán-amilburu (ULB) Co-authors: Bernard Fortz, Martine Labbé, Fernando Ordóñez Abstract: Stackelberg Games involve contenders with opposing objectives who act sequentially. The Leader acts first and the Follower reacts to the Leader's strategy. The objective of the game is for the Leader to commit to a reward-maximizing strategy anticipating the Follower's best response.
In a Bayesian Stackelberg Game, which is NP-hard [3], the Leader faces one Follower drawn from a group of possible Follower types; otherwise, the game is called a Single-type-of-Follower Stackelberg Game, which is solvable in polynomial time [3]. Moreover, games in which the respective strategies of the Leader and the Follower consist in covering and attacking targets are called Stackelberg Security Games.
We present novel tight formulations for the Single-type-of-Follower Stackelberg Game and for the Single-type-of-Attacker Stackelberg Security Game, significantly improving the formulations currently present in the literature [1], [2]. Further, we show that both formulations provide a complete linear description of the convex hull of the sets of feasible solutions of the corresponding problems, and that one formulation is the projection of the other onto the appropriate space. The formulations presented for the Bayesian case improve the continuous relaxations of existing formulations. Computational experiments are carried out to compare our formulations with those in the literature.
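As background (generic notation for the standard model, not the new formulations announced above), a Single-type-of-Attacker Stackelberg Security Game with $m$ defender resources and target set $T$ can be written as the bilevel program
\[
\max_{x,\,t^{*}} \; x_{t^{*}} D^{c}_{t^{*}} + (1 - x_{t^{*}}) D^{u}_{t^{*}}
\quad \text{s.t.} \quad
t^{*} \in \arg\max_{t \in T} \big( x_t A^{c}_t + (1 - x_t) A^{u}_t \big), \qquad
\sum_{t \in T} x_t \le m, \qquad 0 \le x_t \le 1,
\]
where $x_t$ is the probability that target $t$ is covered and $D$ (resp. $A$) denotes the defender's (resp. attacker's) payoff when the attacked target is covered ($c$) or uncovered ($u$), ties in the attacker's best response being broken in favour of the defender (strong Stackelberg equilibrium). The formulations of [1], [2] are, roughly speaking, mixed-integer linearizations of programs of this type.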
References
[1] Paruchuri, P., Pearce, J. P., Marecki, J., Tambe, M., Ordonez, F. & Kraus, S. (2008), Playing games for security: an efficient exact algorithm for solving Bayesian Stackelberg games, in Lin Padgham, David C. Parkes, Muller & Parsons (eds.), AAMAS (2), IFAAMAS, 895-902.
[2] Kiekintveld, C., Jain, M., Tsai, J., Pita, J., Ordonez, F. & Tambe, M. (2009), Computing Optimal Randomized Resource Allocations for Massive Security Games, AAMAS-09.
[3] Conitzer, V. & Sandholm, T. (2006), Computing the Optimal Strategy to Commit to, Computer Science Department, Paper 1456.
- On the complexity of separation: the three-index assignment problem
Frits Spieksma (KU Leuven) Co-authors: T. Dokka and I. Mourtos Abstract: A fundamental step in any cutting plane algorithm is separation: deciding whether a violated inequality exists. It is customary to express the complexity of a separation algorithm in terms of the number of variables. Here, we argue that the input to a separation algorithm can alternatively be expressed using the support of the vector x, i.e., the number of variables having a positive value.
We apply this idea to two known classes of valid inequalities for the three-index assignment problem, and show that faster separation algorithms for these classes exist.
Friday 9:00-10:15 FA-4: Production Room Pentagone 0A11 - Chair: D. Tuyttens
- CP Approach for the Multi-Item Discrete Lot-Sizing Problem with Sequence-Dependent Changeover Costs
Vinasetan Houndji (Université Catholique de Louvain) Co-authors: Pierre Schaus, Laurence Wolsey Abstract: The Discrete Lot Sizing and Scheduling Problem (DLSP) is a production planning problem which involves determining a minimal-cost production schedule in which the machine capacity restrictions are not violated and the demand for all products is satisfied. The planning horizon is discrete and finite. The variant considered here, called the Pigment Sequencing Problem [1], is a multi-item, single-machine problem with production capacity limited to one item per period. There are storage costs and sequence-dependent changeover costs satisfying the triangle inequality. Without loss of generality, we can assume that demands are normalized and hence binary.
The Pigment Sequencing Problem is an NP-hard combinatorial optimization problem for which medium-sized instances can be solved effectively using an appropriate Mixed Integer Programming formulation (such as those introduced in [1]). However, as far as we know, no CP model has been proposed for it.
We propose two CP models for the problem:
- the first mainly uses a global cardinality constraint [2] to enforce the production of all required items and some "atleast" constraints so as to respect each demand deadline (see the sketch after this abstract);
- the second is similar to the successor-based CP model for the Traveling Salesman Problem (TSP), where the cities to be visited represent the different demands and the distances between them are the corresponding transition (changeover) costs. The difference from the classical TSP lies in the storage costs, which must also be taken into account.
We use Large Neighborhood Search (LNS) [3] with several variants of relaxation procedures and heuristic search in order to quickly discover near-optimal solutions.
The results show that Constraint Programming is potentially an effective approach for tackling Discrete Lot Sizing Problems.
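To make the first CP model above concrete, one possible formalization (our notation, which need not match the authors'): let $x_t \in \{0, 1, \dots, I\}$ denote the item produced in period $t$ (with $0$ standing for an idle period), $d^{i}_{t} \in \{0,1\}$ the demand for item $i$ due at period $t$, $h^{i}$ the unit storage cost of item $i$ and $q^{ij}$ the changeover cost from item $i$ to item $j$. The deadline ("atleast") constraints then read
\[
\big|\{\, t' \le t : x_{t'} = i \,\}\big| \;\ge\; \sum_{t' \le t} d^{i}_{t'}
\qquad \text{for every item } i \text{ and period } t,
\]
while a global cardinality constraint fixes the total number of periods producing item $i$ to $\sum_t d^{i}_{t}$. The objective sums the changeover costs $q^{ij}$ between consecutive non-idle productions and the storage costs $h^{i}$ weighted by the number of periods each produced unit is held before its deadline.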
References
[1] Pochet, Y., Wolsey, L.: Production Planning by Mixed Integer Programming. Springer, New York (2005).
[2] Régin, J.-C.: Generalized arc consistency for global cardinality constraint. Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence, Portland, USA, 209--215 (1996).
[3] Shaw, P.: Using constraint programming and local search methods to solve vehicle routing problems. Proceedings of the Principles and Practice of Constraint Programming Conference, Pisa, Italy, 417--431 (1998).
- Local search approaches for 2D nesting problems
Tony Wauters (KU Leuven, Department of Computer Science, CODeS - iMinds-ITEC) Co-authors: Sam Van Malderen, Greet Vanden Berghe Abstract: Cutting complex shapes from an (often expensive) piece of material (e.g. leather hides, plastic, textile) is a recurring problem in practice. Manufacturers strive to use the material as efficiently as possible. This paper addresses 2D nesting problems, where a set of patterns must be cut from one or multiple sheets of raw material such that the material usage is maximized (or the waste is minimized). A local search procedure, which operates in the input space of an underlying first-fit constructive algorithm, is applied to industrial data. The experiments enable a critical assessment of the importance of all algorithmic components.
- Exact resolution of the Cover Printing Problem
Arnaud Vandaele (UMONS, Faculté Polytechnique) Co-authors: Daniel Tuyttens Abstract: We begin this work by comparing different models of the Cover Printing Problem (CPP) and the results obtained with optimization software. Thereafter, we present the development of a dedicated branch-and-bound algorithm to solve the CPP optimally. We also discuss the use of different bounds and their effectiveness. The results obtained are very promising, as they are the first exact results for the CPP.
Friday 14:00-15:40 FB-1: Data Analysis 2 Room Vesale 023 - Chair: P. Fortemps
- The carry-over effect does exist in tennis
Dries Goossens (Ghent University) Co-authors: Frits Spieksma Abstract: We study whether the carry-over effect affects the outcome of a tennis match. We collect data from Grand Slam tournaments from 1992 to 2011. For each match, we determine a priori chances of winning for both players, based on their rankings. We compare the result of each match with the result that could be expected had the players played an equal number of sets in their previous match, in order to gain insight into the influence of the carry-over effect and its significance.
- Alternative Pairwise Decomposition Techniques for Label Ranking
Massimo Gurrieri (UMONS) Co-authors: Philippe Fortemps, Xavier Siebert Abstract: This work focuses on label ranking, a particular task of preference learning in which the problem is to learn a mapping from instances to rankings over a finite set of labels. This paper discusses and proposes alternative reduction techniques that decompose the original problem into binary classification problems related to pairs of labels and that can take label correlation into account during the learning process while limiting computational complexity.
- Constrained Clustering using Column Generation
Behrouz Babaki (KU Leuven) Co-authors: Tias Guns, Siegfried Nijssen Abstract: In recent years, it has been realised that many problems in data mining can be seen as pure optimisation problems. In this work, we investigate the problem of constraint-based clustering from an optimisation point of view. The use of constraints in clustering is a recent development and allows prior beliefs about desirable clusters to be encoded. This paper proposes a new solution for constrained minimum-sum-of-squares clustering. Contrary to most earlier approaches, it is exact and provides a fundamental approach for including constraints. The proposed approach uses column generation in an integer linear programming setting. The key insight is that clustering constraints can be pushed into a branch-and-bound algorithm for solving subproblems of the column generation process. Experimental results show the feasibility of the approach, while at the same time raising a number of challenges related to the master problem. We believe this problem can act as a challenging benchmark for branch-and-price techniques.
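A minimal sketch of the master problem, assuming the usual set-partitioning formulation of minimum-sum-of-squares clustering with a fixed number $k$ of clusters (notation ours): let $\mathcal{C}$ be the (exponentially large) set of candidate clusters, $c_C$ the within-cluster sum of squared distances of cluster $C$, and $z_C \in \{0,1\}$ the decision to select $C$:
\[
\min \sum_{C \in \mathcal{C}} c_C\, z_C
\quad \text{s.t.} \quad
\sum_{C \ni i} z_C = 1 \;\; \text{for every point } i,
\qquad
\sum_{C \in \mathcal{C}} z_C = k .
\]
Column generation solves the linear relaxation by repeatedly asking a pricing subproblem for a cluster with negative reduced cost; the key insight mentioned above is that clustering constraints (e.g. must-link and cannot-link pairs) can be enforced inside the branch-and-bound algorithm that solves this subproblem.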
- Geometrical lower bound for the nonnegative rank of slack matrices
Julien Dewez (Université catholique de Louvain) Co-authors: Nicolas Gillis, François Glineur Abstract: The nonnegative rank of an element-wise nonnegative matrix is the minimum number of nonnegative rank-one factors needed to reconstruct it exactly. Computing this rank and a related factorization is NP-hard. However, it can be used to describe an extension of a convex polytope, i.e., a higher-dimensional polytope which projects onto the original polytope, with a minimum number of facets. Such extensions of polytopes have important applications in combinatorial optimization.
More precisely, many combinatorial optimization problems can be formulated as the optimization of a linear objective function over a polytope which may have a very large number of facets (possibly increasing exponentially with the size of the problem). Because of the high number of facets, computing the optimal solution of this linear optimization problem can be very time consuming. However, if one can find an extension for this polytope with a moderate number of facets (growing for example polynomially with the size of the problem), one can solve the optimization problem over the extension efficiently, and project its optimal solution back to the original polytope of feasible solutions.
Therefore, we would like to know the extension complexity of the polytope of feasible solutions, i.e., the minimum number of facets over all extensions of this polytope. In particular, if the extension complexity grows exponentially with the size of the problem, then the polytope of feasible solutions does not admit any extension with a polynomial number of facets, and we will not be able to solve the problem in polynomial time with this approach.
Crucially, the extension complexity of a polytope is also the nonnegative rank of a particular matrix called the slack matrix. The extension complexity is therefore also NP-hard to compute in general.
In this talk, we introduce a new geometrical lower bound on the extension complexity of a polytope which relies on the monotone behavior of the f-vector of a convex polytope under projections (where the f-vector is the collection of the numbers of k-faces of the polytope). This new lower bound improves upon existing lower bounds and, in some cases, implies optimality of the best known extension.
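For completeness, the standard definitions underlying the previous paragraphs: the nonnegative rank of a matrix $M \in \mathbb{R}^{p \times q}_{+}$ is
\[
\operatorname{rank}_{+}(M) \;=\; \min\Big\{\, r : M = \sum_{\ell=1}^{r} u_\ell v_\ell^{\top},\; u_\ell \in \mathbb{R}^{p}_{+},\; v_\ell \in \mathbb{R}^{q}_{+} \,\Big\},
\]
and, for a polytope $P = \{x : Ax \le b\} = \operatorname{conv}\{v_1, \dots, v_q\}$ with facets indexed by $i$ and vertices by $j$, the slack matrix is $S_{ij} = b_i - a_i^{\top} v_j \ge 0$. Yannakakis' theorem states that the extension complexity of $P$ equals $\operatorname{rank}_{+}(S)$, which is the link exploited above.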
Friday 14:00-15:40 FB-2: Heuristics Room Vesale 020 - Chair: T. Stützle
- Hyper-Criticism: A critical reflection on today's hyper-heuristics
Willem Van Onsem (KU Leuven) Co-authors: Bart Demoen and Patrick De Causmaecker Abstract: The results of a comparative study of 16 hyper-heuristic implementations from the Cross-domain Heuristic Search Challenge (CHeSC'11) are presented. The studied hyper-heuristics were reimplemented and diagnostic results were generated. Issues with several implementations are addressed and explanations are given as to why some hyper-heuristics perform better than others.
- Towards a Lightning-Inspired Meta-heuristic
Mohand Mezmaz (University of Mons) Co-authors: M. Mezmaz, A. Vandaele, D. Tuyttens, S. Itani and N. Melab Abstract: This paper gives an overview of our new meta-heuristic inspired by the lightning phenomenon. Lightning is a massive electrostatic discharge between electrically charged regions belonging to clouds or to the surface of a planet. The lightning algorithm takes two parameters: Discharges, which controls the exploration or diversification capacity of the algorithm, and Size, which controls the exploitation or intensification capacity. Exploration is the ability of a meta-heuristic to explore different regions of the solution space, while exploitation is its ability to intensify the search in a specific region. With these two parameters, it is possible to find a good balance between diversification and intensification.
- A simple and high-performing competitive--cooperative hybrid algorithm for black-box continuous optimization
Thomas Stützle (Université Libre de Bruxelles (ULB)) Co-authors: Tianjun Liao Abstract: In this talk, we present a hybrid algorithm for black-box continuous optimization that combines a Covariance Matrix Adaptation Evolution Strategy variant, embedded in a restart scheme, with an iterated local search algorithm. The main aspect is how to combine the two algorithms in a rather simple but effective way, akin to ad-hoc algorithm selection. We experimentally examine the impact of specific design choices on performance and show that the proposed hybrid is superior to various other hybrid designs. Our algorithm also won the recent benchmark competition carried out in the 2013 special session on real-parameter optimization at the Congress on Evolutionary Computation.
- Some Heuristic Algorithms for the 0-1 Knapsack Sharing Problem
Christophe Wilbaut (University of Valenciennes) Co-authors: S. Hanafi, A. Fréville, B. Haddar, M. Khemakhem Abstract: In this paper we propose new hybrid heuristics to solve the binary Knapsack Sharing Problem (KSP). The KSP is a max-min optimization problem in which the items are divided into classes. A linear function is associated with each class of items, and we have to find the subset of items to put in a knapsack in order to maximize the minimum value of this set of linear functions. Our approaches are based on relaxation techniques and local search methods to converge quickly to near-optimal solutions.
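In symbols (generic notation): with the $n$ items partitioned into classes $N_1, \dots, N_m$, profits $p_j$, weights $w_j$ and knapsack capacity $c$, the KSP reads
\[
\max_{x \in \{0,1\}^{n}} \; \min_{1 \le k \le m} \; \sum_{j \in N_k} p_j x_j
\quad \text{s.t.} \quad \sum_{j=1}^{n} w_j x_j \le c .
\]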
Friday 14:00-15:40 FB-3: COMEX - Transportation Room Vesale 025 - Chair: F. Spieksma
- Horizontal cooperation in logistics: Order sharing through joint route planning
Lotte Verdonck (Hasselt University) Co-authors: An Caris, Katrien Ramaekers, Gerrit K. Janssens Abstract: Transportation companies operating at the same level of the supply chain may cooperate horizontally to increase their efficiency levels. Analysis of existing scientific research on horizontal carrier collaboration reveals that the majority of logistics cooperation literature may be divided into two main research streams: order sharing and capacity sharing. In this paper one approach to order sharing is presented: joint route planning. Applying joint route planning, customer orders from all carriers are combined and collected in a central pool and efficient route schemes are set up for all requests simultaneously using appropriate vehicle routing techniques. The routing problem associated with horizontally cooperating carriers may be defined as a multi-depot pickup and delivery problem with time windows (MDPDPTW). The benefits of cooperative optimisation are demonstrated by means of a small numerical example. To conclude, solution techniques suitable to solve instances with a large number of requests are discussed.
- Multi-period vehicle assignment with stochastic load availability
Thierry Pironet (HEC - University of Liège) Co-authors: Yves Crama Abstract: We investigate optimization techniques for a vehicle-load assignment problem. A company owning a limited fleet of vehicles wants to maximize its operational profit over an infinite horizon divided into periods. The profit stems from revenues for transporting full truckloads and from costs derived from waiting idle and moving empty. The stochastic component of the problem arises from the uncertainty about the effective materialization of each transportation order. The methodology is based on optimizing decisions for deterministic scenarios. Several policies are generated in this way, from simple heuristics to more complex approaches, such as consensus and restricted-expectation algorithms, up to policies derived from network flow models formulated over subtrees of scenarios. Myopic and a-posteriori deterministic optimization models are used to compute bounds allowing for performance evaluation. Tests are performed on various instances featuring different numbers of loads, graph sizes, sparsity levels, and probability distributions. Performances are compared statistically over paired samples. The robustness of the various policies with respect to erroneous evaluations of the probability distributions is also analyzed.
- An Exact Bi-directional Dynamic Programming Algorithm for an Elementary Shortest Path Problem with Variable Service Start Time
Hande Küçükaydin (University of Liège) Co-authors: Yasemin Arda, Yves Crama Abstract: We consider an elementary shortest path problem with resource constraints (ESPPRC) having variable service start time, where the transportation cost depends on the distance traveled and the amount of time that the vehicle spends. We develop exact dynamic programming algorithms with appropriate dominance rules to deal with an infinite number of Pareto-optimal states. In addition, a column generation approach is devised using the ESPPRC as a subproblem.
- Incorporating axle weight restrictions in a two-dimensional CVRP
Hanne Pollaris (Hasselt University) Co-authors: Kris Braekers, An Caris, Gerrit K. Janssens Abstract: Distributors of goods have to take loading constraints into account to make a realistic planning for their vehicles, while current planning tools generally do not include these constraints. A survey conducted by the authors among several Belgian logistics service providers pointed out that the most common loading problems encountered by distributors are multi-dimensional packing constraints, unloading sequence constraints, stability constraints, load-bearing strength constraints and axle weight constraints. To date, axle weight limits have not been considered in a VRP. A problem formulation for a two-dimensional CVRP (2L-CVRP) with sequence-based loading and axle weight constraints is presented.
Friday 14:00-15:40 FB-4: Health Room Pentagone 0A11 - Chair: G. Vanden Berghe
- IP Techniques for the Nurse Rostering Problem
Tulio Toffolo (KU Leuven) Co-authors: Haroldo G. Santos, Rafael Martinelli, Rafael A.M. Gomes, Greet Vanden Berghe Abstract: This work presents Integer Programming (IP) techniques to tackle the problem of the International Nurse Rostering Competition. Starting from a compact and monolithic formulation in which the current generation of solvers performs poorly, improved cut generation strategies and primal heuristics are proposed and evaluated. A large number of computational experiments with these techniques produced the following results: the optimality of the vast majority of instances was proved, the best known solutions were improved by up to 15\% and strong dual bounds were obtained.
- Using computer simulation to test a short stay unit
Gilles Sinnaeve (Université Catholique de Louvain) Co-authors: Claire Beguin, Philippe Chevalier & Marianne Philippe Abstract: A creative solution for managing patient flow consists in directing patients based on their predicted length of stay. If a patient is expected to stay a short time, he or she will be directed to a specific unit specialized in short stays.
This new system implies two major changes. Firstly, it requires a revision of the processes in use in order to dedicate a specific unit to short-term stays. With more efficient and standardized processes, a short stay unit would be able to treat during weekdays only the same number of patients as in a complete week. Secondly, it implies a long-stay unit in charge of the more severe cases and of potential transfers from one unit to the other when needed.
The aim of this paper is to study the feasibility of this new organizational system by means of a computer simulation model. The study covers three surgical departments (gynecology, urology and ophthalmology) located in two adjacent units. The simulation was carried out at the Saint-Luc Hospital in Brussels, using patient data collected in 2011.
- SIMEDIS disaster management simulator: aeronautical catastrophe
Christophe Ullrich (Royal Military Academy) Co-authors: C. Ullrich, F. Van Utterbeeck, E. Dhondt and M. Debacker Abstract: Medical disaster management research tries to identify methodologies and rules of best practice and to evaluate performance and outcome indicators for medical disaster management.
We generate realistic victim profiles for medical disaster simulations based on medical expertise. We apply these profiles in a medical disaster model where victim entities evolve in parallel through a medical response model and a victim pathway model. The medical response model focuses on the pre-hospital phase which includes triage procedures, evacuation processes and medical processes. Medical decisions such as whether to evacuate or to treat the current victim are based on the RPM parameters of the victim. We apply this model to simulate the catastrophe plan of Zaventem airport.
- Integrating patient-to-room assignment planning with elective admission scheduling
Wim Vancroonenburg (KU Leuven) Co-authors: Patrick De Causmaecker, Frits Spieksma, Greet Vanden Berghe Abstract: The patient-to-room assignment (PA) problem is an operational planning problem found at hospital admission offices. It aims at finding good-quality patient-to-room assignments that meet medical requirements, adhere to hospital policies and respect patient preferences. However, a key input to this problem consists of the admission dates of the patients, which are determined as part of an elective admission scheduling (AS) process. In this work, we look at integrating the PA problem with the AS process in order to increase planning flexibility, and thus improve operational efficiency.