Detailed schedule
Thursday 7 February:
Thursday 11:15-12:30 TA-1: COMEX Decision Making Room C.611 - Chair: B. Fortz
- Testing Rationality of Collective Household Consumption (PDF)
Fabrice Talla Nobibon (KU Leuven) Co-authors: L. Cherchye, Y. Crama, T. Demuynck, B. De Rock, and F.C.R. Spieksma Abstract: The purpose of this paper is to derive algorithms for testing rationality of collective household consumption on large data sets. We consider both the extension of the Generalized Axiom of Revealed Preference and the extension of the Strong Axiom of Revealed Preference to collective households with two decision makers or members. We establish that testing these extensions is NP-complete. We present exact algorithms based on mixed-integer programming formulations of the axioms, and we propose simulated annealing heuristics for the solution of global optimization formulations.
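For a single decision maker, testing the Generalized Axiom of Revealed Preference reduces to a cycle check on the revealed-preference relation, computable in polynomial time; a minimal sketch, assuming price and quantity matrices as input (the collective two-member tests studied in this talk are NP-complete and require the MIP machinery described above):

    import numpy as np

    def satisfies_garp(P, X):
        """P[t], X[t]: price and quantity vectors of observation t. Bundle s is
        weakly directly revealed preferred to t if P[s].X[s] >= P[s].X[t],
        strictly so if the inequality is strict. GARP fails iff a chain of weak
        revealed preferences closes a cycle containing a strict one."""
        T = len(P)
        # cost[s, t] = expenditure at the prices of s for the bundle chosen at t
        cost = np.array([[np.dot(P[s], X[t]) for t in range(T)] for s in range(T)])
        own = cost.diagonal()[:, None]
        weak, strict = own >= cost, own > cost
        closure = weak.copy()                     # transitive closure (Floyd-Warshall)
        for k in range(T):
            closure |= closure[:, [k]] & closure[[k], :]
        # violation: s transitively preferred to t while t strictly preferred to s
        return not np.any(closure & strict.T)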
- On the statistical analysis of overlapping observations in large networks (PDF)
Yuyi Wang (KU Leuven) Co-authors: Jan Ramon Abstract: Many statistical techniques for decision making assume that both training examples and unseen examples are drawn identically and independently (i.i.d.). However, in a network, observations may violate the i.i.d. assumption, as nodes are connected through edges. To address this problem, we introduce a framework describing these dependencies and extend statistical methods to deal with them. Under certain assumptions, our methods retain substantial statistical power when applied to data in a network.
- Dantzig-Wolfe Reformulation for the Network Pricing Problem with Connected Toll Arcs (PDF)
Alessia Violin (Université Libre de Bruxelles) Co-authors: Bernard Fortz, Martine Labbé Abstract: This work considers a pricing problem on a network with connected toll arcs and proposes a Dantzig-Wolfe reformulation for it. The model is solved with column generation and the gap between the optimal integer value and the linear relaxation optimal value is shown to be at least as good as the one from the mixed-integer formulation proposed in the literature. Numerical results show that in many cases it performs strictly better.
Thursday 11:15-12:30 TA-2: Production 1 Room C.601 - Chair: E.-H. Aghezzaf
- Dynamic Product Portfolio Management with Life Cycle Considerations (PDF)
Jean-Sébastien Tancrez (Louvain School of Management, Université catholique de Louvain) Co-authors: A.C. Zeballos, R.W. Seifert Abstract: In order to be profitable, companies have to continuously manage their portfolio of product lines. At the same time, demand for products varies across different life cycle stages. Disregarding these variations may result in high operational costs due to inadequate ordering decisions and revenue shortfalls. In a portfolio of product lines, the product life cycle acquires even more relevance because the upper and lower extremes of demand and revenue may occur simultaneously in multiple product lines and thus be magnified. Therefore, companies may want to smooth the overall levels of demand and revenue by actively managing their portfolio, properly timing product upgrade launches and adequately choosing the amount and timing of marketing support. This is particularly true for products with short life cycles such as high technology goods, consumer electronics, and fashion apparel, for which the life cycle is the main source of demand dynamics. When firms are financially constrained in making needed investments yet still have to manage multiple products through various life cycle stages, achieving optimal inventory, product launch and marketing support decisions becomes complex.
In this research, to understand how a company should combine these decisions to successfully coordinate its operations and finances, we propose a stylized model that combines three subjects: product portfolio management, product life cycle and financial constraints. The model aims to help dynamically manage a portfolio of product lines that probabilistically transition through various life cycle stages (introduction, growth, maturity, decline and end-of-life). For this, we develop a Markov decision process to find the optimal decisions depending on the actual composition of the portfolio. Products are characterized by varying operational cost parameters (procurement, holding, and lost sales) and can have specific life cycle characteristics. The evolution through the life cycle stages is impacted by both marketing support and product launch decisions. These decisions involve costs and impact the life cycle stages transitions. The inventory, launching and marketing decisions and the related portfolio expenditures are bound by a financial constraint in the form of restricted working capital. The working capital is a constant level of cash and inventory held by the company, which is supposed to be decided a priori by the company. Therefore, in our model, companies can, within limits, actively manage their product portfolio to achieve more favorable demand realizations. In fact, product life cycle management acquires real meaning at the product portfolio level since harmonizing decisions can be made to smooth aggregated demand and thus income.
Applying our tool provides several managerial insights. Firstly, the decisions regarding the launch timing, the marketing support and the working capital (WC) level are complex and dynamic, justifying the use of a comprehensive model that includes the complex and multiple trade-offs involved. Cautious and greedy policies lead to much lower profits. Secondly, product portfolios benefit greatly from joint management, which reduces the amount of WC used per product and increases the benefits per unit of allocated WC. Thirdly, the optimal policy aims to reach a stable aggregate demand level, and thus stable resource requirements, rather than temporarily reaching the highest aggregate demand level. Optimal decisions smooth not only aggregated demand levels but also cash flows and the lengths of time in and outside the market. This is important to ensure optimal use of resources and avoid high costs. Finally, portfolio composition is a strategic decision that can also be made with the help of the proposed portfolio management tool. The optimal composition depends closely on the characteristics of the admitted product types.
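As a hint of the solution machinery, a generic value-iteration sketch for a finite Markov decision process follows; the transition and reward arrays are illustrative placeholders, not the authors' portfolio model:

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """P[a]: (S x S) transition matrix under action a; R[a]: length-S vector
        of expected one-step rewards. Returns optimal values and a greedy policy."""
        V = np.zeros(P[0].shape[0])
        while True:
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)   # values and best action per state
            V = V_new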
- Energy-Neutral Demand Response from a Large Population of batch-process loads (PDF)
Arnaud Latiers (UCL Louvain-la-Neuve) Co-authors: François Glineur, Emmanuel De Jaeger Abstract: Large electricity grids are at a turning point in their development. In Europe, electricity supply is being profoundly reorganised. Since electricity liberalisation, a constant effort towards the integration of electricity markets has been under way, and is not yet complete. At the same time, the European Union has committed to developing renewable energy, particularly in the power sector. Power production is increasingly decentralised, volatile, and uncertain.
In this context, new challenges appear for electricity grid operators. They must step up their efforts to perform their most critical daily duty: balancing power production with power demand at every second.
Therefore, transmission system operators (TSOs) are searching for new sources of flexibility. This flexibility would be used in daily operations and serve as frequency restoration reserve (FRR, secondary control) or frequency containment reserve (FCR, primary control), and more generally as ancillary services.
One possible answer may be found in flexible electricity demand, or Demand Response. Demand Response covers all means of influencing and changing electricity consumption in response to external signals, such as high electricity prices, or for the security of the electricity system.
Historically, Demand Response was developed for emergency purposes or for peak demand reduction. Participating loads were controlled very infrequently, in case of large contingencies or during high-demand periods.
However, there are nowadays examples of Demand Response initiatives that are able to provide continuous balancing services to the TSO. Mainly, heating/ventilation/air-conditioning (HVAC) applications and other thermostatically controlled loads are managed in various ways in order to change consumption and provide flexibility.
Other possible sources of flexibility exist. We study the dynamics and control of the instantaneous power consumption of a large group of batch-process loads (pumps, conveyor belts, etc.). The purpose of the control is to provide continuous balancing services to the electricity grid operator without compromising the original needs of the process the load is part of. Essentially, this means that the control has to be energy neutral over a certain time period (e.g., a quarter of an hour).
In our context, the power consumption of each participating load may be increased or decreased proportionally to a single control parameter common to all loads (the common modulation factor). This parameter may vary over a limited range (e.g., 10%) around the nominal power of the load. In other words, each load is allowed to change its instantaneous consumption, but not to shut down completely or be postponed in time.
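In symbols (notation ours, not the authors'), the control can be summarised as

    P_i(t) = \bigl(1 + \alpha(t)\bigr)\, P_i^{\mathrm{nom}}, \qquad
    |\alpha(t)| \le 0.10, \qquad
    \int_{t_0}^{t_0+T} \alpha(t)\,\mathrm{d}t = 0,

where \alpha is the common modulation factor and the last condition expresses energy neutrality over the period T.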
We first show, through a simple example, which parameters govern the total consumption of the group and its dynamic response to the modulation factor. In a second step, we develop an optimisation model that chooses the modulation factor so as to minimize the total cost of balancing the grid.
For this purpose, a much simplified model of the Western European interconnected electricity system is used, where the Belgian system is described somewhat more accurately, especially in terms of ancillary services. A state-queuing model of the total consumption of the group of loads is developed. This leads to a non-linear optimisation model that performs system balancing at the lowest possible cost.
The results of the study are threefold.
1. The benefits for the system of implementing centralised, energy-neutral control are assessed, for a single day of the year.
2. The dynamics of the consumption of the load population are described, leading to interesting controllability limits.
3. Implementing this control would require the development of additional market mechanisms, which are also described at the end of the study.
The next steps of this study will consist of generalising the approach to a more diversified group of loads, integrating more realistic figures (based on real-life surveys), and assessing the benefits on a yearly basis.
- This research is sponsored by the Brussels Region -
- Measuring Complexity in Mixed-Model Assembly Workstations (PDF)
Luiza Zeltzer (Department of Industrial Management, Faculty of Engineering and Architecture, Ghent University, Belgium) Co-authors: Veronique Limère, El-Houssaine Aghezzaf, Hendrik Van Landeghem Abstract: The large number of product variants significantly increases the complexity of manufacturing systems. As a consequence, new approaches to deal with production processes are required. The research work to be presented defines complexity in a production environment and proposes two different models to characterize mixed-model assembly workstations as highly complex or less complex.
Thursday 11:15-12:30 TA-3: Global Optimization Room C.602 - Chair: D. Claeys
- On jet-convex functions and their tensor products (PDF)
Vladimir Shikhman (Université Catholique de Louvain) Co-authors: O. Stein Abstract: We introduce necessary and sufficient conditions for the tensor product of two convex functions to be convex. For our analysis we introduce the notions of true convexity, jet-convexity, true jet-convexity as well as true log-convexity. The links between jet-convex and log-convex functions are elaborated. As an algebraic tool, we introduce the jet product of two symmetric matrices and study some of its properties. We illustrate our results by an application from global optimization, where a convex underestimator for the tensor product of two functions is constructed as the tensor product of convex underestimators of the single functions.
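Here the tensor product of f and g is, presumably following the authors' usage, the product function

    (f \otimes g)(x, y) = f(x)\, g(y), \qquad x \in \mathbb{R}^n, \; y \in \mathbb{R}^m,

so that, given convex underestimators u_f \le f and u_g \le g, the application constructs u_f \otimes u_g as a convex underestimator of f \otimes g, under the conditions established in the talk.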
- Hybrid approach for generating laser cutting tool paths (PDF)
Reginald Dewil (Katholieke Universiteit Leuven) Co-authors: Pieter Vansteenwegen, Dirk Cattrysse Abstract: The objective of the tool path problem is to find a tool path that minimizes the total time required to cut all parts nested on a sheet using a laser. The cutter head needs to cut all elements of all parts and return to its starting location. A part need not be cut in a single pass: pre-emption is allowed. The problem is complicated by precedence constraints arising from inner-outer contour relations and common cuts.
Our approach models the problem as a partitioning problem in which a minimum spanning tree problem is solved between partitions and generalized traveling salesman problems are solved within the partitions.
- Optimal group sizes for dynamic group screening (PDF)
Dieter Claeys (Ghent University) Co-authors: Joris Walraevens, Bart Steyaert, Herwig Bruneel Abstract: Classification of items as good or bad can often be achieved more economically by screening the items in groups rather than individually.
The underlying reason is that when a test on a group returns good, it can be concluded (after one test only) that all items within the group are good.
Dorfman was the first to introduce the paradigm of group screening and he found an immediate application in the detection of syphilitic men drafted into military service during WWII.
He suggested applying this procedure also to manufacturing processes, where defective goods have to be eliminated from the collection of all produced goods.
Later on, many researchers applied this paradigm to screen blood for the presence of HIV, Influenza and West Nile Virus (group screening is in this context generally referred to as blood pooling).
The range of applications stretches even further, including DNA screening and drug discovery.
When group screening is feasible, the selection of the group size is crucial: the larger the group size, the more items can be screened by a single test, but the more likely it becomes that one or more items in the group are bad, implying that retesting becomes necessary.
Retesting can, for instance, be done by testing all items of the group individually, which is often referred to as a group-individual screening policy.
On many occasions, however, a group-subgroup screening policy is adopted, whereby the group is divided into subgroups which are each subjected to a new group test.
For several decades, a mathematical model from Robert Dorfman was the de facto standard to determine the optimal group size.
This model is essentially static: it postulates that a population consisting of a predetermined very large number of items has to be screened whereby all items are present from the beginning.
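The trade-off behind Dorfman's static model is easy to make concrete; a minimal sketch, assuming the classical group-individual policy with known prevalence p (one group test shared by n items, plus n individual retests with probability 1 - (1-p)^n):

    def expected_tests_per_item(n, p):
        # one group test shared by n items, plus n retests if the group tests bad
        return 1.0 / n + 1.0 - (1.0 - p) ** n

    def optimal_group_size(p, n_max=200):
        return min(range(2, n_max + 1), key=lambda n: expected_tests_per_item(n, p))

For example, at a prevalence of 1% the optimal static group size is 11, at roughly 0.2 expected tests per item instead of 1.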
However, the practical context is usually dynamic: items are not all present from the start, and arrive at random moments in time, possibly in groups of variable sizes.
For instance, vans from various regions arrive at a blood screening laboratory at random moments of the day, with a variable number of blood samples to be screened.
Also, in manufacturing processes, when goods have to be checked for their quality, they arrive at the screening facility when they are produced, which is a process that is spread over time.
An important disadvantage of dynamic models and analyses, however, is that they are much harder to implement and their processing times are long, due to the numerical work involved, such as repeatedly calculating zeroes of functions and solving sets of equations.
A natural question, crucial to practitioners, is thus the following: under which circumstances does the static model yield correct results in a dynamic context?
This is the question we wish to answer.
More specifically, we examine under which circumstances the optimal group size in the static model from Dorfman remains optimal in a dynamic context.
However, when the conclusion is that, in some situations, static results cannot be applied, we still suffer from long processing times.
This leads to the second objective: to develop algorithms that decrease the processing time drastically in such situations.
Thursday 11:15-12:30 TA-4: Transportation 1 Room C.603 - Chair: C. Vanovermeire
- Intermodal network design for freight transportation in Belgium - location of rail/road and road/inland waterway terminals (PDF)
Martine Mostert (University of Liège) Co-authors: Sabine Limbourg Abstract: In recent years, growing concerns about environmental, societal and economic issues have emerged in society. Improving and expanding the multimodal network is one way of addressing these problems. The objective of this work is therefore the integration of inland waterway, rail and road transport into the linear modeling of intermodal terminal location problems, based on transportation cost functions that are nonlinear in the distance traveled. The model should also take into account the capacity constraints linked to the different modes of transport.
- Using Contraction Hierarchies to Find Dissimilar Paths in Transportation Networks (PDF)
Sofie Demeyer (Ghent University) Co-authors: Maarten Houbraken, Thijs Walcarius, Pieter Audenaert, Didier Colle, Mario Pickavet Abstract: We present a novel technique to find a set of dissimilar paths in a transportation network. The theory of contraction hierarchies is combined with a known algorithm, in order to find a set of possible paths fast. Subsequently, this set is filtered, as only loopless and locally optimal paths are feasible.
- Analysis of different cost allocation methods in a collaborative transport setting (PDF)
Christine Vanovermeire (University of Antwerp) Co-authors: Dries Vercruysse, Kenneth Sörensen Abstract: In collaborative transport, dividing the total cost of the coalition between its different partners is a key issue. However, as each coalition has its own set of preferences and has partners with different characteristics, a cost allocation method suitable in all situations does not exist.
In this paper, a set of cost allocation methods is evaluated in different situations. We investigate how well these methods behave when partners have different characteristics. We provide an overview of which cost allocation methods suffice in which situations, showing that the right cost allocation depends strongly on the characteristics of the coalition.
Thursday 14:00-15:40 TB-1: COMEX Health Room C.611 - Chair: G. Vanden Berghe
- Planning the caregivers at Landelijke Thuiszorg (PDF)
Marco Castro (University of Antwerp) Co-authors: Pablo Maya, Kenneth Sorensen, Peter Goos Abstract: In this talk, the core optimisation component of a decision support system for the regional service planning at Landelijke Thuiszorg is described.
Landelijke Thuiszorg is a "social profit" organisation that provides home care services in several Belgian regions. Underlying this decision support system is an optimisation problem that aims to maximise the service level and to minimise the distance travelled by the group of caregivers.
This problem is tackled as a bi-objective mathematical problem. A flexible two-phase solution strategy is designed to solve this problem efficiently. Computational tests, as well as extensive pilot runs performed by the organisation's personnel, show that this approach achieves excellent performance, both in terms of the service level and the total travelled distance. Moreover, computing times are small, allowing the weekly planning to be automated to a large extent.
The organisation is currently in the process of implementing our solution approach.
- Automatic Constraint Weight Extraction for Nurse Rostering: A Case Study (PDF)
Mihail Mihaylov (CODeS, KAHO Sint-Lieven) Co-authors: Pieter Smet, Greet Vanden Berghe Abstract: Manually defining constraint weights for automatic scheduling is often unintuitive even for the most experienced practitioners. In a case study we compare the amount of constraint violations of real-world manual rosters to the importance of constraints, as defined by nurses, and observe a mismatch between the two. Based on this real-world data we attempt to automatically extract constraint weights to allow for a more efficient and straightforward transition from manual to automatic scheduling.
- The overlapping case in an integrated staffing and rostering formulation (PDF)
Komar Komarudin (Vrije Universiteit Brussel) Co-authors: M.-A. Guerry, G. Vanden Berghe and T. De Feyter Abstract: Staffing, which aims to determine a suitable personnel structure, is one of the human resource management activities. The personnel structure represents the number of personnel in distinct personnel subgroups, distinguished by specific work-related and individual characteristics. Occasionally, staffing is followed by a rostering activity that allocates the personnel to shifts subject to various constraints. However, in a situation where the suitability of the personnel structure depends on the rostering result, it is not appropriate to perform the two activities sequentially. Instead, staffing and rostering need to be performed in an integrated manner. In this situation, the existence of an undesired staffing situation (such as overstaffing and work overload) can only be detected accurately after solving the rostering problem.
- Hardness analysis and a new approach for the shift minimisation personnel task scheduling problem (PDF)
Pieter Smet (CODeS, KAHO Sint-Lieven) Co-authors: Tony Wauters, Greet Vanden Berghe Abstract: In the shift minimisation personnel task scheduling problem, a set of tasks with fixed start and end times needs to be assigned to employees who are restricted by availabilities and qualification requirements. We present a state-of-the-art two-phase hybrid heuristic for this problem, which solves all instances from a benchmark dataset from the literature. In the second part of this study, we investigate which properties of the problem influence algorithmic performance.
Thursday 14:00-15:40 TB-2: Routing Room C.601 - Chair: G.K. Janssens
- Elementary shortest path problems for a multiple-trip vehicle routing problem (PDF)
Hande Küçükaydin (University of Liège) Co-authors: Yasemin Arda, Yves Crama Abstract: We consider a multiple-trip rich vehicle routing problem with driver shifts encountered at a Belgian transportation company servicing supermarkets and hypermarkets belonging to a franchise. We use a column generation procedure and obtain two pricing subproblems where each subproblem can be formulated as an elementary shortest path problem with resource constraints. We develop dynamic programming algorithms with suitable dominance rules.
- State of the art of vehicle routing problems with loading constraints (PDF)
Hanne Pollaris (Hasselt University) Co-authors: Kris Braekers, An Caris, Katrien Ramaekers, Gerrit K. Janssens Abstract: This paper presents the results of an extensive literature study on the vehicle routing problem with loading constraints, a fairly recent domain of research. Besides an overview of topics that have already been studied in this field, research gaps and possibilities for future research are discussed.
- The effect of customer characteristics on coalition gains in collaborative vehicle routing (PDF)
Christof Defryn (University of Antwerp) Co-authors: C. Vanovermeire, K. Sörensen Abstract: This study investigates the impact of three characteristics on the gains of a horizontal logistics collaboration: the average order size, the standard deviation of the order size, and the number of customers, which are included in a statistical experiment as explanatory variables. The regression analysis is based on artificially generated, theoretical test instances containing 80 companies and 256 two-party collaborations, with two settings (low and high) for each variable.
- A time-dependent vehicle routing problem in the service area of intermodal terminals (PDF)
Kris Braekers (Universiteit Hasselt) Co-authors: An Caris, Gerrit K. Janssens Abstract: This paper deals with the operational planning of drayage operations in the service area of intermodal container terminals. The underlying routing problem may be formulated as an asymmetric multiple vehicle Traveling Salesman Problem with Time Windows (am-TSPTW). For the first time, a time-dependent problem setting is considered to take into account the hourly variations of travel times due to congestion. A two-phase deterministic annealing algorithm is proposed to solve the problem.
Thursday 14:00-15:40 TB-3: Meta-Heuristics Room C.602 - Chair: P. Vansteenwegen
- Multi-Objective Large Neighborhood Search (PDF)
Renaud Hartert (Université Catholique de Louvain) Co-authors: Pierre Schaus Abstract: The LNS framework (Shaw, 1998) provides a powerful hybridization of local search and constraint programming. This framework has been successfully applied to tackle several large-scale single-objective industrial problems.
We extend the LNS framework into the so-called Multi-Objective LNS (MO-LNS) to tackle multi-objective combinatorial optimization problems.
Our experiments show that this new framework is competitive with state-of-the-art methods on the bi-objective TSP, while being able to tackle complex industrial problems.
- OscaR.cbls: an open source framework for constraint-based local search (PDF)
Renaud De Landtsheer (CETIC) Co-authors: Christophe Ponsard Abstract: OscaR.cbls is a framework for constraint-based local search, a technique for solving combinatorial problems including both constraint and optimization problems. OscaR.cbls follows the reference book on constraint-based local search. It is part of the OscaR project, which also features constraint and linear programming. OscaR is freely available under the LGPL v2.1 open source license. OscaR.cbls comprises around 17,000 lines of code, mostly written in Scala.
- The Budget-Constrained Min Cost Flow Problem (PDF)
Sofie Coene (KU Leuven) Co-authors: P. Maya, K. Sörensen, P. Goos, F.C.R. Spieksma Abstract: In this paper we describe a problem that we define as the Budget-Constrained Minimum Cost Flow (BCMCF) problem. The BCMCF problem is a natural extension of the well-known Minimum Cost Flow (MCF) problem, with a fixed cost related to the use of arcs and a budget constraint. The BCMCF problem has, to the best of our knowledge, not been described in the literature before. We show that the Accessibility Arc Upgrading Problem (AAUP) is a special case of the BCMCF problem. We propose an exact solution method based on Lagrangian relaxation and a heuristic approach based on variable neighborhood search.
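One plausible formulation consistent with this description (notation ours) is

    \min \; \sum_{(i,j) \in A} c_{ij} x_{ij}
    \quad \text{s.t.} \quad
    \sum_{j:(i,j) \in A} x_{ij} - \sum_{j:(j,i) \in A} x_{ji} = b_i \;\; \forall i \in N,
    \qquad 0 \le x_{ij} \le u_{ij} y_{ij},
    \qquad \sum_{(i,j) \in A} f_{ij} y_{ij} \le B,
    \qquad y_{ij} \in \{0,1\},

where f_{ij} is the fixed cost of using arc (i,j) and B is the budget; relaxing the budget constraint in a Lagrangian fashion recovers a fixed-charge network flow structure.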
- A Memetic Algorithm for the Orienteering Problem with Intermediate Facilities (PDF)
Ali Divsalar (KU Leuven) Co-authors: Pieter Vansteenwegen, Dirk Cattrysse Abstract: The Orienteering Problem with Intermediate Facilities (OPIF) is a new variant of the orienteering problem in which the objective is to find a given number of connected trips while maximizing the sum of collected scores. The memetic algorithm (MA) is applied to 158 OPIF instances of various sizes. Comparing these results with our previous algorithm (VNS) [5] shows an improvement in both solution quality and computational time, especially for larger instances with a higher number of feasible combinations of intermediate facilities.
Thursday 14:00-15:40 TB-4: Transportation 2 Room C.603 - Chair: F.C.R. Spieksma
- Optimisation model for empty container repositioning (PDF)
Frederic Salmon (HEC-ULg QuantOM) Co-authors: S. Limbourg Abstract: Empty container management is a transportation issue arising from the imbalance of container demand and supply in various parts of the world, and hence the need for shippers to move empty containers to supply areas. The purpose of this research is to develop a decision support tool for intermodal freight transport. This project aims at minimizing the overall cost of empty container management in the hinterlands of the ports of Antwerp and Rotterdam. The proposed model is a two-stage stochastic network model which takes into account transit times between ports and terminals, shipping costs, the carrying capacity of the various modes of transport, the stochastic demand and supply of each terminal and port, as well as other parameters such as substitution or holding costs. In addition, the proposed model considers the possibility of intermodality with road transportation. Along with the optimal solution, our experimental data should yield an understanding of the impact of costs on the distribution of flows and inventories.
- The Red-Blue Transportation Problem (PDF)
Wim Vancroonenburg (KU Leuven - KAHO Sint-Lieven) Co-authors: Federico Della Croce, Dries Goossens, Frits Spieksma Abstract: The present study considers the Red-Blue Transportation Problem (RBTP), a generalization of the transportation problem where supply nodes are partitioned into two sets and so-called exclusionary constraints are imposed. We establish the problem's complexity, and we compare two integer programming formulations. Furthermore, a maximization variant of RBTP is presented, for which we give two 1/2-approximation algorithms. This result naturally extends to a K-color Max RBTP variant, giving two 1/K-approximation algorithms. We conclude with a computational study on the performance of the integer programming formulations and the approximation algorithms.
- The ship placement problem: Decision support through exact decomposition (PDF)
Jannes Verstichel (KU Leuven - KAHO Sint-Lieven) Co-authors: Patrick De Causmaecker, Greet Vanden Berghe Abstract: The contribution considers the ship placement problem, which is closely related to 2D bin packing, and entails the positioning of a set of ships into as few lockages as possible while satisfying a number of general and specific placement constraints. A decomposition model is presented that allows for computing optimal solutions in a reasonable time. Experiments on simulated and real-life instances show that the model generates feasible solutions, while maintaining acceptable calculation times.
- The lockmaster's problem: a computational study (PDF)
Ward Passchyn (KU Leuven) Co-authors: S. Coene; F. C. R. Spieksma; G. Vanden Berghe; D. Briskorn; J. L. Hurink Abstract: Transportation of goods by ship is a promising alternative for transport over land. We focus on transport by ships over inland waterways and transport by sea ships entering a harbor/waterway network by passing through one or multiple locks. Locks are needed to control the water level so that large and heavy ships can continue to access the corresponding waterways. The lockmaster's problem concerns the optimal strategy for operating such a lock.
We describe a very basic situation that will act as our core problem. Consider a lock consisting of a single chamber. Ships arrive both upstream and downstream of the lock at known arrival times. The lockage duration T is the time between closing the lock for entering ships, and opening the lock so that ships can leave. We assume that all data are integral and that the lockage duration is constant. Our goal is to find a feasible lock-strategy that minimizes total waiting time of all ships. Thus, we need to determine at which moments in time the lock should move and transport ships to the other side.
The optimal solution is obtained by finding a shortest path in a graph. We first note that in an optimal schedule, all lock moves start at the arrival of a ship or at the arrival of the lock. We define a block as a sequence of consecutive up and down movements followed by a waiting period of less than 2T time units. It is shown that an optimal schedule exists that is a sequence of blocks. We build the graph by adding an edge for each of these possible blocks, with a weight equal to the waiting time of the ships in that block. This gives an acyclic graph with O(n^2) edges, where n is the total number of ships. The shortest s-t path then corresponds to a solution with the lowest possible waiting time. A straightforward implementation yields an algorithm with O(n^3) time complexity, which can be further improved to O(n^2).
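A minimal sketch of the resulting dynamic program, with the block-weight computation abstracted into a callback (the data layout is ours, not the paper's):

    import math

    def min_total_waiting(n_epochs, block_weight):
        """Shortest s-t path in the acyclic block graph: nodes 0..n_epochs are
        candidate lock-move epochs in time order; block_weight(i, j) returns the
        total waiting time of the ships handled in the block spanning epochs i..j."""
        dp = [math.inf] * (n_epochs + 1)
        dp[0] = 0.0
        for j in range(1, n_epochs + 1):
            for i in range(j):                      # O(n^2) edges overall
                candidate = dp[i] + block_weight(i, j)
                if candidate < dp[j]:
                    dp[j] = candidate
        return dp[n_epochs]

With constant-time block-weight evaluation this runs in the O(n^2) time mentioned above.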
This algorithm can also be used to solve certain single machine batch scheduling problems more efficiently than the current algorithms from the literature. Relevant extensions that take into account the lock's capacity, the different priorities of ships, the water usage of a lock, and the possibility of multiple chambers, can be dealt with by slightly modifying the basic algorithm. Further, we prove that the problem with multiple non-identical chambers is strongly NP-hard when the number of chambers is part of the input.
A computational study is performed to evaluate some basic heuristics and the optimal solution algorithm developed here. The heuristics are a number of straightforward strategies to operate the lock, assuming an almost complete lack of information: the lock operator knows only the number of ships waiting on either side of the lock. We use a set of problem instances from the literature for similar lock scheduling problems.
Thursday 16:10-17:25 TC-1: COMEX Routing Room C.611 - Chair: K. Sörensen
- An Adaptive Large Neighborhood Search for a Vehicle Routing Problem with Multiple Trips and Driver Shifts (PDF)
Véronique François (HEC-Management School of the University of Liège) Co-authors: Yasemin Arda and Yves Crama Abstract: This study analyzes a rich vehicle routing problem with multiple trips and driver shifts. The considered problem features are inspired by the practical case of a Belgian distribution company. Along with the multi-trip component, characteristics of this particular problem include time windows, pickup and delivery customers, and site-vehicle dependencies. Internal and external fleets are considered, with different cost structures and driver shift constraints. An adaptive large neighborhood search is used to treat the problem.
- New Metaheuristics for the Risk-Constrained Cash-in-Transit Vehicle Routing Problem (PDF)
Luca Talarico (University of Antwerp) Co-authors: K. Sörensen and J. Springael Abstract: In this work we propose two new metaheuristic algorithms to cope with the Risk-Constrained Cash-in-Transit Vehicle Routing Problem (RCTVRP), introduced in the literature by Talarico et al. (2012). The RCTVRP is a variant of the well-known capacitated vehicle routing problem that models the routing of vehicles in the cash-in-transit industry by introducing a risk constraint. In the RCTVRP, the risk associated with a robbery, which is assumed to be proportional both to the amount of cash being carried and to the time or distance covered by the vehicle carrying the cash, is limited by a certain risk threshold. The two proposed metaheuristics are named m-ACO and p-ACO. Both use the well-known ant colony heuristic for the traveling salesman problem introduced by Dorigo (1996). The risk constraint is relaxed, so the ant colony heuristic creates a single giant TSP tour. The best giant TSP tour found is subjected to a variant of the splitting procedure described in Prins (2004). The solution obtained after the splitting heuristic serves as input for the variable neighbourhood search block, which is composed of seven of the most common local search operators for vehicle routing problems, modified for the RCTVRP.
The ant colony heuristic and the variable neighbourhood search block are embedded in two different global metaheuristic structures: a multistart structure and a destroy-and-repair (or perturbation) structure.
Both the m-ACO and p-ACO metaheuristics, in their optimal settings, are used to solve all the instances in a specific set of RCTVRP instances, named set O, for which the optimal solutions are known (see Talarico et al. (2012) for more details on the instance characteristics). The resulting methods obtain solutions of excellent quality in very limited computing times.
- Routing for couriers: a multi-objective tabu search method to rebalance a tactical route plan with microzones (PDF)
Jochen Janssens (University of Antwerp) Co-authors: Joos Van Den Bergh Abstract: Courier companies face a highly dynamic vehicle routing problem with many thousands of pick-ups and deliveries. Currently, even the best vehicle routing algorithms are not up to the challenges posed by such routing problems because: (1) courier routing problems are too large for most algorithms, and (2) processes that rely on the final routing plan, such as the sorting and loading of parcels in vehicles, need to start long before all the data required for such a routing plan is available. As a result, the planning processes in most courier companies are currently poorly supported by automatic planning tools. Instead, they rely on more stable organisational structures for the distribution of their parcels.
A popular technique is to partition the distribution area into zones, based on, e.g., zip codes. In this talk we propose to create smaller microzones that can be used as building blocks, and are more or less permanently assigned to vehicles in a so-called tactical plan. The final vehicle routing should be based on this plan, but the plan is adapted when capacity violations or workload imbalances occur due to the stochastic nature of the problem.
In this talk we tackle the problem of rebalancing unbalanced tactical plans. In order to do this, we assume that a tactical plan is given and that estimates of the workload per region are known. We focus on combining the zones into feasible vehicle routes while taking into account three objectives: the total transportation cost, the number of reallocated microzones, and the workload balance. By solving this problem, couriers are able to convert their tactical route plan into an efficient operational plan with balanced workloads, without reallocating too many microzones.
The algorithm presented in this talk produces a set of non-dominated solutions, and leaves it to the decision maker to choose the final solution according to his/her preferences. As he/she should not be overwhelmed with choices, we argue that the algorithm should only present a limited number of interesting solutions. In practice, this means the set of produced non-dominated solutions should be filtered so that only the most diverse ones remain.
Thursday 16:10-17:25 TC-2: Sets, Relations and Rankings Room C.601 - Chair: B. De Baets
- A Set-Based Approach to the Decomposition of Linear Mixtures Using Quasi-Convex Programming (PDF)
Jan Verwaeren (Ghent University) Co-authors: Michael Rademaker, Bernard De Baets Abstract: A multivariate process can often be interpreted as a mixture of multiple source processes. Several applications require an estimate of the proportional contribution of at least one of these sources to such a mixture. To be able to estimate the proportional contribution of a source of interest to a given mixture, we need a formal representation of both the mixture and the sources. In a classical setting, where the representations are points in some vector space, point estimates for the proportions are readily obtained. We generalize this case to a setting in which the sources are represented by sets instead of points. Here, one can only obtain an interval estimate for the proportions. We formulate (and solve) a quasi-convex optimization problem that can be used to obtain these interval estimates.
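Under the set-based representation, the interval estimate for the proportion p_1 of the source of interest can be written roughly as (notation ours)

    p_1^{\min} = \min_{p,\,x} \; p_1
    \qquad \text{and} \qquad
    p_1^{\max} = \max_{p,\,x} \; p_1
    \qquad \text{s.t.} \quad
    m = \sum_i p_i x_i, \quad x_i \in S_i, \quad p_i \ge 0, \quad \sum_i p_i = 1,

where m is the observed mixture and S_i the set representing source i, yielding the interval [p_1^{\min}, p_1^{\max}].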
- Dice representability of reciprocal relations (PDF)
Karel De Loof (Universiteit Gent) Co-authors: K. De Loof, B. De Baets and H. De Meyer Abstract: The transitivity of winning probability relations has been studied extensively by some of the present authors, resulting in a new framework for studying transitivity of reciprocal relations in general, referred to as the cycle transitivity framework. The condition of dice transitivity is not sufficient for an arbitrary rational-valued reciprocal relation to be the winning probability relation of a dice model. We therefore introduce an additional necessary condition that is a stronger version of the so-called Ferrers property.
- Using a natural monotonicity constraint as the basis of a ranking procedure (PDF)
Michael Rademaker (UGent) Co-authors: Bernard De Baets Abstract: Consider what it means for a ranking to correspond to a group opinion, or for a ranking procedure to be appropriate. Much like in some other disciplines, there is no ground truth (automatic image annotation is another example in which the ground truth is not clearly defined). Rather, there are certain axioms that are considered as natural and desirable. On the basis of those, a methodology can be constructed. The methodology we will present is built on the idea of monotonicity: When I prefer x to y and prefer y to z, the extent to which I prefer x to z cannot be smaller than the extent to which I prefer x to y or y to z.
We consider a ranking problem where each voter has expressed a linear order relation over the candidates. We consider a ranking procedure that chooses as a winning ranking one which adheres best to the constraint of monotonicity: For a ranking x
Thursday 16:10-17:25 TC-3: Logistics Room C.602 - Chair: S. Demeyer
- Automatic Aircraft Cargo Load Planning with Pick-up and Deliveries (PDF)
Virginie Lurkin (HEC - Université de Liège) Co-authors: Michael Schyns Abstract: The objective of this research is the development of a new mixed integer linear programming (MILP) model designed for optimally loading a set of unit load devices (ULDs) into a cargo aircraft. The main contribution is to consider sequences of destinations with pick-ups and deliveries and to minimize the cost and environmental impacts. We take into account two major costs on which the loading activity can intervene: the cost of time (which includes the wages of employees used for loading, fees charged to the airline for usage of the runway, ...) and the cost of fuel, as fuel consumption depends on the way ULDs are loaded in the aircraft. The challenge is therefore to find the optimal assignment of ULDs to positions in the aircraft so that the location of the center of gravity (CG) for each route induces minimal fuel consumption while, simultaneously, the loading time at each destination is as short as possible.
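In its simplest form (notation ours; real loading envelopes are more involved), the centre of gravity for a given leg is the weighted mean of positions,

    \mathrm{CG} = \frac{W_e x_e + \sum_u w_u x_u}{W_e + \sum_u w_u},
    \qquad \mathrm{CG}^{\min} \le \mathrm{CG} \le \mathrm{CG}^{\max},

with W_e and x_e the empty weight of the aircraft and its balance arm, and w_u, x_u the weight and position of ULD u still on board for that leg.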
- A tabu-embedded simulated annealing algorithm for a selective pickup and delivery problem (PDF)
Tabitha Maes (Hasselt University) Co-authors: An Caris, Katrien Ramaekers, Gerrit K. Janssens Abstract: Several actors are involved in the transport of goods. To model freight transport, the different actors who take part in the decision making process have to be represented. In Maes et al. (2012) a conceptual framework is presented to model freight transport. The key actors in this framework are firms, carriers, and forwarders. This allows the model to work on an activity-based level, focusing on the different activities of each actor. The decision making process of carriers is one of the key aspects in modelling logistic decisions in a behaviour-based transportation model. When modelling at an activity-based level, the behaviour of carriers has to be taken into account. The framework of Maes et al. (2012) formulates the decisions of the carrier as a selective pickup and delivery problem (PDSP). This is a novel approach to modelling logistic decisions in models whose objective is to explain and predict freight flows. One of the important decisions a carrier has to make is whether or not to accept a transport request in order to maximize profit; the selection of requests in a paired pickup and delivery problem has rarely been studied in the literature. Besides this decision, the carrier also needs to schedule the accepted transport orders into the different vehicles and construct a routing plan, given time and capacity limitations.
- An insertion-based heuristic for the constrained pickup and delivery problem (PDF)
Maarten Houbraken (UGent) Co-authors: T. Walcarius, S. Demeyer, P. Audenaert, D. Colle, M. Pickavet Abstract: We outline a heuristic algorithm that produces locally optimised feasible solutions for the on-line capacitated multi-vehicle pickup and delivery problem with time windows. Our algorithm incrementally builds the solution by inserting the events in the current schedule and optimising intermediate solutions. The algorithm was preliminarily evaluated with industrial input and will be further used to develop a genetic algorithm.
Friday 9:00-10:15 FA-1: COMEX Logistics Room C.611 - Chair: Y. Crama
- The cooperative facility location problem (PDF)
Lotte Verdonck (Hasselt University, Research Foundation Flanders (FWO)) Co-authors: Patrick Beullens, An Caris, Katrien Ramaekers, Gerrit K. Janssens Abstract: In order to survive under the ever increasing pressure to operate more efficiently, transportation organisations are obliged to adopt a collaborative focus. One approach to horizontal carrier collaboration is the sharing of warehouses or distribution centres with partnering companies. In this context, this paper develops a mathematical formulation for the cooperative facility location problem and demonstrates the benefits of joint optimisation by means of a case study. In addition, total logistics costs are fairly allocated using the Shapley value in order to ensure the stability of the cooperation.
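A minimal sketch of the Shapley value for such a cost game, assuming a callable cost that returns the optimal total cost of any coalition (obtained, e.g., by solving the facility location model for that coalition); it is exponential in the number of players, which is acceptable for small coalitions:

    from itertools import combinations
    from math import factorial

    def shapley(players, cost):
        """Shapley allocation: each player's average marginal cost contribution
        over all join orders; cost(frozenset()) is the empty-coalition cost (0)."""
        n = len(players)
        phi = {}
        for p in players:
            others = [q for q in players if q != p]
            value = 0.0
            for k in range(n):
                for S in combinations(others, k):
                    S = frozenset(S)
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    value += weight * (cost(S | {p}) - cost(S))
            phi[p] = value
        return phi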
- Two- and three-dimensional strip packing: a shaking procedure (PDF)
Tony Wauters (CODeS, KAHO Sint-Lieven) Co-authors: Jannes Verstichel, Greet Vanden Berghe Abstract: In the present work we propose a shaking procedure for the two- and three-dimensional strip packing problems (2SP and 3SP). A set of rectangular items of given dimensions has to be packed into a strip with a fixed base and open height such that the covered height is minimized. The items can be rotated by 90 degrees. Both 2SP and 3SP are NP-hard. The proposed procedure builds upon the common bottom-left-fill (BLF) methods and employs multiple sorting criteria to improve the solutions. Large improvements are observed on well-known benchmark sets.
- Multiperiod vehicle loading with stochastic release dates (PDF)
Thierry Pironet (University of Liège) Co-authors: Y. Arda, Y. Crama, D. Kronus, P. Van Hentenryck Abstract: This research investigates a multi-period vehicle loading problem with stochastic information regarding the release dates of items to be transported. The deterministic version of the problem can be formulated as a large-scale set covering problem. Several heuristic algorithms are proposed to generate decision policies for the stochastic optimization model over a long rolling horizon. The resulting policies have been extensively tested on instances which display the main characteristics of the industrial case study that motivated the research. The tests demonstrate the benefits of the multi-period stochastic model over simple myopic strategies. A simple and efficient heuristic is shown to deliver good policies and to be robust against errors in the estimation of the probability distribution of the release dates.
Friday 9:00-10:15 FA-2: Production 2 Room C.611 - Chair: D. Tuyttens
- Lexicographic optimization of a bi-objective permutation flow shop with time lags (PDF)
Jacques Teghem (Faculté Polytechnique de Mons) Co-authors: E. Dhouib, D. Tuyttens, T. Loukil Abstract: We consider a permutation flow shop with time-lag constraints. These constraints impose restrictions on the time elapsed between two successive operations of the same job. In the considered model there are minimal and maximal time lags, which are respectively lower and upper bounds on the delay between the end of the operation on one machine and the beginning of the operation on the following machine. Each job has a due date. Two criteria are considered in a lexicographic manner: the primary objective is the number of tardy jobs; the secondary one is the makespan. The aim is thus to find the schedule with the minimal makespan among the schedules with the minimal number of tardy jobs.
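With S_{j,k} and C_{j,k} denoting the start and completion times of job j on machine k (notation ours), the time-lag constraints read

    l_{j,k}^{\min} \;\le\; S_{j,k+1} - C_{j,k} \;\le\; l_{j,k}^{\max},
    \qquad k = 1, \dots, m-1,

for every job j on the m machines.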
In a first step, a mixed integer linear programming formulation is proposed and solved with CPLEX. It quickly appears that only small instances can be solved within a time limit of 5,000 seconds (experiments are made with 20 jobs and 3 or 5 machines). In a second step, two local search metaheuristics are adapted to tackle the problem.
The first one is a Simulated Annealing (SA) based algorithm with some additional elements compared with the classical SA scheme:
- Several solutions are generated randomly in the neighborhood and the best one is selected;
- Two acceptance tests for this solution are analyzed with respect to the lexicographic optimization;
- Several variants of a greedy heuristic are proposed to optimize the partial schedule of the tardy jobs each time the first objective is improved;
- At the end of the SA procedure, an improvement scheme is applied to try to further improve the final schedule.
The second metaheuristic is a GRASP algorithm. An iteration is composed of two phases:
- A greedy randomized search procedure builds a schedule taking the lexicographic optimization into account. At each step, a list of possible jobs to schedule is based on the due dates of the jobs and the total idle time generated on the machines.
- A local search improves the schedule.
Extensive numerical experiments are performed to compare both methods on instances with 60 or 80 jobs and 8 machines, and with 100 jobs and 5 machines. The aim of these experiments is not only to compare the performance of the two metaheuristics but also to analyze the impact of the respective values of the due dates and the time lags.
- A general lotsizing problem in a closed-loop supply chain with uncertain returns (PDF)
Guillaume Amand (University of Liège) Co-authors: Yasemin Arda Abstract: We consider a stochastic version of the multi-product multi-level capacitated lotsizing and scheduling problem with sequence-dependent setups. A bottler needs to determine its production schedule over a finite horizon in order to satisfy a deterministic demand. The raw materials are supplied through two different sources: uncapacitated reserves of new bottles and the uncertain returns of used bottles. We present results for the single-item case.
- A branch and bound algorithm for a lexicographic optimization of a bi-objective permutation flowshop with time lags (PDF)
Arnaud Vandaele (UMONS, Faculté Polytechnique) Co-authors: Daniel Tuyttens Abstract: A permutation flowshop consists of finding an optimal sequence of n jobs visiting m machines, where the order on each machine is the same. In this problem there are minimum and maximum time lags for each job between the end of its processing on a machine and the start of its processing on the next machine. The goal is to find the permutation minimizing a lexicographic ordering of two objectives. Each job has a due date, so we first minimize the number of late jobs; the second objective is the makespan.
In this work we develop a branch-and-bound algorithm to solve this particular flowshop scheduling problem to optimality. We also discuss the use of different bounds for the objectives and their effectiveness. We present some promising results in comparison with other methods.
Friday 9:00-10:15 FA-3: MIP Room C.603 - Chair: T. Dokka
- The characterization of affine maximizers when there are only two alternatives: A technical note (PDF)
Thierry Marchant (Ghent University) Co-authors: Debasis Mishra Abstract: Roberts (1979) showed that every implementable social choice function satisfies a condition named PAD (Positive Association of Differences). Conversely, when there are at least three alternatives and the domains are unrestricted, he showed that PAD implies that the social choice function is an affine maximizer. When there are two alternatives only, it is well-known that Roberts' Theorem does not hold because there exist social choice functions satisfying PAD on unrestricted domains and that are not affine maximizers. In this paper, we show which conditions must be imposed on top of PAD in order to characterize the affine maximizers.
- Covering Linear Programs with Violations (PDF)
Laurence Wolsey (CORE, UCL) Co-authors: Feng Qiu, Shabbir Ahmed, Santanu S. Dey Abstract: We consider a class of linear programs involving a set of covering constraints of which at most k are allowed to be violated. We show that this covering linear program with violation is strongly NP-hard. In order to improve the performance of mixed-integer programming (MIP) based schemes for these problems, we introduce and analyze a coefficient strengthening scheme, adapt and analyze an existing cutting plane technique, and present a branching technique.
- Approximation Algorithms for Multi-Dimensional Vector Assignment problems (PDF)
Trivikram Dokka (KU Leuven) Co-authors: Yves Crama, Frits Spieksma Abstract: We consider the multi-dimensional vector assignment problem. We study natural heuristics and analyse their performance when the cost function is monotone and submodular.
Friday 14:00-15:40 FB-1: COMEX Automatic Tuning and Organization Room C.611 - Chair: T. Stützle
- From Grammars to Parameters: Automatic Design of Iterated Greedy Algorithms (PDF)
Manuel López-Ibáñez (Université Libre de Bruxelles) Co-authors: Franco Mascia, Jérémie Dubois-Lacoste and Thomas Stützle Abstract: Recent advances in automatic algorithm configuration make it possible to configure flexible algorithmic frameworks in order to fine-tune them for particular problems. This is often done by using automatic methods to set the values of algorithm parameters. A rather different approach instantiates possible algorithms by repeated application of the rules defined by a context-free grammar. An instantiation of the grammar is represented either explicitly by a derivation tree or implicitly by numerical codons, as done in grammatical evolution.
In this work, we show how the process of instantiating such a grammar can be described in terms of parameters. In particular, we show how a grammar that generates iterated greedy (IG) algorithms can be represented in terms of parameters. We compare the quality of the IG algorithms generated by an automatic configuration tool using the parametric representation versus the codon-based representation of grammatical evolution. The same experiments are repeated for two problems: the permutation flow-shop problem with weighted tardiness minimization and the one-dimensional bin packing problem. In our experiments, the parametric approach leads to significantly better IG algorithms.
- Automatic algorithm selection for multi-mode resource-constrained project scheduling problems (PDF)
Tommy Messelis (KU Leuven kulak) Co-authors: Patrick De Causmaecker Abstract: In this talk, we will present the results of an experimental study towards building an automatic algorithm selection tool for the combinatorial optimisation problem of project scheduling.
At the basis of this tool lies the concept of empirical hardness models. These models are mappings from problem instance features onto performance criteria of certain algorithms. Using such models, the performance of a set of algorithms can be predicted. Based on these predictions, the tool can automatically select the algorithm with best predicted performance. For such a tool to work properly, the predictions must be sufficiently accurate.
Many state-of-the-art algorithms perform very well on a majority of benchmark instances, while performing worse on a smaller set of instances. The performance of one algorithm can vary widely across a set of instances while another algorithm sees no difference in performance at all. Knowing in advance, without using scarce computational resources, which algorithm to run on a certain problem instance can significantly improve the total overall performance.
We have applied this strategy to the classic problem of project scheduling with multiple execution modes. We selected two state-of-the-art algorithms that both perform relatively well on average. Combining these two algorithms in a portfolio with an automatic algorithm selection tool, we obtain a super-algorithm that outperforms each of its components individually.
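A minimal sketch of such a selection tool, assuming precomputed instance feature vectors and per-algorithm performance measurements (the feature design and regressor choice here are illustrative, not the authors' setup):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_selector(features, runtimes_per_algo):
        """features: (n_instances, n_features) array; runtimes_per_algo: one
        (n_instances,) array per algorithm. Returns a per-instance selector."""
        # one empirical hardness model per algorithm
        models = [RandomForestRegressor(n_estimators=200).fit(features, r)
                  for r in runtimes_per_algo]

        def select(x):
            x = np.asarray(x).reshape(1, -1)
            predicted = [m.predict(x)[0] for m in models]
            return int(np.argmin(predicted))   # algorithm with best predicted runtime

        return select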
- Predicting parameter configurations for tuning effective algorithms on very large instances (PDF)
Franco Mascia (Université Libre de Bruxelles (ULB)) Co-authors: Mauro Birattari and Thomas Stützle Abstract: Tuning stochastic local search algorithms on large instances is impractical due to the large amount of CPU time required to test algorithm configurations on such instances. We define an experimental protocol that makes it possible to tune an algorithm on small instances and to extrapolate from the obtained configurations a parameter setting suited to tackling large instances. As a proof of concept, we study an iterated local search algorithm for the quadratic assignment problem.
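A hedged sketch of the extrapolation step (the logarithmic scaling assumption and all numbers below are ours, for illustration only):

```python
import numpy as np

sizes = np.array([50, 100, 200, 400])        # small, tunable instance sizes
best_param = np.array([3.1, 4.2, 5.0, 6.1])  # e.g. best perturbation size found per size

# assume the parameter grows roughly logarithmically with instance size
coeffs = np.polyfit(np.log(sizes), best_param, deg=1)
predict = lambda n: np.polyval(coeffs, np.log(n))

print(round(predict(5000), 2))  # extrapolated setting for an instance too big to tune on
```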
- An Approach to the Automatic Configuration of a Generalized Metaheuristic Structure (PDF)
Marie-Eléonore Marmion (Université Libre de Bruxelles) Co-authors: Franco Mascia, Manuel López-Ibáñez, Thomas Stützle Abstract: We propose a generalized metaheuristic structure that unifies different metaheuristics such as iterated local search, simulated annealing, variable neighborhood search, iterated greedy and greedy randomized adaptive search procedures. From this metaheuristic structure, we can instantiate each of the above-mentioned metaheuristics, but we can also generate novel combinations of the algorithmic components of those metaheuristics and embed metaheuristics within others. In fact, the possibility of embedding metaheuristics into others (or into themselves) makes it possible to recursively call one metaheuristic within another one.
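A toy Python sketch of the unifying idea (component names and the 1-D objective are ours): because the improvement step is just a callable, one metaheuristic can be passed as a component of another, which is exactly the recursive embedding described above.

```python
import random

def local_search(sol, evaluate, neighbour, iters=50):
    """First-improvement local search: one reusable algorithmic component."""
    for _ in range(iters):
        cand = neighbour(sol)
        if evaluate(cand) < evaluate(sol):
            sol = cand
    return sol

def metaheuristic(sol, evaluate, perturb, improve, accept, iters=20):
    """Generic perturb / improve / accept skeleton. Passing another
    metaheuristic() as `improve` embeds one metaheuristic in another."""
    for _ in range(iters):
        cand = improve(perturb(sol), evaluate)
        if accept(evaluate(cand), evaluate(sol)):
            sol = cand
    return sol

evaluate = lambda x: (x - 3.0) ** 2                  # toy 1-D objective
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
perturb = lambda x: x + random.uniform(-2.0, 2.0)
improve = lambda s, ev: local_search(s, ev, neighbour)
accept_better = lambda new, old: new < old           # ILS-style acceptance

print(round(metaheuristic(0.0, evaluate, perturb, improve, accept_better), 2))
```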
Friday 14:00-15:40 FB-2: Disaster, Water and Biology Room C.602 - Chair: L. Porretta
- SIMEDIS disaster management simulator: major road traffic accident (PDF)
Christophe Ullrich (Royal Military Academy) Co-authors: F. Van Utterbeeck, M. Debacker, E. Dhondt Abstract: We generate realistic victim profiles for medical disaster simulations based on medical expertise. We apply these profiles in a medical disaster model where victim entities evolve in parallel through a medical response model and a victim pathway model. The medical response model focuses on the pre-hospital phase which includes triage procedures, evacuation processes and medical processes. Medical decisions such as whether to evacuate or to treat the current victim are based on the RPM (respiratory rate, pulse rate, motor response) parameters of the victim. We show how such a model can be implemented in ARENA. Special care is given to a flexible implementation in order to be able to handle a wide variety of scenarios.
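As a purely illustrative example of an RPM-based decision rule (the scoring scale and threshold below are our assumptions, not the SIMEDIS rules):

```python
def rpm_score(respiratory_pts, pulse_pts, motor_pts):
    """Each RPM component pre-scored on 0-4, giving an aggregate of 0-12."""
    return respiratory_pts + pulse_pts + motor_pts

def decide(victim_rpm, threshold=4):
    """Toy treat-versus-evacuate rule on the aggregate RPM score."""
    return "treat on site (unstable)" if victim_rpm <= threshold else "evacuate"

print(decide(rpm_score(2, 1, 1)))  # "treat on site (unstable)"
```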
- Two-stage stochastic integer programming models for strategic disaster preparedness (PDF)
Alper Döyen (University of Liège) Co-authors: Necati Aras, Yasemin Arda Abstract: We are interested in two distinct disaster management models, both formulated as two-stage stochastic integer programs. The first model determines pre-disaster retrofitting decisions and post-disaster response decisions that minimize the total cost; the second model additionally takes into account recovery actions and the time of earthquake occurrence. In both models, retrofitting decisions are made for buildings and bridges under a limited budget. Effective and efficient solution methods are proposed in the study.
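For orientation, the generic two-stage form underlying such models is (our notation, not the authors'):

```latex
\min_{x \in X} \; c^{\top} x \;+\; \sum_{s \in S} p_s \, Q(x, s),
\qquad
Q(x, s) \;=\; \min_{y \in Y(x, s)} q_s^{\top} y
```

Here $x$ collects the first-stage (retrofitting) decisions, $s$ ranges over earthquake scenarios with probabilities $p_s$, and $y$ collects the second-stage (response and recovery) decisions.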
- Generation of synthetic water distribution networks (PDF)
Annelies De Corte (University of Antwerp) Co-authors: Kenneth Sörensen Abstract: A lot of research has been done on the optimisation of water distribution networks (WDN). One recurrent criticism of the state of the art is that the metaheuristics developed in this field are not adequately tested in terms of robustness and performance. Unfortunately, the number of available, realistic instances on which algorithms can be tested is limited to only a handful. This need for realistic test networks led us to develop a tool to algorithmically generate (synthetic) WDN.
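A minimal sketch of one plausible generation approach (a random geometric layout with randomly drawn hydraulic attributes; this is our assumption, not necessarily the authors' generator):

```python
import random
import networkx as nx

random.seed(1)
G = nx.random_geometric_graph(n=30, radius=0.3, seed=1)   # junction layout
for u, v in G.edges:
    pos_u, pos_v = G.nodes[u]["pos"], G.nodes[v]["pos"]
    dist = ((pos_u[0] - pos_v[0]) ** 2 + (pos_u[1] - pos_v[1]) ** 2) ** 0.5
    G.edges[u, v]["length_m"] = round(1000 * dist)        # pipe length
    G.edges[u, v]["diameter_mm"] = random.choice([100, 150, 200, 300])
for n in G.nodes:
    G.nodes[n]["demand_lps"] = round(random.uniform(0.5, 5.0), 2)

print(G.number_of_nodes(), "junctions,", G.number_of_edges(), "pipes")
```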
- A Class Representative Model for Pure Parsimony Haplotyping under Uncertain Data (PDF)
Luciano Porretta (Université Libre de Bruxelles) Co-authors: D. Catanzaro, M. Labbé Abstract: The Pure Parsimony Haplotyping (PPH) problem is an NP-hard combinatorial optimization problem that consists of finding the minimum number of haplotypes necessary to explain a given set of genotypes. In this article we investigate, for the first time, a recent version of PPH called the Pure Parsimony Haplotype problem under Uncertain Data (PPH-UD). This version mainly arises when the input genotypes are not accurate, i.e., when some single nucleotide polymorphisms are missing or affected by errors.
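The standard PPH encoding, on which the uncertain-data variant builds, is easy to state in code (a sketch of the usual convention: genotype sites are 0 or 1 when homozygous and 2 when heterozygous):

```python
# A pair of binary haplotypes (h1, h2) explains genotype g iff at every
# site: g == h1 == h2 when g is 0 or 1, and h1 != h2 when g == 2.
# PPH asks for the fewest distinct haplotypes explaining all genotypes.
def explains(h1, h2, g):
    return all((gi == 2 and a != b) or (gi == a == b)
               for a, b, gi in zip(h1, h2, g))

g = (2, 0, 2)
print(explains((1, 0, 0), (0, 0, 1), g))  # True
print(explains((1, 0, 1), (0, 0, 1), g))  # False: site 2 is not heterozygous
```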
Friday 14:00-15:40 FB-3: Decision Making Room C.603 - Chair: D. Goossens
- Progressive multi-objective optimization (PDF)
Kenneth Sörensen (Universiteit Antwerpen) Co-authors: Johan Springael Abstract: The multi-objective optimization paradigm prescribes that a decision maker should first generate a set of non-dominated solutions and then pick a solution from this set according to his/her preferences, using a multi-criteria method of his/her choice. In some situations, however, this approach is of limited practical use, mainly because multi-criteria methods were not designed with the output of a typical multi-objective optimization algorithm in mind. Multi-objective optimization methods typically generate a large set of non-dominated solutions (in fact, the quality of a multi-objective method depends partially on the cardinality of the Pareto set it generates), whereas multi-criteria methods were designed to compare only a handful of alternatives.
In previous work [ijitdm01], we introduced progressive multi-objective optimization (PMOO), which attempts to overcome the drawbacks of the multi-objective paradigm. PMOO is a novel technique that includes the decision maker's preferences in the multi-objective optimization process instead of tackling these steps sequentially.
PMOO integrates a method for multi-criteria decision making into a simple multi-objective metaheuristic by maintaining and updating a small reference archive of non-dominated solutions throughout the search. In this talk, we present a PMOO method for the multi-objective knapsack problem. This approach integrates the well-known PROMETHEE multi-criteria method into a simple tabu search method. By applying this novel technique to a set of instances of the multi-objective knapsack problem, we demonstrate the superiority of PMOO over the commonly accepted sequential approach of generating a Pareto set approximation first and selecting a single solution afterwards.
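A minimal sketch of the archive-update idea, with a toy weighted score standing in for PROMETHEE (both the score and the archive size are our assumptions):

```python
def dominates(a, b):
    """a dominates b for minimised objective vectors."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def update_archive(archive, sol, score, max_size=5):
    """Insert `sol` if non-dominated, drop members it dominates, and
    truncate the archive by the decision maker's preference score."""
    if any(dominates(other, sol) for other in archive):
        return archive
    archive = [a for a in archive if not dominates(sol, a)] + [sol]
    return sorted(archive, key=score)[:max_size]

score = lambda obj: 0.6 * obj[0] + 0.4 * obj[1]   # toy preference model
archive = []
for sol in [(3, 5), (4, 4), (2, 6), (3, 3), (5, 1)]:
    archive = update_archive(archive, sol, score)
print(archive)   # small, preference-ordered set of non-dominated solutions
```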
- Elicitation procedures for the comparison of decisional maps (PDF)
Valérie Brison (UMONS, Faculté Polytechnique) Co-authors: Marc Pirlot Abstract: In a previous work, we proposed several models to help a decision maker to compare the state of a region at different stages of its evolution. In this work, we provide an interactive elicitation procedure to determine all the parameters of the models developed. For this purpose, we transpose a widely used method in the theory of decision under risk, namely the comparison of lotteries. We interpret lotteries as maps and we formulate questions to the decision maker in terms of comparisons of well-chosen maps.
- Designing a combinatorial auction for Solids (PDF)
Dries Goossens (KU Leuven) Co-authors: Frits Spieksma, Sander Onderstal, Jan Pijnacker Abstract: In May 2011, our collaboration with housing corporation Stadgenoot culminated in the first combinatorial auction for housing space in a newly erected building in Amsterdam (Netherlands). The primary goal was to allocate space based on the preferences of many different potential users. This resulted in an allocation featuring restaurants, boutiques, and a dentist, alongside regular residential uses. The auction we designed maximized total rent while coping with various municipal and building regulations.
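At the core of such an auction sits a winner-determination problem. A toy set-packing model, using PuLP (our illustration only; the units, bids, and amounts are invented, and the real auction adds municipal and building constraints on top):

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

units = ["u1", "u2", "u3"]
bids = [  # (bidder, set of units requested, offered rent)
    ("restaurant", {"u1", "u2"}, 900),
    ("boutique",   {"u2"},       500),
    ("dentist",    {"u3"},       400),
]

prob = LpProblem("winner_determination", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(bids))]
prob += lpSum(x[i] * bids[i][2] for i in range(len(bids)))      # total rent
for u in units:                                                 # each unit sold at most once
    prob += lpSum(x[i] for i, b in enumerate(bids) if u in b[1]) <= 1

prob.solve()
print([bids[i][0] for i in range(len(bids)) if x[i].value() == 1])
# ['restaurant', 'dentist']: rent 1300 beats boutique + dentist at 900
```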
- On a queue with customer deadlines (PDF)
Tom Maertens (Ghent University) Co-authors: Herwig Bruneel Abstract: We study a discrete-time queueing system where each customer has a maximum allowed sojourn time in the system, referred to as the deadline of the customer. We obtain exact and approximate formulas for the mean system content, the mean customer delay and the deadline-expiration ratio. Possible applications of this type of queueing model are numerous: the deadlines could model, e.g., the fact that customers may become impatient and leave the queue unserved if they have to wait too long in line.
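A minimal simulation sketch of the model as we read it (slotted time, Bernoulli arrivals, a single server, one service per slot; all parameter values are assumptions):

```python
import random
from collections import deque

random.seed(7)
p_arrival, deadline, slots = 0.6, 10, 100_000
queue, served, expired = deque(), 0, 0

for t in range(slots):
    if random.random() < p_arrival:
        queue.append(t + deadline)      # slot at which this customer's deadline expires
    while queue and queue[0] <= t:      # customers past their deadline leave unserved
        queue.popleft()
        expired += 1
    if queue:                           # serve one customer per slot (FIFO)
        queue.popleft()
        served += 1

print(f"deadline-expiration ratio: {expired / (served + expired):.3f}")
```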