Schedule
Thursday 11:00 - 12:20 Routing Problems Room 138 - Chair: Pieter Vansteenwegen
- A comparative study of branch-and-price algorithms for a vehicle routing problem with time windows and waiting time costs
Stefano Michelini (HEC Liège - Management School of the University of Liège) Co-authors: Yasemin Arda, Hande Küçükaydin Abstract: We investigate the performance of several branch-and-price algorithms for a variant of the vehicle routing problem with time windows in which waiting times have a linear cost. Each one uses a different labeling algorithm to solve the related pricing problem. We employ a methodology in which we parametrize each algorithm, tune it automatically and perform statistical tests on the final configurations. We identify a class of labeling algorithms that outperforms the others under consideration.
- A model for container collection in a local recycle network
Jens Van Engeland (KU Leuven Campus Brussel) Co-authors: Carolien Lavigne Abstract: This paper proposes a Mixed Integer Linear Programming (MILP) model to optimize the waste container collection problem at Civic Amenity (CA) sites using a solution method based on a priori constructed trips.
- Improving the Last Mile Logistics in a City Area by Changing Time Windows
Corrinne Luteyn (KU Leuven) Co-authors: Pieter Vansteenwegen Abstract: In this research, we investigate the possible savings that can be obtained when a delivery company has the ability to discuss possible changes in time windows with its customers. By tuning the time windows of customers located close to each other, the delivery company can reduce transportation costs. To solve this new variant of the Vehicle Routing Problem with Time Windows, a Two-Phase Heuristic is presented. A set of new benchmark instances is used to test this solution approach.
- A buffering-strategy-based solution method for the dynamic pickup and delivery problem with time windows
Farzaneh Karami (KU Leuven) Co-authors: Wim Vancroonenburg, Greet Vanden Berghe Abstract: This paper introduces an optimization approach for the dynamic pickup and delivery problem with time windows (DPDPTW) which first buffers arriving requests and then employs a cheapest insertion procedure and a local search algorithm for scheduling. The proposed approach's solutions are assessed against the best solutions found by a Mixed Integer Linear Programming model. Computational results indicate how the solution costs vary with two characteristics of the DPDPTW: dynamism and urgency. In addition, the solution quality is only slightly affected by changing these characteristics.
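As a rough illustration of the buffering-plus-insertion scheme (our own toy sketch, not the authors' implementation): requests are collected in a buffer and each is then placed at the cheapest position in the route. For simplicity the sketch uses a single vehicle and plain delivery stops, whereas real DPDPTW requests are paired pickup-and-delivery stops with time windows.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion(route, stop):
    """Insert `stop` at the position that adds the least travel distance."""
    best_pos, best_added = 1, float("inf")
    for i in range(1, len(route)):
        added = (dist(route[i - 1], stop) + dist(stop, route[i])
                 - dist(route[i - 1], route[i]))
        if added < best_added:
            best_pos, best_added = i, added
    return route[:best_pos] + [stop] + route[best_pos:]

depot = (0.0, 0.0)
route = [depot, depot]                          # empty tour: depot -> depot
buffer = [(1.0, 0.0), (2.0, 1.0), (1.0, 1.0)]   # buffered requests
for stop in buffer:                             # schedule the whole buffer
    route = cheapest_insertion(route, stop)
print(route)
```

A local search pass, as in the paper, would then try to improve the resulting route.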
Thursday 11:00 - 12:20 Emergency operations scheduling Room 130 - Chair: El-Houssaine Aghezzaf
- Input data quality assessment in the context of emergency department simulations
Lien Vanbrabant (Hasselt University) Co-authors: Niels Martin, Katrien Ramaekers, Kris Braekers Abstract: Simulation techniques are frequently used to analyse and optimise emergency department operations. The basis of any simulation study is the construction of a realistic simulation model, and accurate data on the real system are a prerequisite in this regard. A comprehensive data quality framework and accompanying quality assessment techniques are provided, with a focus on data quality problems present in electronic health record data used as input for emergency department simulations. Furthermore, a prototypical implementation to automate data quality assessment is developed.
- Expanding the SIMEDIS simulator for studying the medical response in disaster scenarios
Selma Koghee (Royal Military Academy) Co-authors: F. Van Utterbeeck, M. Debacker, I. Hubloue, E. Dhondt Abstract: We present a re-created and extended version of the SIMEDIS simulation model, used to study the medical response in disaster scenarios. The first scenario studied is an aeroplane crash at Zaventem airport. The new implementation was verified by comparing its results with those of the previous simulator. In addition, the added features allow for a more detailed understanding of the results and demonstrate the stability of the results and recommendations under random variations in the victims and in the times required for the actions.
- Prioritization between boarding patients and patients currently waiting or under treatment in the emergency department
Kim De Boeck (KU Leuven) Co-authors: Raïsa Carmen, Nico Vandaele Abstract: Needy boarding patients confront the physicians in an emergency department with a challenging task: prioritizing between boarding patients and patients currently under treatment in the emergency department. Using discrete-event simulation, three static control policies (first-come, first-served and always prioritizing either boarding patients or the other patients) and two dynamic control policies (using threshold values and accumulating priorities) are studied.
- Protecting operating room schedules against emergency surgeries: outlook on new stochastic optimization models
Mathieu Vandenberghe (Ghent University) Co-authors: Stijn De Vuyst, El-Houssaine Aghezzaf, Herwig Bruneel Abstract: In today's crowded hospitals, emergency patients must often wait until an ongoing surgery is completed and a room frees up. We have previously focused on minimizing the worst case of this waiting time by spreading expected surgery completion times, taking their (often high) uncertainty into account.
We now present an extension where emergency arrivals are themselves modelled by a stochastic distribution. We present the current state of this variant, show its potential and propose solution methods.
Thursday 11:00 - 12:20 Algorithm design Room 126 - Chair: Gerrit Janssens
- Towards Algorithm Selection for the Generalised Assignment Problem
Hans Degroote (KU Leuven Kulak) Co-authors: Patrick De Causmaecker; José Luis González Velarde Abstract: In this abstract, the first steps towards an algorithm selection system for the generalised assignment problem (GAP) are made. Algorithm selection consists of selecting the best algorithm for each instance individually, instead of learning which algorithm is best on average, over all instances. General issues and challenges that arise when applying algorithm selection to an OR problem are discussed. Preliminary experiments show that algorithm selection has potential for the GAP.
- Searching the Design Space of Adaptive Evolutionary Algorithms
Ayman Srour (KU Leuven) Co-authors: Patrick De Causmaecker Abstract: We present a Grammatical Evolution framework for the automatic design of Adaptive Evolutionary Algorithms. The grammar adopted by this framework can generate novel adaptive parameter control strategies, aiming to evolve the design of evolutionary algorithms. Results show that the proposed framework is capable of generating new adaptive evolutionary algorithms, and confirm that automatically designed adaptive evolutionary algorithms can outperform the standard evolutionary algorithm.
- Analysis of algorithm components and parameters using surrogate modelling: some case studies
Nguyen Dang (KU Leuven KULAK) Co-authors: Patrick De Causmaecker Abstract: Modern well-performing optimisation algorithms are usually highly parametrised. Algorithm developers are often interested in how different algorithm components and parameters influence algorithm performance. We focus on a group of recently proposed algorithm parameter analysis techniques that utilise surrogate modelling, namely forward selection, fANOVA and ablation analysis with surrogates. The question when applying these methods is what kind of information we want to gain for a deeper understanding of how the analysed algorithm works. It is not always straightforward for an algorithm developer to interpret analysis results and decide what to use for improving their algorithm. In this work, we illustrate the application of these methods on a number of case studies in optimisation algorithms and discuss the advantages of combining them for thorough insights into the algorithms under study.
- Automatic configuration of metaheuristics for the Q3AP
Imène Ait Abderrahim (University of Oran 1 Ahmed Benbella, Algeria) Co-authors: Lakhdar Loukil, Thomas Stützle Abstract: In this paper, we build on previous research results on the Q3AP (Quadratic Three-dimensional Assignment Problem) and develop a new hybrid method for the Q3AP using automated design ideas. In particular, we focus on an iterated local search algorithm that exploits a tabu search as the local search for solving the Q3AP (ILS-TS). We configure the ILS-TS algorithm using the irace (Iterated Race for Automatic Algorithm Configuration) configuration tool and evaluate the performance improvements obtained.
Thursday 11:00 - 12:20 Multiple Objectives Room 120 - Chair: Filip Van Utterbeeck
- Exact solution methods for the bi-objective \{0, 1\}-quadratic knapsack problem
Jean-philippe Hubinont (ULB) Co-authors: José Rui Figueira, Yves De Smet Abstract: The binary quadratic knapsack problem has been intensively studied over the last decades. Although most real-world problems are multi-objective in nature, there is no considerable body of research on multi-objective binary quadratic knapsack problems, even in the case where only two objective functions are considered. The purpose of this paper is twofold: first, to give some insights into the bi-objective quadratic knapsack problem in order to solve it faster, by building adequate heuristics and bounds to be used in approximation methods; and, second, to study a new linearization that involves fewer constraints than the existing ones. This paper thus addresses the bi-objective binary quadratic knapsack problem and proposes new algorithms, which allow for generating the whole set of exact nondominated points in the objective space and one corresponding efficient solution in the decision space. Moreover, this paper proposes a new linearization of the binary quadratic knapsack problem, which appears, on average, to be faster than the existing ones for solving the single- and bi-objective quadratic knapsack problem. Computational results are presented for several types of instances, which differ according to the number of items or variables and the type of correlation between the coefficients of the two objective functions. The paper thus presents a comparison, in terms of CPU time, of four linearizations over several sets of 30 instances of the same type and size.
- Analyzing objective interaction in lexicographic optimization
Federico Mosquera (KU Leuven) Co-authors: Pieter Smet, Greet Vanden Berghe Abstract: Despite the multitude of optimization methods proposed throughout operational research literature, such approaches can be difficult to grasp for decision-makers who lack a technical background. Solving a multi-objective optimization problem requires expertise from the decision-makers to manage complex objectives which may interact unintuitively. This study focuses on improving lexicographic optimization, a type of approach which has the advantage of being easily implemented by decision-makers given that its only requirement is the hierarchical arrangement of objectives in terms of their relative importance.
A pure lexicographic strategy optimizes each objective sequentially, in order of importance, without deteriorating previous objectives. Algorithmic performance is, however, often hindered when a pure lexicographic optimization strategy is employed. Typically, heuristic methods quickly converge to a local optimum which, depending on the specifics of the solution space, results in poor solutions. Employing integer programming solvers may also prove difficult as lexicographic optimization is not implemented natively and often requires solving the integer programming problem several times. This study overcomes these difficulties by first analyzing objective interactions and categorizing the various objectives accordingly. This information is subsequently used to develop a new methodology which seeks to improve search algorithms for solving lexicographic optimization problems.
A real-world case study concerning home care scheduling demonstrates how the proposed methodology improves pure lexicographic optimization. Home care scheduling not only involves the scheduling of caregivers, but also their assignment and routing, both of which are necessary to deliver the required services to different clients. Among the objectives requiring optimization are task frequency, travel time, caregiver/client preference satisfaction and the weekly spreading of tasks. Despite the complexity of these objectives, lexicographic optimization offers a user-friendly approach but, as demonstrated by way of a series of computational experiments, may result in poor quality solutions. Computational results demonstrate how the new methodology based on inter-objective relations results in better solutions.
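The pure lexicographic strategy the study builds on can be sketched generically (a minimal illustration over an explicit candidate set, not the authors' algorithm): optimize each objective in order of importance, keeping only solutions that do not deteriorate the levels already achieved.

```python
def lexicographic_optimize(candidates, objectives):
    """Minimize the objectives in order of importance: at each stage,
    keep only the candidates achieving the best value so far."""
    pool = list(candidates)
    for obj in objectives:                  # most important objective first
        best = min(obj(c) for c in pool)
        pool = [c for c in pool if obj(c) == best]
    return pool[0]

# Hypothetical solutions: (unserved tasks, travel time, preference violations)
solutions = [(0, 130, 2), (0, 120, 5), (1, 90, 0), (0, 120, 3)]
objectives = [lambda s: s[0], lambda s: s[1], lambda s: s[2]]
print(lexicographic_optimize(solutions, objectives))  # → (0, 120, 3)
```

In a heuristic or MIP setting each stage is itself a full optimization run, which is exactly where the convergence difficulties arise.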
- Military manpower planning through a career path approach
Oussama Mazari Abdessameud (VUB) Co-authors: Filip Van Utterbeeck, Johan Van Kerckhoven, Marie-Anne Guerry Abstract: Military manpower planning aims to match the required and the available staff. It involves two linked aspects: statutory logic and competence logic. The statutory logic aims to estimate the workforce evolution at the strategic level, whereas the competence logic targets the assignment of the right person to a suitable position. These two aspects are interdependent; therefore, this article presents a technique to combine both logics in the same integrated model using career paths.
- Optimization of Emergency Service Center Locations Using an Iterative Solution Approach
Mumtaz Karatas (National Defense University, Turkish Naval Academy) Co-authors: Ertan Yakici Abstract: In this study, we consider the problem of determining the optimal number and locations of temporary emergency service centers for a regional natural gas distribution company in Turkey. For this purpose, we formulate the problem as a multi-objective optimization model and solve it via an iterative solution technique which is based on the combination of the branch & bound and iterative goal programming methods.
Thursday 13:30 - 14:50 Integrated logistics Room 138 - Chair: Kris Braekers
- Vertical supply chain integration - Is routing more important than inventory management?
Florian Arnold (University of Antwerp) Co-authors: Johan Springael, Kenneth Sörensen Abstract: Consider the following problem. A logistic service provider owns multiple depots that are located in different regions of a country. The company is responsible for delivering one or several products from the depots to fulfill the demand of customers. From historical data the company has information about the demand density per region. The company takes care of the inventory management in its depots as well as the distribution processes from the depots to the customers. In this paper we investigate how the total costs of both logistic activities can be minimized.
- Stochastic solutions for the Inventory-Routing Problem with Transshipment
Wouter Lefever (Ghent University) Co-authors: El-Houssaine Aghezzaf, Khaled Hadj-Hamou Abstract: In this work, we study the Inventory-Routing Problem with Transshipment (IRPT), a more flexible variant of the IRP, with stochastic demand. Because the stochastic program is hard to solve, we (1) propose improvements to the second-stage problem, (2) investigate the upgradeability of the first-stage solution and (3) introduce multiple replications with different scenario sets. The experimental results confirm the effectiveness of the presented sample average approximation algorithm.
- Solving an integrated order picking-vehicle routing problem with record-to-record travel
Stef Moons (UHasselt - Hasselt University) Co-authors: Kris Braekers, Katrien Ramaekers, An Caris, Yasemin Arda Abstract: In an integrated order picking-vehicle routing problem (I-OP-VRP), an order picking problem and a VRP are solved simultaneously by taking into account the requirements and constraints of both problems. Implementing an integrated approach to obtain simultaneously order picking schedules and vehicle routes can lead to better results. A record-to-record travel algorithm with local search operators is developed to solve the I-OP-VRP in a small amount of computational time.
- The simultaneous vehicle routing and service network design problem in intermodal rail transportation
Hilde Heggen (Universiteit Hasselt) Co-authors: An Caris, Kris Braekers Abstract: In this research, we aim to integrate operational aspects into tactical decisions on the service network design from the perspective of an intermodal transport operator who has to route orders throughout its intermodal truck-rail transport network. Vehicle routing of local drayage operations and more detailed capacity requirements are included. A real-life case study is investigated in which direct, scheduled and cyclical train services are available between two large-volume freight terminals.
Thursday 13:30 - 14:50 Person transportation Room 130 - Chair: Célia Paquay
- Local evaluation techniques in bus line planning
Evert Vermeir (KU Leuven) Co-authors: Pieter Vansteenwegen Abstract: In the bus line planning problem, evaluating the quality of service carries a large computational cost. Typically, this involves calculating the shortest path from all origins to all destinations. In order to create a high quality line plan for (very) large networks, we developed an algorithm that uses local evaluation techniques to evaluate changes to a bus line plan. These local evaluation techniques are based on the proximity to the change, in terms of both distance and number of transfers.
- Integrating dial-a-ride services and public transport
Yves Molenbruch (Hasselt University) Co-authors: Kris Braekers Abstract: Dial-a-ride systems provide demand-responsive, collective transport to people with reduced mobility, such as the elderly or disabled. Modern mobility policies also rely on these services to replace traditional public transport in rural areas, such that a user's trip may involve a combination of both systems. This research develops a routing algorithm that allows dial-a-ride providers to determine itineraries and transfer locations such that operational costs are minimized, given the user requests and the public transport timetables.
- Benchmarks for the prisoner transportation problem
Jan Christiaens (KU Leuven) Co-authors: Tony Wauters, Greet Vanden Berghe Abstract: This paper formally introduces a new problem to the operational research academic community: the Prisoner Transportation Problem (PTP). This problem essentially concerns transporting prisoners to and from services such as hospitals, court and family visits. Depots are located across a geographic region, each of which is associated with several special prisoner transportation trucks which themselves are subdivided into compartments prisoners may share with others. The PTP constitutes a variant of the commonplace vehicle routing problem, more specifically of the dial-a-ride problem. Two unique constraints make it a particularly interesting problem from a research perspective: (i) simultaneous service times and (ii) inter-prisoner conflicts. Simultaneous service times mean that prisoners may be processed simultaneously rather than sequentially, as is standard throughout most vehicle routing and delivery problems. Meanwhile, inter-prisoner conflicts mean that each individual prisoner may be permitted to share a compartment with another prisoner, unable to share the same compartment, or unable even to share the same vehicle. The inclusion of this additional constraint results in a very computationally challenging problem. The algorithm developed and employed to solve this problem does not represent this work's primary contribution. Rather, the much more fundamental task of introducing and modelling this problem's properties and unique constraints constitutes the primary research interest. Given that the problem has never been formalised by the academic community, it continues to be solved primarily by manual planners who spend multiple hours working on a solution for the following day. These solutions are often infeasible, violating hard constraints.
The ruin & recreate algorithm proposed by the current paper significantly improves upon these manually constructed solutions and requires only approximately ninety seconds of runtime. Computational results will be presented and analysed in detail. Furthermore, the academic instances which were generated and employed for experimentation will be made publicly available with a view towards encouraging further follow-up research.
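The ruin & recreate principle itself is simple to sketch. The following toy (our illustration on a plain travelling-salesman tour, without the PTP's compartments or conflict constraints) repeatedly removes a few stops and re-inserts them at their cheapest positions, keeping improvements.

```python
import math
import random

def tour_length(route):
    return sum(math.dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def ruin(route, k, rng):
    """Remove k randomly chosen interior stops from the route."""
    removed_idx = rng.sample(range(1, len(route) - 1), k)
    removed = [route[i] for i in removed_idx]
    kept = [p for i, p in enumerate(route) if i not in removed_idx]
    return kept, removed

def recreate(route, removed):
    """Re-insert each removed stop at its cheapest position."""
    for stop in removed:
        best_i = min(range(1, len(route)),
                     key=lambda i: math.dist(route[i - 1], stop)
                                   + math.dist(stop, route[i])
                                   - math.dist(route[i - 1], route[i]))
        route = route[:best_i] + [stop] + route[best_i:]
    return route

rng = random.Random(0)
stops = [(rng.random(), rng.random()) for _ in range(10)]
route = [(0.0, 0.0)] + stops + [(0.0, 0.0)]     # initial tour from the depot
best = route
for _ in range(200):                            # ruin & recreate iterations
    kept, removed = ruin(best, 3, rng)
    candidate = recreate(kept, removed)
    if tour_length(candidate) < tour_length(best):
        best = candidate
print(round(tour_length(best), 3))
```

The paper's algorithm works on the much richer PTP solution representation, but follows this same remove-and-reinsert loop.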
- Optimizing city-wide vehicle allocation for car sharing
Toni Wickert (KU Leuven) Co-authors: Federico Mosquera, Pieter Smet, Emmanouil Thanos Abstract: Car sharing is a form of shared mobility in which different users share a fleet of vehicles rather than all using their own vehicle.
Users make requests to reserve these shared vehicles and only pay for the time of use or distance driven.
Car sharing stations, where the vehicles are parked when they are not used, must be strategically located within a city such that as many users as possible can be serviced by the available fleet of vehicles.
Typically, these stations are also close to train or bus stations to facilitate combination with public transport.
This study proposes a decision support model for optimizing the allocation of car sharing stations within a city.
The goal is to distribute the fleet of available vehicles among zones such that as many user requests as possible can be assigned to a specific vehicle.
The optimization problem to be solved is thus a combination of two assignment problems: the assignment of vehicles to zones and the assignment of user requests to vehicles.
A request may be feasibly assigned to a vehicle if that vehicle is located in the request's home zone or, less preferably, in an adjacent zone.
Requests which overlap in time cannot be assigned to the same vehicle.
The objective is to minimize the total cost incurred by leaving requests unassigned and by assigning requests to vehicles in adjacent zones.
Computational experiments on both real-world data and computer-generated problem instances showed that medium-sized and large problems cannot be solved using integer programming.
To address such challenging problem instances, a heuristic decomposition approach is introduced.
The proposed algorithm is inspired by the successful application of this technique on various optimization problems such as the traveling umpire problem, nurse rostering, project scheduling and capacitated vehicle routing.
The heuristic decomposition algorithm starts by generating an initial feasible solution.
Then, iteratively, a subset of decision variables is fixed to their current values, and the resulting subproblem is solved to optimality using a general purpose integer programming solver.
Three types of decomposition are investigated: 1) fixing a subset of vehicles to zones, 2) fixing a subset of requests to vehicles and 3) fixing requests from a subset of days to vehicles.
A computational study showed that the proposed method outperforms both an integer programming solver and a simulated annealing metaheuristic.
Details regarding the heuristic decomposition implementation and computational results will be presented at the conference.
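Generically, such a fix-and-optimize decomposition alternates between fixing most variables and exactly re-optimizing a small free subset. The sketch below is our own toy version of the second decomposition type (freeing a subset of request-to-vehicle assignments), with exhaustive enumeration standing in for the integer programming solver; all data are hypothetical.

```python
import itertools
import random

def solve_subproblem(assignment, free, cost, conflicts, vehicles):
    """Re-optimize the free requests exactly (stand-in for a MIP solver)."""
    best, best_cost = None, float("inf")
    for combo in itertools.product(vehicles, repeat=len(free)):
        trial = dict(assignment)
        trial.update(zip(free, combo))
        if any(trial[a] == trial[b] for a, b in conflicts):
            continue                        # overlapping requests, same vehicle
        c = sum(cost[r][v] for r, v in trial.items())
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost

rng = random.Random(1)
requests, vehicles = list(range(8)), [0, 1, 2]
cost = [[rng.randint(1, 9) for _ in vehicles] for _ in requests]
conflicts = [(0, 1), (2, 3), (4, 5)]        # time-overlapping request pairs
assignment = {r: r % 3 for r in requests}   # initial feasible solution
for _ in range(20):                         # fix-and-optimize iterations
    free = rng.sample(requests, 3)          # un-fix a small random subset
    assignment, total = solve_subproblem(assignment, free, cost,
                                         conflicts, vehicles)
print(total)
```

Because the current assignment always remains a candidate for the subproblem, the total cost never increases from one iteration to the next.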
Thursday 13:30 - 14:50 Continuous models Room 126 - Chair: Nicolas Gillis
- On computing the distances to stability for matrices
Punit Sharma (University of Mons) Co-authors: Nicolas Gillis Abstract: The stability of a continuous linear time-invariant (LTI) system dx/dt = Ax + Bu, where A is a real n x n matrix and B is a real n x m matrix, depends solely on the eigenvalues of A. Such a system is stable if all eigenvalues of A are in the closed left half of the complex plane and all eigenvalues on the imaginary axis are semisimple. It is important to know how much an unstable LTI system has to be perturbed to become stable, i.e. to have all its eigenvalues in the stability region or on its boundary. For control systems, this distance to stability is not yet well understood. It is the converse of the distance to instability problem, where a stable matrix A is given and one looks for the smallest perturbation that moves an eigenvalue outside the stability region.
In this talk, I will talk about the distance to stability problem for LTI control systems. Motivated by the structure of dissipative-Hamiltonian systems, we define the DH matrix: a matrix A of size nxn is said to be a DH matrix if A=(J-R)Q for some real nxn matrices J, R, Q such that J is skew-symmetric, R is symmetric positive semidefinite and Q is symmetric positive definite. We will show that a system is stable if and only if its state matrix is a DH matrix. This results in an equivalent optimization problem with a simple convex feasible set. We propose a new very efficient algorithm to solve this problem using a fast gradient method. We show the effectiveness of this method compared to other approaches such as the block coordinate descent method and to several state-of-the-art algorithms.
For more details, this work has been published as [1].
[1] N. Gillis and P. Sharma, On computing the distance to stability for matrices using linear dissipative Hamiltonian systems. Automatica, 85, pp. 113-121, 2017.
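In the notation of [1], the DH characterization turns the nearest-stable-matrix computation into an optimization problem over a convex feasible set; schematically:

```latex
% A real matrix is stable iff it admits a dissipative-Hamiltonian form
%   A = (J - R)Q, \quad J = -J^{T}, \quad R \succeq 0, \quad Q \succ 0,
% so the distance to stability for a given unstable A is
\inf_{J,\, R,\, Q} \ \left\| A - (J - R)Q \right\|_F^2
\quad \text{s.t.} \quad J = -J^{T},\ \ R \succeq 0,\ \ Q \succ 0 .
```

The fast gradient method mentioned above is applied to this smooth formulation.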
- Low-Rank Matrix Approximation in the Infinity Norm
Nicolas Gillis (University of Mons) Co-authors: Yaroslav Shitov Abstract: Given a matrix M and a factorization rank r, low-rank matrix approximation with respect to the entry-wise infinity norm requires finding a matrix X whose rank is at most r and that minimizes $\max_{i,j} |M_{ij} - X_{ij}|$. In this paper, we prove that the decision variant of this problem for r=1 is NP-complete. We also analyze several cases in which the problem can be solved in polynomial time, and propose a heuristic algorithm which we use for the recovery of a quantized low-rank matrix.
- Benchmarking some iterative linear systems solvers for deformable 3D images registration
Justin Buhendwa Nyenyezi (Université de Namur) Abstract: In medical image analysis, it is common to analyse several images acquired at different times or by different devices. Image registration is the process of finding a spatial transformation that allows these images to be aligned in a common spatial domain and correspondences to be established between them. Non-rigid registration, which allows local deformations in the image, requires the solution of large linear systems. In this study, we are especially interested in benchmarking (Moré and Wild, 2009; Gould and Scott, 2017) iterative system solvers on a large set of 3D medical images. These solvers are all based on the Conjugate Gradient method but differ in their preconditioning techniques.
Preliminary results confirm that there is no single system solver that is best for all problems. However, the results indicate which solver has the highest probability of being the best within a factor f > 1, given a limited computational budget in terms of time and storage requirements.
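The "probability of being the best within a factor f" is the standard performance-profile measure of Dolan and Moré; a minimal sketch with hypothetical solver timings (the solver names are illustrative only):

```python
def performance_profile(times, solver, f):
    """Fraction of problems on which `solver` is within a factor f of the
    fastest solver (a Dolan-Moré performance profile evaluated at f)."""
    hits = sum(1 for per_problem in times
               if per_problem[solver] <= f * min(per_problem.values()))
    return hits / len(times)

# Hypothetical CPU times (seconds) for three preconditioned CG variants
times = [
    {"jacobi": 2.0, "ilu": 1.0, "amg": 1.5},
    {"jacobi": 4.0, "ilu": 5.0, "amg": 2.0},
    {"jacobi": 1.0, "ilu": 1.1, "amg": 3.0},
]
print(performance_profile(times, "ilu", 1.5))   # best within 1.5x on 2 of 3
```

Plotting this fraction as a function of f for each solver gives the usual benchmarking picture over the whole problem set.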
- Spectral Unmixing with Multiple Dictionaries
Jérémy Cohen (UMONS/FNRS) Co-authors: Nicolas Gillis Abstract: Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select areas where pure pixels are present, either manually or using a segmentation algorithm. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number of endmembers (or an upper or lower bound on it) to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
Thursday 13:30 - 14:50 Integer programming Room 120 - Chair: Bernard Fortz
- The maximum covering cycle problem
Wim Vancroonenburg (KU Leuven - FWO) Co-authors: Andrea Grosso, Fabio Salassa Abstract: The maximum covering cycle problem (MCCP) deals with finding a simple cycle in an undirected graph such that the number of vertices that are in the cycle or adjacent to it is maximal. The MCCP is formulated as an ILP model and it is shown that the MCCP is NP-hard. An iterative constraint generation procedure using a relaxed ILP model and heuristics is presented. Computational results on a diverse set of instances are presented, highlighting the effectiveness of the heuristics.
- Linear and quadratic reformulation techniques for nonlinear 0-1 optimization problems
Elisabeth Rodriguez Heck (HEC Liège - Management School of the University of Liège) Co-authors: Yves Crama Abstract: We are interested in the problem of nonlinear unconstrained optimization in binary variables. A common approach to solve a problem of this type consists in defining first a linear or a quadratic reformulation of the objective function and then using linear or quadratic integer programming techniques to optimize the reformulation. We focus on methodological aspects of these reformulations and present several theoretical and experimental results including a comparison of the computational performance of the different reformulation methods.
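As one concrete example of such a reformulation (the classical "standard linearization", which may or may not be among the variants compared in this talk), every product of binary variables is replaced by a fresh variable constrained to equal it at binary points:

```latex
% Replace y = x_1 x_2 \cdots x_k with x_i \in \{0,1\} by 0 \le y \le 1 and
y \le x_i \quad (i = 1, \dots, k), \qquad
y \ge \sum_{i=1}^{k} x_i - (k - 1).
% At any binary point these linear constraints force y = x_1 x_2 \cdots x_k.
```

Quadratic reformulations instead reduce higher-degree terms to degree two before handing the problem to a quadratic integer programming solver.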
- Unit Commitment under Market Equilibrium Constraints
Jérôme De Boeck (Université Libre de Bruxelles) Co-authors: Luce Brotcorne, Fabio D'Andreagiovanni, Bernard Fortz Abstract: The classical Unit Commitment problem (UC) can be essentially described as the
problem of establishing the energy output of a set of generation units over a
time horizon, in order to satisfy a demand for energy, while minimizing the
cost of generation and respecting technological restrictions of the units
(e.g., minimum on/off times, ramp up/down constraints). The UC is typically
modelled as a (large scale) mixed integer program and its deterministic
version, namely the version not considering the presence of uncertain data, has
been the object of extensive theoretical and applied study over the years.
Traditional (deterministic) models for the UC assume that the net demand for
each period is perfectly known in advance, or in more recent and more realistic
approaches, that a set of possible demand scenarios is known (leading to
stochastic or robust optimization problems).
However, in practice, the demand is dictated by the amounts that can be sold by
the producer at given prices on the day-ahead market. One difficulty
therefore arises if the optimal production dictated by the optimal solution to
the UC problem cannot be sold at the producer's desired price on the market,
leading to a possible loss. Another strategy could be to bid for additional
quantities at a higher price to increase profit, but that could lead to
infeasibilities in the production plan.
Our aim is to model and solve the UC problem with a second level of decisions
ensuring that the produced quantities are cleared at market equilibrium. In
their simplest form, market equilibrium constraints are equivalent to the
first-order optimality conditions of a linear program. The UC, in contrast,
is usually a mixed-integer nonlinear program (MINLP) that is linearized and
solved with traditional Mixed Integer (linear) Programming (MIP) solvers.
Taking a similar approach, we are faced with a bilevel optimization problem
where the first level is a MIP and the second level is a linear program.
In this talk, as a first approach to the problem, we assume that demand curves
and offers of competitors in the market are known to the operator. This is a
very strong and unrealistic hypothesis, but necessary to develop a first model.
Following the classical approach for these models, we present the
transformation of the problem into a single-level program by rewriting and
linearizing the first-order optimality conditions of the second level. Then we
present some preliminary results on the performance of MIP solvers on this
model. Our future research will focus on strengthening the model using its
structure to include new valid inequalities or to propose alternative extended
formulations, and then study a stochastic version of the problem where demand
curves are uncertain.
Thursday 15:20 - 16:20 Material handling and warehousing 1 Room 138 - Chair: Greet Vanden Berghe
- A decade of academic research on warehouse order picking: trends and challenges
Babiche Aerts (University of Antwerp) Co-authors: T. Cornelissens, K. Sorensen Abstract: Warehouses play a key role in supply chain operations. Due to recent trends in, e.g., e-commerce, warehouse operational performance is exposed to new challenges, such as the need for faster and reliable delivery of small orders. Order picking, defined as the process of retrieving stock keeping units from inventory to fulfil a specific customer request, is seen as the most labor-intensive activity in a warehouse and is therefore considered an interesting area of improvement for dealing with the aforementioned challenges. A literature review by de Koster et al. offers an overview of order picking methods documented in academic literature up until 2007. We take this publication as a starting point and review developments in order picking systems that have been researched in the past ten years. The aim of our presentation is to give an overview of the current state-of-the-art models and to identify trends and promising research directions in the field of order picking.
- Iterated local search algorithm for solving operational workload imbalances in order picking
Sarah Vanheusden (Hasselt University, research group logistics) Abstract: Trends such as the upcoming e-commerce market, shortened product life cycles, and greater product variety, expose order picking activities to new challenges. Order pickers need to fulfil many small orders for a large variety of stock keeping units due to a changed order behaviour of customers. Furthermore, warehouses accept late orders from customers to stay competitive and preserve high service levels. This results in extra difficulties for order picking operations, as more orders need to be picked and sorted in shorter and more flexible time windows. Warehouse managers therefore experience difficulties in balancing the workload of the order pickers on a daily basis, resulting in peaks of workload during the day.
Balancing the workload in an order picking system can be addressed from several perspectives. Instead of putting the focus on strategic or tactical solutions, the emphasis of this study is on the operational level, in order to avoid peaks in the number of picked order lines every hour of the day. The objective function aims to minimise the variance of planned order lines over all time slots, for each pick zone. Solving instances of realistic size to optimality in a reasonable amount of computation time does not seem feasible due to the complex nature of the operational workload imbalance problem (OWIP). The objective of this study is the development of an iterated local search algorithm to solve OWIP in a parallel zoned manual order picking system.
- Scheduling container transportation with capacitated vehicles through conflict-free trajectories: A local search approach
Emmanouil Thanos (KU Leuven) Co-authors: Tony Wauters, Greet Vanden Berghe Abstract: This study proposes an integrated model and a Local Search (LS) algorithm for real-time scheduling of transportation requests in a container terminal. The terminal's detailed layout is modeled as a network graph, including stacks of container boxes with different physical properties. Capacitated vehicles are integrated alongside their operations. Various real-world movement restrictions and conflict rules are incorporated, in addition to precedence constraints on requests and stacking constraints on container boxes.
Thursday 15:20 - 16:20 Operations management Room 130 - Chair: Roel Leus
- The Robust Machine Availability Problem
Guopeng Song (KU Leuven) Co-authors: Daniel Kowalczyk, Roel Leus Abstract: We define and solve the robust machine availability problem in a parallel-machine environment, which aims to minimize the required number of identical machines that still allows all jobs to be completed before a given due date. A branch-and-price procedure is proposed based on a set covering formulation, with the robust counterpart formulated for the pricing problem. Zero-suppressed binary decision diagrams (ZDDs) are introduced for solving the pricing problem.
- Optimizing line feeding under consideration of variable space constraints
Nico André Schmid (Ghent University) Co-authors: Veronique Limère Abstract: Feeding assembly lines with an increasing number of parts is a challenging and important task. We propose a model to support decision making among different part feeding options, minimizing overall costs. The options differ in their costs and space requirements. By allowing stations to borrow space from one another, we relax the space constraints and show the effects of space borrowing in comparison to increasing space. Furthermore, we analyze the reasons for space borrowing and the selection of part feeding options.
- Scheduling Markovian PERT networks to maximize the net present value: New results
Ben Hermans (KU Leuven) Co-authors: Roel Leus Abstract: We study the problem of scheduling a project so as to maximize its expected net present value when task durations are exponentially distributed. Based on the structural properties of an optimal solution, we show that, even if preemption is allowed, it is never necessary to preempt. Next to its managerial importance, this result also allows for a new algorithm which improves on the current state of the art by several orders of magnitude.
Thursday 15:20 - 16:20 Matrix factorization Room 126 - Chair: Pierre Kunsch
- Log-determinant Non-Negative Matrix Factorization via Successive Trace Approximation
Man Shun Ang (Université de Mons) Co-authors: Nicolas Gillis Abstract: Non-negative matrix factorization (NMF) is the problem of approximating a nonnegative matrix X as the product of two smaller nonnegative matrices W and H so that X ≈ WH. In this talk, we consider a regularized variant of NMF with a log-determinant (logdet) term on the Gramian of the matrix W. This term acts as a volume regularizer: the minimization problem aims at finding a solution matrix W with low fitting error and such that the convex hull spanned by the columns of W has minimum volume. The logdet of the Gramian of W makes the columns of W interact in the optimization problem, making such a logdet-regularized NMF problem difficult to solve. We propose a method called successive trace approximation (STA). Based on a logdet-trace inequality, STA replaces the logdet regularizer by a parametric trace functional that decouples the columns of W. This allows us to transform the problem into a vector-wise non-negative quadratic program that can be solved effectively with dedicated methods. We show on synthetic and real data sets that STA outperforms state-of-the-art algorithms.
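A logdet-trace inequality of the kind STA builds on follows from log x ≤ x − 1 applied to the eigenvalues of the (regularized) Gramian, giving logdet(G) ≤ trace(G) − r. A minimal numpy check of this bound, with hypothetical dimensions chosen purely for illustration (not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, delta = 20, 4, 1.0
W = rng.random((m, r))            # nonnegative factor, as in NMF
G = W.T @ W + delta * np.eye(r)   # regularized Gramian of W

sign, logdet = np.linalg.slogdet(G)
assert sign > 0                   # G is positive definite

# log(x) <= x - 1 on each eigenvalue of G gives logdet(G) <= trace(G) - r,
# the kind of trace surrogate that decouples the columns of W.
assert logdet <= np.trace(G) - r
```

Because the trace is a sum over columns of W, such a surrogate lets each column be updated independently, which is what makes the vector-wise quadratic subproblems tractable.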
- Audio Source Separation Using Nonnegative Matrix Factorization
Valentin Leplat (UMons) Co-authors: Nicolas Gillis, Xavier Siebert Abstract: Audio source separation concerns the techniques used to extract unknown signals, called sources, from a mixed signal.
In this talk, we assume that the audio signal is recorded with a single microphone. Considering a mixed signal composed of various audio sources, blind audio source separation consists in isolating and extracting each of the sources on the basis of the single recording. Usually, the only known information is the estimated number of sources present in the mixed signal.
- Orthogonal Joint Sparse NMF for 3D-Microarray Data Analysis
Flavia Esposito (Università degli Studi di Bari Aldo Moro, Italy) Co-authors: Nicolas Gillis, Nicoletta Del Buono Abstract: Microarray data analysis poses many challenges for understanding biological processes. New technologies encourage the use of 3D microarrays, which measure gene expression levels while introducing time evolution over samples (useful in pharmacogenomics studies). We propose a new model, namely Orthogonal Joint Sparse NMF (OJSNMF), to extract relevant information from 3D microarrays. We tested and compared our approach on both synthetic and real datasets.
Thursday 16:30 - 17:10 Material handling and warehousing 2 Room 138 - Chair: Katrien Ramaekers
- Intermodal Rail-Road Terminal Operations
Michiel Van Lancker (KU Leuven) Co-authors: Greet Vanden Berghe, Tony Wauters Abstract: An intermodal rail-road terminal (IRRT) is a logistics facility where freight units are transshipped between trains and trucks. The IRRT can be modeled as an operational system with several aspects that complicate its optimization, such as non-crossing constraints for its cranes, uncertain arrival of trucks or specific requirements for freight units, like electricity requirements. A model for IRRT operation optimization is presented.
- Reducing Picker Blocking in a Real-life Narrow-Aisle Spare Parts Warehouse
Teun Van Gils (Hasselt University) Co-authors: Katrien Ramaekers & An Caris Abstract: Upcoming e-commerce markets force warehouses to handle a large number of orders within short time windows. Narrow-aisle warehouse systems make it possible to store a large number of products in a small area. In manual order picking systems, narrow aisles can result in substantial wait times due to picker blocking compared to wide-aisle systems. The objective of this study is to analyse the joint effect of the two main operational order picking planning problems, storage location assignment and picker routing, on order picking time, including travel time and wait time due to picker blocking.
Thursday 16:30 - 17:10 Routing and local search Room 130 - Chair: An Caris
- An analysis of the destroy and repair process in large neighbourhood search applied to the vehicle routing problem with time windows
Jeroen Corstjens (Hasselt University) Co-authors: An Caris, Benoît Depaire Abstract: Heuristics are most commonly evaluated using benchmark problems, comparing performance results with other methods. The aim is to be better than the competition. An investigation focused on understanding a heuristic method is rarely performed. We analyse the performance difference between two configurations of a large neighbourhood search algorithm applied to instances of the VRPTW and are able to explain the performance gap after analysing the destroy and repair process.
- Multi-tool hole drilling with sequence dependent drilling times
Reginald Dewil (Katholieke Universiteit Leuven) Co-authors: Ilker Küçükoglu, Corrinne Luteyn, Dirk Cattrysse Abstract: We present two MIP formulations for the Multi-Tool Hole Drilling Problem with Sequence Dependent Drilling Times by modeling it as a PCGTSP. In addition, we identify three improvements in the tabu search algorithm of Kolahan & Liang (2000): (1) preprocessing rules to greatly reduce the problem size, (2) a more efficient tabu list implementation based on a hash function, and (3) for a problem with n holes and m tools, by using a slightly different solution representation, neighbor objective function value evaluations can be done in O(m) instead of O(n).
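The hash-based tabu list of improvement (2) can be sketched as follows. This is a generic illustration of the technique, not the authors' implementation; the class name, tenure handling, and tour encoding are our own assumptions:

```python
from collections import deque

class HashedTabuList:
    """Tabu list keyed by a hash of the tour, so membership checks are
    O(1) set lookups instead of comparisons against full hole sequences."""

    def __init__(self, tenure):
        self.tenure = tenure
        self.queue = deque()      # hashes in insertion order, for expiry
        self.active = set()       # hashes currently tabu

    @staticmethod
    def key(tour):
        return hash(tuple(tour))  # tour = sequence of hole indices

    def add(self, tour):
        k = self.key(tour)
        self.queue.append(k)
        self.active.add(k)
        if len(self.queue) > self.tenure:
            self.active.discard(self.queue.popleft())  # expire oldest

    def is_tabu(self, tour):
        return self.key(tour) in self.active
```

The point of the design is that the tabu test inside the neighbourhood loop no longer touches the tour itself, which matters when neighbour evaluations are already reduced to O(m).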
Thursday 16:30 - 17:10 Traffic management Room 126 - Chair: Joris Walraevens
- Giving Priority Access to Freight Vehicles at Signalized Intersections
Joris Walraevens (Ghent University) Co-authors: Tom Maertens, Sabine Wittevrongel Abstract: Deceleration and acceleration of freight vehicles have a large impact on pollution and on waiting times. We therefore analyze a smart signalized intersection where green periods can be extended when freight vehicles approach the intersection. In particular, we study the effect of the extension period on mean waiting times. We propose an easy approximate analysis leading to explicit expressions for these mean waiting times in terms of the model parameters. These can, for instance, be used to set up an optimization problem and solve for the optimal green period extension.
- Creating a dynamic impact zone for conflict prevention in real-time railway traffic management
Sofie Van Thielen (KU Leuven) Co-authors: Pieter Vansteenwegen Abstract: In real-time, dispatchers have the challenging task of performing railway traffic management to reduce delays by updating the timetable, by means of rescheduling or rerouting some of the trains. In order to support them in making decisions, a Traffic Management System (TMS) is required that is capable of predicting train movements, detecting conflicts and preventing them. A conflict occurs when two trains require the same part of the infrastructure at the same time. However, for dispatchers it is almost impossible to anticipate the impact of their actions on the entire network. This paper proposes a conflict prevention strategy (CPS) aimed at assisting dispatchers by considering only the relevant part of the network and the traffic when preventing conflicts. The CPS consists of a rerouting optimization and a retiming/reordering heuristic, and it is validated on a large study area of the Belgian network. High-quality results are obtained, using only a few seconds of calculation time.
Thursday 16:30 - 17:10 Pharmaceutical supply chains Room 120 - Chair: Bart Smeulders
- The inventory/capacity trade-off with respect to the quality processes in a Guaranteed Service Vaccine Supply Chain
Stef Lemmens (KU Leuven) Co-authors: Catherine Decouttere, Nico Vandaele, Mauro Bernuzzi, Amir Reichman Abstract: The manufacturing of vaccines is subject to meticulous quality control and quality assurance processes. Although the quality of each batch is inspected at multiple stages in the supply chain, for many batches additional laboratory tests need to be performed and the deviations of such tests need to be investigated. For a given responsiveness level, we demonstrate the inventory/capacity trade-off with respect to the investigation of deviations in the laboratory testing.
- Optimization tool for the drug manufacturing in the pharmaceutical industry
Charlotte Tannier (N-SIDE) Co-authors: Benoit David, Sebastien Coppe Abstract: N-SIDE jointly developed with a Top 10 pharmaceutical company a new optimization software (called CT-PRO) for improving R&D drug manufacturing and the end-to-end supply chain management of clinical trials. CT-PRO is a cutting-edge tool based on optimization algorithms (MIP) which addresses both risk management and cost minimization across all steps, from drug substance manufacturing to the finished product dispensed to patients.
CT-PRO takes into account all operational constraints of the pharmaceutical industry to ensure the very high service level required for patients, while reducing manufacturing costs by 5% to 30%.
Friday 10:50 - 12:10 Optimization in health care Room 138 - Chair: Jeroen Beliën
- Staff Scheduling at a Medical Emergency Service: a case study at Instituto Nacional de Emergência Médica
Hendrik Vermuyten (KU Leuven, Campus Brussels) Co-authors: Joana Namorado Rosa; Inês Marques; Jeroen Beliën; Ana Barbosa-Póvoa Abstract: This presentation addresses a real-life personnel scheduling problem at a medical emergency service. In a first step, the problem is formulated as an integer program. However, the problem turns out to be too complex for a commercial IP solver for realistic problem instances. For this reason, two heuristics are developed, namely a diving heuristic and a variable neighbourhood decomposition search (VNDS) heuristic. The diving heuristic reformulates the problem by decomposing on the staff members. The LP relaxation is then solved using column generation. After the column generation phase has finished, integer solutions are found by heuristically branching on all variables with a value above a certain threshold until the schedule has been fixed for each staff member. Additionally, two different column generation schemes are compared. The VNDS heuristic consists of a local search phase and a shake phase. The local search phase uses the principles of fix-and-optimise in combination with different neighbourhoods to iteratively improve the current solution. After a fixed number of iterations without improvement, the search moves to the shake phase in which the solution is randomly changed to allow the heuristic to escape local optima and to diversify the search.
The performance of both heuristics is tested on a real-life case study at Instituto Nacional de Emergência Médica (INEM) as well as nineteen additional problem instances of varying dimensions. Four of these instances are derived from the INEM dataset by changing one of the problem dimensions, while an instance generator was built to construct the fifteen other instances. Results show that the VNDS heuristic clearly outperforms both diving heuristic implementations and a state-of-the-art commercial IP solver. In only one hour of computation time, it finds good solutions with an average optimality gap of only 4.76 percent over all instances. The solutions proposed by the VNDS are then compared with the actual schedule implemented by INEM. To facilitate this comparison, the heuristic is implemented in a scheduling tool with an attractive graphical user interface. For six choices of objective function weights, which put different emphases on the relative importance of the various soft constraints, schedules are constructed by the VNDS. Each of the six schedules is compared with the actual schedule implemented by INEM and evaluated on eight key performance indicators (KPIs). All proposed schedules outperform the real schedule. Finally, two what-if scenarios are analysed to illustrate the impact of managerial decisions on the quality of the schedules and the resulting employee satisfaction.
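The VNDS scheme described above, fix-and-optimise local search over several neighbourhoods with a shake phase after stalling, follows a generic pattern. A simplified, self-contained sketch of that pattern (our own illustration on a toy objective, not the INEM scheduler):

```python
def vnds(initial, neighbourhoods, shake, cost, max_no_improve=50):
    """Variable neighbourhood decomposition search skeleton: cycle through
    fix-and-optimise neighbourhoods; shake from the incumbent when stuck."""
    best = current = initial
    no_improve = 0
    while no_improve < max_no_improve:
        improved = False
        for explore in neighbourhoods:       # local search phase
            candidate = explore(current)
            if cost(candidate) < cost(current):
                current, improved = candidate, True
        if cost(current) < cost(best):
            best, no_improve = current, 0    # new incumbent found
        elif not improved:
            no_improve += 1
            current = shake(best)            # shake phase: perturb incumbent
    return best

# Toy usage: minimize |x| with step neighbourhoods and a +1 shake.
nbhds = [lambda x: x - 1 if x > 0 else x, lambda x: x + 1 if x < 0 else x]
result = vnds(10, nbhds, lambda x: x + 1, abs, max_no_improve=5)
```

In the real problem the "neighbourhoods" would each fix most of the schedule and re-optimise the rest, but the control flow of local search, incumbent tracking and diversification is the same.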
- Home chemotherapy: optimizing the production and administration of drugs.
Véronique François (HEC Liège - Management School of the University of Liège) Co-authors: Yasemin Arda, Diego Cattaruzza, Maxime Ogier Abstract: We introduce an integrated production scheduling and vehicle routing problem in which chemotherapy drugs are produced at the hospital and then administered to patients at home before the drugs expire. The problem calls for the determination, for each patient, of the drug production start time and administration start time, as well as the assignment of these two tasks to a pharmacist and a nurse respectively, with the objective of minimizing the total working time of pharmacists and nurses.
- An analysis of short term objectives for dynamic patient admission scheduling
Yi-hang Zhu (KU Leuven) Co-authors: Túlio A. M. Toffolo, Wim Vancroonenburg, Greet Vanden Berghe Abstract: Periodic re-optimization is a common approach for handling dynamic scheduling problems. When applying this approach, a well designed short term objective is important for obtaining good quality long term solutions. This work investigates how different short term strategies impact long term solutions in the context of the dynamic patient admission scheduling problem. Experimental results suggest that our proposed approach provides better solutions than the best-known for the problem.
- Robust Kidney Exchange Programs
Bart Smeulders (HEC Liège - Management School of the University of Liège) Co-authors: Yves Crama, Frits C.R. Spieksma Abstract: In this presentation, we talk about a generalization of recourse mechanisms described in the literature on the robust kidney exchange problem. This generalization can theoretically increase the number of kidney transplants, but this comes at a cost of increased computational difficulty.
Friday 10:50 - 12:10 Network design Room 130 - Chair: Jean-Sébastien Tancrez
- Considering complex routes in the Express Shipment Service Network Design problem
José Miguel Quesada Pérez (Université catholique de Louvain) Co-authors: Jean-Charles Lange, Jean-Sébastien Tancrez Abstract: Defining a network of flights for the delivery of express packages is known as the Express Shipment Service Network Design problem. Authors addressing it usually include one-leg, two-leg and ferry routes. Assessing the value of more complex routes is an open question of practical value. We develop a model with five complex routes: routes bypassing hubs to deliver packages; routes connecting hubs; routes with relaxed due and release times; routes visiting two hubs; and routes transferring packages between planes. We use an extensive set of experiments to assess their contribution.
- Assessing Collaboration in Supply Chains
Thomas Hacardiaux (Université catholique de Louvain) Co-authors: Jean-Sébastien Tancrez Abstract: We propose a location-inventory model, formulated as a conic quadratic mixed integer program, to assess the benefit of horizontal cooperation, using the synergy value to compare the costs of stand-alone companies and horizontal partnerships. We conduct a large set of numerical experiments, varying several key parameters (vehicle capacity; opening, inventory and ordering costs; and demand variability). We also discuss the impact of the number of partners, of the ability to open new facilities and of the retailers’ locations.
- Planning of feeding station installation and battery sizing for an electric urban bus network
Virginie Lurkin (École polytechnique fédérale de Lausanne) Co-authors: A. Zanarini, S.S. Azadeh, Y. Maknoon, M. Bierlaire Abstract: During the last few decades, the environmental impact of the fossil fuel-based transportation infrastructure has led to renewed interest in electric transportation infrastructure, especially in the urban public mass-transportation sector. The deployment of battery-powered electric bus systems within the public transportation sector plays an important role in increasing energy efficiency and abating emissions. Increasing attention is given to bus systems using fast-charging technology. An efficient feeding station installation and an appropriate dimensioning of battery capacity are crucial to minimize the total cost of ownership for the citywide bus transportation network and to enable an energetically feasible bus operation. The complexity of the problem comes from the simultaneous decisions on the power capacity of the batteries in the buses and on the locations and types of feeding stations. A mixed-integer linear optimization model is developed to determine the cost-optimal feeding station installation for a bus network as well as the adequate battery capacity for each bus line of the network.
- A matheuristic for the problem of pre-positioning relief supplies
Renata Turkes (University of Antwerp) Co-authors: Kenneth Sörensen, Daniel Palhazi Cuervo Abstract: Every year millions of people around the world are affected by natural and man-made disasters. No country is immune from the risk of disasters, but much human loss can be avoided by preparing to better deal with these emergencies. Pre-positioning emergency supplies at strategic locations is a mechanism to increase preparedness for disasters, as it makes critical supplies such as water, food or medicine readily available to the people in need. To solve the pre-positioning problem is to develop a strategy that determines the number, location and size of storage facilities, the quantities of various types of emergency supplies stocked in each facility and the distribution of the supplies to demand locations after an event, under uncertainty about demands, survival of pre-positioned supplies and transportation network availability. Despite growing scientific attention for the pre-positioning problem, there still does not exist a set of benchmark instances, which hinders thorough hypothesis testing, sensitivity analyses, model validation or solution procedure evaluation. Researchers therefore have to invest a lot of time and effort to process raw historical data from several databases to generate a single case study that is nonetheless hardly sufficient to draw any meaningful conclusions. By carefully manipulating some of the instance parameters, we generated 30 diverse case studies inspired by 4 instances collected from the literature. In addition, we developed a tool to algorithmically generate arbitrarily many random instances of any size and with diverse characteristics. The case studies and the random instance generator will be made publicly available to hopefully foster further research on the problem of pre-positioning relief supplies.
Since the problem becomes intractable for instances of reasonable size, we developed a matheuristic that employs iterated local search techniques to look for good location and inventory configurations and uses CPLEX to find the optimal solution of the aid distribution sub-problem. Numerical experiments on the set of instances we generated show that the matheuristic outperforms CPLEX for any given computation time, especially for large instances.
Friday 10:50 - 12:10 Local search methodology Room 126 - Chair: Patrick De Causmaecker
- Easily Building Complex Neighbourhoods With the Cross-Product Combinator
Fabian Germeau (CETIC) Co-authors: Yoann Guyot, Gustavo Ospina, Renaud De Landtsheer, Christophe Ponsard Abstract: OscaR.cbls supports a domain-specific language for modelling local search procedures called combinators. In this language, local search neighbourhoods are objects and can be combined into other neighbourhoods. Such mechanisms provide a great improvement in expressiveness and development time, as targeted by the OscaR.cbls framework. This presentation will illustrate this and focus on two combinators: the cross product of neighbourhoods and the fix-point of a neighbourhood.
- Generic Support for Global Routing Constraint in Constraint-Based Local Search Frameworks
Renaud De Landtsheer (CETIC) Co-authors: Quentin Meurisse Abstract: This presentation describes generic support incorporated in OscaR.cbls for developing global constraints on routing problems. Global constraints are represented by a non-Abelian group (T,+,-), and the provided mechanics manage every technical aspect of the constraints.
- Formalizing neighbourhoods for local search using predicate logic
Tu San Pham (KU Leuven) Co-authors: Jo Devriendt, Patrick De Causmaecker Abstract: Local search heuristics are powerful approaches to solving difficult combinatorial optimization problems and are particularly suitable for large problems. However, local search approaches are often expressed in low-level concepts, which are not suitably formal to allow for modularity and reusability. Related to this issue, Swan et al. point out the need for a pure-functional description of local search and metaheuristics. Lastly, Hooker opined that “Since modelling is the master and computation the servant, no computational method should presume to have its own solver. [...] Exact methods should evolve gracefully into inexact and heuristic methods as the problem scales up.”
The above issues inspire our work on formalizing local search heuristics, which should lead to modular, reusable and machine-interpretable formalizations of local search heuristics.
A local search heuristic consists of three layers: the search strategy, the neighbourhood moves and the evaluation machinery. While the first and last layers are largely problem-independent, neighbourhood moves are usually problem-specific. Given a set of neighbourhood moves, several local search heuristics can be created for a single problem.
Small changes in problem requirements can also easily be taken into account without affecting most of the neighbourhoods and the general algorithms. Therefore, formalizing neighbourhood moves is the first step towards formalizing local search heuristics.
In this work, we formally describe neighbourhood moves and their corresponding move evaluations in FO(.), a rich extension of first-order logic in which a problem’s constraints and objective function can also be formalized. We also extend IDP, an automated reasoning system, with built-in local search heuristics, e.g. first-improvement search, best-improvement search and local search, to obtain a framework that employs the modular neighbourhood formalizations to simulate a local search algorithm.
This framework is demonstrated by reusing the formalizations of three different optimization problems: the Travelling Salesman Problem, the assignment problem and the colouring violations problem. The numerical results show the feasibility of our approach.
- What is the impact of a solution representation on metaheuristic performance?
Pieter Leyman (KU Leuven KULAK) Co-authors: Patrick De Causmaecker Abstract: In metaheuristics, diversification and intensification are typically the driving forces of the search process. Diversification allows for exploration of different regions in the solution space, whereas intensification focuses on promising regions. Metaheuristics, however, often operate on a representation of the solutions, rather than on solutions themselves. Hence, it is worth investigating the impact of the selected representation on algorithm performance.
Friday 10:50 - 12:10 ORBEL Award Room 120 - Chair: Frits Spieksma
Friday 13:00 - 14:00 Production and inventory management Room 138 - Chair: Tony Wauters
- Dynamic programming algorithms for energy constrained lot sizing problems
Ayse Akbalik (LCOMS, Université de Lorraine) Co-authors: Christophe Rapine, Guillaume Goisque Abstract: We study a single-item lot sizing problem under an energy constraint. We have to satisfy a demand known over a finite horizon in a system of M parallel, identical and capacitated machines. In each period, we have to decide how the available amount of energy is shared among the start-up of the machines and the production of units. We show this problem to be NP-hard if some energy parameters are time-dependent, even if almost all the cost parameters are null. We also show that the problem is polynomial if all the energy consumption parameters are stationary.
- Minimizing makespan on a single machine with release date and inventory constraints
Morteza Davari (KU Leuven) Co-authors: Mohammad Ranjbar, Roel Leus, Patrick De Causmaecker Abstract: In this paper, we consider a single machine scheduling problem with release date and inventory constraints. Each job arrives at its release date to a machine, has a processing time and has an impact on the inventory level. We aim to find a sequence of jobs such that the makespan is minimized while all release dates and inventory constraints are met. To solve the problem efficiently, we introduce a block-based branch-and-bound algorithm. Via computational results, we show that our algorithm is more efficient than the alternative approaches in the literature.
- Stocking and Expediting in Two-Echelon Spare Parts Inventory Systems under System Availability Constraints
Melvin Drent (University of Luxembourg) Co-authors: Joachim Arts Abstract: We consider a two-echelon spare parts inventory system consisting of one central warehouse and multiple local warehouses. Each warehouse keeps multiple types of repairable parts to maintain several types of capital goods. The local warehouses face Poisson demand and are replenished by the central warehouse. We assume that unsatisfied demand is backordered at all warehouses. Furthermore, we assume deterministic lead times for the replenishments of the local warehouses. The repair shop at the central warehouse has two repair options for each repairable part: a regular repair option and an expedited repair option. Both repair options have stochastic lead times. Irrespective of the repair option, each repairable part uses a certain resource for its repair. Assuming a dual-index policy at the central warehouse and base stock control at the local warehouses, an exact and efficient evaluation procedure for a given control policy is formulated. To find an optimal control policy, we look at the minimization of total investment costs under constraints on both the aggregate mean number of backorders per capital good type and the aggregate mean fraction of repairs that are expedited per repair resource. For this non-linear non-convex integer programming problem, we develop a greedy heuristic and an algorithm based on decomposition and column generation. Both solution approaches perform very well with average optimality gaps of 1.56 and 0.23 percent, respectively, across a large test bed of industrial size.
Friday 13:00 - 14:00 Logistics 4.0 Room 130 - Chair: Thierry Pironet
- Drone delivery from trucks: Drone scheduling for given truck routes
Dirk Briskorn (University of Wuppertal) Co-authors: Nils Boysen, Stefan Fedtke, Stefan Schwerdfeger Abstract: Last-mile deliveries with unmanned aerial vehicles (also denoted as drones) are seen as one promising idea to reduce the negative impact of excessive freight traffic. To overcome the difficulties caused by the comparatively short operating range of drones, an innovative concept suggests using trucks as mobile landing and take-off platforms. In this context, we consider the problem of scheduling the delivery to customers by drones for given truck routes. Given a fixed sequence of stops constituting a truck route and a set of customers to be supplied, we aim at a drone schedule (i.e., a set of trips, each defining a drone's take-off stop, landing stop and the customer serviced), such that all customers are supplied and the total duration of the delivery tour is minimized. We differentiate whether multiple drones or just a single one are placed on a truck and whether or not take-off and landing stops have to be identical. We provide an analysis of computational complexity for each resulting subproblem, introduce mixed-integer programs, and present a computational study evaluating them.
- Addressing Uncertainty in Meter Reading for Utility Companies using RFID Technology: Simulation Experiments
Christof Defryn (Maastricht University) Co-authors: Debdatta Sinha Roy, Bruce Golden Abstract: Utility companies read the electric, gas, and water meters of their residential and commercial customers on a regular basis. Radio-frequency identification (RFID), a short-distance wireless technology, is used for automatic meter reading (AMR). The main contribution of our research is to address the limitations of RFID technology by generating robust routes for the CEVRP that minimize the number of missed reads, and to demonstrate these ideas through simulation experiments using real-world data from utility companies.
- A UAV Location and Routing Problem with Synchronization Constraints
Ertan Yakici (National Defense University, Turkish Naval Academy) Co-authors: Mumtaz Karatas, Oktay Yilmaz Abstract: In this study, we introduce a problem that jointly optimizes the location and routing of a homogeneous UAV fleet. The problem allocates the available capacity to the potential locations while maintaining feasibility with respect to synchronization constraints, which include time windows at visited points and capacity monitoring at the stations. An MILP formulation of the problem is given and a heuristic method based on an ant colony optimization (ACO) approach is suggested.
Friday 13:00 - 14:00 Data clustering Room 126 - Chair: Yves De Smet
- Extensions of PROMETHEE to multicriteria clustering: recent developments
Jean Rosenfeld (Université libre de Bruxelles) Co-authors: Dimitri Van Assche, Yves De Smet Abstract: PROMETHEE belongs to the family of so-called outranking multicriteria methods, initiated and developed by Prof. Jean-Pierre Brans since the 1980s. For the last thirty years, a number of researchers have contributed to its methodological development and to its application to real problems. In 2010, Behzadian et al. already reported more than 200 applications published in 100 journals, covering finance, health care, environmental management, logistics and transportation, education, sports, etc. The successful application of PROMETHEE is certainly due to its simplicity and to the existence of user-friendly software such as PROMCALC, Decision Lab 2000, Visual PROMETHEE and D-SIGHT. PROMETHEE was initially developed for (partial or complete) ranking problems. Later, additional tools were proposed, for instance GAIA for the descriptive problematic or PROMETHEE V for portfolio selection problems. Following this trend, J. Figueira, Y. De Smet and J.P. Brans proposed, in 2004, an extension of PROMETHEE for sorting and clustering problems. Unfortunately, these first extensions presented some limitations and were therefore never published in a scientific journal (but remain accessible as a technical report of the SMG research unit). Today, this work has been cited 78 times (according to Google Scholar) and has led to the development of different algorithms for sorting and clustering (such as FlowSort or P2CLUST). The aim of this presentation is to report recent advances in multicriteria clustering methods based on the PROMETHEE methodology. After a brief summary of existing procedures, we will focus on related research questions such as the development of partition-quality indicators or the integration of cluster-size constraints.
- More Robust and Scalable Sparse Subspace Clustering Based on Multi-Layered Graphs and Randomized Hierarchical Clustering
Maryam Abdolali (Amirkabir University of Technology (Tehran Polytechnic)) Co-authors: Nicolas Gillis Abstract: Despite the theoretical guarantees and empirical success of sparse subspace clustering (SSC) for partitioning data drawn from a union of subspaces, it is not practical for large-scale datasets. To address this, we propose a multi-layered graph framework that selects multiple small sets of anchor points using a randomized hierarchical clustering technique. This structure searches for a representation that is close to the subspace representation of each graph while keeping shared connectivity among the different layers.
- Design and implementation of a modular distributed and parallel clustering algorithm
Landelin Delcoucq (University of Mons) Abstract: Nowadays, large heterogeneous datasets are collected. Analyzing those data in order to extract relevant information represents a major challenge of the coming years, and data mining methods have to adapt to those changes.
My work proposes an implementation of K-Means using MapReduce.
The distributed and parallel features of my algorithm allow us to use it on data coming from multiple locations, and its modular feature allows us to use it on heterogeneously structured data.
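As background for readers unfamiliar with the combination, one K-Means iteration maps naturally onto the MapReduce model. The sketch below is a plain-Python stand-in for a real MapReduce framework and is not the author's implementation; function names and data are illustrative.

```python
# Sketch of one K-Means iteration expressed as map and reduce steps,
# in plain Python standing in for a real MapReduce framework.
from collections import defaultdict
import math

def kmeans_map(point, centroids):
    """Map: emit (index of nearest centroid, point)."""
    dists = [math.dist(point, c) for c in centroids]
    return dists.index(min(dists)), point

def kmeans_reduce(pairs):
    """Reduce: average the points assigned to each centroid index."""
    groups = defaultdict(list)
    for key, point in pairs:
        groups[key].append(point)
    return {k: tuple(sum(x) / len(pts) for x in zip(*pts))
            for k, pts in groups.items()}

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centroids = [(0.0, 0.0), (10.0, 10.0)]
pairs = [kmeans_map(p, centroids) for p in points]  # map phase
print(kmeans_reduce(pairs))  # {0: (0.0, 0.5), 1: (10.0, 10.5)}
```

In an actual deployment the map calls run in parallel on the machines holding the data, and the framework groups pairs by key before the reduce phase; iterating map and reduce until the centroids stabilize completes the algorithm.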
Friday 13:00 - 14:00 Collective decision making Room 120 - Chair: Bernard De Baets
- On facility location problems and penalty-based aggregation
Raúl Pérez Fernández (Ghent University) Co-authors: Bernard De Baets Abstract: Penalty-based aggregation problems amount to selecting as the aggregate the element that minimizes a penalty for disagreeing with the elements to be aggregated. The computation of the minimizer(s) of the penalty is not always trivial. Exhaustive analyses of the computational complexity of this problem have been carried out for different types of structures. In a more general setting, facility location problems can be considered for identifying the minimizer(s).
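A textbook illustration of the idea (my example, not the authors' construction): when the penalty is the sum of absolute disagreements over real numbers, the minimizer is the median, which a brute-force search over the input elements recovers.

```python
# Illustration: with the penalty "sum of absolute disagreements",
# the penalty-based aggregate coincides with the median.
import statistics

def penalty(candidate, values):
    """Total disagreement of `candidate` with the elements to aggregate."""
    return sum(abs(candidate - v) for v in values)

values = [1, 3, 4, 7, 20]
# Brute-force minimizer, search restricted to the input elements:
best = min(values, key=lambda c: penalty(c, values))
print(best, statistics.median(values))  # 4 4
```

Other penalties give other aggregates (squared disagreement yields the mean); for structured objects such as rankings, computing the minimizer becomes the hard combinatorial problem the abstract refers to.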
- Improving the quality of the assessment of food samples by combining absolute and relative information
Bernard De Baets (Universiteit Gent) Co-authors: Raúl Pérez-Fernández, Marc Sader Abstract: A classical problem in food science concerns the assessment of the quality of food samples. Typically, several panellists are trained extensively on how to assess the quality of a given food sample. Unfortunately, this training is usually expensive, and thus it is common to recruit untrained panellists as an additional source of information. Untrained panellists are not able to assess each food sample individually; therefore, they are usually asked to rank the food samples instead. In this presentation, we will explain how both types of information can be combined in order to improve the quality of the assessment of food samples.
- Coordination and threshold problems in combinatorial auctions
Bart Vangerven (Bergische Universität Wuppertal) Co-authors: Dries R. Goossens, Frits C.R. Spieksma Abstract: Combinatorial auctions are allocation mechanisms that enable selling and buying multiple items simultaneously, and allow bidders to bid on packages of items. Combinatorial auctions offer the possibility for a coalition of bids on small packages to jointly outbid a single bidder's claim on the complete set of items. Such a coalition faces two hurdles before it can become winning: the coordination and the threshold problem. Our contribution is the clarification of those two problems. We also develop a quantitative measure to express the severity of both problems.
Friday 14:10 - 15:10 Sport scheduling Room 138 - Chair: Dries Goossens
- Scheduling time-relaxed double round-robin tournaments with availability constraints
David Van Bulck (Ghent University) Co-authors: Dries Goossens Abstract: Over the last four decades, operations research has successfully been applied to many sports scheduling problems. Most of these problems, however, concern time-constrained round-robin tournaments, i.e., tournaments in which all teams meet all other teams a fixed number of times and each team plays once per round (if the number of teams is even). This contrasts with the limited number of papers that consider time-relaxed schedules, which contain (many) more slots than there are matches per team. Since time-relaxed schedules offer more flexibility, they are widely used in (amateur) competitions where venues are typically shared and players have other activities as well.
In the time-relaxed double round-robin problem with availability constraints (TRDRRA), we are given a set of teams and a set of slots. To cope with venue and player availability, each team can provide a set of dates on which it can host a game, and a set of dates on which it cannot play at all. To avoid injuries, a team should ideally rest for at least R days between two consecutive matches. However, if this is not possible, we penalize the solution with a value of p_r each time a team has only r < R days between two consecutive matches. In addition, a team can only play up to M games within a period of R+1 days. To increase the fairness of the generated schedules, we also explain how various fairness metrics can be integrated. The goal is to construct a schedule with minimum cost, scheduling all matches and respecting the availability constraints.
Since most traditional methods, such as first-break-then-schedule, focus on the alternation of home and away matches in a time-constrained setting, they are not appropriate to solve our problem. Therefore, we propose two new heuristics. The first is a genetic algorithm backed by a local improvement heuristic which is able to both repair and improve schedules, resulting in a memetic algorithm. Basically, the improvement heuristic sequentially solves a transportation problem which schedules (or reschedules) all home games of a team. In this transportation problem, the set of supply nodes consists of all slots on which this team can play a home game, and the set of demand nodes consists of all opponents against whom it can play a home game.
The second heuristic is a fix-and-relax procedure based on the teams as well as on time intervals. This constructive method initially relaxes all variables. Next, it gradually replaces the fractional variables by re-solving the model with the integrality constraints enabled for a small subset of the relaxed variables. After each iteration, it fixes the variable values of the selected group and repeats the procedure until all variables have an integral value. Finally, a simple ruin-and-recreate procedure tries to improve the generated solution. We assess the performance of both algorithms on several instances from the literature. Overall, the quality of the schedules generated by both heuristics is comparable with the optimal solutions obtained with integer programming (Gurobi), but the required computational effort is considerably less.
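The rest-day penalty described above can be scored directly from a team's match dates. The sketch below uses an illustrative penalty table p_r and illustrative dates; it is only the evaluation component, not the TRDRRA heuristics themselves.

```python
# Sketch: score the rest-day penalties of the TRDRRA objective for one
# team. The penalty table p and the match dates are illustrative.

def rest_penalty(match_days, R, p):
    """Sum p[r] over consecutive matches separated by only r < R rest days.
    `p` maps a rest-day count r (0 <= r < R) to a penalty value."""
    total = 0
    days = sorted(match_days)
    for d1, d2 in zip(days, days[1:]):
        r = d2 - d1 - 1            # full rest days between the two matches
        if r < R:
            total += p[r]
    return total

p = {0: 10, 1: 5, 2: 1}            # harsher penalty for shorter rest
print(rest_penalty([1, 3, 4, 9], R=3, p=p))  # r = 1 and r = 0 -> 5 + 10 = 15
```

Both the memetic algorithm and the fix-and-relax heuristic need exactly this kind of cheap evaluation when comparing candidate (re)schedules of a team's home games.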
- Combined proactive and reactive strategies for round robin football scheduling
Xia-jie Yi (Ghent University) Co-authors: Dries Goossens Abstract: This paper focuses on the robustness of football schedules and how to improve it using combined proactive and reactive approaches. Although considerable effort is invested at the beginning of the season to create a high-quality initial football schedule, these schedules are rarely played fully as planned. Games that are postponed or canceled, inducing deviations from the initial schedule, are defined as disruptions. Fixtures as they were effectively played are referred to as the realized schedule. We assume that disruptions can only be rescheduled on a date that constitutes a so-called fictive round, i.e., a round that is not actually present in the initial schedule.
To evaluate the quality of initial and realized schedules, measures based on breaks, balancedness and failures are used. A break corresponds to two consecutive games without an alternating pattern of home-away advantage. Nurmi et al. introduced the k-balancedness of a schedule as the biggest difference, k, between the number of home games and away games after each round, among all teams. Sometimes a disruption cannot be rescheduled because no fictive round is available; we call such an unsuccessfully rescheduled disruption a failure.
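The two schedule-quality measures just defined can be computed directly from each team's home/away sequence. The following sketch uses illustrative patterns; it mirrors the definitions above rather than the authors' code.

```python
# Sketch: breaks and k-balancedness computed from home ('H') / away ('A')
# sequences, one string per team. The example patterns are illustrative.

def count_breaks(pattern):
    """A break = two consecutive games with the same home/away status."""
    return sum(1 for a, b in zip(pattern, pattern[1:]) if a == b)

def k_balancedness(patterns):
    """Largest |#home - #away| over all teams and all round prefixes."""
    k = 0
    for p in patterns:
        diff = 0
        for game in p:
            diff += 1 if game == "H" else -1
            k = max(k, abs(diff))
    return k

patterns = ["HAHA", "HHAA", "AAHH"]
print(count_breaks("HHAA"))      # 2 breaks (the HH pair and the AA pair)
print(k_balancedness(patterns))  # 2
```

These are exactly the quantities the reactive strategies "best available fictive round based on breaks" and "based on k" would re-evaluate for each candidate rescheduling date.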
In order to develop schedules and rescheduling policies that are robust with respect to the aforementioned quality measures, we investigate a number of combined proactive and reactive approaches. The proactive approaches focus on developing an initial schedule that anticipates the realization of some unpredicted events. We consider two strategies in which fictive rounds are spread equally throughout the season and the second half of the season, respectively. The reactive approaches revise and re-optimize the initial schedule when a disruption occurs. Three strategies are developed here: first available fictive round, best available fictive round based on breaks and best available fictive round based on k. Particularly, we discuss these strategies under two assumptions: (i) each disrupted match should be rescheduled right away and permanently, and (ii) disruptions can be rescheduled to a fictive round and this arrangement can be changed later if more information becomes available.
Considering a double round robin (DRR) tournament with twenty teams, a mirrored canonical schedule is used as an initial plan. We randomly generate disruptions to this schedule based on probability distributions resulting from our empirical study. Combining each proactive strategy with each reactive strategy, we compare the results on three performance indexes, i.e., breaks, k, and failures.
- A constructive matheuristic strategy for the Traveling Umpire Problem
Reshma Chirayil Chandrasekharan (KU Leuven) Co-authors: Tulio A. M. Toffolo, Tony Wauters Abstract: The Traveling Umpire Problem (TUP) is a sports scheduling problem concerning the assignment of umpires to the games of a fixed round-robin tournament. Introduced by Michael Trick and Hakan Yildiz in 2007, the problem abstracts the umpire scheduling problem of the American Major League Baseball (MLB). A typical MLB season requires umpires to travel extensively between team venues; therefore, the primary aim of the problem is to assign umpires such that the total travel distance is minimized. Constraints that prevent assigning umpires to consecutive teams or team venues make the problem challenging. Instances comprising 8 to 32 teams have been proposed and, since the problem's introduction, various exact and heuristic techniques have been employed to obtain near-optimal solutions to small and medium-sized instances. However, exact techniques prove inefficient in producing high-quality solutions for the large instances of 26 to 32 teams, which resemble the real-world problem. The present work focuses on improving the solutions for the large instances and presents a decomposition-based method implementing a constructive matheuristic (CMH). The problem is decomposed into subproblems whose IP formulations are solved sequentially to optimality. These optimal solutions of the subproblems are then used to construct a solution for the full problem. Various algorithmic parameters are implemented and extensive experiments are conducted to study their effect on the final solution quality. The algorithm being constructive, parameters have to be tuned such that the solution of the current subproblem does not prevent the feasibility of future subproblems. In addition, design parameters are used to ensure the feasibility of constraints that cannot be evaluated locally within each subproblem.
Parameters such as the size of the subproblems and the amount of overlap between subproblems are tuned such that solution quality is maximized while the runtime stays within the benchmark time limit of 5 hours. Design parameters, such as the objective function and the treatment of future subproblems, ensure that the solution constructed from the optimal solutions of the subproblems remains feasible for the full problem.
The proposed method has been able to improve the current best solutions of all the large instances within the benchmark time limits. In addition, CMH also improves the solutions of two medium-sized instances of 18 teams. Furthermore, CMH generates solutions that are comparable to or better than the solutions generated by other similar heuristics. Experiments conducted so far on the TUP suggest that CMH may be applicable to other similar problems. Apart from innovations in terms of design parameters that may improve the algorithm, CMH has the inherent property of getting faster as IP solvers improve. Insights on the applicability of CMH to similar optimization problems and its expected advantages or shortcomings may also be discussed.
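The constructive-matheuristic control loop described above (solve overlapping subproblems exactly, fix their non-overlapping part, link consecutive windows) can be sketched on a toy problem. Here a tiny assignment problem with a "no equal neighbours" constraint and brute-force enumeration stand in for the TUP subproblems and the IP solver; all names and data are illustrative.

```python
# Sketch of a constructive-matheuristic loop: solve overlapping windows
# to optimality, fix the non-overlapping decisions, repeat. Brute force
# stands in for the IP solver; the toy problem is illustrative.
from itertools import product

def solve_window(costs, prev_val, choices):
    """Exactly solve one subproblem: minimum-cost assignment for `costs`
    whose first decision differs from `prev_val` (link to the fixed part)."""
    best, best_cost = None, float("inf")
    for assign in product(choices, repeat=len(costs)):
        if prev_val is not None and assign[0] == prev_val:
            continue
        if any(a == b for a, b in zip(assign, assign[1:])):
            continue  # local constraint: no equal neighbours
        c = sum(row[v] for row, v in zip(costs, assign))
        if c < best_cost:
            best, best_cost = assign, c
    return best

def constructive_matheuristic(costs, choices, window=4, overlap=2):
    fixed, step, i = [], window - overlap, 0
    while i < len(costs):
        sub = costs[i:i + window]
        sol = solve_window(sub, fixed[-1] if fixed else None, choices)
        if i + window >= len(costs):
            fixed.extend(sol)          # last window: fix everything
            break
        fixed.extend(sol[:step])       # fix only the non-overlapping part
        i += step
    return fixed[:len(costs)]

costs = [[3, 1], [2, 5], [1, 1], [4, 0], [2, 2], [0, 6]]
print(constructive_matheuristic(costs, choices=(0, 1)))  # [1, 0, 1, 0, 1, 0]
```

The overlap lets each window "see" part of the next one, which is the mechanism the abstract's tuning of window size and overlap controls; in the real CMH the brute-force call is an IP solved by a commercial solver.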
Friday 14:10 - 15:10 Discrete choice modeling Room 130 - Chair: Virginie Lurkin
- Maximum likelihood estimation of discrete and continuous parameters: an MILP formulation
Anna Fernández Antolín (TRANSP-OR, EPFL) Co-authors: Virginie Lurkin, Michel Bierlaire Abstract: In discrete choice modeling, we often face unobserved correlations between alternatives, where it is appropriate to use nested logit models. Given a finite set of nesting structures, the traditional approach is to estimate the models corresponding to each of them and select the most appropriate one a posteriori. We propose to integrate the selection of the optimal nesting structure into the maximum likelihood framework of the parameter estimation by linearizing the objective function and converting the problem into an MILP. We call this discrete-continuous maximum likelihood.
- Integrating advanced discrete choice models in mixed integer linear optimization
Meritxell Pacheco Paneque (TRANSP-OR, EPFL) Co-authors: Shadi Sharif Azadeh, Michel Bierlaire, Bernard Gendron Abstract: The integration of choice models in mixed integer linear programming (MILP) is appealing to operators because it provides a better understanding of the preferences of customers while planning their systems. However, the complexity of choice models leads to mathematical formulations that are highly nonlinear and nonconvex in the variables of interest, and therefore difficult to include in MILP. In this research, we present a general framework that overcomes these limitations by relying on simulation to integrate advanced discrete choice models into MILP formulations.
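To illustrate why simulation helps here (a generic illustration, not the authors' formulation): once the random utility terms are drawn in advance, "the customer picks the alternative with the largest simulated utility" becomes a set of linear comparisons, which an MILP can encode. With i.i.d. Gumbel draws, the simulated choice frequencies approach the closed-form logit probabilities.

```python
# Illustration of the simulation idea behind linearizing choice models:
# draw Gumbel errors up front, pick the argmax utility per draw, and the
# choice frequencies approach the logit formula. Utilities are illustrative.
import math, random

random.seed(42)
V = [1.0, 0.5, 0.0]                                  # deterministic utilities
R = 50_000                                           # number of draws
wins = [0] * len(V)
for _ in range(R):
    # standard Gumbel draw via inverse transform: -log(-log(U))
    u = [v - math.log(-math.log(random.random())) for v in V]
    wins[u.index(max(u))] += 1                       # argmax = linear comparisons

simulated = [w / R for w in wins]
logit = [math.exp(v) / sum(math.exp(x) for x in V) for v in V]
print(simulated)
print([round(p, 2) for p in logit])                  # [0.51, 0.31, 0.19]
```

In the MILP, each draw contributes continuous utility variables and binary "chosen" variables tied together by big-M comparison constraints, so the nonlinear choice probability never appears explicitly.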
Friday 14:10 - 15:10 Data classification Room 126 - Chair: Ashwin Ittoo
- A density-based decision tree for one-class classification
Sarah Itani (University of Mons) Co-authors: Fabian Lecron, Philippe Fortemps Abstract: One-Class Classification (OCC) aims at predicting a single class on the sole basis of its training representatives, called target instances. This mode of classification is thus opposed to common prediction problems with two or several classes. As data availability is not necessarily ensured for a good number of medical and industrial applications, OCC presents itself as an interesting alternative methodology. Our proposal addresses OCC through a decision tree whose learning process is driven by density estimation.
- Comparison of active learning classification strategies
Xavier Siebert (Faculté Polytechnique de Mons) Co-authors: Nouara Bellahdid, Moncef Abbas Abstract: Modern technologies constantly produce huge quantities of data. Because these data are often plain and unlabeled, a particular class of machine learning algorithms is devoted to helping in the data annotation process. In the setting considered in this paper, the algorithm interacts with an oracle (e.g., a domain expert) to label instances from an unlabeled data set. The goal of active learning is to reduce the labeling effort demanded of the oracle while achieving a good classification. One way to achieve this is to carefully choose which unlabeled instance to present to the oracle so that it most improves the classifier's performance. Active learning therefore consists in finding the most informative and representative samples. Informativeness measures the impact of a sample in reducing the generalization error of the model, while representativeness considers how well the sample represents the underlying distribution. Early active learning approaches were based on informativeness, with methods such as uncertainty sampling or query by committee; these approaches ignore the distribution of the data. To overcome this issue, active learning algorithms that exploit the structure of the data have been proposed. Among them, approaches based on the representativeness criterion, such as clustering methods and optimal experiment design, have proved quite successful. Various approaches combining the two criteria have also been studied: some combine the informativeness of uncertainty sampling or query by committee with a density measure capturing representativeness; others combine informativeness with semi-supervised algorithms that provide representativeness. In this work, we review several active learning classification strategies and illustrate them with simulations to provide a comparative study.
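Uncertainty sampling, the informativeness-only baseline mentioned above, can be sketched in a few lines: query the unlabeled point whose predicted class probability is closest to 0.5. The one-dimensional "classifier" below is an illustrative stand-in, not one of the paper's models.

```python
# Sketch of uncertainty sampling: query the unlabeled instance whose
# predicted probability is nearest 0.5. The toy 1-D classifier is
# illustrative (logistic in the difference of distances to two centers).
import math

def predict_proba(x, pos_center=4.0, neg_center=0.0):
    """Toy probabilistic classifier: P(class = positive) for scalar x."""
    score = abs(x - neg_center) - abs(x - pos_center)
    return 1 / (1 + math.exp(-score))

def most_uncertain(unlabeled):
    """Uncertainty sampling: pick the point closest to the 0.5 boundary."""
    return min(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))

pool = [0.1, 1.9, 2.2, 3.8]
print(most_uncertain(pool))  # 1.9, nearest the decision boundary at 2.0
```

Density-weighted variants multiply this uncertainty term by an estimate of how densely populated the point's neighbourhood is, which is the combination of the two criteria discussed in the abstract.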
- Risk bounds on statistical learning
Boris Ndjia Njike (Université de Mons) Co-authors: Xavier Siebert Abstract: The aim of this paper is to study theoretical risk bounds obtained when using the Empirical Risk Minimization principle for pattern classification problems. We review some recent developments in statistical learning theory, in particular those involving minimal loss strategies.
We conclude with a discussion of the practical implications of these results.
ORBEL - Conference chair: Prof. A. Arda -
Platform: Prof. M. Schyns.