Meet the Jury seminar

Posted by: Patrick DE CAUSMAECKER
Contact: [email protected]

Meet the Jury: PhD defense of Nguyen Dang

KU Leuven, KULAK, Kortrijk, Wednesday 16 May


1:30 pm, C611, Ilker Birbil, Erasmus University Rotterdam:
A Framework for Parallel Second Order Incremental Optimization Algorithms for Solving Partially Separable Problems

2:30 pm, C611, Manuel López-Ibáñez, University of Manchester:
Why Automatic Algorithm Design is Inevitable

3:30 pm, C611, Jin-Kao Hao, University of Angers:
Learning and data mining driven optimization for combinatorial search problems: case studies

5:00 pm, B422, public PhD defense of Nguyen Dang:
Data analytics for algorithm design

6:30 pm: Reception, hall Rectoraat (A380)


J. Berlamont, chairman
P. De Causmaecker, promotor
T. Stuetzle, co-promotor (ULB, Brussels)
G. Vanden Berghe (FIIW)
M. Denecker, secretary
Jin-Kao Hao (University of Angers, France)
M. López-Ibáñez (Manchester Business School, UK)


Ilker Birbil, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands

A Framework for Parallel Second Order Incremental Optimization Algorithms for Solving Partially Separable Problems

Consider a recommendation problem where multiple firms are willing to cooperate to improve their rating predictions. The firms insist, however, on a machine learning approach that guarantees their data remain on their own servers. To solve this problem, I will introduce our recently proposed approach HAMSI (Hessian Approximated Multiple Subsets Iteration). HAMSI is a provably convergent, second order incremental algorithm for solving large-scale partially separable optimization problems. The algorithm is based on a local quadratic approximation and hence allows incorporating curvature information to speed up convergence. HAMSI is inherently parallel and scales nicely with the number of processors. I will conclude my talk with several implementation details and our numerical results on a set of matrix factorization problems.
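The core idea in the abstract can be illustrated with a minimal sketch. This is not the HAMSI algorithm itself (which involves Hessian approximation and parallel subset scheduling as described in the talk); it is only a toy incremental second-order scheme, where each term of a partially separable objective is visited in turn and a damped Newton step is taken on the few variables that term touches. The problem data, damping constant, and function names below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of an incremental second-order update on a partially
# separable objective f(x) = sum_i f_i(x[S_i]), where each term f_i
# depends only on a small subset S_i of the variables. At each step
# we pick one term cyclically, build a local quadratic model from its
# gradient and curvature, and update only the variables in S_i.

def incremental_second_order(terms, x0, steps=200, damping=1e-3):
    """terms: list of (subset_indices, grad_fn, hess_fn) triples."""
    x = x0.astype(float).copy()
    for k in range(steps):
        idx, grad_fn, hess_fn = terms[k % len(terms)]
        g = grad_fn(x[idx])   # gradient of this term w.r.t. its subset
        H = hess_fn(x[idx])   # curvature of this term on its subset
        # Damped Newton step restricted to the subset; the damping keeps
        # the local quadratic model solvable even when H is singular.
        x[idx] -= np.linalg.solve(H + damping * np.eye(len(idx)), g)
    return x

# Illustrative partially separable problem:
#   f(x) = (x0 - 1)^2 + (x1 - 2)^2 + (x1 - x2)^2, minimized at (1, 2, 2).
terms = [
    (np.array([0]), lambda v: 2 * (v - 1.0), lambda v: np.array([[2.0]])),
    (np.array([1]), lambda v: 2 * (v - 2.0), lambda v: np.array([[2.0]])),
    (np.array([1, 2]),
     lambda v: 2 * np.array([v[0] - v[1], v[1] - v[0]]),
     lambda v: np.array([[2.0, -2.0], [-2.0, 2.0]])),
]

x = incremental_second_order(terms, np.zeros(3))
print(np.round(x, 3))  # close to [1. 2. 2.]
```

Because each term touches only its own subset of variables, disjoint subsets could in principle be processed concurrently, which is the intuition behind the scalability claim in the abstract.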


Prof. Ilker Birbil received his PhD degree from North Carolina State University, Raleigh, USA. He worked for two years as a postdoctoral research fellow in The Netherlands and then as a faculty member for 13 years in Turkey. His recent research interests center on developing optimization algorithms for handling prediction and decision problems that play a key role in data science. Currently, he is a faculty member at the Econometric Institute of Erasmus University Rotterdam, where he teaches various courses on machine learning and optimization.


Manuel López-Ibáñez, Alliance Manchester Business School, University of Manchester

Why Automatic Algorithm Design is Inevitable

Improvements in optimization, machine learning and, especially, computing power have reached a point at which automatic methods are increasingly being used to assist human experts in designing and fine-tuning the algorithms used for optimization, machine learning, and other tasks. In this talk, I will go through some of the recent successes in this field, discuss what we can expect in the future, and answer a few of the critical questions aimed at automatic algorithm design.


Since October 2015, Dr. Manuel López-Ibáñez has been a Lecturer (Assistant Professor) in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. Between October 2011 and September 2015, he was a Chargé de recherches (postdoctoral researcher) of the Belgian F.R.S.-FNRS at the IRIDIA laboratory of the Université Libre de Bruxelles (ULB), Brussels, Belgium. He received an M.S. degree in Computer Science from the University of Granada (Spain) in 2004, and a Ph.D. from Edinburgh Napier University, UK, in 2009. His main expertise is in the application of metaheuristics, including local search, evolutionary algorithms, and ant colony optimization, to continuous, combinatorial, and multi-objective optimization problems. His current research is on the experimental analysis and the automatic configuration and tuning of stochastic optimization algorithms, in particular when applied to multi-objective optimization problems.


Jin-Kao Hao, Computer Science Department of the University of Angers (France)

Learning and data mining driven optimization for combinatorial search problems: case studies

We present two case studies of using learning and data mining techniques for solving combinatorial optimization problems: reinforcement learning for graph coloring, and frequent patterns for quadratic assignment. We show how learning and data mining techniques can be advantageously combined with an optimization method to obtain high-quality solutions for difficult combinatorial optimization problems.


Dr. Jin-Kao Hao holds the title of Distinguished Professor at the Computer Science Department of the University of Angers (France) and is a Senior Fellow of the Institut Universitaire de France. He headed the LERIA laboratory from 2003 until 2015. His research lies in the design of effective algorithms and intelligent computational methods for solving large-scale combinatorial search problems. He is interested in various application areas including bioinformatics, data science, telecommunication, complex networks, and transportation. He has published some 220 papers, including 110 SCI journal papers, and has co-edited 9 books in the Springer LNCS series. He has served on some 200 Program Committees of international conferences and is on the Editorial Board of 7 international journals.


Nguyen Thi Thanh DANG, Department of Computer Science, KU Leuven

In the era of increased computational power, collecting large amounts of data on the performance of optimization algorithms has become easy. This naturally leads to the rise of applications of various techniques from data science to assist the algorithm development process. This dissertation follows the same line of research, where data analytics are applied to support two key aspects during the algorithm design process: automatic algorithm configuration and the analysis of algorithm components and parameters’ influence on algorithm performance.

With the support of the Arenberg Doctoral School