New York University
Zhong-Ping Jiang received his PhD degree in automatic control and applied mathematics from the Ecole des Mines de Paris in 1989. Currently, he is a Professor of Electrical and Computer Engineering at the Tandon School of Engineering, New York University. His main research interests include stability theory, robust/adaptive/distributed nonlinear control, adaptive dynamic programming and their applications to information, mechanical and biological systems. In these fields, he has authored over 180 journal papers and numerous conference papers, with a Google Scholar h-index of 61. He is coauthor of the books Stability and Stabilization of Nonlinear Systems (with Dr. I. Karafyllis, Springer, 2011) and Nonlinear Control of Dynamic Networks (with Drs. T. Liu and D.J. Hill, Taylor & Francis, 2014). Professor Jiang is an IEEE Fellow and an IFAC Fellow.
Robust Adaptive Dynamic Programming with Applications to Power Systems and Neuroscience
Abstract: Bellman's Dynamic Programming is a powerful theory for addressing multi-stage decision-making problems, and has been used to solve the optimal control problem. However, it suffers from the ‘curse of dimensionality’ and the ‘curse of modeling’. In this talk, a new framework of robust adaptive dynamic programming (RADP) is proposed to relax these two restrictions; as opposed to the past literature, the talk will focus exclusively on continuous-time dynamic systems. By means of reinforcement learning and nonlinear control techniques, tools for the design of adaptive optimal nonlinear controllers will be developed. We will show that RADP is also a significant extension of the existing work in approximate/adaptive dynamic programming (ADP) in that the order of the dynamic processes in question need not be known. The mismatch between the real plant and the simplified model is called dynamic uncertainty.
Applications to power systems and biological motor control are presented to illustrate the effectiveness of RADP.
Polytechnic Institute of Porto, Portugal
Professor Zita Vale is the director of the Knowledge Engineering and Decision Support Research Group (GECAD) and a professor at the Polytechnic Institute of Porto. She received her diploma in Electrical Engineering in 1986 and her PhD in 1993, both from the University of Porto. She works in the area of Power Systems, with special interest in the application of Artificial Intelligence techniques to Power Systems. She has published over 500 works, including about 60 papers in international scientific journals, 38 book chapters, and more than 400 papers in international scientific conferences.
A multi-agent based platform for effective management and real-time simulation of smart grids
Abstract: Smart grid concepts are rapidly being transferred to the market and huge investments have been made in renewable-based electricity generation and in smart metering. Increasing power systems efficiency requires the strategic use of the available resources.
Enabling the intensive use of solar and wind based electrical energy production, the widespread use of demand response, and the integration of a significant share of electric and hybrid vehicles requires new management approaches and the practical implementation of adequate business models able to ensure the profitable participation of traditional and new players.
The keynote will present a multi-agent based real-time management and simulation platform that allows the real use and realistic testing of diverse alternative business models and technological approaches. The platform is able to make use of a wide diversity of resource scheduling and management methods; the keynote will particularly focus on the intelligent energy resource scheduling approaches being used. A realistic case study making use of real-time monitoring data from actual buildings, and laboratory equipment for simulating additional loads, will be presented.
Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany
Andreas studied computer science and economics at the Technical University of Braunschweig, Germany, received his Ph.D. in computer science from the Otto von Guericke University of Magdeburg, Germany in 2001, and then moved to UC Berkeley, CA, where he worked for two years as a postdoctoral researcher on adaptive soft computing and visualisation techniques for information retrieval systems. In 2003 he joined the University of Magdeburg as assistant professor for information retrieval, and in 2007 he was appointed to a tenured professorship in ‘Data and Knowledge Engineering’ at the same university. Andreas was a visiting researcher at the University of Melbourne, Australia, and a visiting professor at Université Pierre et Marie Curie, Paris, and he is an Emmy Noether Fellow of the German Science Foundation (DFG). He has received several teaching and best paper awards. In his research, he currently focuses on the development of interactive systems for information retrieval and organization, exploiting the most recent technologies from information retrieval, machine learning, human-computer interaction and companion technologies.
Adaptive Exploration of Information Spaces: Supporting Searching, Learning and Sensemaking
Abstract: Analyzing and exploring huge object collections, or retrieving specific information from them, are tasks we frequently have to perform. While searching for specific bits of information in huge collections like the Web is in many cases already well supported by existing search systems, more challenging explorative (re)search tasks that require combining, linking, structuring and analyzing (sub)sets of data are not yet appropriately supported by existing technologies. Typical examples are law and patent search and investigative journalism, but research in the digital humanities would also strongly benefit from tools that support interactive exploration of complex and feature-rich collections like historical archives or social networks and media. This talk will point out the underlying issues and motivate how exploratory (re)search processes can be supported by user-adaptive technologies. The talk will especially focus on methods that are able to use rich metadata and contextual information of huge data collections – e.g. extracted from user interaction and ontologies – as bias or constraints for interactive searching, learning and sensemaking.
Intel Corporation, USA
Dr. Catherine Huang is a Senior Research Scientist at Intel Labs. Her research interests are scalable machine learning, artificial intelligence and statistical signal processing. Catherine’s primary research focus is security intelligence. She was a Researcher-in-Residence at the Intel Science and Technology Center at the University of California, Berkeley. Catherine is co-chair and co-founder of the Intel Data Science Center of Excellence. Prior to joining Intel, Catherine was a postdoctoral researcher at the Neurotechnology Lab at Oregon Health & Science University (OHSU). She had several years of experience at China Construction Bank before her graduate study. She received her Ph.D. in Brain Computer Interfaces from OHSU in the USA in 2010, her M.Sc. in Electrical Engineering from the University of New Brunswick in Canada in 2005, and her B.Sc. in Electrical Engineering from South China University of Technology. Catherine holds two US patents and has over 30 papers and book chapters. She is a technical committee (TC) member of the IEEE MLSP TC and IEEE ISA TC. She has served as Special Session Chair for IEEE IJCNN 2017 and IEEE MLSP 2015, Industrial Liaison Chair for IEEE SSCI 2016 and IEEE WCCI 2014, and Data Competition Chair for IEEE MLSP 2016 and MLSP 2013.
Challenges and Opportunities in Cybersecurity Intelligence
Abstract: Cybersecurity is among the most serious economic and national security challenges we face in the 21st century. Internet growth massively increases the number of potential targets for cyberattacks, which could have disastrous consequences for individuals and for society. In fact, serious breaches of cybersecurity have already occurred. Yet research and development for securing cyberspace has not progressed far enough to win this hidden battle. With vast amounts of data of many types, at multiple scales in time and space, there is an essential need for artificial intelligence approaches to accelerate progress. In this talk, some work on scalable cybersecurity solutions and adversarially resilient machine learning at Intel will be presented. Real-world applications such as web security and autonomous driving will be discussed. These applications illustrate the opportunities and challenges in applying artificial intelligence to securing cyberspace if cybersecurity intelligence is to fulfill its potential to advance the protection of cyberspace from attack.
University of Essex, UK
Edward Tsang is the Director of the Centre for Computational Finance and Economic Agents (CCFEA), University of Essex. CCFEA is an interdisciplinary research centre which applies artificial intelligence methods to problems in finance and economics. He has a first degree in Finance and a PhD in artificial intelligence. He is best known for his work in constraint satisfaction and computational finance. His book on constraint satisfaction is the most cited work in the field. He founded the Computational Finance and Economics Technical Committee in the IEEE Computational Intelligence Society in 2004.
Directional Changes: a new way to look at market dynamics
Abstract: This talk explains a new concept called Directional Changes and how it sheds new light on the study of financial markets.
When history is recorded, one does not report the situation at the end of each day, each month or each year; one records significant events. Yet when one looks at stock prices, one often uses time series such as daily closing prices, that is, snapshots at the end of each day. Richard Olsen proposed an event-based approach to summarizing price movements, based on a concept called Directional Changes. A directional change is defined by a threshold that the observer cares about. For example, one investor might find 5% significant, while another might find 1% significant. An r% directional change is basically a price change of r% from the last peak or bottom price. This new concept provides researchers with new perspectives on the market. It enables one to see things that cannot be seen in time series. It also enables researchers to discover striking regularities in markets, which one could benefit from. The concept of Directional Change and its potential in algorithmic trading will be explained in the talk.
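The r%-threshold rule described above can be read directly as a small algorithm. The sketch below is one illustrative Python reading of the definition (the function name and list-based interface are assumptions, not Olsen's original formulation):

```python
def directional_changes(prices, threshold):
    """Return the indices at which a directional change is confirmed,
    i.e. the price has moved by `threshold` (e.g. 0.05 for 5%) away
    from the last peak or bottom price."""
    events = []           # indices of confirmed directional changes
    extreme = prices[0]   # running peak (uptrend) or bottom (downtrend)
    uptrend = True        # assumed initial direction
    for i, p in enumerate(prices[1:], start=1):
        if uptrend:
            if p > extreme:
                extreme = p                       # new peak: uptrend continues
            elif p <= extreme * (1 - threshold):  # fell r% from the peak
                events.append(i)
                uptrend, extreme = False, p
        else:
            if p < extreme:
                extreme = p                       # new bottom: downtrend continues
            elif p >= extreme * (1 + threshold):  # rose r% from the bottom
                events.append(i)
                uptrend, extreme = True, p
    return events
```

With a 5% threshold, a series such as [100, 110, 104, 99, 106, 112] confirms a change after the 110 peak and another after the 99 bottom; a 1% observer would record more events from the same series, which is exactly the observer-dependence described above.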
Nanyang Technological University, Singapore
Dr. Lipo Wang received his Bachelor's degree from the National University of Defense Technology (China) and his PhD from Louisiana State University (USA). His research interest is intelligent techniques with applications to communications, image/video processing, biomedical engineering, and data mining. He is (co-)author of over 270 papers, of which more than 90 are in journals. He holds a U.S. patent in neural networks and a Chinese patent in VLSI. He has co-authored 2 monographs and (co-)edited 15 books. He has been or will be a keynote/panel speaker at 25 international conferences. He is/was Associate Editor/Editorial Board Member of 30 international journals, including 3 IEEE Transactions, and guest editor for 10 journal special issues. He has served two terms as an AdCom member of the IEEE Computational Intelligence Society (CIS) and served as CIS Vice President for Technical Activities and Chair of the Emergent Technologies Technical Committee. He was a member of the Board of Governors of the International Neural Network Society (2011-2016) and an AdCom member of the IEEE Biometrics Council. He served as Chair of the Education Committee of the IEEE Engineering in Medicine and Biology Society (EMBS). He was President of the Asia-Pacific Neural Network Assembly (APNNA) and received the APNNA Excellent Service Award. He was founding Chair of both the EMBS Singapore Chapter and the CIS Singapore Chapter. He serves/served as chair or committee member of over 200 international conferences.
Towards Human-Level Intelligence in Image Classification
Abstract: This talk highlights some of our recent research results in image classification using computational intelligence. Our techniques include class-dependent feature selection, compact radial-basis-function (RBF) neural networks, granular support vector machines, and semi-exhaustive search feature selection. We demonstrate our algorithms on various challenging problems, such as semiconductor chip fault detection, glaucoma screening, microarray cancer diagnosis, content-based image retrieval, face recognition, and video action recognition.
University of Michigan-Dearborn, USA
Dr. Yi Lu Murphey received an M.S. degree in computer science from Wayne State University, Detroit, Michigan, in 1983, and a Ph.D. degree with a major in Computer Engineering and a minor in Control Engineering from the University of Michigan, Ann Arbor, Michigan, in 1989. She is a professor of electrical and computer engineering and the Associate Dean for Graduate Education and Research in the College of Engineering and Computer Science at the University of Michigan-Dearborn. Prior to her current position, she served as chair of the Electrical and Computer Engineering Department for seven years. She has authored over 140 publications in refereed journals and conference proceedings in the areas of machine learning, pattern recognition, and computer vision, with applications to intelligent vehicle systems, optimal vehicle power management, data analytics, and automated and connected vehicles. She has received significant research funding over the last twenty years from the National Science Foundation, the US Department of Defense, and many industrial companies. Dr. Murphey is a Distinguished Lecturer for the IEEE Vehicular Technology Society and a Fellow of the IEEE.
Computational Intelligence in Vehicles and Transportation Systems
Abstract: Nearly every facet of our society is undergoing a shift toward connecting individuals to their community. The “Internet of Things” movement is giving great power to the individual by personalizing information that is time- and location-aware. In the broader transportation community, building on the momentum and success of prior and current research, Connected and Automated Vehicles (CAV) have been identified as the forefront of ITS (Intelligent Transportation Systems) research. Transforming individual vehicles into an integrated cyberphysical system through connectivity and automation can improve vehicle efficiency, driver convenience and safety, and reduce greenhouse-gas emissions by an order of magnitude. In this talk, I will present three research projects in the area of CAV using computational intelligence: accurate prediction of traffic flow, individual driving speed profiles, and personalized driving routes.
Nanyang Technological University (NTU), Singapore
Yew-Soon Ong is Chair and Professor of the School of Computer Science and Engineering at Nanyang Technological University (NTU), Singapore. He served as Director of the Computational Intelligence Research Centre from 2008 to 2015 and Director of the A*Star SIMTECH-NTU Joint Lab on Complex Systems. He is also a Principal Investigator of the Data Analytics & Complex Systems Programme in the NTU-Rolls Royce Corporate Laboratory. He received his Bachelor's and Master's degrees from NTU and obtained his PhD degree on Artificial Intelligence in complex design from the Computational Engineering and Design Centre at the University of Southampton, United Kingdom. His research focus is in computational intelligence (CI), particularly evolutionary and memetic computation and machine learning. He founded the Task Force on Memetic Computing under the IEEE Computational Intelligence Society Emergent Technology Technical Committee and served as its Chair from 2007 to 2010. He has given several talks as keynote, plenary or invited speaker at international conferences, workshops and research institutions worldwide. He was named a Thomson Reuters Highly Cited Researcher and one of the World's Most Influential Scientific Minds in 2016. He received the 2015 IEEE Computational Intelligence Magazine Outstanding Paper Award and the 2012 IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work pertaining to Memetic Computing. He is founding Editor-In-Chief of the IEEE Transactions on Emerging Topics in Computational Intelligence, founding Technical Editor-In-Chief of Memetic Computing Journal (Springer), and Associate Editor of many journals including the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Neural Networks & Learning Systems, IEEE Transactions on Cybernetics, IEEE Transactions on Big Data, and others. He has filed several patents and innovative achievements in the area of computational intelligence.
His research results have generated considerable commercialisation impacts and led to new start-ups. He was Conference Chair of the 2016 IEEE Congress on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Vancouver, Canada, and serves as secretary of the IEEE Transactions on Computational Intelligence and AI in Games steering committee.
Feature Grouping in Big Dimensionality
Abstract: The world continues to generate quintillions of bytes of data daily, leading to the pressing need for new efforts in dealing with the grand challenges brought by Big Data. Today, there is a growing consensus among the computational intelligence communities that data volume presents an immediate challenge pertaining to the scalability issue. However, when addressing volume in Big Data analytics, researchers in the data analytics community have largely taken a one-sided view of volume, namely the "Big Instance Size" factor of the data. The flip side of volume, the dimensionality factor of Big Data, has received much less attention.
In this talk, focus is placed on the relatively under-explored topic of "Big Dimensionality", wherein the explosion of features (variables) brings about new challenges to computational intelligence. We begin with an analysis of the origins of Big Dimensionality. The evolution of feature dimensionality over the last two decades is then discussed using popular data repositories considered in the data analytics and computational intelligence research communities. Subsequently, some of the state-of-the-art feature selection schemes reported in the field of computational intelligence are reviewed to reveal the inadequacies of existing approaches in keeping pace with the emerging phenomenon of Big Dimensionality. Our findings on several established databases with big dimensionality across a wide spectrum of domains indicate that an extremely small portion of the feature pairs contributes significantly to the underlying interactions, and that there exist feature groups that are highly correlated. Inspired by these intriguing observations, a novel learning approach that exploits the presence of sparse correlations for the efficient identification of informative and correlated feature groups from big-dimensional data, translating to a reduction in complexity, is then presented.
Ecole Polytechnique, France
Benjamin Doerr is a full professor at the French Ecole Polytechnique and an adjunct professor at Saarland University. He received his diploma (1998), PhD (2000) and habilitation (2005) from Kiel University. His research area is the theory both of problem-specific algorithms and of randomized search heuristics like evolutionary algorithms. He is a co-founder of the theory track at GECCO and served as its co-chair in 2007-2009 and 2014. He regularly gives theory tutorials at GECCO, CEC, and PPSN. He is a member of the editorial boards of several journals, including "Evolutionary Computation", "Natural Computing", and "Theoretical Computer Science".
From complexity theory to better algorithms
Abstract: Black-box complexity is a purely theoretical notion trying to capture how difficult it is to solve a problem via problem-unspecific algorithms (black-box algorithms). In simple words, the black-box complexity of an optimization problem is the minimum (expected) number of fitness evaluations a (randomized) algorithm needs to perform to find the optimum. In this talk, I will show how this abstract notion has helped us to develop novel and superior evolutionary algorithms. The talk requires no previous expertise in the theory of algorithms.
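For readers unfamiliar with the black-box setting, the (1+1) evolutionary algorithm is the textbook example of a black-box algorithm: it interacts with the problem only through fitness queries, which is exactly the access model black-box complexity counts. A minimal sketch (the evaluation budget and the stopping rule, which assumes the optimum value is n, are illustrative choices):

```python
import random

def one_plus_one_ea(fitness, n, max_evals=100_000, seed=0):
    """(1+1) EA on bit strings of length n, seen purely as a
    black-box algorithm: only fitness values are observed."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    evals = 1
    while fx < n and evals < max_evals:   # assumes the optimum value is n
        # standard bit mutation: flip each bit independently with prob. 1/n
        y = [b ^ (rng.random() < 1 / n) for b in x]
        fy = fitness(y)
        evals += 1
        if fy >= fx:                      # keep the offspring if not worse
            x, fx = y, fy
    return x, evals
```

On OneMax (fitness = number of ones) this algorithm needs Θ(n log n) evaluations in expectation, while the black-box complexity of OneMax is only Θ(n / log n); gaps of this kind are what motivates using the complexity notion to hunt for better algorithms.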
Intelligent Systems Research Centre (ISRC), Ulster University, N. Ireland
Prof. McDaid graduated from the University of Liverpool, UK, with a BEng (Hons) in Electrical and Electronics Engineering in 1985 and subsequently completed his PhD in Solid State Devices at the same institution. He is currently Professor of Computational Neuroscience at Ulster University and head of the Computational Neuroscience & Neural Engineering (CNET) research group. Prof. McDaid's principal research interest is software/hardware implementations of neural-based computational systems, and he has several research grants in this domain. His ultimate vision is to understand and model the mechanisms that underpin self-repair in the human brain, thus providing the blueprint for advanced architectures that exhibit a fault-tolerant capability well beyond existing computational systems. He has been an investigator on a wide range of funded projects focusing on modelling inter-neuron communications, spiking neuron cells, the role of endocannabinoids in self-repairing astrocyte-neuron networks, and G-Protein Coupled Receptor signaling in astrocytes. His most recent grant focuses on the development of an astro-centric brain model in hardware that harnesses this self-repairing capability for robotic applications. Prof. McDaid was guest editor for a special issue of the International Journal of Neural Systems and principal editor for a Special Topic in Frontiers in Neuroscience entitled “Biophysically based Computational Models of Astrocyte-Neuron Coupling and their Functional Significance”. He has co-authored over 120 publications.
From Biophysical Models of Brain Repair to Highly Adaptive Hardware
Abstract: It is widely accepted that the brain’s computational ability is distributed across a connectionist system of neurons. However, there are many brain functions which are difficult to explain through neural communication alone, and we look to other cells and their functional significance to unravel the complex biophysical processes occurring within the brain. Recent research has highlighted that astrocytes (a sub-type of glial cell in the central nervous system) continually exchange information with multiple synapses and consequently act as regulators of neural circuitry through coordination of transmission at remote synaptic junctions. The regulatory capability of these cells is now known to underpin many high-level processes in the functional and dysfunctional brain. This talk will present recent research at the ISRC that explores how bi-directional coupling between astrocytes and spiking neurons can provide a distributed, cellular-level repair capability, where faults in the input neural circuitry that dampen or even silence neuronal activity can be circumvented by re-modeling the weights. The talk will highlight that a new generation of highly novel, self-repairing hardware-based algorithms is now possible by embracing this “astro-centric” paradigm. Progress in the mapping of this paradigm to hardware, in particular FPGA implementations, will be presented. Strategies for addressing some of the hardware challenges, including the interconnection of astrocytes and spiking neurons for large-scale networks, will also be outlined.
Laboratory of Thermal Turbomachines, Parallel CFD & Optimization Unit, National Technical University of Athens, Greece
Prof. Kyriakos C. Giannakoglou received his B.S. degree in Mechanical Engineering in 1982 and his Ph.D. degree in Computational Fluid Dynamics in 1987, both from the National Technical University of Athens (NTUA), Greece. He is Professor with the Lab. of Thermal Turbomachines and head of the Parallel CFD & Optimization Unit of the School of Mechanical Engineering of NTUA. His research interests include the development of CFD methods for internal (including turbomachines) and external aerodynamics (cars, aircraft); the development of inverse design and (multi-objective, multi-disciplinary) optimization algorithms based on evolutionary algorithms, deterministic (adjoint) methods and/or neural networks; and the parallelization of the corresponding software (including Cluster & Grid Computing). Regarding bio-inspired stochastic optimization methods, he has developed a very efficient metamodel-assisted evolutionary algorithm, enhanced by distributed and/or hierarchical (multilevel) search or a combination of them, leading to a reduction in optimization turnaround time by an order of magnitude with respect to conventional EAs. His research group has developed and brought to market the generic optimization software EASY (Evolutionary Algorithm System, http://184.108.40.206/research/easy.html), including all these features. Regarding adjoint-based optimization, he is intensively developing new discrete or (mostly) continuous variants for the computation of first- and higher-order derivatives, in various projects funded by the EU, three major European car industries, etc. He has authored 60+ journal papers and more than 100 conference papers and book chapters. He has supervised 15 concluded PhD theses (URL: http://220.127.116.11/research/). Since May 2007, he has been the Chairman of the ERCOFTAC (European Research Community on Flow, Turbulence and Combustion) Special Interest Group SIG34 on Design Optimization.
Multi-objective Evolutionary Algorithms assisted by Artificial Neural Networks and Dimensionality Reduction Techniques – Industrial Applications
Abstract: During the last two decades, Evolutionary Algorithms (EAs) have been widely used to solve optimization problems, including multi-objective and multi-disciplinary ones, in various scientific areas. It is known that an "unpleasant" feature of EA-based optimization is that the required number of evaluations increases considerably with the problem size (curse of dimensionality). This is quite important in engineering/industrial applications, such as the ones the author's group is dealing with (for the aerospace, turbomachinery or automotive industry, for example). Quite often, for these reasons, engineers solving industrial optimization problems with costly evaluation models and/or a great number of design variables become reluctant to use EAs, at least in their standard form. To alleviate these problems, surrogate evaluation models can be implemented in the framework of multi-objective EAs, giving rise to the so-called Metamodel-Assisted EAs (MAEAs). A class of efficient and effective metamodels relies upon computational intelligence techniques; these will be presented in the first part of this lecture. On the other hand, in engineering optimization problems, the number of design variables can be quite high, which, apart from requiring many generations/evaluations for the EA to converge, also increases the cost of training the metamodel(s) and, despite that cost, leaves them with low prediction accuracy. This problem can be overcome by applying dimensionality reduction techniques, driven by the principal component analysis (PCA) of representative, dynamically updated individual sets. In the second part of this lecture, PCA is used not only to reduce the dimension of the variable vector when the metamodels are trained but also to better guide the application of the evolution operators. Multi-objective industrial applications will be showcased in both parts of the lecture.
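The PCA step described above can be illustrated in a few lines: project the current population of design vectors onto its leading principal directions before training a metamodel, and map reduced coordinates back to the full design space when needed. A sketch using NumPy (the function names are illustrative, not taken from EASY):

```python
import numpy as np

def pca_reduce(population, k):
    """Project a set of design vectors onto their top-k principal
    components, as a pre-processing step before metamodel training."""
    X = np.asarray(population, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred population yields the principal directions
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:k]                      # top-k principal directions
    reduced = (X - mean) @ basis.T      # low-dimensional coordinates
    return reduced, basis, mean

def pca_expand(reduced, basis, mean):
    """Map reduced coordinates back to the full design space."""
    return reduced @ basis + mean
```

Training the surrogate on `reduced` rather than the full design vectors is what cuts the training cost and improves prediction accuracy when the population effectively lives in a low-dimensional subspace.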
France Telecom R&D
Maurice Clerc worked at France Telecom R&D as a Research Engineer (optimisation of telecommunications networks).
In 2005, he and James Kennedy received an award from IEEE Transactions on Evolutionary Computation for their 2002 paper on particle swarm optimisation (PSO). He is currently retired but still active in this field: a book about PSO in 2005 (translated into English in 2006), a book in 2015 about guided randomness in optimisation (translated into English), several papers in international journals and conference proceedings, external examiner for PhD theses, reviewer and member of editorial boards and program committees for conferences and journals (IEEE TEC best reviewer award 2007), and co-webmaster of the Particle Swarm Central and of the Adaptive Population-based Simplex optimiser site.
Total Memory Optimiser: Concept and Compromises
Abstract: For most usual optimisation problems, the Nearer is Better assumption is true (in probability). Classical iterative algorithms take this property into account, either explicitly or implicitly, by forgetting some of the information collected during the process, assuming it is no longer useful. However, when the property is not globally true, i.e. for deceptive problems, it may be necessary to keep all the sampled points and their values, and to exploit this increasing amount of information. Such a basic Total Memory Optimiser is presented here. We show experimentally that this technique can outperform classical methods on deceptive problems. As it becomes very expensive in computing time when the dimension of the problem increases, a few compromises are suggested to speed it up.
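To make the "keep everything" idea concrete, here is a deliberately naive one-dimensional sketch: every evaluated point is archived forever, and the whole archive scores each new candidate. Both the candidate-scoring rule and the names are illustrative assumptions, not Clerc's exact algorithm:

```python
import random

def total_memory_optimise(f, bounds, n_iter=200, n_cand=20, seed=0):
    """Minimise f on an interval while never discarding a sample:
    the full archive of (x, f(x)) pairs guides each new evaluation."""
    rng = random.Random(seed)
    lo, hi = bounds
    archive = []                                  # every sample ever taken
    for _ in range(n_iter):
        if not archive:
            x = rng.uniform(lo, hi)               # first sample: uniform
        else:
            # score each candidate by the value of its nearest archived
            # neighbour, so all collected information is exploited
            cands = [rng.uniform(lo, hi) for _ in range(n_cand)]
            x = min(cands,
                    key=lambda c: min(archive,
                                      key=lambda p: abs(p[0] - c))[1])
        archive.append((x, f(x)))
    return min(archive, key=lambda p: p[1])       # best point found
```

The per-iteration cost grows with the archive size, which is exactly the expense the abstract mentions; compromises of the kind suggested in the talk would trade some of this total memory for speed.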