SSD - Helander Seminar
Prof. Dr. Per Helander - Fusion Energy, Stellarators, and the Wendelstein 7-X Project
Department of Stellarator Theory, Max Planck Institute for Plasma Physics Greifswald, Germany
Do we need new energy sources? Despite all the rhetoric from politicians, the vast majority of all energy still comes from fossil fuels and will continue to do so for the foreseeable future – the main reason being the enormous technical difficulties facing any alternative solution. There are, in fact, only very few carbon-free options that could even remotely satisfy Mankind’s present hunger for energy.
Fusion energy is one of these options. Fusion reactions occur in the Sun and other stars, but another reaction, between deuterium and tritium, producing helium, has a much higher cross section and would be easier to realise on Earth. The fuel must, however, be heated to at least 100 million degrees and be thermally insulated from the surroundings. The most promising way to accomplish this task is to confine the resulting plasma in a toroidal magnetic field.
Two main confinement concepts have emerged along these lines, the tokamak and the stellarator. They have been explored over decades of research and have taken great strides in recent years. A very large tokamak, ITER, is now being built in Cadarache in the south of France, with the aim of demonstrating a positive energy balance from a fusion plasma for the first time. A much more modest stellarator – but still the world’s largest experiment of this type – has recently started operation in Greifswald. This device, Wendelstein 7-X, aims to show the feasibility of fusion in stellarators, a concept that offers potential benefits in comparison with tokamaks.
In my talk, I will elaborate on the need for fusion research and on the physical principles of magnetic plasma confinement, and describe the Wendelstein 7-X project. I will also show the latest results from this device, which recently managed to achieve the best plasma confinement ever in a stellarator.
EU Regional School - Pock Seminar
Prof. Dr. Thomas Pock - Variational Methods for Computer Vision: Modeling, Numerical Solution and Learning
Institute for Computer Graphics and Vision, Graz University of Technology, Austria
Variational methods (also known as energy minimization methods) are among the
most flexible methods for solving inverse problems. The idea is to set up an
energy functional whose low energy states correspond to physically plausible
solutions of the problem. Hence, computing the solution of a problem is
formulated as an optimization problem. In this course, you will learn about
variational methods for solving classical computer vision problems such as image
restoration, image segmentation, stereo and motion estimation. You will learn
about both the basic modeling aspects (different regularization terms and data
fitting terms) as well as numerical optimization algorithms to solve the models.
Moreover, you will learn about functional lifting, which is a technique whose
aim is to reformulate a hard problem (usually due to non-convexity) in a higher
dimensional space, where the problem becomes convex. Finally, you will also
learn about our recent activities to improve variational models by means of
machine learning techniques.
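As a toy illustration of the energy-minimization idea described above (the functional, weights, and step size below are illustrative assumptions, not taken from the course), consider denoising a 1-D signal by gradient descent on a quadratic data-fitting term plus a smoothness regularizer:

```python
# Toy variational denoising: minimize
#   E(u) = sum_i (u[i] - f[i])^2 + lam * sum_i (u[i+1] - u[i])^2
# by plain gradient descent. Purely illustrative; real computer-vision
# models typically use total-variation regularizers and far more
# sophisticated (often convex) solvers.

def denoise(f, lam=1.0, step=0.1, iters=500):
    u = list(f)  # start from the noisy signal
    n = len(u)
    for _ in range(iters):
        grad = [0.0] * n
        for i in range(n):
            grad[i] = 2.0 * (u[i] - f[i])               # data-fitting term
            if i + 1 < n:                               # smoothness term
                grad[i] += 2.0 * lam * (u[i] - u[i + 1])
            if i > 0:
                grad[i] += 2.0 * lam * (u[i] - u[i - 1])
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # high-frequency "noise"
smooth = denoise(noisy)                  # oscillation is damped out
```

The step size must stay below 2 divided by the Lipschitz constant of the gradient for this descent to converge; the smoothness weight `lam` trades data fidelity against regularity, exactly the modeling choice the course discusses.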
SSD - Van der Aalst Seminar
Prof. Dr. Wil van der Aalst - Process Mining and Simulation: A Match Made in Heaven
Chair of Process and Data Science, RWTH Aachen University
Event data are collected everywhere: in logistics, manufacturing, finance, healthcare, customer relationship management, e-learning, e-government, and many other domains. The events found in these domains typically refer to activities executed by resources at particular times and for particular cases. Process mining provides a novel set of tools to exploit such data. Event data can be used to discover the real processes, to detect deviations from normative processes, and to analyze bottlenecks and waste. However, process mining tends to be backward-looking. Fortunately, simulation can be used to explore different design alternatives and to anticipate performance problems. Through simulation experiments, various “what if” questions can be answered and redesign alternatives can be compared with respect to key performance indicators. However, making a good simulation model can be very time-consuming, and models may be outdated by the time they are ready. Therefore, process mining and simulation complement each other well. In his talk, Wil van der Aalst will argue that process mining and simulation form a match made in heaven. He will introduce process mining concepts and show (1) how to discover simulation models, (2) how to view real and simulated event data in a unified manner, and (3) how to make process mining more forward-looking using simulation. He will also explain how his team applied process mining in over 150 organizations, developed the open-source tool ProM, and influenced the 20+ commercial process mining tools available today.
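To give a flavor of process discovery from event data (a minimal sketch, not the algorithms implemented in ProM; the event log below is hypothetical), counting which activity directly follows which in each case already yields a simple process map:

```python
from collections import Counter

# Hypothetical event log: one list of activities per case (trace).
log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "decide", "reject"],
    ["register", "decide", "pay"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by b
    within the same case. The resulting "directly-follows graph"
    is a basic building block of many discovery algorithms."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

dfg = directly_follows(log)
```

Edge frequencies in such a graph also hint at where simulation parameters (routing probabilities, for instance) could be estimated from real data, which is the bridge between discovery and simulation the talk describes.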
EU Regional School - Bockhorst Seminar
Dr. Heinrich Bockhorst - One-Sided Communication, MPI on Threads, Overlap of Communication and Computation. Old MPI topics - New Answers?
Senior Software Engineer at Intel Corporation, Germany
This talk will provide a short overview of some MPI topics that have been discussed for a long time. MPI’s one-sided communication has been available for about 20 years, but it was ignored by most programmers because the available implementations showed poor performance. This is something of a chicken-and-egg dilemma, because the MPI developers did not spend their time on software that was not used. Another reason was that the necessary hardware was missing or too expensive. A final reason against one-sided MPI communication was the cumbersome syntax that had to be used. The recent advances in MPI-3 may help to promote one-sided communication.
MPI on threads is the next topic. The combination of MPI and threads is defined in the MPI standard. Nowadays, people are switching to hybrid MPI+threads programs instead of pure MPI. This has become necessary because pure MPI utilization of CPUs with up to 72 cores is not efficient. The arguments against the pure MPI model are memory consumption and network congestion. Many collectives do not scale very well, so the rank count should be kept low. Most implementations do not efficiently support the hybrid model. The reasons for this will be discussed and a recent solution presented.
Overlap of communication and computation is the third topic. One-sided communication and threads can be combined to achieve this overlap, and coding examples will be presented to show how this can be done.
The poster shows two nearest-neighbor exchange patterns, the first with overlap and the second without.
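The overlap pattern itself can be sketched independently of MPI. Below is a minimal stand-in in Python threads (an assumption for illustration only: real codes would use one-sided calls such as MPI_Put/MPI_Get or nonblocking point-to-point operations, and the "communication" here is just a simulated delay):

```python
import threading
import time

def exchange_halo(buf, result):
    """Stand-in for a one-sided halo exchange (e.g. MPI_Put):
    sleeps to model network latency, then delivers the data."""
    time.sleep(0.05)
    result["halo"] = list(buf)

def compute_interior(data):
    """Work on interior points that need no neighbor data."""
    return [x * x for x in data]

result = {}
# Start the "communication" in the background ...
t = threading.Thread(target=exchange_halo, args=([1, 2], result))
t.start()
# ... and overlap it with computation on the interior.
interior = compute_interior([3, 4, 5])
t.join()  # wait for the halo before touching boundary points
boundary = [x + result["halo"][0] for x in [0, 9]]
```

The point of the pattern is that the latency of the exchange is hidden behind useful interior work; only the boundary update has to wait for the data to arrive.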
SSD - Peherstorfer Seminar
Prof. Dr. Benjamin Peherstorfer - Learning Context-aware Reduced Models for Multifidelity Computations
Courant Institute of Mathematical Sciences, New York University, USA
Traditional model reduction constructs reduced models with the aim of replacing expensive, high-fidelity models to speed up computations. However, reduced and high-fidelity models are increasingly used together in multifidelity methods, which means that the purpose of reduced models becomes supporting computations with the high-fidelity models rather than approximating and replacing them. In this presentation, we propose context-aware reduced models that are explicitly constructed for being used together with high-fidelity models in multifidelity computations. In the first part of the presentation, we introduce the adaptive multifidelity Monte Carlo (AMFMC) method that constructs reduced models that optimally support the multifidelity estimation of statistics of high-fidelity model outputs. Our analysis shows that our context-aware reduced models optimally reduce the runtime of multifidelity estimation, even though they are less accurate in the sense of traditional model reduction. In the second part, we present a multifidelity approach to dynamically couple reduced models with high-fidelity models, where the reduced models are adapted in a context-aware sense with sparse data from the high-fidelity model. Our numerical examples demonstrate that the dynamic coupling is particularly beneficial in the case of convection-dominated problems, where our context-aware approach achieves significant speedups, whereas traditional reduced models can even be more costly to evaluate than the high-fidelity models.
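The basic multifidelity estimation idea can be sketched as a control-variate estimator in which many cheap low-fidelity samples correct a few expensive high-fidelity ones (the models, sample counts, and fixed coefficient below are illustrative assumptions, not the adaptive AMFMC budget allocation from the talk):

```python
import random

random.seed(0)

# Hypothetical models: f_hi plays the "expensive" high-fidelity model,
# f_lo a cheap, correlated but biased surrogate of it.
def f_hi(x):
    return x * x + 0.1 * x

def f_lo(x):
    return x * x

# Few expensive samples, many cheap ones (the split is illustrative;
# AMFMC chooses it optimally for a given computational budget).
xs_few = [random.gauss(0, 1) for _ in range(100)]
xs_many = [random.gauss(0, 1) for _ in range(10000)]

def mean(v):
    return sum(v) / len(v)

# Control-variate estimator with coefficient fixed to 1:
#   E[f_hi] ~ mean_few(f_hi - f_lo) + mean_many(f_lo)
estimate = (mean([f_hi(x) - f_lo(x) for x in xs_few])
            + mean([f_lo(x) for x in xs_many]))
# True value: E[x^2 + 0.1 x] = 1 for x ~ N(0, 1)
```

Because the difference f_hi - f_lo has small variance, the few expensive evaluations suffice, which is exactly why a reduced model only needs to be well correlated with the high-fidelity model, not accurate in the traditional sense.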