24 events found

Content provider: PRACE or Babraham Institute Bioinfor...

  • [ONLINE] Parallel Programming Workshop @ BSC

    13 - 22 October 2020

    The registration to this course is closed. Please bring your own laptop. All the PATC courses at BSC are free of charge. Course convener: Xavier Martorell. Sessions will be held on 13-16 and 19-22 October from 2pm to 5.30pm and delivered via Zoom.

    Level: Intermediate (for trainees with some theoretical and practical knowledge and some programming experience) to Advanced (for trainees able to work independently, requiring guidance only for solving complex problems). Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C. Software requirements: Zoom (recommended), an SSH client (to connect to HPC systems), an X server (to enable remote visual tools).

    Objectives: The objectives of this course are to understand the fundamental concepts supporting the message-passing and shared-memory programming models. The course covers the two most widely used programming models: MPI for distributed-memory environments and OpenMP for shared-memory architectures. It also presents the main tools developed at BSC to obtain information on and analyse the execution of parallel applications, Paraver and Extrae, as well as the Parallware Assistant tool, which can automatically parallelize a large number of program structures and give the programmer hints on how to change the code to improve parallelization. Debugging alternatives are covered, including the use of GDB and Totalview, as is the use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures. Paraver will be used throughout the course as the tool for understanding the behaviour and performance of parallelized codes.

    The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.

    Learning outcomes: Students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyse their execution and tune their behaviour on parallel architectures.

    Agenda (all times 2pm to 5.30pm, with two 15-minute breaks):

    Tuesday 13/10/2020
    1. Introduction to parallel architectures, algorithms design and performance parameters
    2. Introduction to the MPI programming model
    3. Practical: How to compile and run MPI applications

    Wednesday 14/10/2020
    1. Introduction to Paraver: tool to analyze and understand performance
    2. Practical: Trace generation and trace analysis

    Thursday 15/10/2020
    1. MPI: Point-to-point communication, collective communication
    2. Practical: Simple matrix computations
    3. MPI: Blocking and non-blocking communications
    4. Practical: Matrix computations with non-blocking communication

    Friday 16/10/2020
    1. MPI: Collectives, communicators, topologies
    2. Practical: Heat equation example

    Monday 19/10/2020
    1. OpenMP fundamentals: the fork-join model (lecture)
    2. OpenMP fundamentals: the fork-join model (hands-on)
    3. OpenMP fundamentals: the data environment (lecture)
    4. OpenMP fundamentals: the data environment (hands-on)

    Tuesday 20/10/2020
    1. OpenMP work-sharing: distributing work among threads (lecture)
    2. OpenMP work-sharing: distributing work among threads (hands-on)
    3. OpenMP work-sharing: loop distribution (lecture)
    4. OpenMP work-sharing: loop distribution (hands-on)

    Wednesday 21/10/2020
    1. OpenMP tasking model: basics (lecture): the task construct and taskwait
    2. OpenMP tasking model: basics (hands-on)
    3. OpenMP tasking model: intermediate (lecture)
    4. OpenMP tasking model: intermediate (hands-on)

    Thursday 22/10/2020
    1. Hybrid MPI+OpenMP: the standard (threading level, synchronous/asynchronous MPI), TAMPI
    2. Practical: Heat, nbody

    End of course.

    https://events.prace-ri.eu/event/1041/
    2020-10-13 12:00 UTC to 2020-10-22 15:30 UTC
  • Data Carpentry @ EPCC online

    19 - 22 October 2020

    In many domains of research, the rapid generation of large amounts of data is fundamentally changing how research is done. The deluge of data presents great opportunities, but also many challenges in managing, analysing and sharing data. Data Carpentry aims to teach the skills that will enable researchers to be more effective and productive. The course is designed for learners with little to no prior knowledge of programming, shell scripting or command-line tools.

    https://events.prace-ri.eu/event/1080/
    2020-10-19 09:00 UTC to 2020-10-22 12:00 UTC
  • [ONLINE] Introduction to parallel programming @ UL

    20 - 23 October 2020

    The first part of this course (Days 1 and 2) is a Slovenian EuroHPC Competence Centre training event. The second part (Days 3 and 4) is a PRACE training event. The event is organized by the LECAD Laboratory, University of Ljubljana, Slovenia. Participation is free of charge. Registration will close on Monday 12 October at midnight. In light of the COVID-19 situation, this will be an online event.

    https://events.prace-ri.eu/event/1049/
    2020-10-20 06:00 UTC to 2020-10-23 15:00 UTC
  • [ONLINE] ChEESE Advanced Training on HPC for Computational Seismology @ HLRS

    21 - 23 October 2020

    Overview: Note (August 3): due to the COVID-19 pandemic, we currently plan to provide this course ONLINE (using Zoom).

    Are you interested in numerical wave simulations but feel there are too many codes and methods out there and don't know which option is right for you? Or perhaps you just want to refresh your knowledge and gain some practical experience with the different possibilities? Either way, this fully virtual workshop is for you! During three full days on 21-23 October 2020, you will receive introductory hands-on tutorials for ExaHyPE, Salvus, SeisSol, and SPECFEM3D, four different high-performance computational seismology software packages focused on waveform modeling and inversion. The workshop will provide you with an overview of their similarities, differences, and areas of applicability. Unique and in-depth half-day practicals will be provided for each package, with computational resources provided by HLRS. Participants will also be encouraged to present their own work and/or motivation as a one-minute lightning talk during the wrap-up session of the workshop. This training event is hosted by ChEESE, the Center of Excellence for Exascale in Solid Earth, and supported by the ENERXICO project. The organizers look forward to seeing you in October!

    Target audience: Researchers and students of all levels interested in augmenting their science with numerical waveform and dynamic rupture simulations as well as full-waveform inversion.

    For further information and registration please visit the HLRS course page.

    https://events.prace-ri.eu/event/1046/
    2020-10-21 07:00 UTC to 2020-10-23 15:00 UTC
  • [POSTPONED] Introduction to Deep Learning Models @ JSC

    21 - 23 October 2020

    This course has been postponed to 2021.

    The course focuses on a recent machine learning method known as deep learning, which has emerged as a promising, disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional learning, introduced in the 1950s and turned into a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas.

    This course provides an introduction to deep learning and its inherent ability to derive optimal, often quite generic, problem representations from the data (also known as 'feature learning'). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real datasets using well-known deep learning frameworks such as TensorFlow, Keras, or Torch. As the learning process with CNNs is extremely computationally intensive, the course will cover how parallel computing can be leveraged to speed up learning using general-purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately put the newly acquired skills into practice. After this course, participants will have a general understanding of which problems CNN learning architectures are useful for, and of how parallel and scalable computing facilitates the learning process when facing big datasets.

    Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic understanding of the batch scripts required for HPC application submission, and have minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an SSH client).

    Application: Applicants will be notified one month before the course starts whether they are accepted for participation.

    Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre

    Contact: For any questions concerning the course, please send an e-mail to g.cavallaro@fz-juelich.de.

    https://events.prace-ri.eu/event/983/
    2020-10-21 11:00 UTC to 2020-10-23 14:30 UTC
  • [ONLINE] GPU Programming with OpenACC @ CSC

    22 - 23 October 2020

    This course will be delivered as an ONLINE COURSE for remote participation due to the COVID-19 measures enforced by most European governments.

    Description: This course is an introduction to GPU programming using the directive-based OpenACC paradigm. The course consists of lectures and hands-on exercises. Topics covered include basic usage of OpenACC, how to control data movement between CPUs and GPUs, and some more advanced issues related to profiling and interoperability with, e.g., MPI.

    Learning outcome: After the course the participants should have the basic skills needed for utilising OpenACC with new or existing programs.

    Prerequisites and content level: The participants are assumed to have a working knowledge of Fortran and/or C and to operate fluently in a Linux/Unix environment. The content level of the course breaks down as: beginner 0%, intermediate 90%, advanced 10%, community-targeted content 0%.

    Agenda

    Day 1, Thursday 22.10
    9:00 - 9:30   Introduction to accelerators
    9:30 - 9:35   Coffee break
    9:35 - 10:30  Introduction to OpenACC
    10:30 - 12:00 Exercises [offline]
    12:00 - 13:00 Lunch time (recommended)
    13:00 - 16:00 Exercises [offline, cont.]
    16:00 - 17:00 Q&A session

    Day 2, Friday 23.10
    9:00 - 10:00  Data management
    10:00 - 10:15 Coffee break
    10:15 - 11:00 Profiling and performance optimisation
    11:00 - 12:00 Exercises [offline]
    12:00 - 13:00 Lunch time (recommended)
    13:00 - 15:30 Exercises [offline, cont.]
    15:30 - 16:00 Advanced topic: Multiple GPUs with MPI
    16:00 - 17:00 Q&A session

    Lecturers: Martti Louhivuori (CSC), George Markomanolis (CSC)
    Language: English
    Price: Free of charge (2 training days)

    REGISTRATION is OBLIGATORY, since the details to access the online course will be provided only to registered and accepted attendees. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc@csc.fi.

    https://events.prace-ri.eu/event/1073/
    2020-10-22 06:00 UTC to 2020-10-23 14:00 UTC
  • [MOOC] A short introduction to MPI One-sided Communication @ HLRS & ASTRON & Surfsara

    25 October - 7 November 2020

    Overview: This online course starts on the 26th of October and runs for two weeks, with an estimated effort of 4 hours per week. It is provided as a MOOC (Massive Open Online Course). During this time you will be guided through the concepts of one-sided communication and learn to perform these data movements correctly and efficiently using MPI (with C and Fortran bindings). Several examples and exercises will illustrate the use of these routines and help you understand their benefits, and you will also be able to challenge your knowledge with some quizzes!

    This online course has been developed by HLRS, ASTRON and SURFsara with the sponsorship of PRACE. Registration is free of charge through FutureLearn at the link indicated below.

    For further information and registration please visit the FutureLearn course page.

    https://events.prace-ri.eu/event/1079/
    2020-10-25 23:00 UTC to 2020-11-07 23:00 UTC
  • [ONLINE] High-performance scientific computing in C++ @ JSC

    26 - 29 October 2020

    Modern C++, with its support for procedural, object-oriented, generic and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance by today's standards requires understanding and exploiting multiple levels of parallelism, as well as understanding C++ code from a performance-centric viewpoint.

    In this course, participants will learn how to write C++ programs that better utilize typical HPC hardware resources of the present day. The course is geared towards scientists and engineers already familiar with C++17 (at the very least C++14) who wish to develop maintainable and fast applications. They will learn techniques to better utilize CPU caches, instruction pipelines, SIMD functionality and multi-threading. Shared-memory parallel programming on multiple CPU cores will be introduced using the parallel STL of C++17 and Intel Threading Building Blocks. The participants will also learn basic GPGPU programming in C++ using NVIDIA CUDA and Thrust.

    Prerequisites: Good working knowledge of C++, especially the C++14 standard. Please check with these questions whether your C++ knowledge fulfils the requirements.

    Application: Registrations are only considered until 16 October 2020; the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructor: Dr. Sandipan Mohanty, JSC

    Contact: For any questions concerning the course, please send an e-mail to s.mohanty@fz-juelich.de. This course was originally planned for 15-17 June 2020 as an in-person course.

    https://events.prace-ri.eu/event/984/
    2020-10-26 08:00 UTC to 2020-10-29 15:30 UTC
  • [ONLINE] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca

    26 - 28 October 2020

    This course shows how to improve the overall performance of a code in Solid Earth currently in use at the ChEESE Center of Excellence (an H2020 project). Starting from a description of the main architectures characterizing the HPC world, parallel performance profiling tools will be applied to examples from the code to find the so-called performance bottlenecks. Building on the profiling analysis, the course shows how and where to intervene, with respect to the hardware characteristics of the HPC machine used for the investigation, to improve the overall performance of serial and parallel codes. Particular attention will be given to tools and techniques for improving the performance of codes with respect to I/O and vectorization.

    Skills: At the end of the course the student will be able to: use a concrete methodology to improve the performance of a Solid Earth code already in use in the context of the ChEESE project; find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used.

    Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that allow them to improve the performance of their code on current HPC architectures in the shortest possible time.

    Prerequisites: Basic knowledge of Linux/UNIX. Knowledge of C, Fortran, MPI or OpenMP is recommended. Notions of parallel computing techniques and algorithms for Solid Earth applications.

    Coordinating teachers: Dr. V. Ruggiero, G. Amati.

    https://events.prace-ri.eu/event/973/
    2020-10-26 08:15 UTC to 2020-10-28 16:30 UTC
  • [ONLINE] Data Management with iRODS and Compute @SURFsara

    29 October 2020

    Do you want to practice reproducible research in HPC and preserve your data for the future? Research data management is essential for effective and reproducible science. Due to ever-increasing data volumes and complexity, researchers need modern tools to manage all stages of the data life cycle. Here we will use iRODS, a data management framework, to incorporate data management principles into compute pipelines.

    What? In this course you will: learn about the iRODS data management framework and icommands; understand how to incorporate provenance in a compute workflow; know about FAIR in data processing workflows.

    Who? Everyone interested in learning advanced data management tools for compute and data processing workflows.

    Requirements: Basic knowledge of Linux and shell commands. You should have your own laptop with an up-to-date browser and a terminal emulator. The use of Linux or macOS is preferred, but not mandatory; for Windows users we recommend downloading MobaXterm (portable version) as a terminal emulator.

    IMPORTANT INFORMATION, WAITING LIST: If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you will be informed when a place becomes available.

    https://events.prace-ri.eu/event/1058/
    2020-10-29 08:00 UTC to 2020-10-29 16:30 UTC
  • [ONLINE] Heterogeneous computing with performance modelling @ SNIC

    4 - 5 November 2020

    Modern high-core-count CPUs and accelerators, such as GPUs, have been shown to provide significant performance benefits in many different applications. However, for a novice, or even for a moderately experienced scientist or programmer, it is not always clear which applications could potentially benefit from these resources and which do not. For example, an NVIDIA V100 GPU can perform artificial intelligence (AI) related computations in a fraction of the time it takes a regular CPU to perform the same computations, but ill-informed OpenACC compiler pragmas can actually make a code run slower. Why is this? When should one invest time in parallelization and/or GPU acceleration? How much speedup can be expected with a given application? The main goal of this two-day course is to start answering these questions. The course also covers the basics of GPU programming and aims to provide the information needed to avoid the most common pitfalls.

    More information, the schedule, and registration can be found on the course webpage at HPC2N, Umeå University.

    https://events.prace-ri.eu/event/1064/
    2020-11-04 08:00 UTC to 2020-11-05 16:00 UTC
  • [ONLINE] Shared memory parallelization with OpenMP @ VSC Vienna

    5 - 6 November 2020

    This course will be delivered as an ONLINE COURSE (using Zoom) for remote participation. REGISTRATION is strictly NECESSARY, since the details to access the online course will be provided only to registered and accepted attendees. Registration follows the "first come, first served" principle. For the online course we can take a maximum of 40 persons; additional registrations will be added to a waiting list. Furthermore, accepted participants will be contacted a few days before the course and asked to complete a short PRE-ASSIGNMENT before the course starts.

    For the HANDS-ON LABS all you need is a C/C++ or Fortran compiler that supports OpenMP. Participants can choose between using their own laptops or any system they have access to that provides an OpenMP-enabled compiler; it is even possible to do the exercises in Google Colab. Alternatively, participants can ask for a training user account at the VSC for the duration of the course.

    Overview: The focus of this 2-day course is shared memory parallelization with OpenMP for dual-core, multi-core, shared memory, and ccNUMA platforms. The course teaches OpenMP starting from a beginner's level. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the OpenMP directives, environment variables, and library routines. Race-condition debugging tools are also presented.

    Content levels: Beginner = 6:00h (50%), Intermediate = 4:00h (33%), Advanced = 2:00h (17%).

    This course is a PRACE training event. It is organized by the VSC Research Center, TU Wien, in cooperation with the High-Performance Computing Center Stuttgart (HLRS).

    Agenda and content

    1st day
    08:45  Join online
    09:00  Welcome
    09:10  Introduction to modern hardware (talk)
    10:10  An overview of OpenMP (talk)
    10:35  Coffee
    10:50  OpenMP programming and execution model (talk+practical)
    12:30  Lunch
    13:30  OpenMP worksharing directives (talk+practical)
    15:00  Coffee
    15:15  More OpenMP (talk+practical)
    16:00  Summary (talk) + Q&A
    16:30  End of first day

    2nd day
    08:45  Join online
    09:00  How to write correct OpenMP programs (talk)
    09:30  Intel Inspector XE (demo+practical)
    10:30  Coffee
    10:45  How to write efficient OpenMP programs (talk)
    11:00  Heat equation (talk+practical)
    12:30  Lunch
    13:30  Heat equation (practical, continued)
    14:00  Vectorization with OpenMP (talk)
    14:25  Thread affinity in OpenMP (talk)
    15:00  Coffee
    15:15  The taskloop directive (talk+practical)
    16:30  End of second day (course)

    https://events.prace-ri.eu/event/1076/
    2020-11-05 07:45 UTC to 2020-11-06 15:30 UTC
  • [ONLINE] Directive-based GPU programming with OpenACC @ JSC

    9 - 11 November 2020

    GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to the GPU. The course will cover basic aspects of GPU architectures and programming. The focus is on the directive-based OpenACC programming model, which allows portable application development. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

    Topics covered include: introduction to GPU/parallel computing; the OpenACC programming model; interoperability of OpenACC with GPU libraries (such as cuBLAS and cuFFT) and CUDA; multi-GPU programming with MPI and OpenACC; tools for debugging and profiling; performance optimization.

    The course consists of lectures and interactive hands-on sessions in C or Fortran (the attendee's choice). This course will take place as an online event. The link to the online platform will be provided to the registrants only.

    Prerequisites: Some knowledge of Linux (e.g. make, a command-line editor, the Linux shell; see for instance this overview) and a little experience in C/Fortran.

    Application: Registrations are only considered until 2 November 2020; the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Andreas Herten, Dr. Thorsten Hater, Dr. Kaveh Haghighi-Mood, JSC; Markus Hrywniak, Jiri Kraus, NVIDIA

    Contact: For any questions concerning the course, please send an e-mail to a.herten@fz-juelich.de.

    https://events.prace-ri.eu/event/1075/
    2020-11-09 08:00 UTC to 2020-11-11 12:00 UTC
  • [ONLINE] Earth Sciences Simulation Environments @ BSC

    10 - 12 November 2020

    The registration to this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course conveners: Marc Guevara, Carles Tena and Maria Teresa Pay Perez. This training will be online via Zoom. Course lecturers: Earth Sciences Department researchers involved in the Computational Earth Sciences, Atmospheric Composition, Climate Prediction and Earth System Services groups. Final lectures TBA.

    Objectives: The objective of this PATC course is to cover the basics of a high performance computing (HPC) environment oriented towards earth science applications, specifically chemical weather modelling and climate modelling. More precisely, the course will cover: introduction to earth science fundamentals and modelling; basic usage of an HPC environment (shell, compilers, libraries, file systems, queuing system and parallel computing); building and configuring targeted earth science applications with the NMMB/BSC-CTM chemical transport model and the EC-EARTH climate model; executing and monitoring numerical experiments using a workflow manager; analysing and visualising model outputs with a wide set of tools.

    Learning outcomes: Participants will learn and gain experience in accessing an HPC facility, installing earth science numerical models and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, and analysing and visualising model results.

    Level: (All courses are designed for specialists with at least a 1st-cycle degree or similar background experience.) For trainees with some theoretical and practical knowledge.

    Prerequisites: At least a university degree in progress in Earth Sciences, Computer Science or a related area. Basic knowledge of UNIX. Knowledge of C, Fortran, MPI or OpenMP is recommended. Knowledge of Earth Sciences data formats (GRIB, NetCDF, HDF, ...) is recommended. Knowledge of R and Python.

    Agenda:

    Day 1 (Nov 10), 4.5h
    10:00-10:15 Welcome and introduction to the BSC Earth Sciences Department (Francisco Doblas-Reyes, Head of the BSC-ES Department)
    10:15-10:30 Technical and logistic information (Marc Guevara and Carles Tena)
    10:30-11:00 Introduction to earth science fundamentals and modelling (Raffaele Bernardello)
    11:00-11:30 Break
    11:30-13:00 Introduction to Linux and HPC, BSC facilities, PRACE and RES, plus preparation/checks for the hands-on tutorial after the lunch break (Mario Acosta)
    13:00-14:00 Lunch break
    14:00-16:00 HPC hands-on tutorial; at the end of this tutorial, the students will be able to compile and execute jobs on MareNostrum 4 (Mario Acosta, Miguel Castrillo and Carles Tena)

    Day 2 (Nov 11), 3.5h
    9:30-10:00 Lecture on the EC-EARTH climate model (Raffaele Bernardello)
    10:00-10:30 Lecture on the NMMB-MONARCH atmosphere chemistry model (Oriol Jorba)
    10:30-11:00 Lecture on setting up and running the NMMB-MONARCH model with Autosubmit (Francesca Macchia and Gilbert Montane)
    11:00-11:30 Break
    11:30-13:30 Model hands-on tutorial; students will run a prepared case (Francesca Macchia, Gilbert Montane, Carles Tena and Marc Guevara)

    Day 3 (Nov 12), 3h
    10:00-11:00 Introduction to analysis and visualisation tools for model outputs (s2dverification, MapGenerator, CDO, NCO, Panoply, ncview) (Francesco Benincasa)
    11:00-11:30 Break
    11:30-13:30 Analysis hands-on tutorial; students will apply analysis and visualisation tools to the outputs created the day before (Francesco Benincasa, Carles Tena and Marc Guevara)

    End of course.

    https://events.prace-ri.eu/event/1042/
    2020-11-10 08:30 UTC to 2020-11-12 15:30 UTC
  • [ONLINE] High-performance computing with Python @ JSC

    16 - 20 November 2020

    Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly. This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the kernel using various tools.

    The following topics will be covered: interactive parallel programming with IPython; profiling and optimization; high-performance NumPy; just-in-time compilation with Numba; distributed-memory parallel programming with Python and MPI; bindings to other programming languages and HPC libraries; interfaces to GPUs.

    This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.

    Prerequisites: Good working knowledge of Python and NumPy.

    Application: Registrations are only considered until 6 November 2020. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Jan Meinke, Dr. Olav Zimmermann, JSC

    Contact: For any questions concerning the course, please send an e-mail to j.meinke@fz-juelich.de.

    https://events.prace-ri.eu/event/982/
    2020-11-16 08:00 UTC to 2020-11-20 12:00 UTC
  • CodeRefinery online workshop

    17 - 26 November 2020

    CodeRefinery online workshop https://tess.elixir-europe.org/events/coderefinery-online-workshop Course goals and format In this course, you will become familiar with tools and best practices for version control and reproducibility in modern research software development. The main focus is on using Git for efficiently writing and maintaining research software. We will not teach programming, but we teach the tools you need to use programming well. We assume that you are familiar with the programming language that you use in your work and research. We try to keep the course as language-independent as possible but we will show some basic code examples in Python. This is an informal and interactive event with type-along type of presentations, live coding, and demos. Short tutorials alternate with practical exercises. Prerequisites You should be able to navigate the file tree in a terminal session and edit text files in the terminal. This Linux shell crash course (video) contains the essentials. Basics in one or more programming languages. You need to install some software (see below). It is useful if you have a basic idea of how Git works. We will start from the basics anyway, but please go through this Git-refresher material for a basic overview and important configuration steps. Make sure that Git is configured, and verify the configuration (see below). Schedule All times are in Europe/Stockholm time Time converter The schedule includes frequent breaks. The schedule is subject to change. 
Pre-workshop installation help and verification times Nov 12 (Thu) 13:00 - 14:00 Nov 13 (Fri) 13:00 - 14:00 Join anytime during this (preferably not everyone at the very start and end) Day 1 (Nov 17, Tue) 9:00 - 9:20 Welcome and practical information  9:20 - 12:00 Introduction to version control - part 1/2 (“Motivation” to “Undoing”)  Day 2 (Nov 18, Wed) 9:00 - 12:00 Introduction to version control - part 2/2 (“Branching and merging” to “Inspecting history”)  Day 3 (Nov 19, Thu) 9:00 - 12:00 Collaborative distributed version control  Day 4 (Nov 24, Tue) 9:00 - 9:15 Mini-intro  9:00 - 11:00 Reproducible research and FAIR data  11:00 - 12:00 Social coding and open software  Day 5 (Nov 25, Wed) 9:00 - 10:30 Documentation  10:30 - 12:00 Jupyter  Day 6 (Nov 26, Thu) 9:00 - 10:30 Automated testing 10:45 - 11:45 Modular code development  11:45 - 12:00 Concluding remarks and where to go from here Software requirements You need to install some software before CodeRefinery (why we ask you to do this). Please do this - and check it - in advance, otherwise you’ll start off behind. Note that, e.g., a working Python executable on your laptop is not sufficient - a version greater than 3.4 is strongly recommended and a number of extra packages need to be installed as detailed on the Python installation page. You also need to check your Git configuration (video: how to configure Git). If you have an institutional laptop with limited rights, start in advance and/or ask for help to translate these instructions to work on your system. Bash Editor Git, including some configuration (optional) Visual diff tools Python Jupyter and JupyterLab Snakemake Zoom You should either a) drop by one of our verification sessions in advance, or b) verify with your team’s helper before the workshop. How to join We are normally limited by the number of helpers, so why not allow groups to bring their own? We have the concept of teams that bring their own breakout room.
Anyone may register as a learner; we will try to take as many as we can. Or you can register as a helper. If you are somewhat comfortable with these tools, you can be a helper! (see below) Register as a team, one helper and 4-6 learners. We will accept all complete teams. Everyone on the team must register separately. Decide some team name, and when registering, everyone enters this same name so that we can link you together. During the exercise sessions in breakout rooms, the helper has primary responsibility for their group’s hands-on exercises. Still, instructors drop by to check and help as needed, so don’t worry if you don’t know everything. This is a great opportunity to bring your friends and colleagues and prepare for the future together. If you’ve been to a CodeRefinery before and have used Git since then, you are definitely capable of being a helper. If you routinely use Git and know Python somewhat well, you are also very qualified to be a helper. Registration Please visit the workshop webpage.  https://events.prace-ri.eu/event/1082/ 2020-11-17 08:00:00 UTC 2020-11-26 11:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Node-Level Performance Engineering @ LRZ

    2 - 4 December 2020

    [ONLINE] Node-Level Performance Engineering @ LRZ https://tess.elixir-europe.org/events/online-node-level-performance-engineering-lrz This online course covers performance engineering approaches on the compute node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance could at best be achieved by their code. This is because parallelism takes us only half the way to good performance. Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the required knowledge to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes. Pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc., are covered. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science. We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimizations by a scientific process.   The course is a PRACE training event. 
Introduction Our approach to performance engineering Basic architecture of multicore systems: threads, cores, caches, sockets, memory The important role of system topology Tools: topology & affinity in multicore environments Overview likwid-topology and likwid-pin Microbenchmarking for architectural exploration Properties of data paths in the memory hierarchy Bottlenecks OpenMP barrier overhead Roofline model: basics Model assumptions and construction Simple examples Limitations of the Roofline model Pattern-based performance engineering Optimal use of parallel resources Single Instruction Multiple Data (SIMD) Cache-coherent Non-Uniform Memory Architecture (ccNUMA) Simultaneous Multi-Threading (SMT) Tools: hardware performance counters Why hardware performance counters? likwid-perfctr Validating performance models Roofline case studies Dense matrix-vector multiplication Sparse matrix-vector multiplication Jacobi (stencil) smoother Optional: The ECM performance model https://events.prace-ri.eu/event/1052/ 2020-12-02 08:00:00 UTC 2020-12-04 15:00:00 UTC [] [] [] workshops_and_courses [] []
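In its basic form, the Roofline model introduced above reduces to a single formula: attainable performance is the minimum of the compute ceiling and the memory ceiling (bandwidth times arithmetic intensity). A minimal sketch, with illustrative hardware numbers that are not taken from the course:

```python
def roofline(peak_gflops, bandwidth_gbs, intensity_flop_per_byte):
    """Attainable performance (GFLOP/s) under the basic Roofline model:
    the minimum of the compute ceiling and the memory ceiling."""
    return min(peak_gflops, bandwidth_gbs * intensity_flop_per_byte)

# A STREAM-triad-like loop a[i] = b[i] + s*c[i] performs ~2 flops per
# 24 bytes moved (arithmetic intensity ~0.083 flop/byte), so on an
# illustrative node with 3000 GFLOP/s peak and 200 GB/s memory bandwidth
# it is firmly memory bound:
print(roofline(3000, 200, 2 / 24))   # ~16.7 GFLOP/s, far below peak
```

The ridge point, peak divided by bandwidth (15 flop/byte for these numbers), separates memory-bound from compute-bound codes; the course's case studies (dense and sparse matrix-vector products, Jacobi smoothers) all sit well to the memory-bound side.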
  • [ONLINE] GPU Programming with CUDA @ JSC

    7 - 11 December 2020

    [ONLINE] GPU Programming with CUDA @ JSC https://tess.elixir-europe.org/events/online-gpu-programming-with-cuda-jsc GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to an NVIDIA GPU. The course will cover basic aspects of GPU architectures and programming. Focus is on the usage of the parallel programming language CUDA-C which allows maximum control of NVIDIA GPU hardware. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications. Topics covered will include: Introduction to GPU/Parallel computing Programming model CUDA GPU libraries like CuBLAS and CuFFT Tools for debugging and profiling Performance optimizations Prerequisites: Some knowledge about Linux, e.g. make, command line editor, Linux shell, experience in C/C++ Dates: 7-11 December 2020, 09:00-13:00 each day Application Registrations are only considered until 30 November 2020, the maximal number of participants is limited. Applicants will be notified whether they are accepted for participation. Instructors: Dr. Jan Meinke, Jochen Kreutz, Dr. Andreas Herten, JSC; Jiri Kraus, Markus Hrywniak, NVIDIA Contact For any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de https://events.prace-ri.eu/event/1083/ 2020-12-07 08:00:00 UTC 2020-12-11 12:00:00 UTC [] [] [] workshops_and_courses [] []
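CUDA-C itself is covered in the course; as a language-neutral illustration of its execution model, the following Python sketch (our own, with invented names) reproduces the index arithmetic every 1-D kernel performs, including the usual bounds guard for the last, partially filled block:

```python
def global_ids(grid_dim, block_dim):
    """Enumerate idx = blockIdx.x * blockDim.x + threadIdx.x for a 1-D launch."""
    return [b * block_dim + t for b in range(grid_dim) for t in range(block_dim)]

def saxpy(idx, a, x, y, out):
    """Body of a SAXPY kernel as seen by one thread, with the bounds guard
    that prevents out-of-range threads from touching memory."""
    if idx < len(x):
        out[idx] = a * x[idx] + y[idx]

x, y = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
out = [0.0] * len(x)
for idx in global_ids(grid_dim=2, block_dim=2):  # 4 threads cover 3 elements
    saxpy(idx, 2.0, x, y, out)
# out is now [12.0, 24.0, 36.0]; the 4th thread did nothing
```

In real CUDA-C the loop disappears: the hardware runs every (blockIdx, threadIdx) pair concurrently, which is why the per-thread guard matters.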
  • [ONLINE] CUDA 4 Dummies @ VSC Vienna

    10 - 11 December 2020

    [ONLINE] CUDA 4 Dummies @ VSC Vienna https://tess.elixir-europe.org/events/online-cuda-4-dummies-vsc-vienna This course will be delivered as an ONLINE COURSE (using Zoom) for remote participation. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided to the registered and accepted attendees only. Registration follows the "first come – first served" principle. For the online course we can take a maximum of 40 persons. Additional registrations will be added to a waiting list. Furthermore, accepted participants will be contacted a few days before the course and asked to do a short PRE-ASSIGNMENT that has to be completed before the course starts. The HANDS-ON LABS will be done on VSC-3, all participants will get a training user account at the VSC for the duration of the course. Overview CUDA is the standard API for code development targeting the GPU and a number of impressive examples have already been given in diverse areas from particle physics to computational biology. CUDA allows simple extensions of standard C/C++/Fortran code with GPU-specific functions. In this way, thousands of cores available on the GPU can be leveraged to work in parallel and thus carry out significant fractions of the computational workload on the device rather than the CPU. There is also a vast set of auxiliary tools available to the developer including libraries, code templates, building blocks, analysis tools, developmental frameworks and in general a vivid community making up the CUDA Developer Zone. It is often this multifaceted support environment that leaves the interested beginner feeling overwhelmed and unsure which first steps to take for a straightforward introduction to the subject. For exactly this reason, the present course offers a systematic, step-by-step introduction to GPU computing from the perspective of the newcomer.
Basic design principles will be established, central programming techniques acquired and a number of state-of-the-art workflows examined that efficiently employ the GPU and are frequently used in scientific computing. This course provides lectures, demos, and hands-on labs. This course is a PRACE training event. It is organized by the VSC Research Center, TU Wien. Agenda & Content (tentative) 1st day – 10 December 2020 Basic design principles, getting started with simple CUDA routines, mastering the CUDA environment; Memory Hierarchy with a focus on CUDA managed unified memory; Practical exercise: all the pitfalls one can run into when porting a dummy code to CUDA. 08:45 - 09:00    join online 09:00 - 13:00    lectures and hands-on labs (with breaks: 11:00-11:15) 13:00 - 14:00    lunch break 14:00 - 17:00    lectures and hands-on labs (with breaks: 15:15-15:30) 2nd day – 11 December 2020 Selected examples to demonstrate CUDA capabilities and GPU operation, e.g., bandwidthTest, simpleAssert, simpleStreams, etc.; CUDA libraries, CUBLAS, CUSolver, considering numerical precision, gaining insight into the runtime; Practical exercise: CUDA in everyday life (of a scientist). 08:45 - 09:00    join online 09:00 - 13:00    lectures and hands-on labs (with breaks: 11:00-11:15) 13:00 - 14:00    lunch break 14:00 - 17:00    lectures and hands-on labs (with breaks: 15:15-15:30) https://events.prace-ri.eu/event/1078/ 2020-12-10 07:45:00 UTC 2020-12-11 16:00:00 UTC [] [] [] workshops_and_courses [] []
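Two small pieces of arithmetic recur throughout such hands-on labs: sizing a 1-D kernel launch so that every element is covered, and computing the effective bandwidth that benchmarks like bandwidthTest report. Sketched in Python with illustrative helper names:

```python
def blocks_for(n, threads_per_block=256):
    """Ceiling division used to size a 1-D launch covering n elements."""
    return (n + threads_per_block - 1) // threads_per_block

def effective_bandwidth_gbs(bytes_read, bytes_written, seconds):
    """Effective bandwidth in GB/s: (bytes read + bytes written) / time / 1e9,
    the standard metric from the CUDA best-practices material."""
    return (bytes_read + bytes_written) / seconds / 1e9

# 1000 elements at 256 threads per block need 4 blocks (the last one partial);
# a copy of 1 GB in and 1 GB out taking 0.5 s sustains 4 GB/s.
print(blocks_for(1000), effective_bandwidth_gbs(1e9, 1e9, 0.5))
```

Comparing the measured effective bandwidth against the device's theoretical peak is the quickest way to tell whether a memory-bound kernel still has headroom.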
  • Creating Scientific Figures

    11 December 2020

    Creating Scientific Figures https://tess.elixir-europe.org/events/creating-scientific-figures For more details please see https://www.bioinformatics.babraham.ac.uk/cgi-bin/public/course_dates.cgi. Course status: Spaces Available. 2020-12-11 09:30:00 UTC 2020-12-11 17:00:00 UTC Babraham Campus Babraham Campus CB22 3AT [] mailto:babraham.bioinformatics@babraham.ac.uk [] [] [] [] []
  • [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC

    26 - 27 January 2021

    [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC https://tess.elixir-europe.org/events/format-tbd-programming-distributed-computing-platforms-with-compss-bsc Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course convener: Rosa Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department Lecturers:  Rosa M Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department, BSC Javier Conejero, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Jorge Ejarque, Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Daniele Lezzi, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Objectives: The objective of this course is to give an overview of the COMPSs programming model, which is able to exploit the inherent concurrency of sequential applications and execute them on distributed computing platforms in a manner transparent to the application developer. This is achieved by annotating part of the code as tasks, and building at execution time a task-dependence graph based on the actual data consumed/produced by the tasks. The COMPSs runtime is able to schedule the tasks in the computing nodes, taking into account data locality and the different nature of the computing nodes in heterogeneous platforms. Additionally, COMPSs has recently been enhanced with the possibility of coordinating Web Services as part of the applications. COMPSs supports Java, C/C++ and Python as programming languages. Learning Outcomes:  In the course, the COMPSs syntax, programming methodology and an overview of the runtime internals will be given. The attendees will get a first lesson about programming with COMPSs that will enable them to start programming with this framework.
A hands-on with simple introductory exercises will also be performed. The students who finish this course will be able to develop simple COMPSs applications and to run them both in a local resource and in a distributed platform (initially in a private cloud). The exercises will be delivered in Python and Java. In the case of Python, Jupyter notebooks will be used in some of the exercises. Level: INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course. ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems. Prerequisites: Programming skills in Java and Python  Agenda:  Day 1 (January 26th, 2021) 9:30 - 10:00 - Welcome and round table (30') 10:00 - 10:30 - Session 1: Introduction to COMPSs (30') 10:30 - 11:15 - Session 2: PyCOMPSs: Writing Python applications (45') 11:15 - 11:45 - Coffee break 11:45 - 13:00 - Session 3: Python Hands-on using Jupyter notebooks (1h15') 13:00 - 14:30 - Lunch break 14:30 - 15:00 - Session 4: Machine learning with dislib (30') 15:00 - 16:30 - Session 5: Hands-on with dislib (1h30') Day 2 (January 27th, 2021) 9:30 - 11:00 - Session 6: Java & C++ (1h30') 11:00 - 11:30 - Coffee break 11:30 - 13:00 - Session 7: COMPSs Advanced Features (1h30') 13:00 - 14:30 - Lunch break 14:30 - 16:30 - Session 8: Cluster Hands-on (MareNostrum) (2h) 16:30 - 16:45 - Session 9: COMPSs Installation & Final Notes (15') END of COURSE https://events.prace-ri.eu/event/1054/ 2021-01-26 08:30:00 UTC 2021-01-27 15:30:00 UTC [] [] [] workshops_and_courses [] []
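To preview the idea of task annotation and the task-dependence graph, here is a deliberately simplified Python toy. It only mimics the concept: the real decorator lives in pycompss.api.task, and the real runtime defers tasks and schedules them on remote nodes instead of running them eagerly as this sketch does:

```python
task_graph = []    # (task_name, sorted names of the tasks it depends on)
_producer = {}     # id(object) -> name of the task that produced it

def task(fn):
    """Toy stand-in for a COMPSs @task annotation: record data dependences."""
    def wrapper(*args):
        deps = sorted({_producer[id(a)] for a in args if id(a) in _producer})
        task_graph.append((fn.__name__, deps))
        result = fn(*args)          # COMPSs would defer and schedule this
        _producer[id(result)] = fn.__name__
        return result
    return wrapper

@task
def load():
    return [1, 2, 3]

@task
def double(xs):
    return [2 * v for v in xs]

data = load()
out = double(data)   # the runtime sees that double() consumes load()'s output
# task_graph == [('load', []), ('double', ['load'])]
```

The edge from load to double is exactly the task-dependence graph described above: it is derived from the data actually consumed and produced, not declared by hand.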
  • [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC

    28 January 2021

    [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC https://tess.elixir-europe.org/events/format-tbd-managing-distributed-data-with-hecuba-and-dataclay-bsc Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course conveners: Department and Research group: Computer Science - Workflows and Distributed Computing Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher Anna Queralt, Distributed Object Management research line, Senior researcher Course Lecturers: Department and Research group: Computer Sciences - Workflows and Distributed Computing Alex Barceló, Distributed object Management research line, Researcher Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher Adrián Espejo, Data-driven Scientific Computing research line, Junior research engineer Daniel Gasull, Distributed object Management research line, Research engineer Pol Santamaria, Data-driven Scientific Computing research line, Junior developer Anna Queralt, Distributed object Management research line, Senior researcher Objectives: The objective of this course is to give an overview of BSC storage solutions, Hecuba and dataClay. These two platforms make it easy to store and manipulate distributed data from object-oriented applications, enabling programmers to handle object persistence using the same classes they use in their programs, thus avoiding time-consuming transformations between persistent and non-persistent data models. Also, Hecuba and dataClay enable programmers to transparently manage distributed data, without worrying about its location. This is achieved by adding a minimal set of annotations in the classes.
Both Hecuba and dataClay can work independently or integrated with the COMPSs programming model and runtime to facilitate parallelization of applications that handle persistent data, thus providing a comprehensive mechanism that enables the efficient usage of persistent storage solutions from distributed programming environments. Both platforms offer a common interface to the application developer that facilitates using one solution or the other depending on the needs, without changing the application code. Also, both of them have additional features that allow the programmer to take advantage of their particularities. Learning Outcomes:   In the course, the Hecuba and dataClay syntax, programming methodology and an overview of their internals will be given. Also, an overview of COMPSs at user level will be provided in order to take advantage of the distribution of data with both platforms. The attendees will get a first lesson about programming with the common storage interface that will enable them to start programming with both frameworks. A hands-on with simple introductory exercises will also be performed for each platform, with and without COMPSs to distribute the computation. The students who finish this course will be able to develop simple Hecuba and dataClay applications and to run them both in a local resource and in a distributed platform (initially in a private cloud). Prerequisites: Basic programming skills in Python and Java. Previous attendance at the PATC course on programming distributed systems with COMPSs is recommended.   Tentative Agenda (TBC):  Day 1 (Jan 28th, 2021) Session 1 / 9:30 – 13:00 9:30-10:00 Round table.
Presentation and background of participants 10:00-11:00 Motivation, introduction and syntax of BSC storage platforms 11:00-11:30 Coffee break 11:30-12:15 Hands-on with storage API 12:15-13:00 COMPSs overview and how to parallelize a sequential application 13:00-14:30 Lunch break Session 2/ 14:30 – 18:00 14:30-16:00 Hecuba specifics and hands-on 16:00-16:30 Break 16:30-18:00 dataClay specifics and hands-on END of COURSE       https://events.prace-ri.eu/event/1055/ 2021-01-28 08:30:00 UTC 2021-01-28 17:00:00 UTC [] [] [] workshops_and_courses [] []
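As a conceptual preview of the persistent-object idea, the standard-library toy below uses the same class for volatile and persistent instances, with persistence added by a single call. All names here are invented for the sketch; the actual Hecuba and dataClay interfaces differ, and they store objects in distributed backends rather than local files:

```python
import pickle

class StorageObject:
    """Toy base class: call make_persistent() to save the object's state."""
    def make_persistent(self, path):
        with open(path, "wb") as f:
            pickle.dump(self.__dict__, f)

    @classmethod
    def get_by_path(cls, path):
        obj = cls.__new__(cls)          # bypass __init__; restore saved state
        with open(path, "rb") as f:
            obj.__dict__.update(pickle.load(f))
        return obj

class Sample(StorageObject):
    def __init__(self, values):
        self.values = values
```

The point the course makes is visible even in this toy: Sample is an ordinary class, and the programmer never writes a translation layer between the in-memory model and the stored one.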
  • [ONLINE] OpenMP Programming Workshop @ LRZ

    17 - 19 February 2021

    [ONLINE] OpenMP Programming Workshop @ LRZ https://tess.elixir-europe.org/events/online-openmp-programming-workshop-lrz With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures.  Its powerful and flexible programming model has allowed researchers from various domains to enable parallelism in their applications.  Over the more than two decades of its existence, OpenMP has tracked the evolution of hardware and the complexities of software to ensure that it stays as relevant to today’s high performance computing community as it was in 1997. This workshop will cover a wide range of topics, ranging from the basics of OpenMP programming using the "OpenMP Common Core" to advanced topics. During each day lectures will be mixed with hands-on sessions. Day 1 The first day will cover basic parallel programming with OpenMP. Most OpenMP programmers use only around 21 items from the specification. We call these the “OpenMP Common Core”. By focusing on the common core on the first day, we make OpenMP what it was always meant to be: a simple API for parallel application programmers. In this hands-on tutorial, students use active learning through a carefully selected set of exercises, to master the Common Core and learn to apply it to their own problems.
Day 2 and 3 Day 2 and 3 will cover advanced topics like: Mastering Tasking with OpenMP, Taskloops, Dependencies and Cancellation Host Performance: SIMD / Vectorization Host Performance: NUMA Aware Programming, Memory Access, Task Affinity, Memory Management Tool Support for Performance and Correctness, VI-HPS Tools Offloading to Accelerators Other Advanced Features of OpenMP 5.0 Future Roadmap of OpenMP Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather with the lack of depth with which it is employed. The lectures on Day 2 and Day 3 will address this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance. We cover tasking with OpenMP and host performance, putting a focus on performance aspects, such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. Also tools for performance and correctness will be presented. Current trends in hardware bring co-processors such as GPUs into the fold. A modern platform is often a heterogeneous system with CPU cores, GPU cores, and other specialized accelerators. OpenMP has responded by adding directives that map code and data onto a device, the target directives. We will also explore these directives as they apply to programming GPUs. OpenMP 5.0 features will be highlighted and the future roadmap of OpenMP will be presented. All topics are accompanied with extensive case studies and we discuss the corresponding language features in-depth. Topics might be still subject to change. For the hands-on sessions participants need to use their own laptops or systems with a C/C++ or Fortran compiler supporting at least OpenMP 4.5 installed (see https://www.openmp.org/resources/openmp-compilers-tools/). 
The course is organized as a PRACE training event by LRZ in collaboration with the OpenMP ARB and RWTH Aachen. Lecturers Dr. Michael Klemm holds an M.Sc.  and a Doctor of Engineering degree from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany.  His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning.  Michael Klemm joined the OpenMP organization in 2009 and was appointed CEO of the OpenMP ARB in 2016. Dr. Christian Terboven is a senior scientist and leads the HPC group at RWTH Aachen University. His research interests center around Parallel Programming and related Software Engineering aspects. Dr. Terboven has been involved in the Analysis, Tuning and Parallelization of several large-scale simulation codes for various architectures. He is responsible for several research projects in the area of programming models and approaches to improve the productivity and efficiency of modern HPC systems. He is further co-author of the new book “Using OpenMP – The Next Step“, https://www.openmp.org/tech/using-openmp-next-step/ https://events.prace-ri.eu/event/1084/ 2021-02-17 09:00:00 UTC 2021-02-19 16:00:00 UTC [] [] [] workshops_and_courses [] []
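OpenMP itself is used from C, C++, or Fortran; as a language-neutral sketch of the "parallel for plus reduction" pattern at the heart of the Common Core, the same decomposition can be written with Python threads (our own illustration, not workshop material):

```python
import threading

def parallel_sum(data, num_threads=4):
    """Each thread reduces a private slice; partials are combined at the end,
    mirroring OpenMP's 'parallel for' with a reduction(+:...) clause."""
    partials = [0] * num_threads                       # one private accumulator per thread
    def worker(tid):
        partials[tid] = sum(data[tid::num_threads])    # strided chunk, no sharing
    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)                               # the reduction step
```

The design choice is the same one the workshop's correctness lectures stress: threads never update a shared accumulator concurrently, so no lock is needed until the final, serial combination.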
  • Uncertainty quantification @ MdlS

    10 - 12 May 2021

    Uncertainty quantification @ MdlS https://tess.elixir-europe.org/events/uncertainty-quantification-mdls-fdfcdb10-4d49-416c-8acb-0abe86443d9e Uncertainty in computer simulations, deterministic and probabilistic methods for quantifying uncertainty, OpenTURNS software, Uranie software Content Uncertainty quantification takes into account the fact that most inputs to a simulation code are only known imperfectly. It seeks to translate this uncertainty in the data into uncertainty in the results of the simulation. This training will introduce the main methods and techniques by which this uncertainty propagation can be handled without resorting to an exhaustive exploration of the data space. HPC plays an important role in the subject, as it provides the computing power made necessary by the large number of simulations needed. The course will present the most important theoretical tools for probability and statistical analysis, and will illustrate the concepts using the OpenTURNS software. Course Outline Day 1 : Methodology of Uncertainty Treatment – Basics of Probability and Statistics •    General Uncertainty Methodology (30’) : A. Dutfoy •    Probability and Statistics: Basics (45’) : G. Blondet •    General introduction to OpenTURNS and Uranie (2 * 30’) : G. Blondet, J.B. Blanchard •    Introduction to Python and Jupyter (45’): practical work on distributions manipulations Lunch •    Uncertainty Quantification (45’) : J.B. Blanchard •    OpenTURNS – Uranie practical works: sections 1, 2 (1h): G. Blondet,  J.B. Blanchard,  A. Dutfoy •    Central tendency and Sensitivity analysis (1h): A. Dutfoy Day 2 : Quantification, Propagation and Ranking of Uncertainties •    Application to OpenTURNS and Uranie (1h): section 3 M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard •    Estimation of probability of rare events (1h): G. Blondet •    Application to OpenTURNS and Uranie (1h): M. Baudin, G. Blondet, F. Gaudier, J.B.
Blanchard Lunch •    Distributed computing (1h) : Uranie (15’, F. Gaudier, J.B. Blanchard), OpenTURNS (15’, G. Blondet), Salome et OpenTURNS (30’, O. Mircescu) •    Optimisation and Calibration (1h) : J.B. Blanchard, M. Baudin •    Application to OpenTURNS and Uranie (1h): J.B. Blanchard, M. Baudin Day 3 : HPC aspects – Meta model •    HPC aspects specific to the Uncertainty treatment (1h) : K. Delamotte •    Introduction to Meta models (validation, over-fitting) – Polynomial chaos expansion (1h) : JB Blanchard, C. Mai •    Kriging meta model (1h): C. Mai Lunch •    Application to OpenTURNS and Uranie (2h) : C. Mai, G. Blondet, J.B. Blanchard •    Discussion / Participants’ projects Learning outcomes Learn to recognize when uncertainty quantification can bring new insight to simulations. Know the main tools and techniques to investigate uncertainty propagation. Gain familiarity with modern tools for actually carrying out the computations in an HPC context. Prerequisites Basic knowledge of probability will be useful, as will a basic familiarity with Linux. https://events.prace-ri.eu/event/1081/ 2021-05-10 07:30:00 UTC 2021-05-12 15:00:00 UTC [] [] [] workshops_and_courses [] []
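The core loop of Monte Carlo uncertainty propagation, which the OpenTURNS and Uranie exercises wrap in much richer APIs, fits in a few lines of standard-library Python (names invented for this sketch):

```python
import random
import statistics

def propagate(model, mu, sigma, n=50_000, seed=42):
    """Sample an uncertain input X ~ N(mu, sigma), push each draw through
    the simulation model, and summarize the output distribution."""
    rng = random.Random(seed)
    outputs = [model(rng.gauss(mu, sigma)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, std = propagate(lambda x: x * x, mu=1.0, sigma=0.1)
# Analytically E[X^2] = mu^2 + sigma^2 = 1.01, so the estimated mean
# lands close to that value.
```

Each model evaluation is independent, which is why the method parallelizes so naturally on HPC resources, and why metamodels (polynomial chaos, kriging) are introduced on Day 3 to cut the number of expensive simulation runs.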
