18 events found

Filters: Event types: Workshops and courses; Content provider: PRACE

  • [MOOC] A short introduction to MPI One-sided Communication @ HLRS & ASTRON & Surfsara

    25 October - 7 November 2020

    Overview: This online course will start on the 26th of October and run for two weeks, with an estimated effort of 4 hours per week. It is provided as a MOOC (Massive Open Online Course). You will be guided through the concepts of one-sided communication and learn to perform these data movements correctly and efficiently using MPI (with C and Fortran bindings). Several examples and exercises will illustrate the use of these routines and help you understand their benefits, and you can also test your knowledge with quizzes.

    The course has been developed by HLRS, ASTRON and SURFsara with the sponsorship of PRACE. Registration is free of charge through FutureLearn; for further information and registration, please visit the FutureLearn course page.

    Event page: https://events.prace-ri.eu/event/1079/
  • [ONLINE] Heterogeneous computing with performance modelling @ SNIC

    4 - 5 November 2020

    Modern high-core-count CPUs and accelerators such as GPUs have been shown to provide significant performance benefits in many different applications. However, for a novice, or even for a moderately experienced scientist or programmer, it is not always clear which applications could benefit from these resources and which could not. For example, an NVIDIA V100 GPU can perform artificial intelligence (AI) computations in a fraction of the time a regular CPU needs for the same work, yet ill-informed OpenACC compiler pragmas can actually make a code run slower. Why is this? When should one invest time in parallelization and/or GPU acceleration? How much speedup can be expected with a given application? The main goal of this two-day course is to start answering these questions. The course also covers the basics of GPU programming and aims to provide the information needed to avoid the most common pitfalls.

    More information, schedule, and registration can be found on the course webpage at HPC2N, Umeå University.

    Event page: https://events.prace-ri.eu/event/1064/
  • [ONLINE] Shared memory parallelization with OpenMP @ VSC Vienna

    5 - 6 November 2020

    This course will be delivered as an ONLINE COURSE (using Zoom) for remote participation. REGISTRATION is strictly NECESSARY since the access details will be provided only to registered and accepted attendees. Registration follows the "first come, first served" principle. The online course can take a maximum of 40 persons; additional registrations will be added to a waiting list. Accepted participants will be contacted a few days before the course and asked to complete a short PRE-ASSIGNMENT before the course starts.

    For the HANDS-ON LABS all you need is a C/C++ or Fortran compiler that supports OpenMP. Participants can use their own laptops or any system they have access to that provides an OpenMP-enabled compiler; it is even possible to do the exercises in Google Colab. Alternatively, participants can ask for a training user account at the VSC for the duration of the course.

    Overview: The focus of this two-day course is shared memory parallelization with OpenMP for dual-core, multi-core, shared memory, and ccNUMA platforms. The course teaches OpenMP starting from a beginner's level. Hands-on sessions (in C and Fortran) allow users to immediately test and understand OpenMP directives, environment variables, and library routines. Race-condition debugging tools are also presented.

    Content levels: Beginner = 6:00h (50%); Intermediate = 4:00h (33%); Advanced = 2:00h (17%).

    This course is a PRACE training event. It is organized by the VSC Research Center, TU Wien, in cooperation with the High-Performance Computing Center Stuttgart (HLRS).

    Agenda & Content

    1st day:
    08:45 Join online
    09:00 Welcome
    09:10 Introduction to modern hardware (talk)
    10:10 An overview of OpenMP (talk)
    10:35 Coffee
    10:50 OpenMP programming and execution model (talk+practical)
    12:30 Lunch
    13:30 OpenMP worksharing directives (talk+practical)
    15:00 Coffee
    15:15 More OpenMP (talk+practical)
    16:00 Summary (talk) + Q & A
    16:30 End of first day

    2nd day:
    08:45 Join online
    09:00 How to write correct OpenMP programs (talk)
    09:30 Intel Inspector XE (demo+practical)
    10:30 Coffee
    10:45 How to write efficient OpenMP programs (talk)
    11:00 Heat equation (talk+practical)
    12:30 Lunch
    13:30 Heat equation (practical, continued)
    14:00 Vectorization with OpenMP (talk)
    14:25 Thread affinity in OpenMP (talk)
    15:00 Coffee
    15:15 The taskloop directive (talk+practical)
    16:30 End of second day (course)

    Event page: https://events.prace-ri.eu/event/1076/
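    The worksharing and race-condition topics listed in the agenda can be sketched in a few lines of C. This is an illustrative example and not course material; the `reduction` clause gives each thread a private partial sum and combines them safely, avoiding the data race a shared accumulator would cause. Without OpenMP support the pragma is simply ignored and the loop runs serially, with the same result:

    ```c
    #include <stdio.h>

    /* Sum an array with an OpenMP parallel-for reduction.
     * Each thread accumulates a private partial sum; the
     * reduction clause combines them at the end, avoiding
     * a race condition on "sum". */
    double array_sum(const double *a, int n) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += a[i];
        }
        return sum;
    }

    int main(void) {
        double a[1000];
        for (int i = 0; i < 1000; i++) a[i] = 1.0;
        printf("sum = %.1f\n", array_sum(a, 1000)); /* sum = 1000.0 */
        return 0;
    }
    ```

    Compile with `gcc -fopenmp` to enable threading; without the flag the pragma is ignored and the program still produces the same answer.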
  • [ONLINE] Directive-based GPU programming with OpenACC @ JSC

    9 - 11 November 2020

    GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to the GPU. The course will cover basic aspects of GPU architectures and programming. The focus is on the directive-based OpenACC programming model, which allows portable application development. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

    Topics covered include:
    Introduction to GPU/parallel computing
    The OpenACC programming model
    Interoperability of OpenACC with GPU libraries (such as cuBLAS and cuFFT) and CUDA
    Multi-GPU programming with MPI and OpenACC
    Tools for debugging and profiling
    Performance optimization

    The course consists of lectures and interactive hands-on sessions in C or Fortran (the attendee's choice). It will take place as an online event; the link to the online platform will be provided to registrants only.

    Prerequisites: Some knowledge of Linux (e.g. make, a command line editor, the Linux shell) and a little experience in C/Fortran.

    Application: Registrations are only considered until 2 November 2020; the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Andreas Herten, Dr. Thorsten Hater, Dr. Kaveh Haghighi-Mood, JSC; Markus Hrywniak, Jiri Kraus, NVIDIA

    Contact: For any questions concerning the course please send an e-mail to a.herten@fz-juelich.de

    Event page: https://events.prace-ri.eu/event/1075/
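    To give a flavour of the directive-based style this course teaches, here is a minimal SAXPY sketch in C (illustrative only, not course material). With an OpenACC compiler (e.g. `nvc -acc`) the annotated loop is offloaded to the GPU; other compilers ignore the pragma and run the loop on the CPU with identical results:

    ```c
    #include <stdio.h>

    /* SAXPY (y = a*x + y) with an OpenACC directive.
     * copyin/copy clauses describe which arrays move to and
     * from the device; without OpenACC the pragma is ignored
     * and the loop runs serially on the CPU. */
    void saxpy(int n, float a, const float *x, float *y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main(void) {
        float x[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float y[4] = {1.0f, 1.0f, 1.0f, 1.0f};
        saxpy(4, 2.0f, x, y);
        for (int i = 0; i < 4; i++) printf("%.1f ", y[i]); /* 3.0 5.0 7.0 9.0 */
        printf("\n");
        return 0;
    }
    ```

    The appeal of this model, and a theme of the course, is exactly this portability: the same source compiles for CPU or GPU, and only the compiler flags change.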
  • [ONLINE] Earth Sciences Simulation Environments @ BSC

    10 - 12 November 2020

    Registration for this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge. This training will be online via Zoom.

    Course conveners: Marc Guevara, Carles Tena and Maria Teresa Pay Perez

    Course lecturers: Earth Sciences Department researchers involved in the groups of Computational Earth Sciences, Atmospheric Composition, Climate Prediction and Earth System Services. Final lectures TBA.

    Objectives: The objective of this PATC course is to cover the basics of a high performance computing (HPC) environment oriented towards earth science applications, specifically chemical weather modelling and climate modelling. More precisely, the course will cover: an introduction to earth science fundamentals and modelling; basic usage of an HPC environment (shell, compilers, libraries, file systems, queuing system and parallel computing); building and configuring targeted earth science applications with the NMMB/BSC-CTM chemical transport model and the EC-EARTH climate model; executing and monitoring numerical experiments using a workflow manager; and analysing and visualising model outputs with a wide set of tools.

    Learning outcomes: Participants will learn and gain experience in accessing an HPC facility, installing earth science numerical models and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, and analysing and visualising model results.

    Level: (All courses are designed for specialists with at least a 1st-cycle degree or similar background experience.) For trainees with some theoretical and practical knowledge.

    Prerequisites: At least a university degree in progress in Earth Sciences, Computer Sciences or a related area; basic knowledge of UNIX; knowledge of C, Fortran, MPI or OpenMP is recommended; knowledge of earth science data formats (GRIB, netCDF, HDF, ...) is recommended; knowledge of R and Python.

    Agenda:

    Day 1 (Nov 10) - 4.5h
    10:00-10:15 Welcome and introduction of the BSC Earth Sciences Department (Francisco Doblas-Reyes, Head of BSC-ES Department)
    10:15-10:30 Technical and logistic information (Marc Guevara and Carles Tena)
    10:30-11:00 Introduction to earth science fundamentals and modelling (Raffaele Bernardello)
    11:00-11:30 Break
    11:30-13:00 Introduction to Linux and HPC, BSC facilities, PRACE and RES, plus preparation and checks for the hands-on tutorial after the lunch break (Mario Acosta)
    13:00-14:00 Lunch break
    14:00-16:00 HPC hands-on tutorial; at the end of this tutorial the students will be able to compile and execute jobs on MareNostrum 4 (Mario Acosta, Miguel Castrillo and Carles Tena)

    Day 2 (Nov 11) - 3.5h
    9:30-10:00 Lecture on the EC-EARTH climate model (Raffaele Bernardello)
    10:00-10:30 Lecture on the NMMB-MONARCH atmospheric chemistry model (Oriol Jorba)
    10:30-11:00 Lecture on setting up and running the NMMB-MONARCH model with Autosubmit (Francesca Macchia and Gilbert Montane)
    11:00-11:30 Break
    11:30-13:30 Model hands-on tutorial; students will run a prepared case (Francesca Macchia, Gilbert Montane, Carles Tena and Marc Guevara)

    Day 3 (Nov 12) - 3h
    10:00-11:00 Introduction to analysis and visualisation tools for model outputs (s2dverification, MapGenerator, CDO, NCO, Panoply, ncview) (Francesco Benincasa)
    11:00-11:30 Break
    11:30-13:30 Analysis hands-on tutorial; students will apply tools for analysis and visualisation to the outputs created the day before (Francesco Benincasa, Carles Tena and Marc Guevara)

    END of COURSE

    Event page: https://events.prace-ri.eu/event/1042/
  • [ONLINE] Introduction to Parallel Programming @GRNET

    11 - 13 November 2020

    Purpose of the course: The focus is on understanding the basics of parallel programming with the message-passing interface (MPI) and OpenMP parallel programming paradigms. MPI is the dominant parallelization paradigm in high performance computing and enables one to write programs that run on distributed memory machines, such as the ARIS Greek supercomputer and other systems of the PRACE infrastructure. OpenMP is a threading-based approach that enables one to parallelize a program over a single shared memory machine, such as a single node in ARIS. The course also contains performance and best-practice considerations, e.g., hybrid MPI+OpenMP parallelization.

    The 3-day course consists of lectures and hands-on exercises on parallel programming. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the taught constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. The course ends with a section presenting profiling and code optimizations to understand the behavior and performance of parallelized codes. After the course the participants should be able to write simple parallel programs, parallelize existing programs with the basic features of MPI and/or OpenMP, and perform basic profiling and debugging of parallel applications.

    Prerequisites: The course addresses participants who are familiar with the C/C++ or Fortran programming languages and have working experience with the Linux operating system and the use of the command line. Experience with parallel programming (MPI and OpenMP) is not required.

    Hands-on sessions will run on participants' computers; an HPC-like environment with the necessary tools will be provided as a Virtual Machine image. The course language is English.

    Venue: ONLINE using Zoom

    Timetable: https://events.prace-ri.eu/event/1086/attachments/1344/2413/Timetable.pdf

    About the tutors

    Dr. Dellis holds a B.Sc. in Chemistry (1990) and a PhD in Computational Chemistry (1995) from the National and Kapodistrian University of Athens, Greece. He has extensive HPC and grid computing experience. In the EGEE/EGI projects he acted as application support and VO software manager for the SEE VO, grid site administrator, and NGI_GRNET support staff (2008-2014). In PRACE 1IP-6IP he was involved in benchmarking tasks either as a group member or as BCO (2010-2020). He currently holds the position of HPC Team Leader at GRNET S.A.

    Dr. Ioannis E. Venetis received his PhD in 2006 from the Computer Engineering and Informatics Department at the University of Patras, Greece. He currently teaches "Parallel Processing" and "Software and Programming for High Performance Systems" at the same department. He has participated in numerous research projects in the area of parallel computing. His main research interests include parallel programming models, run-time systems for supporting such models, co-processor programming (especially using GPUs and the Intel Xeon Phi) and parallelization of computationally demanding applications.

    About GRNET

    GRNET - National Infrastructures for Research and Technology - is the national network, cloud computing and IT e-Infrastructure and services provider. It supports hundreds of thousands of users in the key areas of Research, Education, Health and Culture. GRNET provides an integrated environment of cutting-edge technologies, integrating a country-wide dark fiber network, data centers, a high performance computing system, and Internet, cloud computing, high-performance computing, authentication and authorization, security, and audio, voice and video services. GRNET's scientific and advisory duties address the areas of information technology, digital technologies, communications, e-government, new technologies and their applications, research and development, and education, as well as the promotion of Digital Transformation. Through international partnerships and the coordination of EC co-funded projects, it creates opportunities for know-how development and exploitation and contributes decisively to the development of Research and Science in Greece and abroad.

    National Infrastructures for Research and Technology - Networking Research and Education: www.grnet.gr, hpc.grnet.gr

    Event page: https://events.prace-ri.eu/event/1086/
  • [ONLINE] High-performance computing with Python @ JSC

    16 - 20 November 2020

    Python is increasingly used in high-performance computing projects. It can be used as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly. This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the kernel using various tools.

    The following topics will be covered:
    Interactive parallel programming with IPython
    Profiling and optimization
    High-performance NumPy
    Just-in-time compilation with Numba
    Distributed-memory parallel programming with Python and MPI
    Bindings to other programming languages and HPC libraries
    Interfaces to GPUs

    This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.

    Prerequisites: Good working knowledge of Python and NumPy.

    Application: Registrations are only considered until 6 November 2020. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Jan Meinke, Dr. Olav Zimmermann, JSC

    Contact: For any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de

    Event page: https://events.prace-ri.eu/event/982/
  • CodeRefinery online workshop

    17 - 26 November 2020

    Course goals and format: In this course, you will become familiar with tools and best practices for version control and reproducibility in modern research software development. The main focus is on using Git for efficiently writing and maintaining research software. We do not teach programming; we teach the tools you need to use programming well. We assume that you are familiar with the programming language that you use in your work and research. We try to keep the course as language-independent as possible, but we will show some basic code examples in Python. This is an informal and interactive event with type-along presentations, live coding, and demos. Short tutorials alternate with practical exercises.

    Prerequisites: You should be able to navigate the file tree in a terminal session and edit text files in the terminal; a Linux shell crash course (video) covers the essentials. Basics in one or more programming languages are expected. You need to install some software (see below). It is useful if you have a basic idea of how Git works; we will start from the basics anyway, but please go through the Git-refresher material for a basic overview and important configuration steps. Make sure that Git is configured, and verify the configuration (see below).

    Schedule (all times are in Europe/Stockholm time; the schedule includes frequent breaks and is subject to change):

    Pre-workshop installation help and verification:
    Nov 12 (Thu) 13:00-14:00
    Nov 13 (Fri) 13:00-14:00
    Join anytime during these slots (preferably not everyone at the very start and end).

    Day 1 (Nov 17, Tue)
    9:00-9:20 Welcome and practical information
    9:20-12:00 Introduction to version control, part 1/2 ("Motivation" to "Undoing")

    Day 2 (Nov 18, Wed)
    9:00-12:00 Introduction to version control, part 2/2 ("Branching and merging" to "Inspecting history")

    Day 3 (Nov 19, Thu)
    9:00-12:00 Collaborative distributed version control

    Day 4 (Nov 24, Tue)
    9:00-9:15 Mini-intro
    9:00-11:00 Reproducible research and FAIR data
    11:00-12:00 Social coding and open software

    Day 5 (Nov 25, Wed)
    9:00-10:30 Documentation
    10:30-12:00 Jupyter

    Day 6 (Nov 26, Thu)
    9:00-10:30 Automated testing
    10:45-11:45 Modular code development
    11:45-12:00 Concluding remarks and where to go from here

    Software requirements: You need to install some software before CodeRefinery. Please do this, and check it, in advance; otherwise you'll start off behind. Note that a working Python executable on your laptop is not sufficient by itself: a version greater than 3.4 is strongly recommended, and a number of extra packages need to be installed as detailed on the Python installation page. You also need to check your Git configuration (a video shows how to configure Git). If you have an institutional laptop with limited rights, start in advance and/or ask for help translating these instructions to your system. Required software: Bash; an editor; Git, including some configuration; (optional) visual diff tools; Python; Jupyter and JupyterLab; Snakemake; Zoom. You should either a) drop by one of our verification sessions in advance, or b) verify with your team's helper before the workshop.

    How to join: We are normally limited by the number of helpers, so why not allow groups to bring their own helper? We have the concept of teams that bring their own breakout room. Anyone may register as a learner, and we will try to take as many as we can. You can also register as a helper; if you are somewhat comfortable with these tools, you can be a helper (see below). Or register as a team: one helper and 4-6 learners. We will accept all complete teams. Everyone on the team must register separately; decide on a team name, and when registering, everyone enters this same name so that we can link you together. During the exercise sessions in breakout rooms, the helper has primary responsibility for their group's hands-on exercises. Still, instructors drop by to check and help as needed, so don't worry if you don't know everything. This is a great opportunity to bring your friends and colleagues and prepare for the future together. If you have been to a CodeRefinery before and have used Git since then, you are definitely capable of being a helper. If you routinely use Git and know Python somewhat well, you are also very qualified to be a helper.

    Registration: Please visit the workshop webpage.

    Event page: https://events.prace-ri.eu/event/1082/
  • GPU Programming with CUDA @ EPCC Online

    23 - 26 November 2020

    Outline: This course will provide an introduction to GPU computing with CUDA, aimed at scientific application programmers. The course will give background on the differences between CPU and GPU architectures as a prelude to introductory exercises in CUDA programming. It will discuss the execution of kernels, memory management, and shared memory operations. Common performance issues are discussed and their solutions addressed. The course will also cover some of the alternatives to CUDA commonly available at the current time (OpenCL, OpenACC, and Kokkos). A separate "Hackathon Day" will be available for attendees to try out their own problems (or a 'canned' extended example) with the help of staff from both EPCC and NVIDIA.

    Learning outcomes: At the end of the course, attendees should be in a position to make an informed decision on how to approach GPU parallelisation in their applications in an efficient and portable manner.

    Pre-requisites: Attendees must be familiar with programming in C or C++ (a number of the baseline CUDA exercises are also available using CUDA Fortran). Some knowledge of parallel/threaded programming models would be useful. Access to a GPU machine will be supplied.

    Note: this course will not address machine learning or any machine learning frameworks.

    Event page: https://events.prace-ri.eu/event/1085/
  • [ONLINE] Fortran Programming for Scientific Computing

    25 - 27 November 2020

    This course will be delivered as an ONLINE EVENT for remote participation due to the COVID-19 measures enforced by most European governments.

    Description: The Fortran programming language is still, despite its long history, one of the standard and most useful languages in scientific computing. The course will give participants basic knowledge of the Fortran 95 programming language as well as some glimpses of the Fortran 2003/2008 standards and their new features. Best practices for obtaining high performance with Fortran will also be discussed. Each day includes lectures and practical hands-on exercises.

    Learning outcome: After this course participants should be able to write simple programs and to understand and modify existing codes written in Fortran.

    Prerequisites: Basic knowledge of programming in general is recommended. In addition, basic skills in working with the Linux command line terminal will be assumed.

    The content level of the course is broken down as: beginner 70%; intermediate 25%; advanced 5%; community-targeted content 0%.

    Agenda (tentative):
    Day 1, Wednesday 25.11: Introduction; Getting started with Fortran; Procedures and modules
    Day 2, Thursday 26.11: Fortran arrays; Input/Output
    Day 3, Friday 27.11: Derived types; Generic procedures and procedure pointers; Language interoperability and other useful features
    Lunch time recommended: 12:00-13:00 each day

    Lecturer: Sami Ilvonen (CSC)
    Language: English
    Price: Free of charge (3 training days)

    REGISTRATION is OBLIGATORY since the access details will be provided only to registered and accepted attendees. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc@csc.fi

    Event page: https://events.prace-ri.eu/event/1088/
  • [ONLINE] Node-Level Performance Engineering @ LRZ

    2 - 4 December 2020

    This online course covers performance engineering approaches at the compute node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance their code could at best achieve. This is because parallelism takes us only half the way to good performance. Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the knowledge required to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code that does the actual computational work gets executed. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes: pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science. We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimizations with a scientific process.

    The course is a PRACE training event.

    Agenda:
    Introduction: our approach to performance engineering; basic architecture of multicore systems (threads, cores, caches, sockets, memory); the important role of system topology
    Tools: topology & affinity in multicore environments; overview of likwid-topology and likwid-pin
    Microbenchmarking for architectural exploration: properties of data paths in the memory hierarchy; bottlenecks; OpenMP barrier overhead
    Roofline model basics: model assumptions and construction; simple examples; limitations of the Roofline model
    Pattern-based performance engineering
    Optimal use of parallel resources: Single Instruction Multiple Data (SIMD); cache-coherent Non-Uniform Memory Architecture (ccNUMA); Simultaneous Multi-Threading (SMT)
    Tools: hardware performance counters (why hardware performance counters?; likwid-perfctr; validating performance models)
    Roofline case studies: dense matrix-vector multiplication; sparse matrix-vector multiplication; Jacobi (stencil) smoother
    Optional: the ECM performance model

    Event page: https://events.prace-ri.eu/event/1052/
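    The core idea of the Roofline model mentioned above fits in a few lines of C: attainable performance is the minimum of the machine's peak compute throughput and the kernel's arithmetic intensity times the sustained memory bandwidth. The sketch below is illustrative only; the peak and bandwidth figures are made-up placeholder values, not measurements of any real machine:

    ```c
    #include <stdio.h>

    /* Roofline estimate: attainable performance (GFlop/s) is
     * min(P_peak, I * b_s), where I is the kernel's arithmetic
     * intensity (flops per byte of memory traffic) and b_s is
     * the sustained memory bandwidth (GB/s). */
    double roofline(double peak_gflops, double bw_gbs, double intensity) {
        double mem_bound = intensity * bw_gbs;
        return (mem_bound < peak_gflops) ? mem_bound : peak_gflops;
    }

    int main(void) {
        /* Placeholder machine: 1000 GFlop/s peak, 100 GB/s bandwidth. */
        double peak = 1000.0, bw = 100.0;
        /* Double-precision vector triad a[i] = b[i] + c[i]*d[i]:
         * 2 flops per iteration, 4 arrays * 8 bytes = 32 bytes of
         * traffic, so I = 2/32 = 0.0625 flop/byte. */
        double I = 2.0 / 32.0;
        printf("attainable: %.2f GFlop/s\n", roofline(peak, bw, I));
        /* Memory-bound: 0.0625 * 100 = 6.25 GFlop/s, far below peak. */
        return 0;
    }
    ```

    A kernel whose intensity puts it under the bandwidth "roof" is memory-bound, and the model immediately tells you that adding more threads will not help past the bandwidth saturation point; this is the kind of reasoning the course develops in its case studies.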
  • [ONLINE] GPU Programming with CUDA @ JSC

    7 - 11 December 2020

    GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to an NVIDIA GPU. The course will cover basic aspects of GPU architectures and programming. The focus is on the parallel programming language CUDA C, which allows maximum control of NVIDIA GPU hardware. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

    Topics covered include:
    Introduction to GPU/parallel computing
    The CUDA programming model
    GPU libraries such as cuBLAS and cuFFT
    Tools for debugging and profiling
    Performance optimizations

    Prerequisites: Some knowledge of Linux (e.g. make, a command line editor, the Linux shell) and experience in C/C++.

    Dates: 7-11 December 2020, 09:00-13:00 each day

    Application: Registrations are only considered until 30 November 2020; the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Jan Meinke, Jochen Kreutz, Dr. Andreas Herten, JSC; Jiri Kraus, Markus Hrywniak, NVIDIA

    Contact: For any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de

    Event page: https://events.prace-ri.eu/event/1083/
  • [ONLINE] CUDA 4 Dummies @ VSC Vienna

    10 - 11 December 2020

    [ONLINE] CUDA 4 Dummies @ VSC Vienna https://tess.elixir-europe.org/events/online-cuda-4-dummies-vsc-vienna This course will be delivered as an ONLINE COURSE (using Zoom) for remote participation. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided to the registered and accepted attendees only. Registration follows the "first come – first served" principle. For the online course we can take a maximum of 40 persons. Additional registrations will be added to a waiting list. Furthermore, accepted participants will be contacted a few days before the course and asked to do a short PRE-ASSIGNMENT that has to be completed before the course starts. The HANDS-ON LABS will be done on VSC-3, all participants will get a training user account at the VSC for the duration of the course. Overview CUDA is the standard API for code development targeting the GPU and a number of impressive examples have already been given in diverse areas from particle physics to computational biology. CUDA allows simple extensions of standard C/C++/Fortran code with GPU-specific functions. In this way thousands of cores available on the GPU can be leveraged to work in parallel and thus carry out significant fractions of the computational workload on the device rather than the CPU. There is also a vast set of auxiliary tools available to the developer including libraries, code templates, building blocks, analysis tools, developmental frameworks and in general a vivid community making up the CUDA Developer Zone. It is often for this multifaceted support environment that the interested beginner is feeling overwhelmed and unsettled about which particular first steps should best be taken to gain a straightforward introduction into the subject. For exactly this reason the present course is offering a systematic step-by-step introduction into GPU computing from the perspective of the newcomer. 
    Basic design principles will be established, central programming techniques acquired, and a number of state-of-the-art workflows examined that efficiently employ the GPU and are frequently used in scientific computing. The course provides lectures, demos, and hands-on labs. It is a PRACE training event, organized by the VSC Research Center, TU Wien.

    Agenda & Content (tentative)

    1st day – 10 December 2020
    Basic design principles; getting started with simple CUDA routines; mastering the CUDA environment; memory hierarchy with a focus on CUDA managed unified memory; practical exercise: all the pitfalls one can run into when porting a dummy code to CUDA.
    08:45 - 09:00    join online
    09:00 - 13:00    lectures and hands-on labs (with break 11:00-11:15)
    13:00 - 14:00    lunch break
    14:00 - 17:00    lectures and hands-on labs (with break 15:15-15:30)

    2nd day – 11 December 2020
    Selected examples to demonstrate CUDA capabilities and GPU operation, e.g., bandwidthTest, simpleAssert, simpleStreams; CUDA libraries, CUBLAS, CUSolver; considering numerical precision; gaining insight into the runtime; practical exercise: CUDA in everyday life (of a scientist).
    08:45 - 09:00    join online
    09:00 - 13:00    lectures and hands-on labs (with break 11:00-11:15)
    13:00 - 14:00    lunch break
    14:00 - 17:00    lectures and hands-on labs (with break 15:15-15:30)

    https://events.prace-ri.eu/event/1078/ 2020-12-10 07:45:00 UTC 2020-12-11 16:00:00 UTC
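The classic first exercise in CUDA courses like this one is SAXPY (y = a*x + y), where the GPU runs one thread per array element. As a rough conceptual sketch of that data-parallel pattern, in plain Python rather than actual CUDA C (no GPU required; a real kernel would replace this loop with thousands of concurrent threads):

```python
# SAXPY (y = a*x + y), the canonical first CUDA example.
# On a GPU, each iteration below would run as its own thread; this
# pure-Python version only illustrates the elementwise pattern that
# a CUDA kernel expresses.
def saxpy(a, x, y):
    """Return a*x + y elementwise, one 'virtual thread' per element."""
    return [a * xi + yi for xi, yi in zip(x, y)]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(saxpy(2.0, x, y))  # [12.0, 24.0, 36.0]
```

In CUDA C the same computation is written as a `__global__` kernel launched over a grid of thread blocks, with each thread computing its own index.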
  • [ONLINE] Practical Deep Learning @ CSC

    17 - 18 December 2020

    [ONLINE] Practical Deep Learning @ CSC https://tess.elixir-europe.org/events/online-practical-deep-learning-csc

    This course will be delivered as an ONLINE EVENT for remote participation due to the COVID-19 measures enforced by most European governments.

    Description
    This course gives a practical introduction to deep learning, convolutional and recurrent neural networks, GPU computing, and tools to train and apply deep neural networks for natural language processing, images, and other applications. The course consists of lectures and hands-on exercises. TensorFlow 2, Keras, and PyTorch will be used in the exercise sessions. CSC's Notebooks environment will be used on the first day of the course, and the GPU-accelerated Puhti supercomputer on the second day.

    Learning outcome
    After the course the participants should have the skills and knowledge needed to begin applying deep learning to different tasks and to utilize the GPU resources available at CSC for training and deploying their own neural networks.

    Prerequisites
    The participants are assumed to have a working knowledge of Python and a suitable background in data analysis, machine learning, or a related field. Previous experience in deep learning is not required, but the fundamentals of machine learning are not covered in this course. Basic knowledge of a Linux/Unix environment will be assumed.
    Agenda (tentative)

    Day 1, Thursday 17.12
    09.00 – 11.00  Introduction to deep learning and to Notebooks
    11.00 – 12.00  Multi-layer perceptrons
    12.00 – 13.00  Lunch
    13.00 – 14.30  Image data and convolutional neural networks
    14.30 – 16.00  Text data and recurrent neural networks

    Day 2, Friday 18.12
    09.00 – 10.30  Deep learning frameworks, GPUs, batch jobs
    10.30 – 12.00  Image classification exercises
    12.00 – 13.00  Lunch
    13.00 – 14.30  Attention and text categorization exercises
    14.30 – 16.00  Cloud, using multiple GPUs

    Lecturers: Markus Koskela (CSC), Mats Sjöberg (CSC)
    Language: English
    Price: Free of charge (2 training days)

    REGISTRATION is OBLIGATORY since the details to access the online course will be provided to registered and accepted attendees only. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc@csc.fi

    https://events.prace-ri.eu/event/1087/ 2020-12-17 07:00:00 UTC 2020-12-18 14:00:00 UTC
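The multi-layer perceptron covered on Day 1 is just stacked layers of weighted sums passed through a nonlinearity. A minimal forward pass in pure Python, with arbitrary illustrative weights (the course itself uses TensorFlow/Keras and PyTorch, which also handle training):

```python
import math

# Minimal multi-layer perceptron forward pass (illustrative weights only).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One dense layer: per neuron, weighted sum of inputs + bias, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> 2 hidden neurons -> 1 output
hidden = layer([0.5, -1.0], weights=[[1.0, 2.0], [-1.0, 1.0]], biases=[0.0, 0.5])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)  # a single activation in (0, 1)
```

Training (backpropagation, GPU acceleration, batching) is what the frameworks add on top of this basic computation.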
  • [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC

    26 - 27 January 2021

    [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC https://tess.elixir-europe.org/events/format-tbd-programming-distributed-computing-platforms-with-compss-bsc

    Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course convener: Rosa Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department

    Lecturers:
    Rosa M Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department, BSC
    Javier Conejero, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC
    Jorge Ejarque, Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC
    Daniele Lezzi, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC

    Objectives: The objective of this course is to give an overview of the COMPSs programming model, which is able to exploit the inherent concurrency of sequential applications and execute them, transparently to the application developer, on distributed computing platforms. This is achieved by annotating parts of the code as tasks and building, at execution time, a task-dependence graph based on the data consumed and produced by the tasks. The COMPSs runtime schedules the tasks onto the computing nodes, taking into account factors such as data locality and the differing nature of the computing nodes in heterogeneous platforms. Additionally, COMPSs has recently been enhanced with the possibility of coordinating Web Services as part of the applications. COMPSs supports Java, C/C++, and Python as programming languages.

    Learning Outcomes: The course covers the COMPSs syntax and programming methodology and gives an overview of the runtime internals. The attendees will get a first lesson about programming with COMPSs that will enable them to start programming with this framework.
    A hands-on session with simple introductory exercises will also be held. Students who finish this course will be able to develop simple COMPSs applications and to run them both on a local resource and on a distributed platform (initially in a private cloud). The exercises will be delivered in Python and Java; for Python, Jupyter notebooks will be used in some of the exercises.

    Level: INTERMEDIATE - for trainees with some theoretical and practical knowledge; those who finished the beginners course.

    Prerequisites: Programming skills in Java and Python

    Agenda:

    Day 1 (January 26th, 2021)
    9:30 - 10:00 - Welcome and round table (30')
    10:00 - 10:30 - Session 1: Introduction to COMPSs (30')
    10:30 - 11:15 - Session 2: PyCOMPSs: Writing Python applications (45')
    11:15 - 11:45 - Coffee break
    11:45 - 13:00 - Session 3: Python Hands-on using Jupyter notebooks (1h15')
    13:00 - 14:30 - Lunch break
    14:30 - 15:00 - Session 4: Machine learning with dislib (30')
    15:00 - 16:30 - Session 5: Hands-on with dislib (1h30')

    Day 2 (January 27th, 2021)
    9:30 - 11:00 - Session 6: Java & C++ (1h30')
    11:00 - 11:30 - Coffee break
    11:30 - 13:00 - Session 7: COMPSs Advanced Features (1h30')
    13:00 - 14:30 - Lunch break
    14:30 - 16:30 - Session 8: Cluster Hands-on (MareNostrum) (2h)
    16:30 - 16:45 - Session 9: COMPSs Installation & Final Notes (15')
    END of COURSE

    https://events.prace-ri.eu/event/1054/ 2021-01-26 08:30:00 UTC 2021-01-27 15:30:00 UTC
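The core COMPSs mechanism described above — annotate functions as tasks, defer their execution, and derive a dependence graph from the data they consume and produce — can be sketched as a toy decorator. This is not the real PyCOMPSs API (that is `@task` from `pycompss.api.task`, with the runtime doing scheduling and data transfer); it only mimics the idea:

```python
# Toy sketch of the COMPSs idea: decorated functions become tasks whose
# execution is deferred, and a dependence graph is built from the data
# (futures) flowing between them. Not the real PyCOMPSs runtime.

class Future:
    """Placeholder for a task's not-yet-computed result."""
    def __init__(self, producer):
        self.producer = producer  # name of the task producing this value

graph = []  # list of (task_name, names_of_tasks_it_depends_on)

def task(fn):
    def wrapper(*args):
        # Any Future argument means a data dependence on its producer task.
        deps = [a.producer for a in args if isinstance(a, Future)]
        graph.append((fn.__name__, deps))
        return Future(fn.__name__)
    return wrapper

@task
def generate():
    ...

@task
def process(data):
    ...

d = generate()
r = process(d)   # depends on generate() through the Future it returned
print(graph)     # [('generate', []), ('process', ['generate'])]
```

The real runtime uses exactly this kind of graph to schedule independent tasks concurrently across nodes, honoring data locality.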
  • [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC

    28 January 2021

    [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC https://tess.elixir-europe.org/events/format-tbd-managing-distributed-data-with-hecuba-and-dataclay-bsc

    Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course conveners: Department and Research group: Computer Sciences - Workflows and Distributed Computing
    Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher
    Anna Queralt, Distributed Object Management research line, Senior researcher

    Course Lecturers: Department and Research group: Computer Sciences - Workflows and Distributed Computing
    Alex Barceló, Distributed Object Management research line, Researcher
    Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher
    Adrián Espejo, Data-driven Scientific Computing research line, Junior research engineer
    Daniel Gasull, Distributed Object Management research line, Research engineer
    Pol Santamaria, Data-driven Scientific Computing research line, Junior developer
    Anna Queralt, Distributed Object Management research line, Senior researcher

    Objectives: The objective of this course is to give an overview of BSC's storage solutions, Hecuba and dataClay. These two platforms make it easy to store and manipulate distributed data from object-oriented applications, enabling programmers to handle object persistence using the same classes they use in their programs and thus avoiding time-consuming transformations between persistent and non-persistent data models. Hecuba and dataClay also enable programmers to manage distributed data transparently, without worrying about its location. This is achieved by adding a minimal set of annotations to the classes.
    Both Hecuba and dataClay can work independently or integrated with the COMPSs programming model and runtime to facilitate the parallelization of applications that handle persistent data, thus providing a comprehensive mechanism for the efficient use of persistent storage solutions from distributed programming environments. Both platforms offer a common interface to the application developer, which makes it easy to use one solution or the other depending on the needs, without changing the application code. Each of them also has additional features that allow the programmer to take advantage of its particularities.

    Learning Outcomes: The course covers the Hecuba and dataClay syntax and programming methodology and gives an overview of their internals. An overview of COMPSs at the user level will also be provided, in order to take advantage of the distribution of data with both platforms. The attendees will get a first lesson about programming with the common storage interface that will enable them to start programming with both frameworks. A hands-on session with simple introductory exercises will also be held for each platform, with and without COMPSs to distribute the computation. Students who finish this course will be able to develop simple Hecuba and dataClay applications and to run them both on a local resource and on a distributed platform (initially in a private cloud).

    Prerequisites: Basic programming skills in Python and Java. Previous attendance of the PATC course on programming distributed systems with COMPSs is recommended.

    Tentative Agenda (TBC):

    Day 1 (Jan 28th, 2021)
    Session 1 / 9:30 – 13:00
    9:30-10:00  Round table: presentation and background of participants
    10:00-11:00  Motivation, introduction and syntax of BSC storage platforms
    11:00-11:30  Coffee break
    11:30-12:15  Hands-on with storage API
    12:15-13:00  COMPSs overview and how to parallelize a sequential application
    13:00-14:30  Lunch break

    Session 2 / 14:30 – 18:00
    14:30-16:00  Hecuba specifics and hands-on
    16:00-16:30  Break
    16:30-18:00  dataClay specifics and hands-on
    END of COURSE

    https://events.prace-ri.eu/event/1055/ 2021-01-28 08:30:00 UTC 2021-01-28 17:00:00 UTC
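The idea of persisting objects "using the same classes they use in their programs" can be illustrated with a toy write-through object. This is purely a sketch of the concept, not the Hecuba or dataClay API: the real platforms use annotated classes and a common storage interface backed by distributed stores, and the class and store names below are invented for illustration.

```python
# Toy sketch of transparent object persistence: once an object is made
# persistent, attribute writes go through to a backing store. Illustrative
# only -- NOT the real Hecuba/dataClay API; names here are invented.

BACKING_STORE = {}  # stand-in for a distributed key-value store

class StorageObject:
    def make_persistent(self, key):
        object.__setattr__(self, "_key", key)
        BACKING_STORE[key] = dict(self.__dict__)  # snapshot current state

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        key = getattr(self, "_key", None)
        if key is not None:                       # write-through once persistent
            BACKING_STORE[key][name] = value

class Particle(StorageObject):
    def __init__(self, x):
        self.x = x

p = Particle(1.0)
p.make_persistent("sim/particle/0")
p.x = 2.0  # regular attribute assignment, persisted transparently
print(BACKING_STORE["sim/particle/0"]["x"])  # 2.0
```

The point of the real platforms is that this transparency extends across nodes: the object may live on a remote store, yet the application code stays plain object-oriented Python or Java.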
  • [ONLINE] OpenMP Programming Workshop @ LRZ

    17 - 19 February 2021

    [ONLINE] OpenMP Programming Workshop @ LRZ https://tess.elixir-europe.org/events/online-openmp-programming-workshop-lrz

    With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures. Its powerful and flexible programming model has allowed researchers from various domains to enable parallelism in their applications. Over the more than two decades of its existence, OpenMP has tracked the evolution of hardware and the complexities of software to ensure that it stays as relevant to today's high performance computing community as it was in 1997.

    This workshop will cover a wide range of topics, ranging from the basics of OpenMP programming using the "OpenMP Common Core" to advanced topics. On each day, lectures will be mixed with hands-on sessions.

    Day 1
    The first day will cover basic parallel programming with OpenMP. Most OpenMP programmers use only around 21 items from the specification; we call these the "OpenMP Common Core". By focusing on the common core on the first day, we make OpenMP what it was always meant to be: a simple API for parallel application programmers. In this hands-on tutorial, students use active learning through a carefully selected set of exercises to master the Common Core and learn to apply it to their own problems.
    Days 2 and 3
    Days 2 and 3 will cover advanced topics such as:
    •    Mastering tasking with OpenMP: taskloops, dependencies, and cancellation
    •    Host performance: SIMD / vectorization
    •    Host performance: NUMA-aware programming, memory access, task affinity, memory management
    •    Tool support for performance and correctness, VI-HPS tools
    •    Offloading to accelerators
    •    Other advanced features of OpenMP 5.0
    •    Future roadmap of OpenMP

    Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP itself but from the lack of depth with which it is employed. The lectures on Days 2 and 3 address this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance. We cover tasking with OpenMP and host performance, with a focus on performance aspects such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. Tools for performance and correctness will also be presented. Current trends in hardware bring co-processors such as GPUs into the fold: a modern platform is often a heterogeneous system with CPU cores, GPU cores, and other specialized accelerators. OpenMP has responded by adding directives that map code and data onto a device, the target directives; we will also explore these directives as they apply to programming GPUs. OpenMP 5.0 features will be highlighted and the future roadmap of OpenMP will be presented. All topics are accompanied by extensive case studies, and we discuss the corresponding language features in depth. Topics may still be subject to change.

    For the hands-on sessions, participants need to use their own laptops or systems with a C/C++ or Fortran compiler supporting at least OpenMP 4.5 installed (see https://www.openmp.org/resources/openmp-compilers-tools/).
    The course is organized as a PRACE training event by LRZ in collaboration with the OpenMP ARB and RWTH Aachen.

    Lecturers
    Dr. Michael Klemm holds an M.Sc. and a Doctor of Engineering degree from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany. His areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. Michael Klemm joined the OpenMP organization in 2009 and was appointed CEO of the OpenMP ARB in 2016.

    Dr. Christian Terboven is a senior scientist and leads the HPC group at RWTH Aachen University. His research interests center around parallel programming and related software engineering aspects. Dr. Terboven has been involved in the analysis, tuning, and parallelization of several large-scale simulation codes for various architectures. He is responsible for several research projects in the area of programming models and approaches to improve the productivity and efficiency of modern HPC systems. He is also co-author of the book "Using OpenMP – The Next Step", https://www.openmp.org/tech/using-openmp-next-step/

    https://events.prace-ri.eu/event/1084/ 2021-02-17 09:00:00 UTC 2021-02-19 16:00:00 UTC
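The heart of the "OpenMP Common Core" taught on Day 1 is the parallel-for-with-reduction pattern: split a loop's iteration space across workers, then combine partial results. OpenMP itself is a C/C++/Fortran API (`#pragma omp parallel for reduction(+:sum)`); as a rough Python analogue of the same decomposition idea:

```python
# Rough Python analogue of an OpenMP "parallel for" with a sum reduction:
# the iteration space 0..n-1 is split across workers, each computes a
# partial sum, and the partials are combined. (OpenMP proper does this
# with compiler directives in C/C++/Fortran, not a thread pool.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(i * i for i in chunk)

n, workers = 10_000, 4
# Interleaved chunks: worker k handles iterations k, k+workers, k+2*workers, ...
chunks = [range(start, n, workers) for start in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    total = sum(pool.map(partial_sum, chunks))  # the reduction step

print(total == sum(i * i for i in range(n)))  # True
```

The course's deeper material (NUMA locality, false sharing, task affinity) concerns exactly how such a decomposition maps onto real shared-memory hardware.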
  • Uncertainty quantification @ MdlS

    10 - 12 May 2021

    Uncertainty quantification @ MdlS https://tess.elixir-europe.org/events/uncertainty-quantification-mdls-fdfcdb10-4d49-416c-8acb-0abe86443d9e

    Uncertainty in computer simulations; deterministic and probabilistic methods for quantifying uncertainty; OpenTURNS software; Uranie software.

    Content
    Uncertainty quantification takes into account the fact that most inputs to a simulation code are only known imperfectly. It seeks to propagate this uncertainty in the data in order to qualify the results of the simulation. This training will introduce the main methods and techniques by which this uncertainty propagation can be handled without resorting to an exhaustive exploration of the data space. HPC plays an important role in the subject, as it provides the computing power made necessary by the large number of simulations needed. The course will present the most important theoretical tools for probability and statistical analysis, and will illustrate the concepts using the OpenTURNS software.

    Course Outline

    Day 1: Methodology of Uncertainty Treatment – Basics of Probability and Statistics
    •    General Uncertainty Methodology (30'): A. Dutfoy
    •    Probability and Statistics: Basics (45'): G. Blondet
    •    General introduction to OpenTURNS and Uranie (2 * 30'): G. Blondet, J.B. Blanchard
    •    Introduction to Python and Jupyter (45'): practical work on distribution manipulations
    Lunch
    •    Uncertainty Quantification (45'): J.B. Blanchard
    •    OpenTURNS – Uranie practical works: sections 1, 2 (1h): G. Blondet, J.B. Blanchard, A. Dutfoy
    •    Central tendency and sensitivity analysis (1h): A. Dutfoy

    Day 2: Quantification, Propagation and Ranking of Uncertainties
    •    Application to OpenTURNS and Uranie (1h): section 3: M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard
    •    Estimation of probability of rare events (1h): G. Blondet
    •    Application to OpenTURNS and Uranie (1h): M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard
    Lunch
    •    Distributed computing (1h): Uranie (15', F. Gaudier, J.B. Blanchard), OpenTURNS (15', G. Blondet), Salome and OpenTURNS (30', O. Mircescu)
    •    Optimisation and calibration (1h): J.B. Blanchard, M. Baudin
    •    Application to OpenTURNS and Uranie (1h): J.B. Blanchard, M. Baudin

    Day 3: HPC aspects – Metamodels
    •    HPC aspects specific to uncertainty treatment (1h): K. Delamotte
    •    Introduction to metamodels (validation, over-fitting) – polynomial chaos expansion (1h): J.B. Blanchard, C. Mai
    •    Kriging metamodel (1h): C. Mai
    Lunch
    •    Application to OpenTURNS and Uranie (2h): C. Mai, G. Blondet, J.B. Blanchard
    •    Discussion / participants' projects

    Learning outcomes
    Learn to recognize when uncertainty quantification can bring new insight to simulations. Know the main tools and techniques to investigate uncertainty propagation. Gain familiarity with modern tools for actually carrying out the computations in an HPC context.

    Prerequisites
    Basic knowledge of probability will be useful, as will a basic familiarity with Linux.

    https://events.prace-ri.eu/event/1081/ 2021-05-10 07:30:00 UTC 2021-05-12 15:00:00 UTC
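The basic uncertainty-propagation idea the course builds on can be sketched with plain Monte Carlo: draw the uncertain inputs from their distributions, push each sample through the simulation code, and study the distribution of the output. The model function below is an invented stand-in for an expensive simulation; OpenTURNS and Uranie provide this plus far more efficient techniques (polynomial chaos, kriging metamodels, rare-event estimation) covered in the outline above.

```python
import random
import statistics

# Plain Monte Carlo uncertainty propagation. The "model" is a cheap
# stand-in for a real simulation code; the inputs x and y are only
# known imperfectly, here modeled as Gaussians.
random.seed(42)  # reproducible draws

def model(x, y):
    return x ** 2 + 2 * y

samples = [model(random.gauss(1.0, 0.1),    # x ~ N(1.0, 0.1)
                 random.gauss(0.0, 0.2))    # y ~ N(0.0, 0.2)
           for _ in range(20_000)]

mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"output mean ~ {mean:.3f}, std ~ {std:.3f}")
```

The HPC angle discussed on Day 3 arises because each sample is an independent run of the simulation code, which parallelizes naturally but may require thousands of expensive runs; metamodels exist precisely to cut that cost.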
