PRACE

Partnership for Advanced Computing in Europe

Showing 25 upcoming events. Found 704 past events.
  • [ONLINE] ChEESE Advanced Training on HPC for Computational Seismology @ HLRS

    21 - 23 October 2020

    Overview: Note (August 3): Due to the COVID-19 pandemic we currently plan to provide this course as an ONLINE course (using Zoom). Are you interested in numerical wave simulations but feel like there are too many codes and methods out there and don't know which option is right for you? Or perhaps you just want to refresh your knowledge and gain some practical experience with the different possibilities? Either way, this fully virtual workshop is for you!

    During three full days on 21-23 October 2020, you will receive introductory hands-on tutorials for ExaHyPE, Salvus, SeisSol, and SPECFEM3D, four high-performance computational seismology software packages focused on waveform modeling and inversion. The workshop will provide you with an overview of their similarities, differences, and areas of applicability. In-depth half-day practicals will be provided for each package, with computational resources provided by HLRS. Participants are also encouraged to present their own work and/or motivation as a one-minute lightning talk during the wrap-up session of the workshop.

    This training event is hosted by ChEESE, the Center of Excellence for Exascale in Solid Earth, and supported by the ENERXICO project. The organizers look forward to seeing you in October!

    Target audience: Researchers and students of all levels interested in augmenting their science with numerical waveform and dynamic rupture simulations as well as full-waveform inversion.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/1046/
  • [ONLINE] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca

    26 - 28 October 2020

    This course targets improving the overall performance of a code in Solid Earth currently in use at the ChEESE Center of Excellence (an H2020 project). First, parallel performance profiling tools will be applied to the initial version of the code to find the so-called performance bottlenecks. Starting from the profiling analysis, we will show how and where to intervene with respect to the hardware characteristics of the HPC machine used for the investigation. We will also show how debugging tools are useful in the development/optimization phase to eliminate any bugs introduced while writing (or redesigning) parts of the code. Finally, we will show how to improve the overall performance of the code with respect to other important parameters such as I/O, vectorization, etc.

    Skills: At the end of the course the student will be able to:
    - use a concrete methodology to improve the performance of a code in Solid Earth already in use in the context of the ChEESE project
    - find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used
    - use appropriate debugging tools to eliminate any bugs that may arise during the development/optimization phase

    Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that may allow them to improve the performance of their code on current HPC architectures in the shortest possible time.

    Pre-requisites: Basic knowledge of Linux/UNIX. Knowledge of C, Fortran, MPI or OpenMP is recommended. Notions of parallel computing techniques and algorithms for Solid Earth applications are helpful.

    Grant: Lunch for the three days will be offered to all participants, and some grants are available. The only eligibility requirements are that you are not funded by your institution to attend the course and that you work or live outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Rome). Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date.

    Coordinating Teacher: Dr. P. Lanucara

    Event page: https://events.prace-ri.eu/event/973/
  • [ONLINE] Heterogeneous computing with performance modelling @ SNIC

    4 - 5 November 2020

    Modern high-core-count CPUs and accelerators, such as GPUs, have been shown to provide significant performance benefits in many different applications. However, for a novice, or even for a moderately experienced scientist or programmer, it is not always clear which applications could potentially benefit from these resources and which do not. For example, an NVIDIA V100 GPU can perform artificial intelligence (AI) related computations in a fraction of the time it takes a regular CPU to perform the same computations, but ill-informed OpenACC compiler pragmas can actually make a code run slower. Why is this? When should one invest time in parallelization and/or GPU-acceleration? How much speedup can be expected with a given application?

    The main goal of this two-day course is to start answering these questions. The course also covers the basics of GPU programming and aims to provide the necessary information for avoiding the most common pitfalls. More information, schedule, and registration can be found on the course webpage at HPC2N, Umeå University: https://events.prace-ri.eu/event/1064/
  • [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS

    12 - 16 October 2020

    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming. Too few people can provide parallel programming courses at the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres in providing such courses for the whole of Europe, or at least within their countries.

    Prerequisites: You are familiar with parallel programming with MPI and OpenMP at an advanced level and skilled in both programming languages C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide? We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide PDF or paper copies to the students in your PhD courses. We provide all exercise material. You will attend the lectures so that you become familiar with the training material. During the exercises, you will help the regular students correct their errors. The regular students are advised to request help if they are stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.

    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. It starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/1031/
  • Shared-memory programming with OpenMP @ EPCC Online

    6 - 15 October 2020

    Outline: Almost all modern computers now have a shared-memory architecture with multiple CPUs connected to the same physical memory, for example multicore laptops or large multi-processor compute servers. This course covers OpenMP, the industry standard for shared-memory programming, which enables serial programs to be parallelised easily using compiler directives. Users of desktop machines can use OpenMP on its own to improve program performance by running on multiple cores; users of parallel supercomputers can use OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of the compute nodes. The course will cover an introduction to the fundamental concepts of the shared-variables model, followed by the syntax and semantics of OpenMP and how it can be used to parallelise real programs. Hands-on practical programming exercises are included, with access to HPC provided for the duration of the course.

    Pre-requisites: This course is aimed at attendees with prior experience of programming in C, C++ or Fortran. Some familiarity with HPC systems is helpful but not essential.

    Learning outcomes: At the end of the course, attendees should be equipped with the skills to write correct parallel programs using the core functionality of OpenMP, and to understand the main issues affecting the performance of shared-memory parallel programs.

    Timetable: This course will be held over four 2-hour afternoon sessions:
    - Tue 6th Oct 14:00 - 16:00 BST
    - Thu 8th Oct 14:00 - 16:00 BST
    - Tue 13th Oct 14:00 - 16:00 BST
    - Thu 15th Oct 14:00 - 16:00 BST

    Full details and registration: https://events.prace-ri.eu/event/1057/
  • [ONLINE] Data Management with iRODS and Compute @SURFsara

    29 October 2020

    Do you want to practice reproducible research in HPC and preserve your data for the future? Research data management is essential to effective and reproducible science. Due to ever-increasing data volumes and complexity, researchers need modern tools to manage all stages of the data life cycle. Here we will use iRODS, a data management framework, to incorporate data management principles into compute pipelines.

    What? In this course you will:
    - Learn about the iRODS data management framework and icommands
    - Understand how to incorporate provenance in a compute workflow
    - Know about FAIR in data processing workflows

    Who? Everyone interested in learning advanced data management tools for compute and data processing workflows.

    Requirements: Basic knowledge of Linux and shell commands. You should have your own laptop with an up-to-date browser and a terminal emulator. The use of the operating systems Linux and macOS is preferred, but not mandatory. For Windows users we recommend downloading MobaXterm (portable version) as terminal emulator.

    IMPORTANT INFORMATION - WAITING LIST: If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you will be informed when a place becomes available.

    Event page: https://events.prace-ri.eu/event/1058/
  • [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS

    12 - 16 October 2020

    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners - non-PRACE part): On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of MPI.

    Shared memory parallelization with OpenMP (Tue, for beginners - non-PRACE part): The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared-memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri - PRACE course): Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed-model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/1030/
  • [POSTPONED] Modern Scientific C++ @ MdlS/Idris

    5 - 8 October 2020

    In recent years, the C++ language has evolved. Sticking to the 1998/2003 standard means missing many new features that make modern C++ more robust, more powerful, and often more readable. This training introduces the syntactic novelties that facilitate writing code, modernized best practices that avoid the language's traps, and a programming style that is easier to parallelize. It is intended for scientific programmers who want to discover "modern" C++ (the 2011 to 2020 standards) and adapt their programming practices accordingly.

    Detailed Program:

    Day 1 (Victor ALESSANDRINI): Review of some basic C++ concepts; overview of C++ as a software development environment, with two major software engineering strategies: object-oriented programming and generic programming. Object-oriented programming: the virtual function mechanism enabling late binding at execution time (software modules calling newly written routines without recompilation); examples of the power and relevance of virtual functions. Function objects as extended pointers to functions, with examples. Introduction to generic programming: function templates, with examples.

    Day 2 (Victor ALESSANDRINI): Generic programming: class templates, with examples. The core of generic programming: using function and class templates to parameterize behavior rather than just object types. Overview of the Standard Template Library (STL): strategies, containers, iterators, algorithms. Concurrency in the standard C++ library: overview of the thread class; discussion of the new threading interfaces (futures, promises) enabling easy synchronization of simple concurrency patterns.

    Day 3 (David CHAMONT): modern C++ syntax. Basic features: type inference, stronger typing, user-defined literals, uniform initialization, rvalue references, move semantics. Object features: member variable initialization, delegated and inherited constructors, explicit deletion and overriding of member functions. Generic features: static assertions, template variables and type aliasing, constant expressions, variadic templates, perfect forwarding. Functional features: lambda functions.

    Day 4 (David CHAMONT): modern C++ library. Basic tools: smart pointers (unique_ptr, shared_ptr), new collections (array, unordered maps), views (span, string_array), wrapper types (function, ref). Generic tools: type traits, SFINAE, concepts. Functional tools: algebraic types (tuple, variant), monadic types (optional, future), ranges. Optimization and parallelization: beyond double, random numbers, chrono, execution policies, structures of arrays, co-routines.

    Prerequisites: Knowledge of classical (pre-11) C++ syntax and basic experience in programming with C++ objects. Participants should be familiar with the following concepts:
    - namespaces
    - references, and how they differ from pointers
    - basic memory allocation (new and delete)
    - properties of C++ functions (default values, overloading)
    - basic understanding of error handling (exceptions)
    - C++ classes, programming with objects, public derivation of classes
    - basic knowledge of templates

    Event page: https://events.prace-ri.eu/event/977/
  • Practical introduction to QM/MM using CP2K for biomolecular modelling @ EPCC online

    13 - 20 October 2020

    This online course is a collaboration between ARCHER2 Training and BioExcel. It consists of interactive hands-on practical sessions presented remotely using Collaborate. Attendees will be given access to ARCHER to execute the practicals.

    Outline: CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of a wide variety of systems, ranging from solid state to biological systems. CP2K provides a general framework for running density functional theory (DFT) simulations, with extensions that enable users to run classical molecular dynamics (MD), mix MD and DFT to obtain quantum mechanics/molecular mechanics (QM/MM) runs, or perform metadynamics, Monte Carlo, and other kinds of simulations.

    This course aims to teach attendees how to prepare and run biochemical simulations in CP2K. Specifically, we will be looking at how CP2K can be used to track the reaction dynamics of a system undergoing biological catalysis. In this course, we shall focus on a Diels-Alder transition taking place in an immunoglobulin. We will begin by treating the reaction site with a purely quantum-mechanical method. Then we will slowly increase the complexity of the system to take into account the entire structure of the immunoglobulin (using a QM/MM approach) and the solution, and finally we will add a ligand to the reaction. While we will reference some DFT and QM/MM theory (mainly the nudged elastic band method and metadynamics), this is not the principal aim of the course. Our aim is to use these systems to help familiarise attendees with using CP2K to run simulations of biochemical systems. CP2K can also be used for non-biological systems, and the topics taught here can be ported to other fields, but this will not be a primary focus of this course.

    Learning outcomes: At the end of this session, you will be able to:
    - Run a geometry optimisation using CP2K
    - Run a frequency calculation using CP2K
    - Estimate the energy barrier of a chemical reaction
    - Run a nudged elastic band (NEB) simulation to fine-tune the energy barrier and profile of a chemical reaction
    - Prepare a QM/MM simulation for CP2K
    - Run a QM/MM simulation in CP2K

    Prerequisites: We ask that course attendees:
    - are familiar with basic bash functionality
    - know how to SSH
    - are familiar with a text editor (e.g. emacs/vim)
    - have attempted to log in to ARCHER prior to the start of the course (credentials and instructions will be supplied)

    Outline timetable: The course will run over two Tuesday afternoons, 13th and 20th October; participants are expected to attend both sessions. Each session will run from 13:30-16:00 with a break 14:30-15:00. There will be a follow-up consultancy session, where you can drop in to ask questions of the tutor, on Tuesday 3rd November 14:00-16:00.

    Event page: https://events.prace-ri.eu/event/1071/
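For orientation, CP2K simulations are driven by a structured input file of nested sections. The schematic fragment below only gestures at that shape: the project name is invented, the basis/potential file names are common defaults, and a real QM/MM input needs several further sections (&QMMM, &MM, topology and coordinates) that are omitted here.

```
&GLOBAL
  PROJECT diels_alder        ! hypothetical project name
  RUN_TYPE GEO_OPT           ! geometry optimisation, the first exercise above
&END GLOBAL
&FORCE_EVAL
  METHOD QMMM                ! QS instead for the purely quantum-mechanical runs
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME GTH_POTENTIALS
  &END DFT
  &SUBSYS
    ! topology, coordinates and the QM/MM partitioning go here
  &END SUBSYS
&END FORCE_EVAL
```

The course's practicals presumably supply complete, working inputs; this is only meant to show the section-based format attendees will be editing.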
  • [ONLINE] Introduction to Parallel Computing @ CSC

    13 - 16 October 2020

    This course will be delivered as an ONLINE COURSE for remote participation due to the COVID-19 measures enforced by most European governments. Connection details will be provided later.

    Description: The course introduces the basics of parallel programming with the message-passing interface (MPI) and OpenMP paradigms. MPI is the dominant parallelization paradigm in high-performance computing and enables one to write programs that run on distributed memory machines, such as Puhti and Taito. OpenMP is a threading-based approach which enables one to parallelize a program over a single shared memory machine, such as a single node in Puhti. The course consists of lectures and hands-on exercises on parallel programming. The course will be given online; lectures are given in the mornings (the third day is an exception, with lectures also in the afternoon), and after that participants can work on hands-on exercises at their own pace. During the self-study sessions a chat room will be provided for peer-to-peer learning and for assistance from the lecturers.

    Learning outcome: After the course the participants should be able to write simple parallel programs and parallelize existing programs with the basic features of MPI or OpenMP. This course is also a prerequisite for the PTC course "Advanced Parallel Programming" in 2021.

    Prerequisites: The participants are assumed to have working knowledge of the Fortran and/or C programming languages. In addition, fluent operation in a Linux/Unix environment will be assumed.

    Agenda:
    Day 1, Tuesday 13.10
    -  9:00 -  9:30  What is high-performance computing?
    -  9:30 -  9:35  Break
    -  9:35 - 10:30  Parallel computing concepts
    - 10:30 - 10:45  Coffee break
    - 10:45 - 12:00  [self-study] Exercises
    - 12:00 - 13:00  Lunch time recommended
    - 13:00 - 16:00  [self-study] Exercises (cont.)
    Day 2, Wednesday 14.10
    -  9:00 -  9:45  Point-to-point communication
    -  9:45 -  9:50  Break
    -  9:50 - 10:20  Non-blocking point-to-point communication
    - 10:20 - 12:00  [self-study] Exercises
    - 12:00 - 13:00  Lunch time recommended
    - 13:00 - 16:00  [self-study] Exercises (cont.)
    - 16:00 - 16:30  Q&A session + exercise walkthrough
    Day 3, Thursday 15.10
    -  9:00 - 10:00  Collective communication
    - 10:00 - 12:00  [self-study] Exercises
    - 12:00 - 13:00  Lunch time recommended
    - 13:00 - 13:30  [self-study] Exercises (cont.)
    - 13:30 - 14:00  Q&A session + exercise walkthrough
    - 14:00 - 14:45  Introduction to OpenMP
    - 14:45 - 16:00  [self-study] Exercises
    Day 4, Friday 16.10
    -  9:00 -  9:45  Data sharing
    -  9:45 -  9:50  Break
    -  9:50 - 10:35  Reductions and execution control
    - 10:35 - 12:00  [self-study] Exercises
    - 12:00 - 13:00  Lunch time recommended
    - 13:00 - 15:00  [self-study] Exercises (cont.)
    - 15:00 - 16:00  Q&A session + exercise walkthrough

    Lecturers: Jussi Enkovaara (CSC), Cristian Achim (CSC)
    Language: English
    Price: Free of charge (4 training days)

    REGISTRATION is OBLIGATORY since the details to access the online course will be provided to the registered and accepted attendees only. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc@csc.fi

    Event page: https://events.prace-ri.eu/event/1074/
  • [POSTPONED] Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    19 - 20 October 2020

    The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.

    ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra based on routines for distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all the MPI communications are handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC:
    - General introduction to the PBLAS and ScaLAPACK libraries
    - Main ideas on how to decompose linear algebra problems in parallel programming
    - Examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations
    - Examples of basic operations with ScaLAPACK: inversion and diagonalization
    - A main problem based on calculating the exponential of a matrix

    MAGMA: In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.

    Trainers: Donfack Simplice (MAGMA), Hasnaoui Karim (ScaLAPACK)

    Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    Event page: https://events.prace-ri.eu/event/919/
  • [ONLINE] GPU Programming with OpenACC @ CSC

    22 - 23 October 2020

    This course will be delivered as an ONLINE COURSE for remote participation due to the COVID-19 measures enforced by most European governments. Connection details will be provided later.

    Description: This course is an introduction to GPU programming using the directive-based OpenACC paradigm. The course consists of lectures and hands-on exercises. Topics covered include basic usage of OpenACC, how to control data movement between CPUs and GPUs, and some more advanced issues related to profiling and interoperability with e.g. MPI.

    Learning outcome: After the course the participants should have the basic skills needed for utilising OpenACC with new or existing programs.

    Prerequisites: The participants are assumed to have a working knowledge of the Fortran and/or C programming languages. In addition, fluent operation in a Linux/Unix environment will be assumed.

    Agenda:
    Day 1, Thursday 22.10
    -  9:00 -  9:30  Introduction to accelerators
    -  9:30 -  9:35  Coffee break
    -  9:35 - 10:30  Introduction to OpenACC
    - 10:30 - 12:00  Exercises [offline]
    - 12:00 - 13:00  Lunch time recommended
    - 13:00 - 16:00  Exercises [offline, cont.]
    - 16:00 - 17:00  Q&A session
    Day 2, Friday 23.10
    -  9:00 - 10:00  Data management
    - 10:00 - 10:15  Coffee break
    - 10:15 - 11:00  Profiling and performance optimisation
    - 11:00 - 12:00  Exercises [offline]
    - 12:00 - 13:00  Lunch time recommended
    - 13:00 - 15:30  Exercises [offline, cont.]
    - 15:30 - 16:00  Advanced topic: Multiple GPUs with MPI
    - 16:00 - 17:00  Q&A session

    Lecturers: Martti Louhivuori (CSC), George Markomanolis (CSC)
    Language: English
    Price: Free of charge (2 training days)

    REGISTRATION is OBLIGATORY since the details to access the online course will be provided to the registered and accepted attendees only. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc@csc.fi

    Event page: https://events.prace-ri.eu/event/1073/
  • [ONLINE] Directive-based GPU programming with OpenACC @ JSC

    9 - 11 November 2020

    GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to the GPU. The course will cover basic aspects of GPU architectures and programming. The focus is on the use of the directive-based OpenACC programming model, which allows for portable application development. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

    Topics covered will include:
    - Introduction to GPU/parallel computing
    - The OpenACC programming model
    - Interoperability of OpenACC with GPU libraries (like cuBLAS and cuFFT) and CUDA
    - Multi-GPU programming with MPI and OpenACC
    - Tools for debugging and profiling
    - Performance optimization

    The course consists of lectures and interactive hands-on sessions in C or Fortran (the attendee's choice). This course will take place as an online event. The link to the online platform will be provided to the registrants only.

    Prerequisites: Some knowledge of Linux, e.g. make, a command line editor, the Linux shell (see for instance this overview), and experience in C.

    Application: Registrations are only considered until 2 November 2020; the maximal number of participants is limited. Applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Andreas Herten, Dr. Thorsten Hater, Dr. Kaveh Haghighi-Mood, JSC; Markus Hrywniak, Jiri Kraus, NVIDIA

    Contact: For any questions concerning the course please send an e-mail to a.herten@fz-juelich.de

    Event page: https://events.prace-ri.eu/event/1075/
  • [ONLINE] High-performance scientific computing in C++ @ JSC

    26 - 29 October 2020

    [ONLINE] High-performance scientific computing in C++ @ JSC https://tess.elixir-europe.org/events/high-performance-scientific-computing-in-c-jsc-7577a0b2-ea9c-4d4d-831c-b75adb7ec714 Modern C++, with its support for procedural, object-oriented, generic and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance by today's standards requires understanding and exploiting multiple levels of parallelism, and understanding C++ code from a performance-centric viewpoint. In this course, the participants will learn how to write C++ programs which better utilize typical HPC hardware resources of the present day. The course is geared towards scientists and engineers already familiar with C++17 (at the very least C++14), who wish to develop maintainable and fast applications. They will learn techniques to better utilize CPU caches, instruction pipelines, SIMD functionality and multi-threading. Shared memory parallel programming on multiple CPU cores will be introduced using the parallel STL of C++17 and Intel(R) Threading Building Blocks. The participants will also learn basic GPGPU programming in C++ using NVidia CUDA and Thrust. Prerequisites: Good working knowledge of C++, especially the C++14 standard. Please check with these questions whether your C++ knowledge fulfills the requirements. Application Registrations are only considered until 16 October 2020; the maximum number of participants is limited. Applicants will be notified whether they are accepted for participation. Instructor: Dr. Sandipan Mohanty, JSC Contact For any questions concerning the course please send an e-mail to s.mohanty@fz-juelich.de This course was originally planned for 15-17 June 2020 as an in-person course. 
https://events.prace-ri.eu/event/984/ 2020-10-26 08:00:00 UTC 2020-10-29 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [POSTPONED] Introduction to Deep Learning Models @ JSC

    21 - 23 October 2020

    [POSTPONED] Introduction to Deep Learning Models @ JSC https://tess.elixir-europe.org/events/introduction-to-deep-learning-models-jsc This course has been postponed to 2021. This course focuses on a recent machine learning method known as deep learning, which emerged as a promising disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas. This course provides an introduction to deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka ‘feature learning’). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real datasets using well-known deep learning frameworks such as TensorFlow, Keras, or Torch. As the learning process with CNNs is extremely compute-intensive, the course will cover aspects of how parallel computing can be leveraged to speed up the learning process using general purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately turn the newly acquired skills into practice. After this course participants will have a general understanding of which problems CNN learning architectures are useful for, and how parallel and scalable computing facilitates the learning process when facing big datasets. 
Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic level of understanding of batch scripts required for HPC application submissions, and have a minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an ssh-client). Application Applicants will be notified one month before the course starts whether they are accepted for participation. Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre Contact For any questions concerning the course please send an e-mail to g.cavallaro@fz-juelich.de. https://events.prace-ri.eu/event/983/ 2020-10-21 11:00:00 UTC 2020-10-23 14:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] VASP Best Practices Workshop @ SNIC

    19 - 20 October 2020

    [ONLINE] VASP Best Practices Workshop @ SNIC https://tess.elixir-europe.org/events/online-vasp-best-practices-workshop-snic Introduction In this workshop, including seminars and hands-on sessions, we will consider how to run the Vienna Ab-initio Simulation Package (VASP) efficiently on supercomputers. The workshop is mostly suitable for beginners at different stages and the focus will be on best practices for running VASP calculations. The hands-on part will be done at the Tetralith cluster, NSC, Linköping University. Parts of the presentations (to be made available for download) can also be of interest for advanced users. For workshop materials and more info, check the event page at NSC (in future moved to here). The seminars will cover the following topics: basic theory, DFT and PAW-method general introduction to VASP, essential files and parameters, workflow running VASP at HPC centers, CPUs and GPUs (examples from SNIC) performance analysis common issues post-processing tools The hands-on sessions include: running VASP calculations step-by-step on the Tetralith cluster using ThinLinc virtual desktop for login to Tetralith Q&A Registration (18 Sep - 12 Oct) The number of hands-on participants is limited to ca. 30, but it will still be possible to follow the seminars (arbitrary limit set to 90) and be put on a waiting list for hands-on access. Instructions for how to set up an account at Tetralith will be sent out in good time before the workshop. Tentative Schedule The workshop days 19-20th Oct are split into seminars before lunch and hands-on sessions after lunch. There will also be breaks. The guided hands-on sessions will be led by instructors. Monday 19th Oct 10:00 Seminar, introduction + theory 11:00 Seminar, basics 12:00 Lunch 13:00 Hands-on session (guided) 15:00 Hands-on session (open) 17:00 End of day Tuesday 20th Oct 10:00 Seminar, further topics 11:00 Seminar, cont. 
12:00 Lunch 13:00 Hands-on session (guided) 15:00 Hands-on session (open) 17:00 End of workshop, closing Materials The materials for the workshop will be put up at the corresponding event page at NSC. For reference in the future, check under NSC past events. Venue Online via Zoom. Links and instructions for the workshop will be sent via email. Organizers Weine Olovsson, with helpers Diana Iusan (UPPMAX), Peter Kjellström and Hamish Struthers. https://events.prace-ri.eu/event/1066/ 2020-10-19 08:00:00 UTC 2020-10-20 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Shared memory parallelization with OpenMP @ VSC Vienna

    5 - 6 November 2020

    [ONLINE] Shared memory parallelization with OpenMP @ VSC Vienna https://tess.elixir-europe.org/events/online-shared-memory-parallelization-with-openmp-vsc-vienna This course will be delivered as an ONLINE COURSE (using Zoom) for remote participation. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided to the registered and accepted attendees only. Registration follows the "first come – first served" principle. For the online course we can take a maximum of 40 persons. Additional registrations will be added to a waiting list. Furthermore, accepted participants will be contacted a few days before the course and asked to do a short PRE-ASSIGNMENT that has to be completed before the course starts. For doing the HANDS-ON LABS all you need is a C/C++ or Fortran compiler that supports OpenMP. Participants can choose between using their own laptops or any system they have access to that provides an OpenMP enabled compiler, it's even possible to do the exercises in Google Colab. Alternatively, participants can ask for a training user account at the VSC for the duration of the course. Overview The focus of this 2 days course is on shared memory parallelization with OpenMP for dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches OpenMP starting from a beginners level. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the OpenMP directives, environment variables, and library routines. Race-condition debugging tools are also presented. Content Levels: Beginners = 6:00h (50%) + Intermediate = 4:00h (33%) + Advanced = 2:00h (17%) This course is a PRACE training event. It is organized by the VSC Research Center, TU Wien, in cooperation with the High-Performance Computing-Center Stuttgart (HLRS). 
Agenda & Content 1st day 08:45  Join online 09:00    Welcome 09:10    Introduction to modern hardware (talk) 10:10    An overview of OpenMP (talk) 10:35   Coffee 10:50    OpenMP programming and execution model (talk+practical) 12:30   Lunch 13:30    OpenMP worksharing directives (talk+practical)  15:00   Coffee 15:15    More OpenMP (talk+practical) 16:00    Summary (talk) + Q & A 16:30   End of first day 2nd day 08:45  Join online 09:00    How to write correct OpenMP programs (talk) 09:30    Intel Inspector XE (demo+practical) 10:30   Coffee 10:45    How to write efficient OpenMP programs (talk) 11:00    Heat equation (talk+practical) 12:30   Lunch 13:30    Heat equation (practical — continued) 14:00    Vectorization with OpenMP (talk) 14:25    Thread affinity in OpenMP (talk) 15:00   Coffee 15:15     The taskloop directive (talk+practical) 16:30   End of second day (course) https://events.prace-ri.eu/event/1076/ 2020-11-05 07:45:00 UTC 2020-11-06 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC

    26 - 27 January 2021

    [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC https://tess.elixir-europe.org/events/format-tbd-programming-distributed-computing-platforms-with-compss-bsc Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course convener: Rosa Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department Lecturers:  Rosa M Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department, BSC Javier Conejero, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Jorge Ejarque, Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Daniele Lezzi, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Objectives: The objective of this course is to give an overview of the COMPSs programming model, which is able to exploit the inherent concurrency of sequential applications and execute them, in a manner transparent to the application developer, on distributed computing platforms. This is achieved by annotating part of the code as tasks, and building at execution time a task-dependence graph based on the data consumed/produced by the tasks. The COMPSs runtime is able to schedule the tasks on the computing nodes and take into account factors such as data locality and the different nature of the computing nodes in case of heterogeneous platforms. Additionally, COMPSs has recently been enhanced with the possibility of coordinating Web Services as part of the applications. COMPSs supports Java, C/C++ and Python as programming languages. Learning Outcomes:  In the course, the COMPSs syntax, programming methodology and an overview of the runtime internals will be given. The attendees will get a first lesson about programming with COMPSs that will enable them to start programming with this framework. 
A hands-on with simple introductory exercises will also be performed. The students who finish this course will be able to develop simple COMPSs applications and to run them both in a local resource and in a distributed platform (initially in a private cloud). The exercises will be delivered in Python and Java. For Python, Jupyter notebooks will be used in some of the exercises. Level: INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course. ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems. Prerequisites: Programming skills in Java and Python  Agenda:  Day 1 (January 26th, 2021) 9:30 - 10:00 - Welcome and round table (30') 10:00 - 10:30 - Session 1: Introduction to COMPSs (30') 10:30 - 11:15 - Session 2: PyCOMPSs: Writing Python applications (45') 11:15 - 11:45 - Coffee break 11:45 - 13:00 - Session 3: Python Hands-on using Jupyter notebooks (1h15') 13:00 - 14:30 - Lunch break 14:30 - 15:00 - Session 4: Machine learning with dislib (30') 15:00 - 16:30 - Session 5: Hands-on with dislib (1h30') Day 2 (January 27th, 2021) 9:30 - 11:00 - Session 6: Java & C++ (1h30') 11:00 - 11:30 - Coffee break 11:30 - 13:00 - Session 7: COMPSs Advanced Features (1h30') 13:00 - 14:30 - Lunch break 14:30 - 16:30 - Session 8: Cluster Hands-on (MareNostrum) (2h) 16:30 - 16:45 - Session 9: COMPSs Installation & Final Notes (15') END of COURSE https://events.prace-ri.eu/event/1054/ 2021-01-26 08:30:00 UTC 2021-01-27 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC

    28 January 2021

    [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC https://tess.elixir-europe.org/events/format-tbd-managing-distributed-data-with-hecuba-and-dataclay-bsc Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course conveners: Department and Research group: Computer Science - Workflows and Distributed Computing Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher Anna Queralt, Distributed Object Management research line, Senior researcher Course Lecturers: Department and Research group: Computer Sciences - Workflows and Distributed Computing Alex Barceló, Distributed object Management research line, Researcher Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher Adrián Espejo, Data-driven Scientific Computing research line, Junior research engineer Daniel Gasull, Distributed object Management research line, Research engineer Pol Santamaria, Data-driven Scientific Computing research line, Junior developer Anna Queralt, Distributed object Management research line, Senior researcher Objectives: The objective of this course is to give an overview of BSC storage solutions, Hecuba and dataClay. These two platforms allow users to easily store and manipulate distributed data from object-oriented applications, enabling programmers to handle object persistence using the same classes they use in their programs, thus avoiding time-consuming transformations between persistent and non-persistent data models. Also, Hecuba and dataClay enable programmers to transparently manage distributed data, without worrying about its location. This is achieved by adding a minimal set of annotations in the classes. 
Both Hecuba and dataClay can work independently or integrated with the COMPSs programming model and runtime to facilitate parallelization of applications that handle persistent data, thus providing a comprehensive mechanism that enables the efficient usage of persistent storage solutions from distributed programming environments. Both platforms offer a common interface to the application developer that facilitates using one solution or the other depending on the needs, without changing the application code. Also, both of them have additional features that allow the programmer to take advantage of their particularities. Learning Outcomes: In the course, the Hecuba and dataClay syntax, programming methodology and an overview of their internals will be given. Also, an overview of COMPSs at user level will be provided in order to take advantage of the distribution of data with both platforms. The attendees will get a first lesson about programming with the common storage interface that will enable them to start programming with both frameworks. A hands-on with simple introductory exercises will also be performed for each platform, with and without COMPSs to distribute the computation. The students who finish this course will be able to develop simple Hecuba and dataClay applications and to run them both in a local resource and in a distributed platform (initially in a private cloud). Prerequisites: Basic programming skills in Python and Java. Previous attendance to the PATC course on programming distributed systems with COMPSs is recommended.   Tentative Agenda (TBC):  Day 1 (Jan 28th, 2021) Session 1 / 9:30 – 13:00 9:30-10:00 Round table. 
Presentation and background of participants 10:00-11:00 Motivation, introduction and syntax of BSC storage platforms 11:00-11:30 Coffee break 11:30-12:15 Hands-on with storage API 12:15-13:00 COMPSs overview and how to parallelize a sequential application 13:00-14:30 Lunch break Session 2/ 14:30 – 18:00 14:30-16:00 Hecuba specifics and hands-on 16:00-16:30 Break 16:30-18:00 dataClay specifics and hands-on END of COURSE       https://events.prace-ri.eu/event/1055/ 2021-01-28 08:30:00 UTC 2021-01-28 17:00:00 UTC [] [] [] workshops_and_courses [] []
  • [POSTPONED] Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

    12 - 13 October 2020

    [POSTPONED] Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris https://tess.elixir-europe.org/events/performance-portability-for-gpu-application-using-high-level-programming-approaches-with-kokkos-mdls-idris When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform, and developing a new simulation code, are two different tasks that may both benefit from a high-level programming model, which abstracts the low-level hardware details. This training presents existing high-level programming solutions that preserve, as much as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...), as well as software development productivity. We will provide an introduction to the high-level C++ programming model Kokkos https://github.com/kokkos, and show basic code examples to illustrate the following concepts through hands-on sessions: hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) actually derive an efficient low-level implementation; efficient architecture-aware memory containers: what is a Kokkos::view; revisit fundamental parallel patterns with Kokkos: parallel for, reduce, scan, ... ; explore some mini-applications. Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Jean Zay (http://www.idris.fr/jean-zay/), equipped with Nvidia Tesla V100 GPUs. Prerequisites: Some basic knowledge of the CUDA programming model and of C++. 
https://events.prace-ri.eu/event/892/ 2020-10-12 07:30:00 UTC 2020-10-13 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Node-Level Performance Engineering @ LRZ

    2 - 4 December 2020

    [ONLINE] Node-Level Performance Engineering @ LRZ https://tess.elixir-europe.org/events/online-node-level-performance-engineering-lrz This online course covers performance engineering approaches on the compute node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance could at best be achieved by their code. This is because parallelism takes us only half the way to good performance. Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the required knowledge to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes. Pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc., are covered. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science. We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimizations by a scientific process.   The course is a PRACE training event. 
Introduction Our approach to performance engineering Basic architecture of multicore systems: threads, cores, caches, sockets, memory The important role of system topology Tools: topology & affinity in multicore environments Overview likwid-topology and likwid-pin Microbenchmarking for architectural exploration Properties of data paths in the memory hierarchy Bottlenecks OpenMP barrier overhead Roofline model: basics Model assumptions and construction Simple examples Limitations of the Roofline model Pattern-based performance engineering Optimal use of parallel resources Single Instruction Multiple Data (SIMD) Cache-coherent Non-Uniform Memory Architecture (ccNUMA) Simultaneous Multi-Threading (SMT) Tools: hardware performance counters Why hardware performance counters? likwid-perfctr Validating performance models Roofline case studies Dense matrix-vector multiplication Sparse matrix-vector multiplication Jacobi (stencil) smoother Optional: The ECM performance model https://events.prace-ri.eu/event/1052/ 2020-12-02 08:00:00 UTC 2020-12-04 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • Introduction to parallel programming @ UL

    20 - 23 October 2020

    Introduction to parallel programming @ UL https://tess.elixir-europe.org/events/introduction-to-parallel-programming-2f81120c-c12b-44ac-a2c2-bff2a21d8dc5 The first part of this course (Days 1 & 2) is part of a Slovenian EuroHPC Competence Centre training event. The second part (Days 3 & 4) is a PRACE training event. The event is organized by the LECAD Laboratory, University of Ljubljana, Slovenia. Participation is free of charge! In case of too many registrations (over 70), we will make a selection of registrants. In light of the COVID-19 situation, this will be an online event. https://events.prace-ri.eu/event/1049/ 2020-10-20 06:00:00 UTC 2020-10-23 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Earth Sciences Simulation Environments @ BSC

    10 - 12 November 2020

    [ONLINE] Earth Sciences Simulation Environments @ BSC https://tess.elixir-europe.org/events/earth-sciences-simulation-environments-bsc-599900ce-5ae4-4c46-81cd-d14a424670dd The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course conveners: Marc Guevara and Carles Tena This training will be online via zoom. Course lecturers: Earth Sciences Department researchers involved in the group of Computational Earth Sciences, Atmospheric Composition, Climate Prediction and Earth System Services. Final lectures TBA. Objectives: The objective of this PATC course is to cover the basics of a high performance computing (HPC) environment oriented towards earth science applications, specifically chemical weather modelling and climate modelling. More precisely, the course will cover: Introduction to earth science fundamentals and modelling; Basic usage of an HPC environment: shell, compilers, libraries, file systems, queuing system and parallel computing; Build and configure targeted earth science applications with the NMMB/BSC-CTM chemical transport model and with the EC-EARTH climate model; Execute and monitor numerical experiments using a workflow manager; Analyse and visualise model outputs with a wide set of tools. Learning outcomes: Participants will learn and gain experience in accessing an HPC facility, installing earth science numerical models and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, analysing and visualising model results. 
Level: (All courses are designed for specialists with at least 1st cycle degree or similar background experience) For trainees with some theoretical and practical knowledge Prerequisites: At least a university degree in progress in Earth Sciences, Computer Sciences or a related area Basic knowledge of UNIX Knowledge of C, FORTRAN, MPI or OpenMP is recommended Knowledge of Earth Sciences data formats is recommended (grib, netcdf, hdf,…) Knowledge of R and Python Agenda: Day 1 (Nov. 10) – 4.5h 10:00-10:15 Welcome and introduction of the BSC Earth Sciences Department. (Francisco Doblas-Reyes, Head of BSC-ES Department) 10:15-10:30 Technical and logistic information. (Marc Guevara and Carles Tena) 10:30-11:00 Introduction to earth science fundamentals and modelling. (Raffaele Bernardello) 11:00-11:30 Break 11:30-13:00 Introduction to Linux and HPC, BSC facilities, PRACE and RES + preparation/checks for hands-on tutorial after lunch break (Mario Acosta) 13:00-14:00 Lunch break 14:00-16:00 HPC hands-on tutorial. At the end of this tutorial, the students will be able to compile and execute jobs on Mare Nostrum 4. (Mario Acosta and Miguel Castrillo and Carles Tena) Day 2 (Nov 11) – 3.5h 9:30-10:00 Lecture on the EC-EARTH climate model. (Raffaele Bernardello) 10:00-10:30 Lecture on the NMMB-MONARCH atmosphere chemistry model. (Oriol Jorba) 10:30-11:00 Lecture on setting up and running the NMMB-MONARCH model with Autosubmit. (Francesca Macchia and Gilbert Montane) 11:00-11:30 Break 11:30-13:30 Model hands-on tutorial. Students will run a prepared case. (Francesca Macchia and Gilbert Montane and Carles Tena and Marc Guevara) Day 3 (Nov 12) – 3h 10:00-11:00 Introduction to analysis and visualisation tools for model outputs (s2dverification, MapGenerator, CDO, NCO, panoply, ncview) (Francesco Benincasa) 11:00-11:30 Break 11:30-13:30 Analysis hands-on tutorial. Students will apply tools for analysis and visualisation to the outputs created the day before. 
(Francesco Benincasa and Carles Tena and Marc Guevara) END of COURSE https://events.prace-ri.eu/event/1042/ 2020-11-10 08:30:00 UTC 2020-11-12 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Parallel Programming Workshop @ BSC

    13 - 22 October 2020

    [ONLINE] Parallel Programming Workshop @ BSC https://tess.elixir-europe.org/events/online-parallel-programming-workshop-bsc The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course Convener: Xavier Martorell Sessions will be in October 13th-16th and 19th-22nd from 2pm to 5.30pm and delivered via Zoom. Level: Intermediate: For trainees with some theoretical and practical knowledge, some programming experience. Advanced: For trainees able to work independently and requiring guidance for solving complex problems. Attendees can bring their own applications and work with them during the course for parallelization and analysis. Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C Software requirements: Zoom (recommended), SSH client (to connect to HPC systems), X Server (enabling remote visual tools). Objectives: The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for distributed-memory environments, and OpenMP for shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. 
The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment. Learning Outcomes: The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour on parallel architectures. Agenda:  all times are 2pm to 5.30pm with two breaks of 15' Tuesday 13/10/2020 1. Introduction to parallel architectures, algorithms design and performance parameters 2. Introduction to the MPI programming model 3. Practical: How to compile and run MPI applications Wednesday 14/10/2020 1. MPI: Point-to-point communication, collective communication 2. Practical: Simple matrix computations 3. MPI: Blocking and non-blocking communications 4. Practical: matrix computations with non-blocking communication Thursday 15/10/2020 1. Introduction to Paraver: tool to analyze and understand performance 2. Practical: Trace generation and trace analysis Friday 16/10/2020 1. MPI: Collectives, Communicators, Topologies 2. Practical: Heat equation example Monday 19/10/2020 1. OpenMP Fundamentals: the fork-join model (lecture) 2. OpenMP Fundamentals: the fork-join model (hands-on) 3. OpenMP Fundamentals: the data environment (lecture) 4. OpenMP Fundamentals: the data environment (hands-on) Tuesday 20/10/2020 1. OpenMP Work-sharing: distributing work among threads (lecture)  2. OpenMP Work-sharing: distributing work among threads (hands-on)  3. OpenMP Work-sharing: loop distribution (lecture)  4. OpenMP Work-sharing: loop distribution (hands-on)  Wednesday 21/10/2020 1. OpenMP Tasking model: basics (lecture)    The task construct    The taskwait 2. OpenMP Tasking model: basics (hands-on) 3. OpenMP Tasking model: intermediate (lecture) 4. 
OpenMP Tasking model: intermediate (hands-on) Thursday 22/10/2020 1. Hybrid MPI+OpenMP    Standard (threading level, synchronous/asynchronous MPI)    TAMPI 2. Practical: Heat, nbody End of Course https://events.prace-ri.eu/event/1041/ 2020-10-13 12:00:00 UTC 2020-10-22 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] School on Scientific Data Analytics and Deep Learning @ Cineca

    5 - 9 October 2020

    [ONLINE] School on Scientific Data Analytics and Deep Learning @ Cineca https://tess.elixir-europe.org/events/online-school-on-scientific-data-analytics-and-deep-learning-cineca

Description: The increasing amount of scientific data collected from sensors or computational simulations can benefit from new processing techniques that extract fresh insights from raw data. The purpose of this one-week school is to present researchers and scientists with methods, tools and techniques for exploring and mining large data sets using Cineca's high-performance resources. The school is an introductory set of lectures aimed at training beginner participants to apply relevant statistical, machine-learning and deep-learning algorithms to create classification and predictive models, using Cineca resources to run efficient processing jobs. The school will consist of introductory lectures held by data scientists and of hands-on sessions. In addition, practical insights from a few use cases addressed in research projects at Cineca will be presented.

Skills: At the end of the course, students will possess and know how to use the following skills:
- Use of Cineca HPC resources
- Basic Python programming for ML
- Machine-learning algorithms and libraries
- Deep-learning frameworks

Target audience: Young students, PhD candidates and researchers in computational sciences and scientific areas with different backgrounds, looking for new technologies and methods to process and analyse large amounts of data.

Pre-requisites: Participants must have basic knowledge of statistics, fundamentals of computer programming with Python, and the use of GNU/Linux-based systems.

The number of participants is limited to 25 students. Applicants will be selected according to their experience, qualifications and scientific interest, BASED ON WHAT IS WRITTEN IN THE "Reason for participation" FIELD OF THE REGISTRATION FORM.

APPLICATION DEADLINE: Sept. 11th, 2020. STUDENTS WILL BE NOTIFIED OF THEIR ADMISSION BY EMAIL ON MONDAY, SEPT. 21st. Attendance is FREE.

https://events.prace-ri.eu/event/1044/ 2020-10-05 07:00:00 UTC 2020-10-09 16:00:00 UTC
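To give a flavour of the classification algorithms the school introduces, here is a minimal 1-nearest-neighbour classifier in pure Python. This is an illustrative sketch only, not school material; the function name and the toy data are invented for the example:

```python
# Minimal 1-nearest-neighbour classifier: predict the label of the single
# closest training point under Euclidean distance. One of the simplest
# classification algorithms used to build predictive models.
import math

def predict(train, x):
    """train: list of (features, label) pairs; x: a feature list."""
    def dist(pair):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(pair[0], x)))
    return min(train, key=dist)[1]

# Two well-separated toy clusters labelled "low" and "high".
train = [([0.0, 0.1], "low"), ([0.2, 0.0], "low"),
         ([1.0, 0.9], "high"), ([0.8, 1.1], "high")]
label = predict(train, [0.9, 1.0])
```

Real course work would of course use established libraries (and GPUs on Cineca's systems) rather than hand-rolled code, but the underlying idea of fitting labelled examples and predicting on new points is the same.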