15 events found

Content provider: ELIXIR Slovenia or PRACE

  • 33rd VI-HPS Tuning Workshop @ EPCC online

    28 - 30 July 2020

    33rd VI-HPS Tuning Workshop @ EPCC online

    Date: Tuesday 28th to Thursday 30th July, from 09:00 to 17:00 BST each day.

    Location: The workshop will take place online.

    Goals: This workshop is organised by VI-HPS for the UK PRACE Advanced Training Centre to:
    - give an overview of the VI-HPS programming tools suite
    - explain the functionality of individual tools, and how to use them effectively
    - offer hands-on experience and expert assistance using the tools

    On completion, participants should be familiar with common performance analysis and diagnosis techniques and how they can be employed in practice (on a range of HPC systems). Those who prepared their own application test cases will have been coached in the tuning of their measurement and analysis, and provided with optimization suggestions.

    Programme overview: Presentations and hands-on sessions cover the following topics:
    - Paraver/Dimemas (BSC)
    - TAU (UOregon)
    - Score-P/Scalasca/Cube (JSC)
    - MAQAO (UVSQ)
    - ARM Forge (ARM)

    A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide. The workshop will be held in English and run from 09:00 to no later than 17:00 each day, with breaks for lunch and refreshments. There is no fee for participation. Course capacity is limited, so priority will be given to applicants with MPI, OpenMP and hybrid MPI+OpenMP parallel codes already running on the workshop computer systems, and to those bringing codes from similar systems to work on. Attendees will need to bring their own notebook computers (with SSH and X11 configured) and use (eduroam) wifi to connect to the workshop computer systems.

    Outline: The workshop introduces tools that provide a practical basis for portable performance analysis of parallel application execution, covering both profiling and tracing. It will be delivered as a series of presentations with associated hands-on practical exercises using the ARM-based Isambard Cray XC50 computer. While analysis of the provided example codes will be used to guide the class through the relevant steps and familiarise participants with the tools, coaching will also be available to help participants analyse their own parallel application codes, with suggestions for improving their execution performance and scalability.

    Programme: Details to follow. Sessions will run 09:00 to 17:00 BST each day.

    Registration: Please register via the Registration tab on this page.
  • [ONLINE] Introduction to GROMACS - A SNIC/PRACE workshop in collaboration with BioExcel

    3 - 4 September 2020

    [ONLINE] Introduction to GROMACS - A SNIC/PRACE workshop in collaboration with BioExcel

    GROMACS is a free, open-source and high-performance software suite for molecular dynamics simulation and analysis. This workshop provides an introduction to the use of GROMACS and will be delivered by KTH staff. It aims to provide the basis to perform molecular dynamics simulations and simple free energy calculations using GROMACS.

    The workshop will include online lectures and hands-on sessions on the following topics:
    - The basics of molecular dynamics simulations
    - Introduction to free energy calculations
    - The capabilities of GROMACS

    During the hands-on computer practicals you will work on protein simulation and solvation free energy using GROMACS. The workshop is organized jointly by PRACE, the main HPC resource provider in Europe; BioExcel, the leading European Centre of Excellence for Computational Biomolecular Research; and SNIC, the Swedish National Infrastructure for Computing.

    Prerequisites: The workshop is at beginner level and assumes no prior experience with GROMACS. Participants are expected to attend the whole workshop remotely. Zoom will be used to run the workshop. To actively follow the tutorial, participants need access to a computer with the following software already installed: Jupyter Notebook, GROMACS, VMD (visualisation) and Xmgrace (plotting tool). More information after registration.

    Registration will open 22nd June 2020. Note the workshop is full, but you are welcome to register for the waiting list using the link below.

    Number of participants: 30 max.

    Preliminary schedule:
    Thursday 3 September 2020 (Day I)
    09:00 Welcome
    09:10-10:10 Lecture I + Q&A session
    10:10-10:40 Break
    10:40-11:40 Lecture II + Q&A session
    11:40-13:00 Lunch
    13:00-14:30 Tutorial
    14:30-15:00 Break
    15:00-16:00 Lecture III + Q&A session
    Friday 4 September 2020 (Day II)
    09:00-10:00 Lecture IV + Q&A session
    10:00-10:30 Break
    10:30-11:30 Lecture V + Q&A session
    11:30-13:00 Lunch
    13:00-15:00 Tutorial
    15:00-15:30 Final remarks

    Course material: All the course material will be available online.

    Contact: For more information, contact Alessandra Villa.
  • [ONLINE] Deep Learning and GPU Programming Workshop @ CSC

    7 - 10 September 2020

    [ONLINE] Deep Learning and GPU Programming Workshop @ CSC

    This workshop will be delivered as an ONLINE COURSE for remote participation due to the COVID-19 measures enforced by most European governments. The workshop will take place online on Monday 7 - Thursday 10 September 2020, at 10:00-12:00 and 13:00-16:00 EEST [09:00-11:00 and 12:00-15:00 CEST] each day. You will be informed separately about connection details.

    REGISTRATION is OBLIGATORY, since the details to access the online course will be provided to registered and accepted attendees only. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc"at" If needed we will also process a LIMITED WAITING LIST; to be placed on the waiting list, please send your request to patc"at", and we will keep those on the list informed of whether participation will be possible. Please note that the registration/waiting list works on a first come, first served basis.

    Overview: The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning. Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips, how to effectively parallelize training of deep neural networks on multiple GPUs, and how to accelerate your applications with CUDA C/C++ and OpenACC. This four-day workshop combines lectures on the fundamentals of Deep Learning for Multiple Data Types and Multi-GPUs with lectures on Accelerated Computing with CUDA C/C++ and OpenACC. The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.

    The workshop is part of the PRACE Training Centres activity and is co-organized by LRZ - Leibniz Supercomputing Centre (Garching near Munich), part of the Gauss Centre for Supercomputing (Germany); IT4I - National Supercomputing Center, VSB Technical University of Ostrava (Czech Republic); CSC - IT Center for Science Ltd (Finland); and the NVIDIA Deep Learning Institute (DLI), for the Partnership for Advanced Computing in Europe (PRACE).

    Lecturers: Dr. Momme Allalen, Dr. Juan Durillo Barrionuevo, Dr. Volker Weinberg (LRZ and NVIDIA University Ambassadors), Georg Zitzlsberger (IT4Innovations and NVIDIA University Ambassador)
    Language: English
    Price: Free of charge (4 training days)

    Prerequisites and content level: Please note that the workshop is exclusively for verifiable students, staff, and researchers from any academic institution (industrial participants should contact NVIDIA for industry-specific training). Required: a technical background, a basic understanding of machine learning concepts, and basic C/C++ or Fortran programming skills. In addition, basics in Python will be helpful; since Python 2.7 is used, a Python 2.7 tutorial can be used to learn the syntax. For the 1st day, familiarity with TensorFlow will be a plus, as all the hands-on sessions use TensorFlow; if you do not program in TensorFlow, please go through a TensorFlow tutorial (especially the "Learn and use ML" section). The content level of the course breaks down as: beginner - 5.2 h (20%), intermediate - 14.3 h (55%), advanced - 6.5 h (25%), community-targeted content - 0.0 h (0%).

    Important information: After you are accepted, please create an account under . Ensure your laptop/PC will run smoothly and that WebSockets work for you: under Environment, WebSockets should be supported, and the Data Receive, Send and Echo tests should all check Yes under WebSockets (Port 80). If there are issues with WebSockets, try updating your browser.

    If you have any questions, please contact Marjut Dieringer at: mdieringer"at"

    AGENDA / Description and learning outcomes

    Day 1: Fundamentals of Deep Learning for Multiple Data Types
    This day explores how convolutional and recurrent neural networks can be combined to generate effective descriptions of content within images and video clips. Learn how to train a network using TensorFlow and the Microsoft Common Objects in Context (COCO) dataset to generate captions from images and video by:
    - Implementing deep learning workflows like image segmentation and text generation
    - Comparing and contrasting data types, workflows, and frameworks
    - Combining computer vision and natural language processing
    Upon completion, you'll be able to solve deep learning problems that require multiple types of data inputs.

    Day 2: Fundamentals of Accelerated Computing with OpenACC
    On the 2nd day you learn the basics of OpenACC, a directive-based, high-level programming model for GPUs. Discover how to accelerate the performance of your applications beyond the limits of CPU-only programming with simple pragmas. You'll learn:
    - How to profile and optimize your CPU-only applications to identify hot spots for acceleration
    - How to use OpenACC directives to GPU-accelerate your codebase
    - How to optimize data movement between the CPU and the GPU accelerator
    Upon completion, you'll be ready to use OpenACC to GPU-accelerate CPU-only applications.

    Day 3: Fundamentals of Accelerated Computing with CUDA C/C++
    The CUDA computing platform enables the acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. On the 3rd day you experience C/C++ application acceleration by:
    - Accelerating CPU-only applications to expose their latent parallelism on GPUs
    - Utilizing essential CUDA memory management techniques to optimize accelerated applications
    - Exposing accelerated application potential for concurrency and exploiting it with CUDA streams
    - Leveraging command-line and visual profiling to guide and check your work
    Upon completion, you'll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. You'll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.

    Day 4: Fundamentals of Deep Learning for Multi-GPUs
    The computational requirements of deep neural networks used to enable AI applications like self-driving cars are enormous. A single training cycle can take weeks on a single GPU, or even years for the larger datasets used in self-driving car research. Using multiple GPUs for deep learning can significantly shorten the time required to train on large amounts of data, making solving complex problems with deep learning feasible. On the last day we will teach you how to use multiple GPUs to train neural networks. You'll learn:
    - Approaches to multi-GPU training
    - Algorithmic and engineering challenges of large-scale training
    - Key techniques used to overcome these challenges
    Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow.
  • Introduction to PETSc @ MdlS/Idris

    10 - 11 September 2020

    Introduction to PETSc @ MdlS/Idris

    The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It enables researchers to delegate the linear algebra part of their applications to a specialized team, and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran).

    Learning outcomes: On completion of this course, the participant should:
    - be able to build and solve simple PDE examples
    - use and compare different solvers on these examples
    - be familiar with using the online documentation
    - be able to easily explore other PETSc possibilities relevant to his/her application

    Prerequisites: C or Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.
  • Introduction to GPU programming with OpenACC @ MdlS/Idris

    17 - 18 September 2020

    Introduction to GPU programming with OpenACC @ MdlS/Idris

    As the use of GPUs in HPC is spreading fast, several programming models are now available. This tutorial introduces the directive-based approach to GPU programming: while quite simple to implement, it can reach performance similar to lower-level paradigms. We will present the basic concepts of the OpenACC model through several examples and hands-on sessions. The main pitfalls and good practices will be covered, and part of the tutorial will be dedicated to more advanced topics such as multi-GPU usage. We will also give some insight into the tools available to assess performance.

    Learning outcomes:
    - Ability to execute code on GPUs using OpenACC
    - Assessment of the performance of the code

    Requirements:
    - Basic knowledge of Fortran and/or C and/or C++
    - No prior knowledge of OpenACC is required
  • [ONLINE] Advanced Fortran Topics @ LRZ

    22 - 25 September 2020

    [ONLINE] Advanced Fortran Topics @ LRZ

    This course will be delivered as an ONLINE COURSE for remote participation because of the COVID-19 measures enforced by most European governments. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided to the registered and accepted attendees only. The workshop will take place online 10:00-12:00 and 13:00-16:00 CEST each day.

    Overview: This course is targeted at scientists who wish to extend their knowledge of Fortran to cover advanced features of the language. Topics covered include (still preliminary):
    - Best practices: global objects and interfaces; abstract interfaces and the IMPORT statement; object-based programming
    - Object-oriented programming: type extension, polymorphism and inheritance; binding of procedures to types and objects; generic type-bound procedures; abstract types and deferred bindings
    - IEEE features and floating point exceptions
    - Interoperability with C: mixed-language programming patterns
    - Fortran 2003 I/O extensions
    - OO design patterns: application of object-oriented programming; creation and destruction of objects; polymorphic objects and function arguments; interacting objects; dependency inversion: submodules and plugins
    - Coarrays: PGAS concepts and coarray basics; dynamic entities; advanced synchronization; parallel programming patterns; recent enhancements: collectives, events, teams, atomic subroutines; performance aspects of coarray programming

    The course combines lectures and hands-on sessions.
  • Modern Scientific C++ @ MdlS/Idris

    5 - 8 October 2020

    Modern Scientific C++ @ MdlS/Idris

    In recent years, the C++ language has evolved considerably. Sticking to the 1998/2003 standard means missing many new features that make modern C++ more robust, more powerful, and often more readable. This training aims to familiarise participants with the syntactic novelties that ease the writing of code, with modernized best practices that avoid the language's traps, and with a programming style that is easier to parallelize. It is intended for scientific programmers who want to discover "modern" C++ (the 2011 to 2020 standards) and adapt their programming practices accordingly.

    Detailed program:

    Day 1 (Victor ALESSANDRINI): Review of some basic C++ concepts; overview of C++ as a software development environment, with two major software engineering strategies: object-oriented programming and generic programming.
    - Object-oriented programming: the virtual function mechanism enabling late binding at execution time (software modules calling newly written routines without recompilation); examples of the power and relevance of virtual functions
    - Function objects as extended pointers to functions, with examples
    - Introduction to generic programming: function templates, examples

    Day 2 (Victor ALESSANDRINI):
    - Generic programming: class templates, examples
    - The core of generic programming: using function and class templates to parameterize behavior rather than just object types
    - Overview of the Standard Template Library (STL): strategies, containers, iterators, algorithms
    - Concurrency in the standard C++ library: overview of the thread class; discussion of the new threading interfaces (futures, promises) enabling easy synchronization of simple concurrency patterns

    Day 3 (David CHAMONT): modern C++ syntax
    - Basic features: type inference, stronger typing, user-defined literals, uniform initialization, rvalue references, move semantics
    - Object features: member variable initialization, delegated and inherited constructors, explicit deletion and overriding of member functions
    - Generic features: static assertions, template variables and type aliasing, constant expressions, variadic templates, perfect forwarding
    - Functional features: lambda functions

    Day 4 (David CHAMONT): modern C++ library
    - Basic tools: smart pointers (unique_ptr, shared_ptr), new collections (array, unordered maps), views (span, string_view), wrapper types (function, ref)
    - Generic tools: type traits, SFINAE, concepts
    - Functional tools: algebraic types (tuple, variant), monadic types (optional, future), ranges
    - Optimization and parallelization: beyond double, random numbers, chrono, execution policies, structures of arrays, co-routines

    Prerequisites: Knowledge of classical (pre-11) C++ syntax and basic experience in programming with C++ objects. Participants should be familiar with the following concepts:
    - namespaces
    - references, and how they differ from pointers
    - basic memory allocation (new and delete)
    - properties of C++ functions (default values, overloading)
    - basic understanding of error handling (exceptions)
    - C++ classes, programming with objects, public derivation of classes
    - basic knowledge of templates
  • [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS

    12 - 16 October 2020

    [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS

    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming. Too few people can provide parallel programming courses at the level needed when scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres to provide such courses for the whole of Europe, or at least within their own countries.

    Prerequisites: You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a block course for PhD students.

    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide? We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide PDF or paper copies to the students in your PhD courses. We provide all exercise material. You will attend the lectures to become familiar with the training material. During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they are stuck for more than a minute; you will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the help they need.

    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. It starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.
  • [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS

    12 - 16 October 2020

    [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS

    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners - non-PRACE part): On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of MPI.

    Shared memory parallelization with OpenMP (Tue, for beginners - non-PRACE part): The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri - PRACE course): Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared in the hybrid mixed-model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants.

    For further information and registration please visit the HLRS course page.
  • Performance portability for GPU applications using high-level programming approaches with Kokkos @ MdlS/Idris

    12 - 13 October 2020

    Performance portability for GPU applications using high-level programming approaches with Kokkos @ MdlS/Idris

    When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform and developing a new simulation code are two different tasks that may both benefit from a high-level programming model that abstracts the low-level hardware details. This training presents existing high-level programming solutions that preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...), as well as software development productivity.

    We will provide an introduction to the high-level C++ programming model Kokkos, and show basic code examples to illustrate the following concepts through hands-on sessions:
    - hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) derive an efficient low-level implementation;
    - efficient architecture-aware memory containers: what is a Kokkos::view;
    - revisiting fundamental parallel patterns with Kokkos: parallel for, reduce, scan, ...;
    - exploring some mini-applications.

    Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Jean Zay, equipped with Nvidia Tesla V100 GPUs.

    Prerequisites: Some basic knowledge of the CUDA programming model and of C++.
  • [ONLINE] Parallel Programming Workshop @ BSC

    13 - 22 October 2020

    [ONLINE] Parallel Programming Workshop @ BSC

    Registration for this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course convener: Xavier Martorell

    Sessions will be held on October 13th-16th and 19th-22nd, from 3 pm to 6:30 pm, and delivered via Zoom.

    Level: Intermediate: for trainees with some theoretical and practical knowledge and some programming experience. Advanced: for trainees able to work independently and requiring guidance for solving complex problems.

    Attendees can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C.

    Objectives: The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two most widely used programming models: MPI for distributed-memory environments, and OpenMP for shared-memory architectures. It also presents the main tools developed at BSC to obtain information on and analyze the execution of parallel applications, Paraver and Extrae, as well as the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures and provide hints to the programmer on how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used throughout the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.

    Learning outcomes: Students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour on parallel architectures.

    Agenda: TBA
  • Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    19 - 20 October 2020

    Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.

    ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra based on routines for distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all the MPI communications are handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC:
    - General introduction to the PBLAS and ScaLAPACK libraries
    - Main ideas on how to decompose linear algebra problems in parallel programming
    - Examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations
    - Examples of basic operations with ScaLAPACK: inversion and diagonalization
    - A main problem based on calculating an exponentiation of a matrix

    MAGMA: In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.

    Trainers: Donfack Simplice (MAGMA), Hasnaoui Karim (ScaLAPACK)

    Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.
  • Introduction to Deep Learning Models @ JSC

    21 - 23 October 2020

    Introduction to Deep Learning Models @ JSC This course was originally scheduled to take place on 12-14 May 2020 but has since been rescheduled to its current dates due to the Covid-19 pandemic. This course focuses on a recent machine learning method known as deep learning that emerged as a promising disruptive approach, allowing knowledge discovery from large datasets in an unprecedented effectiveness and efficiency. It is particularly relevant in research areas, which are not accessible through modelling and simulation often performed in HPC. Traditional learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas. This course provides an introduction into deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka ‘feature learning’). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real datasets of applications using known deep learning frameworks such as Tensorflow, Keras, or Torch. As the learning process with CNNs is extremely computational-intensive the course will cover aspects of how parallel computing can be leveraged in order to speed-up the learning process using general purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately turn the newly acquired skills into practice. After this course participants will have a general understanding for which problems CNN learning architectures are useful and how parallel and scalable computing is facilitating the learning process when facing big datasets. 
    Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic understanding of the batch scripts required for HPC application submission, and have minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an SSH client). Application: Applicants will be notified one month before the course starts whether they have been accepted for participation. Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre. Contact: For any questions concerning the course please contact the organisers by e-mail. 2020-10-21 11:00:00 UTC 2020-10-23 14:30:00 UTC workshops_and_courses
  • [MOOC] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca

    26 - 28 October 2020

    [MOOC] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca This course shows how to improve the overall performance of a Solid Earth code currently in use at the ChEESE Center of Excellence (H2020 project). First, parallel performance profiling tools will be applied to the initial version of the code to find the so-called performance bottlenecks. Starting from the profiling analysis, we will show how and where to intervene with respect to the hardware characteristics of the HPC machine used for the investigation. We will also show how debugging tools are useful in the development/optimization phase to eliminate any bugs introduced in the writing (or redesign) of new parts of the code. Finally, we will show how to improve the overall performance of the code with respect to other important factors such as I/O and vectorization. Skills: At the end of the course the student will be able to: use a concrete methodology to improve the performance of a Solid Earth code already in use in the context of the ChEESE project; find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used; and use appropriate debugging tools to eliminate any bugs that may arise during the development/optimization phase. Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that allow them to improve the performance of their code on current HPC architectures in the shortest possible time. Prerequisites: Basic knowledge of Linux/UNIX. Knowledge of C, Fortran, MPI or OpenMP is recommended, as are notions of parallel computing techniques and algorithms for Solid Earth applications. Grant: Lunch for the three days will be offered to all participants, and some grants are available.
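    The profile-then-optimise workflow the course describes can be sketched in a few lines. This is a hedged, generic Python illustration (not the course's tools, which target compiled HPC codes): a profiler identifies a naive loop as the hotspot, which is then replaced by a vectorized equivalent:

```python
import cProfile
import io
import pstats

import numpy as np

def slow_norms(x):
    # Naive per-element loop: the kind of hotspot a profiler exposes.
    return [sum(v * v for v in row) ** 0.5 for row in x]

def fast_norms(x):
    # Vectorized replacement applied after the profiling step.
    return np.sqrt((x * x).sum(axis=1))

x = np.ones((200, 50))
prof = cProfile.Profile()
prof.enable()
slow_norms(x)
prof.disable()
# Sort the collected stats by cumulative time to find the bottleneck;
# inspect with stats.print_stats() in an interactive session.
stats = pstats.Stats(prof, stream=io.StringIO())
stats.sort_stats("cumulative")
```

    The same loop (profile, locate the hotspot, rewrite, re-measure) applies whether the tool is cProfile on a script or a parallel profiler on an MPI code.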
    The only eligibility requirement is not to be funded by your institution to attend the course, and to work or live in an institute outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Rome). Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date. Coordinating Teacher: Dr. P. Lanucara 2020-10-26 08:00:00 UTC 2020-10-28 17:00:00 UTC workshops_and_courses
  • Earth Sciences Simulation Environments @ BSC

    10 - 12 November 2020

    Earth Sciences Simulation Environments @ BSC Registration for this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge. Course conveners: Marc Guevara and Carles Tena. This training will be held online via Zoom. Course lecturers: Earth Sciences Department researchers involved in the groups of Computational Earth Sciences, Atmospheric Composition, Climate Prediction and Earth System Services. Final lectures TBA. Objectives: The objective of this PATC course is to cover the basics of a high-performance computing (HPC) environment oriented towards earth science applications, specifically chemical weather modelling and climate modelling. More precisely, the course will cover: an introduction to earth science fundamentals and modelling; basic usage of an HPC environment (shell, compilers, libraries, file systems, queuing system and parallel computing); building and configuring targeted earth science applications with the NMMB/BSC-CTM chemical transport model and with the EC-EARTH climate model; executing and monitoring numerical experiments using a workflow manager; and analysing and visualising model outputs with a wide set of tools. Learning outcomes: Participants will learn and gain experience in accessing an HPC facility, installing earth science numerical models and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, and analysing and visualising model results.
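    As a hedged illustration of the "queuing system" basics the course covers (this is not course material; the partition, module and executable names are placeholders, and real BSC scripts will differ), a minimal SLURM batch script has this shape:

```shell
#!/bin/bash
#SBATCH --job-name=model-test    # placeholder job name
#SBATCH --ntasks=48              # number of MPI tasks requested
#SBATCH --time=01:00:00          # wall-clock limit (hh:mm:ss)
#SBATCH --output=%x-%j.out       # stdout file: jobname-jobid.out

# Load the toolchain (module names are site-specific placeholders).
module load intel impi netcdf

# Launch the (placeholder) model executable across the allocated tasks.
srun ./model.exe namelist.input
```

    Submitting with `sbatch job.sh` and monitoring with `squeue` are the day-to-day operations the hands-on sessions exercise.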
    Level: (All courses are designed for specialists with at least a 1st-cycle degree or similar background experience.) For trainees with some theoretical and practical knowledge. Prerequisites: at least a university degree in progress in Earth Sciences, Computer Sciences or a related area; basic knowledge of UNIX; knowledge of C, Fortran, MPI or OpenMP is recommended; knowledge of Earth Sciences data formats (GRIB, NetCDF, HDF, …) is recommended; knowledge of R and Python. Agenda: TBA 2020-11-10 08:30:00 UTC 2020-11-12 15:30:00 UTC workshops_and_courses
