15 events found


  • [CANCELLED] Parallel Programming with Python @ BSC

    6 - 8 July 2020

    [CANCELLED] Parallel Programming with Python @ BSC Please bring your own laptop. All PATC courses at BSC are free of charge.
    Course convener: Xavier Martorell, CS/Programming Models
    Course lecturers: Rosa M. Badia, CS/Workflows and Distributed Computing; Bryan Jiménez (University of Utrecht); Joan Verdaguer-Codina (COEIC)
    Location: UPC Campus Nord premises, Vertex Building, Room VS208
    Level: BASIC, for students with little previous experience with Python
    Prerequisites: Basic Python programming; all examples in the course will be presented in Python.
    Objectives: To understand the basic concepts of programming with Python and its support for parallelism.
    Learning outcomes: Students who finish this course will be able to develop simple parallel benchmarks with Python, and to analyse their execution and tune their behaviour on parallel architectures.
    Agenda:
    Day 1 (Monday, July 6th, 2020)
    Session 1 / 9:30 am - 1:00 pm (2 h lectures, 1 h practical)
    1. Introduction to parallel programming and Python
    11:00 Coffee break
    2. Practical: How to compile and run Python applications
    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)
    1. Scientific Python: NumPy, SciPy, Matplotlib, Bokeh
    16:00 Coffee break
    2. Practical: Simple Python programs and optimizations
    Day 2 (Tuesday, July 7th, 2020)
    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)
    1. Parallelism in Python: shared memory
    2. Introduction to performance analysis. Paraver: a tool to analyze and understand performance
    3. Python pools
    11:00 Coffee break
    4. Practical: Examples of Python parallelism
    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)
    1. Distributed memory; visualizing distributed environments with Paraver
    2. Python queues
    16:00 Coffee break
    3. Practical: Trace generation and trace analysis
    4. Practical: environment on RPi
    Day 3 (Wednesday, July 8th, 2020)
    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2 h practical)
    1. Introduction to PyCOMPSs
    2. PyCOMPSs syntax
    11:00 Coffee break
    3. PyCOMPSs hands-on
    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)
    1. PyCUDA and support for accelerators
    2. Debugging
    16:00 Coffee break
    3. Hands-on with PyCUDA
    END of COURSE
  • [ONLINE] Reproducible computational environments using containers @ EPCC

    13 - 14 July 2020

    [ONLINE] Reproducible computational environments using containers @ EPCC This course introduces the use of containers to create reproducible computational environments. Such environments are useful for ensuring reproducible research outputs and for simplifying the setup of complex software dependencies across different systems. The course will mostly be based around the use of Docker containers, but the material will be of use for whatever container technology you plan to, or end up, using. We will also briefly introduce the Singularity container environment, which is compatible with Docker and designed for use on multi-user systems (such as HPC resources). On completion of this course attendees should:
    * Understand what containers are and what they are used for
    * Understand how to manage and create Docker containers
    * Appreciate decisions that need to be made around containerising research workflows
    * Understand the differences between Docker and Singularity containers and why Singularity is more suitable for multi-user systems (e.g. HPC)
    * Understand how to manage and create Singularity containers
    * Appreciate how containers can be used to enable and improve reproducibility in research
    Trainers: Andy Turner (EPCC) and Jeremy Cohen (Imperial College)
    This course has been extremely popular and is fully booked, with a long waiting list. We will not be accepting further registrations for this run of the course, but we hope to run it again soon.
  • Advanced Python

    23 - 28 July 2020

    Advanced Python For more details please see the course page. Course status: Spaces Available.
    Location: Babraham Campus, CB22 3AT
  • [ONLINE] Introduction to GROMACS - A SNIC/PRACE workshop in collaboration with BioExcel

    3 - 4 September 2020

    [ONLINE] Introduction to GROMACS - A SNIC/PRACE workshop in collaboration with BioExcel GROMACS is a free, open-source and high-performance software suite for molecular dynamics simulation and analysis. The workshop will provide an introduction to the use of GROMACS and will be delivered by KTH staff. It aims to provide the basis to perform molecular dynamics simulations and simple free energy calculations using GROMACS. The workshop will include online lectures and hands-on sessions on the following topics:
    * The basics of molecular dynamics simulations
    * Introduction to free energy calculations
    * The capabilities of GROMACS
    During the hands-on computer practicals you will work on protein simulation and solvation free energy using GROMACS. The workshop is organized jointly by PRACE, the main HPC resource provider in Europe; BioExcel, the leading European Centre of Excellence for Computational Biomolecular Research; and SNIC, the Swedish National Infrastructure for Computing.
    Prerequisites: The workshop is at beginner level and assumes no prior experience with GROMACS. Participants are expected to attend the whole workshop remotely; Zoom will be used to run it. To actively follow the tutorial, participants need access to a computer with the following software already installed: Jupyter Notebook, GROMACS, VMD (visualisation) and Xmgrace (plotting). More information after registration. Registration will open 22nd June 2020.
    Number of participants: 30 max
    Preliminary schedule
    Thursday 3 September 2020 (Day I)
    09:00 Welcome
    09:10-10:10 Lecture I + Q&A session
    10:10-10:40 Break
    10:40-11:40 Lecture II + Q&A session
    11:40-13:00 Lunch
    13:00-14:30 Tutorial
    14:30-15:00 Break
    15:00-16:00 Lecture III + Q&A session
    Friday 4 September 2020 (Day II)
    09:00-10:00 Lecture IV + Q&A session
    10:00-10:30 Break
    10:30-11:30 Lecture V + Q&A session
    11:30-13:00 Lunch
    13:00-15:00 Tutorial
    15:00-15:30 Final remarks
    Course material: All course material will be available online.
    Contact: For more information, contact Alessandra Villa.
  • [ONLINE] Deep Learning and GPU Programming Workshop @ CSC

    7 - 10 September 2020

    [ONLINE] Deep Learning and GPU Programming Workshop @ CSC This workshop will be delivered as an ONLINE COURSE for remote participation due to the COVID-19 measures enforced by most European governments. The workshop will take place ONLINE on Monday 7 - Thursday 10 September 2020, at 10:00-12:00 and 13:00-16:00 EEST [09:00-11:00 and 12:00-15:00 CEST] each day. You will be informed separately about connection details. REGISTRATION is OBLIGATORY, since the details to access the online course will be provided to registered and accepted attendees only. If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc"at" If needed, we will also process a LIMITED WAITING LIST; to be placed on the waiting list, please send your request to patc"at", and we will keep those on the list informed of whether participation will be possible. Please note that registration and the waiting list operate on a first come, first served basis.
    Overview: The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning. Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips, how to effectively parallelize training of deep neural networks on multiple GPUs, and how to accelerate your applications with CUDA C/C++ and OpenACC. This four-day workshop combines lectures on the fundamentals of deep learning for multiple data types and multiple GPUs with lectures on accelerated computing with CUDA C/C++ and OpenACC. The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.
    The workshop is part of the PRACE Training Centres activity and is co-organized by LRZ - Leibniz Supercomputing Centre (Garching near Munich) as part of the Gauss Centre for Supercomputing (Germany), IT4I - National Supercomputing Center, VSB Technical University of Ostrava (Czech Republic), CSC - IT Center for Science Ltd (Finland) and the NVIDIA Deep Learning Institute (DLI) for the Partnership for Advanced Computing in Europe (PRACE).
    Lecturers: Dr. Momme Allalen, Dr. Juan Durillo Barrionuevo, Dr. Volker Weinberg (LRZ and NVIDIA University Ambassadors), Georg Zitzlsberger (IT4Innovations and NVIDIA University Ambassador)
    Language: English
    Price: Free of charge (4 training days)
    Prerequisites and content level: Please note that the workshop is exclusively for verifiable students, staff, and researchers from any academic institution (industrial participants should contact NVIDIA for industry-specific training). Required: a technical background, a basic understanding of machine learning concepts, and basic C/C++ or Fortran programming skills. In addition, basics in Python will be helpful. Since Python 2.7 is used, the following tutorial can be used to learn the syntax: For the first day, familiarity with TensorFlow will be a plus, as all the hands-on sessions use TensorFlow. For those who do not program in TensorFlow, please go over the TensorFlow tutorial (especially the "Learn and use ML" section): The content level of the course breaks down as: beginner - 5.2 h (20%), intermediate - 14.3 h (55%), advanced - 6.5 h (25%), community-targeted content - 0.0 h (0%).
    Important information: After you are accepted, please create an account under . Ensure your laptop/PC will run smoothly by checking that WebSockets work for you: under Environment, WebSockets should show as supported, and the Data Receive, Send and Echo tests should all check Yes under WebSockets (Port 80). If there are issues with WebSockets, try updating your browser. If you have any questions, please contact Marjut Dieringer at mdieringer"at"
    AGENDA / Description and learning outcomes
    Day 1: Fundamentals of Deep Learning for Multiple Data Types
    This day explores how convolutional and recurrent neural networks can be combined to generate effective descriptions of content within images and video clips. Learn how to train a network using TensorFlow and the Microsoft Common Objects in Context (COCO) dataset to generate captions from images and video by: implementing deep learning workflows like image segmentation and text generation; comparing and contrasting data types, workflows, and frameworks; and combining computer vision and natural language processing. Upon completion, you'll be able to solve deep learning problems that require multiple types of data inputs.
    Day 2: Fundamentals of Accelerated Computing with OpenACC
    On the second day you learn the basics of OpenACC, a high-level, directive-based programming model for GPUs. Discover how to accelerate the performance of your applications beyond the limits of CPU-only programming with simple pragmas. You'll learn: how to profile and optimize your CPU-only applications to identify hot spots for acceleration; how to use OpenACC directives to GPU-accelerate your codebase; and how to optimize data movement between the CPU and the GPU accelerator. Upon completion, you'll be ready to use OpenACC to GPU-accelerate CPU-only applications.
    Day 3: Fundamentals of Accelerated Computing with CUDA C/C++
    The CUDA computing platform enables the acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. On the third day you experience C/C++ application acceleration by: accelerating CPU-only applications to run their latent parallelism on GPUs; utilizing essential CUDA memory management techniques to optimize accelerated applications; exposing accelerated application potential for concurrency and exploiting it with CUDA streams; and leveraging command-line and visual profiling to guide and check your work. Upon completion, you'll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques, and you'll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.
    Day 4: Fundamentals of Deep Learning for Multi-GPUs
    The computational requirements of the deep neural networks used to enable AI applications like self-driving cars are enormous: a single training cycle can take weeks on a single GPU, or even years for the larger datasets used in self-driving car research. Using multiple GPUs for deep learning can significantly shorten training time, making complex problems feasible to solve. On the last day we will teach you how to use multiple GPUs to train neural networks. You'll learn: approaches to multi-GPU training; algorithmic and engineering challenges of large-scale training; and key techniques used to overcome these challenges. Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow.
  • Introduction to PETSc @ MdlS/Idris

    10 - 11 September 2020

    Introduction to PETSc @ MdlS/Idris The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It enables researchers to delegate the linear algebra part of their applications to a specialized team, and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran).
    Learning outcomes: On completion of this course, the participant should
    - Be able to build and solve simple PDE examples
    - Use and compare different solvers on these examples
    - Be familiar with using the online documentation
    - Be able to easily explore other PETSc possibilities relevant to his/her application
    Prerequisites: C or Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.
  • Introduction to GPU programming with OpenACC @ MdlS/Idris

    17 - 18 September 2020

    Introduction to GPU programming with OpenACC @ MdlS/Idris As the use of GPUs in HPC spreads fast, several programming models are now available to programmers. During the tutorial we will introduce the directive-based approach to GPU programming. While quite simple to implement, it makes it possible to reach performance similar to that of lower-level paradigms. We will present the basic concepts of the OpenACC model through several examples and hands-on sessions. The main pitfalls and good practices will be presented. Part of the tutorial will be dedicated to more advanced topics such as multi-GPU usage, and we will also give some insight into the tools available to assess performance.
    Learning outcomes:
    - Ability to execute code on GPUs using OpenACC
    - Ability to assess the performance of the code
    Requirements:
    - Basic knowledge of Fortran and/or C and/or C++
    - No prior knowledge of OpenACC is required
  • [ONLINE] Advanced Fortran Topics @ LRZ

    22 - 25 September 2020

    [ONLINE] Advanced Fortran Topics @ LRZ This course will be delivered as an ONLINE COURSE for remote participation because of the COVID-19 measures enforced by most European governments. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided to the registered and accepted attendees only. The workshop will take place online 10:00-12:00 and 13:00-16:00 CEST each day.
    Overview: This course is targeted at scientists who wish to extend their knowledge of Fortran to cover advanced features of the language. Topics covered include (still preliminary):
    * Best practices: global objects and interfaces; abstract interfaces and the IMPORT statement; object-based programming
    * Object-oriented programming: type extension, polymorphism and inheritance; binding of procedures to types and objects; generic type-bound procedures; abstract types and deferred bindings
    * IEEE features and floating point exceptions
    * Interoperability with C: mixed-language programming patterns
    * Fortran 2003 I/O extensions
    * OO design patterns, i.e. application of object-oriented programming: creation and destruction of objects; polymorphic objects and function arguments; interacting objects; dependency inversion: submodules and plugins
    * Coarrays: PGAS concepts and coarray basics; dynamic entities; advanced synchronization; parallel programming patterns; recent enhancements: collectives, events, teams, atomic subroutines; performance aspects of coarray programming
    The course combines lectures and hands-on sessions.
  • Modern Scientific C++ @ MdlS/Idris

    5 - 8 October 2020

    Modern Scientific C++ @ MdlS/Idris In recent years, the C++ language has evolved. Sticking to the 1998/2003 standard means missing many new features that make modern C++ more robust, more powerful, and often more readable. This training aims to familiarize participants with the syntactic novelties that facilitate the writing of code, with modernized best practices that avoid the language's traps, and with a programming style that is easier to parallelize. It is intended for scientific programmers who want to discover "modern" C++ (the 2011 to 2020 standards) and adapt their programming practices accordingly.
    Detailed program:
    Day 1 (Victor ALESSANDRINI): Review of some basic C++ concepts; overview of C++ as a software development environment, with two major software engineering strategies: object-oriented programming and generic programming. Object-oriented programming: the virtual function mechanism enabling late binding at execution time (software modules calling newly written routines without recompilation); examples of the power and relevance of virtual functions. Function objects as extended pointers to functions, with examples. Introduction to generic programming: function templates, examples.
    Day 2 (Victor ALESSANDRINI): Generic programming: class templates, examples. The core of generic programming: using function and class templates to parameterize behavior rather than just object types. Overview of the Standard Template Library (STL): strategies, containers, iterators, algorithms. Concurrency in the standard C++ library: overview of the thread class; discussion of the new threading interfaces (futures, promises) enabling easy synchronization of simple concurrency patterns.
    Day 3 (David CHAMONT): modern C++ syntax. Basic features: type inference, stronger typing, user-defined literals, uniform initialization, rvalue references, move semantics. Object features: member variable initialization, delegated and inherited constructors, explicit deletion and overriding of member functions. Generic features: static assertions, template variables and type aliasing, constant expressions, variadic templates, perfect forwarding. Functional features: lambda functions.
    Day 4 (David CHAMONT): modern C++ library. Basic tools: smart pointers (unique_ptr, shared_ptr), new collections (array, unordered maps), views (span, string_view), wrapper types (function, ref). Generic tools: type traits, SFINAE, concepts. Functional tools: algebraic types (tuple, variant), monadic types (optional, future), ranges. Optimization and parallelization: beyond double, random numbers, chrono, execution policies, structures of arrays, co-routines.
    Prerequisites: Knowledge of classical (pre-11) C++ syntax and basic experience in programming with C++ objects. Participants should be familiar with the following concepts:
    - namespaces
    - references, and how they differ from pointers
    - basic memory allocation (new and delete)
    - properties of C++ functions (default values, overloading)
    - basic understanding of error handling (exceptions)
    - C++ classes, programming with objects, public derivation of classes
    - basic knowledge of templates
  • [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS

    12 - 16 October 2020

    [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS The Train the Trainer Program is provided in conjunction with the regular courses Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular courses teach parallel programming, this program is an education for future trainers in parallel programming. Too few people can provide parallel programming courses at the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres to also provide such courses for the whole of Europe, or at least within their own countries.
    Prerequisites: You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both of the programming languages C and Fortran.
    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.
    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre that has the necessary course infrastructure.
    What does this Train the Trainer Program provide? We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide PDF or paper copies to the students in your PhD courses. We provide all exercise material. You will attend the lectures so that you become familiar with the training material. During the exercises, you will help the regular students correct their errors. The regular students are advised to request help if they have been stuck for more than a minute; you will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.
    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. It starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program. For further information and registration please visit the HLRS course page.
  • [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS

    12 - 16 October 2020

    [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners - non-PRACE part): On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).
    Shared memory parallelization with OpenMP (Tue, for beginners - non-PRACE part): The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.
    Intermediate and advanced topics in parallel programming (Wed-Fri - PRACE course): Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed-model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days.
    This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants. For further information and registration please visit the HLRS course page.
  • Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

    12 - 13 October 2020

    Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native-hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform and developing a new simulation code are two different tasks that may both benefit from a high-level programming model that abstracts the low-level hardware details. This training presents existing high-level programming solutions that preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...), as well as software development productivity. We will provide an introduction to the high-level C++ programming model Kokkos, and show basic code examples to illustrate the following concepts through hands-on sessions:
    - hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) derive an efficient low-level implementation;
    - efficient architecture-aware memory containers: what is a Kokkos::View;
    - fundamental parallel patterns revisited with Kokkos: parallel for, reduce, scan, ...;
    - exploration of some mini-applications.
    Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Jean Zay, equipped with Nvidia Tesla V100 GPUs.
    Prerequisites: Some basic knowledge of the CUDA programming model and of C++.
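    The fundamental parallel patterns named above are not specific to Kokkos; as a language-neutral illustration (plain sequential Python, not course material), here is what parallel for, reduce and scan each compute:

```python
# Sequential sketches of the three patterns the course revisits with Kokkos
# (parallel_for, parallel_reduce, parallel_scan). In Kokkos these would be
# dispatched to an OpenMP or CUDA back-end; the computed results are the same.
from functools import reduce
from itertools import accumulate

data = [1, 2, 3, 4]

squared = [x * x for x in data]           # "parallel for": independent per-element work
total = reduce(lambda a, b: a + b, data)  # "reduce": combine all elements into one value
prefix = list(accumulate(data))           # "scan": keep every partial (prefix) sum

print(squared, total, prefix)  # [1, 4, 9, 16] 10 [1, 3, 6, 10]
```

    The point of the abstraction is that each pattern names its data dependencies, which lets a back-end parallelize it safely.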
  • Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    19 - 20 October 2020

    Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.
    ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra, based on routines for distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all MPI communications are handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC:
    - General introduction to the PBLAS and ScaLAPACK libraries
    - Main ideas on how to decompose linear algebra problems in parallel programming
    - Examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations
    - Examples of basic operations with ScaLAPACK: inversion and diagonalization
    - Main problem based on calculating the exponentiation of a matrix
    MAGMA: In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.
    Trainers: Donfack Simplice (MAGMA), Hasnaoui Karim (ScaLAPACK)
    Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.
  • Introduction to Deep Learning Models @ JSC

    21 - 23 October 2020

    Introduction to Deep Learning Models @ JSC This course was originally scheduled for 12-14 May 2020 but has been rescheduled to its current dates due to the Covid-19 pandemic. The course focuses on a recent machine learning method known as deep learning, which emerged as a promising disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas. This course provides an introduction to deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka 'feature learning'). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real application datasets using known deep learning frameworks such as TensorFlow, Keras, or Torch. As the learning process with CNNs is extremely computation-intensive, the course will cover how parallel computing can be leveraged to speed up the learning process using general-purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately turn the newly acquired skills into practice. After this course participants will have a general understanding of which problems CNN learning architectures are useful for, and of how parallel and scalable computing facilitates the learning process when facing big datasets.
    Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic understanding of the batch scripts required for HPC application submission, and have minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an ssh client).
    Application: Applicants will be notified one month before the course starts whether they have been accepted for participation.
    Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre
    Contact: For any questions concerning the course please send an e-mail to
  • [MOOC] Tools and techniques to quickly improve performances of HPC applications in Solid Earth @ Cineca

    26 - 28 October 2020

    [MOOC] Tools and techniques to quickly improve performances of HPC applications in Solid Earth @ Cineca This course targets improving the overall performance of a code in Solid Earth, currently in use at the ChEESE Center of Excellence H2020 project. First, parallel performance profiling tools will be applied to the initial version of the code to find the so-called performance bottlenecks. Starting from the profiling analysis, we will show how and where to intervene with respect to the hardware characteristics of the HPC machine used for the investigation. We will also show how debugging tools are useful in the development/optimization phase to eliminate any bugs introduced while writing (or redesigning) new parts of the code. Finally, we will show how to improve the overall performance of the code with respect to other key factors such as I/O, vectorization, etc.
    Skills: At the end of the course the student will be able to:
    - use a concrete methodology to improve the performance of a code in Solid Earth already in use in the context of the ChEESE project
    - find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used
    - use appropriate debugging tools to eliminate any bugs that may arise during the development/optimization phase
    Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that may allow them to improve the performance of their code on current HPC architectures in the shortest possible time.
    Prerequisites: Basic knowledge of Linux/UNIX. Knowledge of C, Fortran, MPI or OpenMP is recommended, as are notions of parallel computing techniques and algorithms for Solid Earth applications.
    Grants: Lunch on all three days will be offered to all participants, and some grants are available. The only eligibility requirements are that you are not funded by your institution to attend the course and that you work or live in an institution outside the Roma area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Roma). Some documentation will be required, and the grant will be paid only after a certified presence at a minimum of 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date.
    Coordinating teacher: Dr. P. Lanucara
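    The specific HPC profiling tools taught in the course are not listed here, but the workflow they support (profile, locate the hotspot, then optimize) can be illustrated with Python's built-in cProfile module, used below purely as an illustrative stand-in, not as one of the course tools:

```python
# Toy profile-then-find-the-bottleneck run using Python's built-in cProfile.
# Illustrative stand-in for the dedicated HPC profiling tools the course covers.
import cProfile
import pstats

def slow_part():
    return sum(i * i for i in range(300_000))  # deliberately heavy: the bottleneck

def fast_part():
    return sum(range(100))

prof = cProfile.Profile()
prof.enable()
slow_part()
fast_part()
prof.disable()

# pstats maps (file, line, name) -> (calls, ncalls, tottime, cumtime, callers);
# the entry with the largest cumulative time points at the bottleneck.
cumtime = {key[2]: val[3] for key, val in pstats.Stats(prof).stats.items()}
print(cumtime["slow_part"] > cumtime["fast_part"])  # True: the profile exposes the hotspot
```

    The same profile/locate/optimize loop applies whatever the profiler, which is the methodology the course builds on.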
