24 events found

Content provider: PRACE 

  • [POSTPONED] Basic Parallel Programming with MPI and OpenMP @SURFsara

    2 - 3 June 2020

    https://tess.elixir-europe.org/events/basic-parallel-programming-with-mpi-and-openmp-surfsara

    NOTE: this is the basic part of a 4-day course at SURFsara. If you wish to attend the full course, you must also register separately for the advanced part (listed below).

    This 2-day basic course provides an introduction to parallel programming using the two most widely used approaches: the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). You will start by exploring the benefits (and limitations) of parallel programming, and then look at the possibilities that task and data parallelism offer for developing multi-core and multi-node programs. Each session includes hands-on exercises to help you understand the different constructs.

    As a participant you should already have some familiarity with C or Fortran programming; over these two days you will be able to start developing your own parallel application. Right after this short course you will have the opportunity to continue with advanced topics in parallel programming and the everyday use of MPI/OpenMP; for that you will need to register for the advanced course as well.

    Further information and registration: https://events.prace-ri.eu/event/1004/
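    As a flavour of the data-parallel constructs such a course covers, here is a minimal OpenMP sketch in C++ (an illustrative example, not course material; the work-sharing loop and the reduction clause are the two constructs being shown):

      // A minimal OpenMP example: compile with OpenMP enabled (e.g. -fopenmp).
      #include <cstdio>
      #include <omp.h>

      int main() {
          const int n = 1000000;
          double sum = 0.0;

          // Data parallelism: loop iterations are divided among the threads,
          // and the reduction clause combines the per-thread partial sums.
          #pragma omp parallel for reduction(+:sum)
          for (int i = 0; i < n; ++i) {
              sum += 1.0 / (i + 1.0);
          }

          std::printf("harmonic sum = %f (using up to %d threads)\n",
                      sum, omp_get_max_threads());
          return 0;
      }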
  • [POSTPONED] MPI and OpenMP in Scientific Software Development @SURFsara

    4 - 5 June 2020

    https://tess.elixir-europe.org/events/advanced-parallel-programming-with-mpi-and-openmp-surfsara

    NOTE: this is the advanced part of a 4-day course at SURFsara. If you wish to attend the full course, you must also register separately for the basic part (listed above).

    This 2-day advanced course covers the challenges that developers of parallel code face in their everyday work and provides working solutions for them. You will see how to approach parallel profiling and explore the knobs and dials that let your code reach the best possible performance, such as domain decomposition techniques and parallel I/O. Each of these sessions includes hands-on exercises to help you understand the different constructs. You will also gain some insight into useful parallel libraries and routines for scientific code development, such as ParMETIS and PETSc.

    As a participant you should already have basic knowledge of parallel programming with MPI and OpenMP using C or Fortran, as covered in our basic course. If you are not yet familiar with these basic concepts, please register for the basic course first and then you can also register for this advanced part.

    Further information and registration: https://events.prace-ri.eu/event/1005/
  • [POSTPONED] High-performance computing with Python @ JSC

    8 - 10 June 2020

    https://tess.elixir-europe.org/events/high-performance-computing-with-python-jsc-8f572b9d-fe3d-4357-a015-1be3342e47a8

    This course has been postponed to autumn 2020. The exact date will be fixed later.

    Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly. This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the kernel using various tools.

    The following topics will be covered:
    - Interactive parallel programming with IPython
    - Profiling and optimization
    - High-performance NumPy
    - Just-in-time compilation with numba
    - Distributed-memory parallel programming with Python and MPI
    - Bindings to other programming languages and HPC libraries
    - Interfaces to GPUs

    This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.

    Prerequisites: good working knowledge of Python and NumPy.

    Application: registrations are only considered until 7 May 2020. Due to the available space, the maximum number of participants is limited; applicants will be notified whether they are accepted for participation.

    Instructors: Dr. Jan Meinke, Dr. Olav Zimmermann, JSC

    Contact: for any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de

    Further information and registration: https://events.prace-ri.eu/event/982/
  • [ONLINE] HPC code optimisation workshop @ LRZ

    8 - 10 June 2020

    https://tess.elixir-europe.org/events/hpc-code-optimisation-workshop-lrz-8793a2fd-4991-4524-97f3-e424f669005a

    This course will be delivered as an ONLINE COURSE for remote participation because of the COVID-19 measures enforced by most European governments. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided only to registered and accepted attendees. The AGENDA may still be adapted to the online format. Please note that the times have been changed to 10:00-12:00 & 13:00-16:00 CEST.

    Contents
    Given the ever-growing complexity of computer architectures, code optimization has become the main route to keeping pace with hardware advancements and effectively making use of current and upcoming High Performance Computing systems. Have you ever asked yourself: Where does the performance of my application lie? What is the maximum speed-up achievable on the architecture I am using? Is my implementation matching the HPC objectives?

    In this workshop we will answer these questions and provide a unique opportunity to learn techniques, methods and solutions for improving code, enabling new hardware features, and using the roofline model to visualize the potential benefits of an optimization process. We will begin with a description of the latest micro-processor architectures and how developers can efficiently use modern HPC hardware, in particular the vector units via SIMD programming and AVX-512 optimization, and the memory hierarchy. The attendees are then guided through the optimization process by means of hands-on exercises and learn how to enable vectorization using simple pragmas and more effective techniques, such as changing the data layout and alignment. The work is guided by hints from the Intel® compiler reports and by Intel® Advisor. Besides Intel® Advisor, participants will also be introduced to Intel® VTune™ Amplifier, Intel® Application Performance Snapshot and LIKWID as tools for investigating and improving the performance of an HPC application. We further cover the Intel® Math Kernel Library (MKL) to show how to gain performance through the use of libraries. We provide an N-body code to support the described optimization solutions with practical hands-on exercises. You can ask Intel in the Q&A session about how to optimise your code; please provide a description of your code in the registration form.

    Learning Goals
    Through a sequence of simple, guided examples of code modernization, the attendees will develop awareness of the features of multi- and many-core architectures which are crucial for writing modern, portable and efficient applications. A special focus will be dedicated to scalar and vector optimizations for the latest Intel® Xeon® Scalable processor, code-named Skylake, used in the SuperMUC-NG machine at LRZ. The workshop interleaves frontal and practical sessions.

    Here is a preliminary outline:
    Day 1: Introduction to systems and software; Code modernization approach; Scalar optimization; Compiler autovectorization; Data layout from AoS to SoA; Memory access optimization; SDLT (Intel® SIMD Layout Templates) / Explicit vectorization / Skylake optimization
    Day 2: Introduction to the roofline model; Intel® Advisor analysis; Intel® Math Kernel Library (MKL) and other libraries
    Day 3: Introduction to Intel® VTune™ Amplifier; Introduction to Intel® Application Performance Snapshot (APS); LIKWID ("Like I Knew What I'm Doing") performance tools; Q&A session

    Please use your own laptop or PC (with X11 support and an ssh client installed) for the hands-on sessions. For GUI applications we require the installation of vncviewer (https://www.realvnc.com/en/connect/download/viewer/). The workshop is a PRACE training event organized by LRZ in cooperation with Intel and RRZE.

    About the Lecturers
    Fabio Baruffa is a senior software application engineer at Intel. He provides customer support in the high-performance computing (HPC) area and artificial intelligence software solutions at large scale. He collaborates with several research institutes in Europe to develop prototypes of quantum computing algorithm simulations running on current HPC systems. Prior to joining Intel, he worked as an HPC application specialist and developer in the largest supercomputing centres in Europe, mainly the Leibniz Supercomputing Centre and the Max Planck Computing and Data Facility in Munich, as well as Cineca in Italy. He has been involved in software development, analysis of scientific code and optimization for HPC systems. He holds a PhD in Physics from the University of Regensburg for his research in spintronics devices and quantum computing.

    Gennady Fedorov is a Technical Consulting Engineer supporting technical and Intel Performance Libraries (IPP, MKL and DAAL) within the Intel Architecture, Graphics and Software Group at Intel in Russia. His focus areas are image processing, cryptography, compression techniques, High Performance Computing and Artificial Intelligence.

    Thomas Gruber (né Röhl) gained experience with all kinds of clustering approaches during his apprenticeship at the Erlangen Regional Computing Center (RRZE), the IT service provider for the Friedrich-Alexander-University Erlangen-Nuernberg (FAU). Afterwards he studied Computer Science at RWTH Aachen University with an emphasis on parallel programming and operating system kernel development. At the same time he worked as a research assistant for the HPC group of the RWTH IT center. After receiving his M.Sc. degree he returned to RRZE to work for the HPC group. Thomas Gruber leads the development of the performance tool suite LIKWID, which comprises easy-to-use tools for hardware performance monitoring, affinity control and micro-benchmarking. He also works on projects involving monitoring and analysis of hardware performance data.

    Carla Guillen works as a researcher in the application support group at LRZ. She obtained her PhD in computer science at the Technische Universitaet Muenchen in 2015. She joined LRZ in 2009 and has been working in the fields of system-wide performance monitoring and energy optimization of large-scale clusters.

    Gerald Mathias has worked in application support for the HPC systems at LRZ since 2015 and leads the Biolab@LRZ. After his PhD in Computational Biophysics at LMU Munich, he joined the chair of Theoretical Chemistry at RUB in Bochum as a postdoc. He is experienced in the development and optimization of highly parallel ab initio and force-field-based molecular dynamics codes, both in Fortran and C.

    Michael Steyer is a Technical Consulting Engineer supporting technical and High Performance Computing segments within the Intel Architecture, Graphics and Software Group at Intel in Germany. His focus areas are High Performance Computing and Artificial Intelligence.

    Igor Vorobtsov has more than 11 years of experience in the areas of C/C++ and Fortran compilers, application tuning and developer support. He holds a Master of Science degree in Applied Mathematics. Since joining Intel in 2008, Igor has worked as a Technical Consulting Engineer supporting software developers throughout the EMEA region. He has a broad range of application experience, including enterprise applications and high-performance computing environments.

    Further information and registration: https://events.prace-ri.eu/event/1003/
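    The Day 1 topic "Data layout from AoS to SoA" can be illustrated with a short, hedged C++ sketch (not workshop material): a structure-of-arrays layout gives the hot loop unit-stride memory access, which is what compiler auto-vectorization wants to see.

      #include <cstdio>
      #include <vector>

      // Array of Structures (AoS): the x, y, z of one particle sit next to each
      // other, so a loop touching only x strides through memory.
      struct ParticleAoS { double x, y, z; };

      // Structure of Arrays (SoA): all x values are contiguous, so the loop
      // below makes unit-stride accesses that the compiler can vectorize.
      struct ParticlesSoA {
          std::vector<double> x, y, z;
      };

      int main() {
          const std::size_t n = 1 << 20;
          ParticlesSoA p{std::vector<double>(n, 1.0),
                         std::vector<double>(n, 2.0),
                         std::vector<double>(n, 3.0)};

          const double shift = 0.5;
          // Unit-stride loop over contiguous data: a good candidate for
          // auto-vectorization (check the compiler's optimization report).
          for (std::size_t i = 0; i < n; ++i) {
              p.x[i] += shift;
          }

          std::printf("x[0] = %f\n", p.x[0]);
          return 0;
      }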
  • [ONLINE] Parallel Programming with OpenMP and MPI @ VSC Belgium & SURFsara

    9 - 12 June 2020

    https://tess.elixir-europe.org/events/online-parallel-programming-with-openmp-and-mpi-vsc-belgium-surfsara

    Overview
    The Flemish Supercomputing Center (VSC, Belgium) and SURFsara (The Netherlands) are organizing this online course on parallel programming using OpenMP and MPI. The materials of this course have been developed by Dr. Rolf Rabenseifner (HLRS, Stuttgart).

    Course format
    The training sessions will be provided online using the Blackboard Collaborate system. More information on how to access the system and how to follow the course will be provided soon.

    Course level
    Basic content: 40%, intermediate content: 40%, advanced content: 20%. Every day ends with a round of questions and answers.

    Indicative timetable
    Tue 9th of June: 09:00 Welcome and presentation; 09:10 Introduction to parallel computing; 10:00 An overview of OpenMP: execution model; 10:30 Coffee break; 10:45 Worksharing directives; 12:30 Lunch break; 13:30 Data environment and combined constructs; 15:00 Coffee break; 15:15 Optimization and pitfalls; 16:00 Summary / Questions and Answers; 16:30 End of first day
    Wed 10th of June: 09:00 MPI Overview; 10:30 Coffee break; 10:45 MPI Process Model; 11:30 Messages and Point-to-Point Communication; 12:30 Lunch break; 13:30 Messages and Point-to-Point Communication (cont.); 14:00 Non-Blocking Communication; 15:00 Coffee break; 15:15 Error handling and the new Fortran module; 16:00 Summary / Questions and Answers; 16:30 End of second day
    Thu 11th of June: 09:00 Collective Communication; 10:30 Coffee break; 10:45 Groups & Communicators; 12:30 Lunch break; 13:30 Virtual Topologies; 15:00 Coffee break; 15:15 Derived Datatypes; 16:00 Summary / Questions and Answers; 16:30 End of third day
    Fri 12th of June: 09:00 One-sided Communication; 10:30 Coffee break; 10:45 Shared Memory; 12:30 Lunch break; 13:30 MPI + Threads; 14:15 Parallel I/O; 15:00 Coffee break; 15:15 Best practices; 16:00 Summary / Questions and Answers; 16:25 Closing; 16:30 End of fourth day

    Further information and registration: https://events.prace-ri.eu/event/1028/
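    The non-blocking communication block on the Wednesday can be sketched roughly as follows (a hedged C++ illustration, not course material): a ring exchange with MPI_Isend/MPI_Irecv that cannot deadlock and leaves room to overlap computation with communication.

      #include <mpi.h>
      #include <cstdio>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);

          int rank = 0, size = 0;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          // Each rank sends its number to the right neighbour and receives from
          // the left neighbour (a ring). Non-blocking calls let both transfers
          // proceed without risking deadlock.
          const int right = (rank + 1) % size;
          const int left  = (rank - 1 + size) % size;

          int sendbuf = rank, recvbuf = -1;
          MPI_Request reqs[2];
          MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
          MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);

          // Useful computation could overlap with the communication here.

          MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
          std::printf("rank %d received %d from rank %d\n", rank, recvbuf, left);

          MPI_Finalize();
          return 0;
      }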
  • [ONLINE] Working effectively with HPC systems @ SNIC

    10 June 2020

    https://tess.elixir-europe.org/events/online-working-effectively-with-hpc-systems

    In brief
    The seminar will present useful tools and best practices for working effectively on HPC systems. It should be of interest to general HPC users at both intermediate and beginner level. To participate, register below.

    Introduction
    Working efficiently with HPC starts with the tools you use to interact with the HPC system. It is also helpful to understand the general anatomy of HPC systems and storage. Following on from these fundamentals, we will give some recommendations for data organization on the system and examples of various types of file systems (e.g. parallel vs. local) and their individual strengths and weaknesses. We will then discuss the concepts of parallelism, scalability and scheduling, and what kinds of OS and software you can expect on HPC systems. We will go through some important things to consider when building and installing software. Finally, we will look at different ways of running software on HPC systems and ways to monitor your software as it is running, with the aim of ensuring that your jobs are not poorly configured or wasting resources. While the content and the practices are useful for HPC systems in general, we will show examples and tools specific to the NSC clusters, e.g. Tetralith and Sigma.

    Schedule
    The day is divided into two main parts, before and after the lunch break. Each part includes several blocks of 20-40 minutes with breaks in between, and each block will include opportunities for questions.
    10:00-12:00 Part I
    12:00-13:00 Lunch
    13:00-15:00 Part II

    Topics/blocks (preliminary):
    - Welcome, introductions and practicalities
    - Tools at your end (e.g. terminal, ssh config., file transfer tools, VNC)
    - HPC system anatomy (login and compute nodes, interconnect, storage)
    - Properties and features of storage areas (e.g. quotas, performance, locality, backups, snapshots, scratch)
    - Concept of parallelism (Amdahl's law), scalability, scheduling and practical advice for good performance
    - Software on an HPC system (OS, modules, Python envs., concept of build envs., containers with Singularity)
    - Ideas and strategies for organizing your workflow (data and file management, traceability and reproducibility)
    - Interacting with the Slurm queueing system (requesting resources interactively or in batch)
    - Practical examples (preparing, submitting, monitoring and evaluating job efficiency)

    Presenters: Peter Kjellström, Weine Olovsson, Torben Rasmussen, Hamish Struthers, all at NSC, LiU, Sweden.

    Further information and registration: https://events.prace-ri.eu/event/1026/
  • Introduction to the new accelerated partition of Marconi, for users and developers @ CINECA

    15 - 17 June 2020

    https://tess.elixir-europe.org/events/introduction-to-new-accelerated-partition-of-marconi-for-users-and-developers-cineca

    Description: This course intends to support the scientific community in efficiently exploiting the architecture of the new accelerated partition of the Marconi system. More precisely, the course aims at providing a full description of its configuration, with special emphasis on the aspects most crucial for users and application developers. For instance, details about compilation, debugging and optimization procedures will be provided, together with an overview of the libraries, tools and applications available on the system. Examples of job submission will be discussed, together with scheduler commands and queue definitions.

    Skills: By the end of the course each student should be able to:
    - compile code on this architecture in a performant way
    - run a code taking advantage of the accelerated resources
    - move around the configured HPC environment with ease

    Target audience: researchers and programmers who want to use this new accelerated partition of Marconi.

    Prerequisites: none.

    Grant: A grant of 200 EUR (for foreign students) and 100 EUR (for Italian students) will be available for participants not funded by their institution and not working in the Bologna area. Some documentation will be required, and the grant will be paid only after a certified attendance of at least 80% of the lessons, about one month after the end of the course. For further information about how to apply for the grant, please wait for the confirmation email stating that you have been accepted to the course, sent about 3 weeks before the lessons begin. Lunch for the 2 days will be provided by Cineca.

    Coordinating Teacher: Dr. S. Giuliani

    Further information and registration: https://events.prace-ri.eu/event/975/
  • [POSTPONED] High-performance scientific computing in C++ @ JSC

    15 - 17 June 2020

    https://tess.elixir-europe.org/events/high-performance-scientific-computing-in-c-jsc-7577a0b2-ea9c-4d4d-831c-b75adb7ec714

    This course has been postponed to autumn 2020.

    Modern C++, with its support for procedural, object-oriented, generic and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance by today's standards requires understanding and exploiting multiple levels of parallelism, as well as understanding C++ code from a performance-centric viewpoint.

    In this course, participants will learn how to write C++ programs that better utilize typical HPC hardware resources of the present day. The course is geared towards scientists and engineers already familiar with C++17 (at the very least C++14) who wish to develop maintainable and fast applications. They will learn techniques to better utilize CPU caches, instruction pipelines, SIMD functionality and multi-threading. Shared-memory parallel programming on multiple CPU cores will be introduced using the parallel STL of C++17 and Intel(R) Threading Building Blocks. The participants will also learn basic GPGPU programming in C++ using NVIDIA CUDA and Thrust.

    Prerequisites: good working knowledge of C++, especially the C++14 standard. Please use the provided self-test questions to check whether your C++ knowledge fulfils the requirements.

    Application: registrations are only considered until 15 May 2020. Due to the available space, the maximum number of participants is limited; applicants will be notified whether they are accepted for participation.

    Instructor: Dr. Sandipan Mohanty, JSC

    Contact: for any questions concerning the course please send an e-mail to s.mohanty@fz-juelich.de

    Further information and registration: https://events.prace-ri.eu/event/984/
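    As a hedged taste of the C++17 parallel STL mentioned above (illustrative only, not course material; whether it actually runs in parallel depends on the standard library's backend, e.g. TBB with GCC), a reduction can be parallelized simply by passing an execution policy:

      #include <cstdio>
      #include <execution>
      #include <numeric>
      #include <vector>

      int main() {
          std::vector<double> v(10'000'000, 1.0);

          // std::reduce with std::execution::par may split the work across
          // threads; with GCC this typically requires linking against TBB.
          const double sum =
              std::reduce(std::execution::par, v.begin(), v.end(), 0.0);

          std::printf("sum = %f\n", sum);
          return 0;
      }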
  • [ONLINE] Advanced topics in scientific visualization with Blender: geometry, scripts, animation, action! @ SURFsara

    15 - 26 June 2020

    https://tess.elixir-europe.org/events/online-advanced-topics-in-scientific-visualization-with-blender-geometry-scripts-animation-action-surfsara

    Overview
    In this course you will learn the basics of the Python API in Blender. You will learn how the API is structured and eventually start making some simple geometry with code. Another important part of the course is that you will work on a visualisation based on your own research or other endeavours. The course is set up to run over two weeks, beginning with an introductory webinar. During the course period you work on the assignments in your own time. We provide a communication channel to the course teachers that you can use for questions and feedback during the course. There are also several webinars where we discuss your projects and make sure you are able to make progress. Part of the course material, such as our YouTube videos and tutorials, is available outside the course timetable.

    Your own project and data
    To get the most out of the course we would like you to bring a dataset from your own work that you want to visualise with Blender. This course will give you the skills to start on your own project. During the interactive moments (webinars and chat channel) our visualisation team will help you further with your specific project and challenges. So please collect some data you think would be great to turn into a 3D visualisation. We will also expect input from you during the webinars: we would like you to present your visualisation goals, your data and your first attempt at making a visualisation with Blender.

    This course is a pilot
    Due to the Covid-19 situation we have decided to start an online course as an alternative to our regular on-site courses. This course module is intentionally small because we want to test whether we can effectively use online tools for our training programmes, especially for a hands-on topic like using Blender. We hope that you are open to giving us feedback on the setup and execution of this course; the next parts of the course will be improved based on your input.

    All the contents and more detailed information: please visit the main web page of this course on the SURFsara GitHub pages: https://surfsara-visualization.github.io/blendercourses/

    Further information and registration: https://events.prace-ri.eu/event/1032/
  • [ONLINE] Deep Learning and GPU programming workshop

    15 - 18 June 2020

    https://tess.elixir-europe.org/events/deep-learning-and-gpu-programming-workshop-lrz-ba641b1b-54b7-4b0e-82d4-770259954fd0

    This course will be delivered as an ONLINE COURSE for remote participation because of the COVID-19 measures enforced by most European governments. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided only to registered and accepted attendees. The workshop will take place online 10:00-12:00 and 13:00-16:00 CEST each day. If you want to be put on the waiting list, please contact weinberg@lrz.de.

    Overview
    Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips, how to effectively parallelize training of deep neural networks on multiple GPUs, and how to accelerate your applications with CUDA C/C++ and OpenACC. This 4-day workshop combines lectures on the fundamentals of deep learning for multiple data types and multi-GPU training with lectures on accelerated computing with CUDA C/C++ and OpenACC. The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.

    The workshop is co-organized by LRZ, IT4Innovations and the NVIDIA Deep Learning Institute (DLI) for the Partnership for Advanced Computing in Europe (PRACE). LRZ, as part of GCS, and IT4Innovations are both PRACE Training Centres, which serve as European hubs and key drivers of advanced high-quality training for researchers working in the computational sciences. NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning. All instructors are NVIDIA-certified University Ambassadors.

    Agenda
    1st day: Fundamentals of Deep Learning for Multiple Data Types. This day explores how convolutional and recurrent neural networks can be combined to generate effective descriptions of content within images and video clips. Learn how to train a network using TensorFlow and the Microsoft Common Objects in Context (COCO) dataset to generate captions from images and video by: implementing deep learning workflows like image segmentation and text generation; comparing and contrasting data types, workflows, and frameworks; combining computer vision and natural language processing. Upon completion, you'll be able to solve deep learning problems that require multiple types of data inputs.
    2nd day: Fundamentals of Accelerated Computing with OpenACC. On this day you learn the basics of OpenACC, a high-level, directive-based programming model for GPUs. Discover how to accelerate the performance of your applications beyond the limits of CPU-only programming with simple pragmas. You'll learn: how to profile and optimize your CPU-only applications to identify hot spots for acceleration; how to use OpenACC directives to GPU-accelerate your codebase; how to optimize data movement between the CPU and the GPU accelerator. Upon completion, you'll be ready to use OpenACC to GPU-accelerate CPU-only applications.
    3rd day: Fundamentals of Accelerated Computing with CUDA C/C++. The CUDA computing platform enables the acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. On this day you experience C/C++ application acceleration by: accelerating CPU-only applications to expose their latent parallelism on GPUs; utilizing essential CUDA memory management techniques to optimize accelerated applications; exposing accelerated application potential for concurrency and exploiting it with CUDA streams; leveraging command-line and visual profiling to guide and check your work. Upon completion, you'll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques, and you'll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.
    4th day: Fundamentals of Deep Learning for Multi-GPUs. The computational requirements of the deep neural networks used to enable AI applications like self-driving cars are enormous. A single training cycle can take weeks on a single GPU, or even years for the larger datasets used in self-driving car research. Using multiple GPUs for deep learning can significantly shorten the time required to train on large amounts of data, making solving complex problems with deep learning feasible. On this day we will teach you how to use multiple GPUs to train neural networks. You'll learn: approaches to multi-GPU training; algorithmic and engineering challenges of large-scale training; key techniques used to overcome these challenges. Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow.

    Important information
    After you are accepted, please create an account under courses.nvidia.com/join. Ensure your laptop or PC will run smoothly by going to http://websocketstest.com/ and making sure that WebSockets work for you: under Environment, "WebSockets is supported", and under Data Receive, Send and Echo Test, all checks say Yes for WebSockets (Port 80). If there are issues with WebSockets, try updating your browser. If you have any questions, please contact Marjut Dieringer at mdieringer"at"nvidia.com.

    PRACE Training and Education
    The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE has an extensive education and training effort through seasonal schools, workshops and scientific and industrial seminars throughout Europe. Seasonal schools target broad HPC audiences, whereas workshops are focused on particular technologies, tools, disciplines or research areas.

    NVIDIA Deep Learning Institute
    The NVIDIA Deep Learning Institute delivers hands-on training for developers, data scientists, and engineers. The program is designed to help you get started with training, optimizing, and deploying neural networks to solve real-world problems across diverse industries such as self-driving cars, healthcare, online services, and robotics.

    Further information and registration: https://events.prace-ri.eu/event/1007/
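    To give a flavour of the directive-based approach covered on the OpenACC day, here is a minimal, hedged C++ sketch (not workshop material): a single pragma asks an OpenACC compiler to offload the loop and manage the data transfers.

      #include <cstdio>
      #include <vector>

      int main() {
          const int n = 1 << 20;
          std::vector<float> x(n, 1.0f), y(n, 2.0f);
          const float a = 3.0f;

          float* xp = x.data();
          float* yp = y.data();

          // With an OpenACC compiler (e.g. the NVIDIA HPC SDK) this pragma asks
          // for the data to be moved to the GPU and the loop to run there;
          // without OpenACC support the pragma is ignored and the loop runs on
          // the CPU.
          #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
          for (int i = 0; i < n; ++i) {
              yp[i] = a * xp[i] + yp[i];
          }

          std::printf("y[0] = %f\n", yp[0]);
          return 0;
      }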
  • [ONLINE] Introduction to Hybrid Programming in HPC @ VSC Vienna & LRZ & HLRS

    17 - 19 June 2020

    https://tess.elixir-europe.org/events/introduction-to-hybrid-programming-in-hpc-vsc-vienna

    This course will be delivered as an ONLINE COURSE for remote participation because of the COVID-19 measures enforced by most European governments. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided only to registered and accepted attendees. Furthermore, accepted participants will be contacted about two weeks in advance and asked to provide some more data for the hands-on labs, needed for user account creation on the VSC clusters. At that time there will also be a short PRE-ASSIGNMENT that has to be completed before the course starts.

    Overview
    Most HPC systems are clusters of shared memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Therefore, hybrid programming may combine distributed-memory parallelization on the node interconnect (e.g., with MPI) with shared-memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 introduced a new shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming. Hands-on sessions are included on all days. Tools for hybrid programming such as thread/process placement support and performance analysis are presented in a "how-to" section. This course provides scientific training in Computational Science and, in addition, the scientific exchange of the participants among themselves.

    Content levels: beginners = 0:00h (0%), intermediate = 1:00h (8%), advanced = 11:00h (92%). (Please note that the content levels refer to the originally planned two-day face-to-face course.)

    This course is a PRACE training event. It is organized by the VSC Research Center, TU Wien, in cooperation with LRZ, HLRS and RRZE.

    Agenda & Content
    The online version of the course has been extended to three days (compared to the originally planned two-day face-to-face course) to allow for more time and flexibility in doing the hands-on labs remotely.
    1st day – 17 June 2020: 08:45 Join online; 09:00 Welcome; 09:05 Motivation; 09:15 Introduction; 09:45 Programming Models; 09:50 MPI + OpenMP; 10:30 Coffee; 11:00 continue: MPI + OpenMP; 12:00 Practical (how to compile and start); 13:00 Lunch; 14:30 Practical (hybrid through OpenMP parallelization); 16:00 End of first day
    2nd day – 18 June 2020: 08:45 Join online; 09:00 Overlapping Communication and Computation; 09:30 Practical (taskloops); 10:30 Coffee; 11:00 MPI + OpenMP Conclusions; 11:15 MPI + Accelerators; 11:45 Tools; 12:00 Programming Models (continued); 12:05 MPI + MPI-3.0 Shared Memory; 13:00 Lunch; 14:30 Practical (replicated data); 16:00 End of second day
    3rd day – 19 June 2020: 08:45 Join online; 09:00 MPI Memory Models and Synchronization; 09:40 Pure MPI; 10:00 Recap - MPI Virtual Topologies; 10:30 Coffee; 11:00 Topology Optimization; 11:30 Practical/Demo (application-aware Cartesian topology); 12:30 Topology Optimization (wrap-up); 12:45 Conclusions; 13:00 Lunch; 14:30 Finish the hands-on labs, Discussion, Q & A, Feedback; 16:00 End of third day (course)

    Further information and registration: https://events.prace-ri.eu/event/1009/
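    The core hybrid MPI+OpenMP pattern the course revolves around can be sketched as follows (a hedged C++ illustration, not course material; it assumes the funneled threading level, where only the master thread makes MPI calls):

      #include <mpi.h>
      #include <omp.h>
      #include <cstdio>

      int main(int argc, char** argv) {
          // Request MPI_THREAD_FUNNELED: several OpenMP threads per process,
          // but only the master thread makes MPI calls.
          int provided = 0;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          if (provided < MPI_THREAD_FUNNELED) {
              std::printf("warning: requested threading level not available\n");
          }

          int rank = 0, nranks = 0;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          // Shared-memory parallelism inside the node ...
          #pragma omp parallel
          {
              #pragma omp master
              std::printf("rank %d of %d runs %d OpenMP threads\n",
                          rank, nranks, omp_get_num_threads());
          }

          // ... distributed-memory communication between nodes would use MPI
          // calls here (issued by the master thread only, as requested above).

          MPI_Finalize();
          return 0;
      }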
  • [ONLINE] Efficient Parallel Programming with GASPI @ HLRS

    18 - 19 June 2020

    https://tess.elixir-europe.org/events/efficient-parallel-programming-with-gaspi-hlrs-3acff6f4-6bb7-4f7c-9e66-f0e0b0c799ab

    Overview
    In this tutorial we present an asynchronous data-flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API. It is designed as a C/C++/Fortran library and focused on three key objectives: scalability, flexibility and fault tolerance. In order to achieve its much-improved scaling behaviour, GASPI aims at asynchronous data flow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com). GASPI is successfully used in academic and industrial simulation applications.

    Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/997/
  • [ONLINE] Node-Level Performance Engineering @ HLRS

    29 June - 1 July 2020

    https://tess.elixir-europe.org/events/node-level-performance-engineering-hlrs-a71c9c00-22ea-4da8-b071-b7886cd7acc0

    Overview
    This course teaches performance engineering approaches at the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code that does the actual computational work gets executed. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption.

    This course provides scientific training in Computational Science and, in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/1006/
  • [CANCELLED] Parallel Programming with Python @ BSC

    6 - 8 July 2020

    https://tess.elixir-europe.org/events/parallel-programming-with-python-bsc

    Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Xavier Martorell, CS/Programming Models
    Course lecturers: Rosa M. Badia, CS/Workflows and Distributed Computing; Bryan Jiménez (University of Utrecht); Joan Verdaguer-Codina (COEIC)
    Location: UPC Campus Nord premises, Vertex Building, Room VS208
    Level: BASIC, for students with little previous experience with Python
    Prerequisites: basic Python programming; all examples of the course will be presented in Python.

    Objectives: to understand the basic concepts of programming with Python and its support for parallelism.

    Learning outcomes: students who finish this course will be able to develop simple parallel benchmarks with Python, analyze their execution, and tune their behaviour on parallel architectures.

    Agenda:
    Day 1 (Monday, July 6th, 2020)
    Session 1 / 9:30 am - 1:00 pm (2 h lectures, 1 h practical): 1. Introduction to parallel programming and Python; 11:00 Coffee break; 2. Practical: how to compile and run Python applications
    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical): 1. Scientific Python: NumPy, SciPy, Matplotlib, Bokeh; 16:00 Coffee break; 2. Practical: simple Python programs and optimizations
    Day 2 (Tuesday, July 7th, 2020)
    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical): 1. Parallelism in Python: shared memory; 2. Introduction to performance analysis; Paraver: a tool to analyze and understand performance; 3. Python pools; 11:00 Coffee break; Practical: examples of Python parallelism
    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical): 1. Distributed memory; visualizing distributed environments with Paraver; 2. Python queues; 16:00 Coffee break; 3. Practical: trace generation and trace analysis; 4. Practical: environment on RPi
    Day 3 (Wednesday, July 8th, 2020)
    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2 h practical): 1. Introduction to PyCOMPSs; 2. PyCOMPSs syntax; 11:00 Coffee break; 3. PyCOMPSs hands-on
    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical): 1. PyCUDA and support for accelerators; 2. Debugging; 16:00 Coffee break; 3. Hands-on with PyCUDA
    END of COURSE

    Further information: https://events.prace-ri.eu/event/999/
  • [ONLINE] Reproducible computational environments using containers @ EPCC

    13 - 14 July 2020

    https://tess.elixir-europe.org/events/reproducible-computational-environments-using-containers-epcc-online

    This course aims to introduce the use of containers with the goal of using them to create reproducible computational environments. Such environments are useful for ensuring reproducible research outputs and for simplifying the setup of complex software dependencies across different systems. The course will mostly be based around the use of Docker containers, but the material will be of use for whatever container technology you plan to use, or end up using. We will also briefly introduce the Singularity container environment, which is compatible with Docker and designed for use on multi-user systems (such as HPC resources).

    On completion of this course attendees should:
    - understand what containers are and what they are used for
    - understand how to manage and create Docker containers
    - appreciate the decisions that need to be made around containerising research workflows
    - understand the differences between Docker and Singularity containers and why Singularity is more suitable for multi-user systems (e.g. HPC)
    - understand how to manage and create Singularity containers
    - appreciate how containers can be used to enable and improve reproducibility in research

    Trainers: Andy Turner (EPCC) and Jeremy Cohen (Imperial College)

    This course has been extremely popular and is fully booked, with a long waiting list. We will not be accepting further registrations for this run of the course, but we hope to run it again soon.

    Further information: https://events.prace-ri.eu/event/962/
  • Introduction to PETSc @ MdlS/Idris

    10 - 11 September 2020

    https://tess.elixir-europe.org/events/introduction-to-petsc-mdls-idris-624caaa7-0a2e-4ef9-a356-9b77de431a2c

    The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations (www.mcs.anl.gov/petsc/). It enables researchers to delegate the linear algebra part of their applications to a specialized team and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran).

    Learning outcomes: on completion of this course, the participant should
    - be able to build and solve simple PDE examples
    - use and compare different solvers on these examples
    - be familiar with using the online documentation
    - be able to easily explore other PETSc possibilities relevant to his/her application.

    Prerequisites: C or Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    Further information and registration: https://events.prace-ri.eu/event/891/
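    As a hedged illustration of what getting started with PETSc can look like (a sketch only, not course material; error checking is omitted for brevity and a working PETSc installation is assumed), the following assembles a small tridiagonal system and solves it with a KSP solver that can be reconfigured from the command line:

      #include <petscksp.h>

      int main(int argc, char** argv) {
          PetscInitialize(&argc, &argv, NULL, NULL);

          const PetscInt n = 10;   // size of the 1D Laplacian test matrix
          Mat A;
          Vec b, x;

          // Assemble a tridiagonal matrix (-1, 2, -1) over the locally owned rows.
          MatCreate(PETSC_COMM_WORLD, &A);
          MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
          MatSetFromOptions(A);
          MatSetUp(A);
          PetscInt Istart, Iend;
          MatGetOwnershipRange(A, &Istart, &Iend);
          for (PetscInt i = Istart; i < Iend; ++i) {
              if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
              if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
              MatSetValue(A, i, i, 2.0, INSERT_VALUES);
          }
          MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
          MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

          // Right-hand side of all ones, and the solution vector.
          VecCreate(PETSC_COMM_WORLD, &b);
          VecSetSizes(b, PETSC_DECIDE, n);
          VecSetFromOptions(b);
          VecSet(b, 1.0);
          VecDuplicate(b, &x);

          // The solver is configurable at run time, e.g. -ksp_type cg -pc_type jacobi.
          KSP ksp;
          KSPCreate(PETSC_COMM_WORLD, &ksp);
          KSPSetOperators(ksp, A, A);
          KSPSetFromOptions(ksp);
          KSPSolve(ksp, b, x);

          KSPDestroy(&ksp);
          MatDestroy(&A);
          VecDestroy(&b);
          VecDestroy(&x);
          PetscFinalize();
          return 0;
      }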
  • Introduction to GPU programming with OpenACC @ MdlS/Idris

    17 - 18 September 2020

    https://tess.elixir-europe.org/events/introduction-to-gpu-programming-with-openacc-mdls-idris

    As the use of GPUs in HPC is spreading fast, several programming models are now available. In this tutorial we will introduce the directive-based approach to GPU programming. While being quite simple to implement, it makes it possible to reach performance similar to that of lower-level paradigms. We will present the basic concepts of the OpenACC model through several examples and hands-on sessions, together with the main pitfalls and good practices. Part of the tutorial will be dedicated to more advanced topics such as multi-GPU usage. We will also give some insight into the tools available to assess performance.

    Learning outcomes:
    - ability to execute code on GPUs using OpenACC
    - assessment of the performance of the code

    Requirements:
    - basic knowledge of Fortran and/or C and/or C++
    - no prior knowledge of OpenACC is required

    Further information and registration: https://events.prace-ri.eu/event/1008/
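    One of the basic OpenACC concepts such a tutorial typically dwells on is keeping data resident on the GPU across kernel launches. Here is a hedged C++ sketch (not course material) using a data region around an iteration loop:

      #include <cstdio>
      #include <vector>

      int main() {
          const int n = 1 << 20;
          const int iterations = 100;
          std::vector<float> u(n, 1.0f), f(n, 0.5f);
          float* up = u.data();
          float* fp = f.data();

          // The data region copies u and f to the device once; the kernels
          // inside then work on the device copies, avoiding one transfer per
          // iteration. u is copied back when the region ends.
          #pragma acc data copy(up[0:n]) copyin(fp[0:n])
          {
              for (int it = 0; it < iterations; ++it) {
                  #pragma acc parallel loop present(up[0:n], fp[0:n])
                  for (int i = 0; i < n; ++i) {
                      up[i] = 0.9f * up[i] + 0.1f * fp[i];
                  }
              }
          }

          std::printf("u[0] after %d iterations: %f\n", iterations, up[0]);
          return 0;
      }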
  • Modern Scientific C++ @ MdlS/Idris

    5 - 8 October 2020

    https://tess.elixir-europe.org/events/modern-scientific-c-mdls-idris

    In recent years the C++ language has evolved. Sticking to the 1998/2003 standard means missing many new features that make modern C++ more robust, more powerful, and often more readable. This training introduces the syntactic novelties that make code easier to write, the modernized best practices that avoid the language's traps, and a programming style that is easier to parallelize. It is aimed at scientific programmers who want to discover "modern" C++ (the 2011 to 2020 standards) and adapt their programming practices accordingly.

    Detailed program:
    Day 1 (Victor ALESSANDRINI): Review of some basic C++ concepts; overview of C++ as a software development environment, with two major software engineering strategies: object-oriented programming and generic programming. Object-oriented programming: the virtual function mechanism enabling late binding at execution time (software modules calling newly written routines without recompilation); examples of the power and relevance of virtual functions. Function objects as extended pointers to functions, with examples. Introduction to generic programming: function templates, with examples.
    Day 2 (Victor ALESSANDRINI): Generic programming: class templates, with examples. The core of generic programming: using function and class templates to parameterize behavior rather than just object types. Overview of the Standard Template Library (STL): strategies, containers, iterators, algorithms. Concurrency in the standard C++ library: overview of the thread class; discussion of the new threading interfaces (futures, promises) enabling easy synchronization of simple concurrency patterns.
    Day 3 (David CHAMONT): modern C++ syntax. Basic features: type inference, stronger typing, user-defined literals, uniform initialization, rvalue references, move semantics. Object features: member variable initialization, delegated and inherited constructors, explicit deletion and overriding of member functions. Generic features: static assertions, template variables and type aliasing, constant expressions, variadic templates, perfect forwarding. Functional features: lambda functions.
    Day 4 (David CHAMONT): modern C++ library. Basic tools: smart pointers (unique_ptr, shared_ptr), new collections (array, unordered maps), views (span, string_view), wrapper types (function, ref). Generic tools: type traits, SFINAE, concepts. Functional tools: algebraic types (tuple, variant), monadic types (optional, future), ranges. Optimization and parallelization: beyond double, random numbers, chrono, execution policies, structures of arrays, co-routines.

    Prerequisites: knowledge of classical (pre-11) C++ syntax and basic experience in programming with C++ objects. Participants should be familiar with the following concepts:
    - namespaces
    - references, and how they differ from pointers
    - basic memory allocation (new and delete)
    - properties of C++ functions (default values, overloading)
    - basic understanding of error handling (exceptions)
    - C++ classes, programming with objects, public derivation of classes
    - basic knowledge of templates

    Further information and registration: https://events.prace-ri.eu/event/977/
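    A hedged taste of a few of the features on this syllabus (illustrative only, not course material): type inference, uniform initialization, lambdas, smart pointers and structured bindings in a few lines.

      #include <algorithm>
      #include <cstdio>
      #include <map>
      #include <memory>
      #include <string>
      #include <vector>

      int main() {
          // Uniform initialization and type inference (auto).
          std::vector<double> values{3.0, 1.0, 2.0};

          // A lambda function passed to a standard algorithm.
          std::sort(values.begin(), values.end(),
                    [](double a, double b) { return a > b; });

          // A smart pointer: ownership is explicit and released automatically.
          auto buffer = std::make_unique<std::vector<double>>(values);

          // Structured bindings (C++17) when iterating over a map.
          std::map<std::string, int> counts{{"errors", 0}, {"runs", 3}};
          for (const auto& [name, count] : counts) {
              std::printf("%s: %d\n", name.c_str(), count);
          }

          std::printf("largest value: %f\n", buffer->front());
          return 0;
      }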
  • [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS

    12 - 16 October 2020

    https://tess.elixir-europe.org/events/online-parallel-programming-workshop-train-the-trainer-hlrs

    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming. Too few people can provide parallel programming courses at the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres so that they can also provide such courses for the whole of Europe, or at least within their own countries.

    Prerequisites
    You are familiar with parallel programming with MPI and OpenMP at an advanced level and skilled in both the C and Fortran programming languages.

    Your goal: you want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) your centre supports you in providing such PhD courses in a course room at your centre; the course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to cooperate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide? We provide you with all the necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses. We provide all exercise material. You will attend the lectures so that you become familiar with the training material. During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they have been stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.

    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. It starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/1031/
  • [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS

    12 - 16 October 2020

    https://tess.elixir-europe.org/events/online-parallel-programming-workshop-mpi-openmp-and-advanced-topics-hlrs

    Distributed-memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part): On clusters and distributed-memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. This part gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of MPI.

    Shared-memory parallelization with OpenMP (Tue, for beginners – non-PRACE part): The focus is on shared-memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared-memory, and ccNUMA platforms. This part teaches shared-memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course): Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. These models are compared in the hybrid mixed-model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page: https://events.prace-ri.eu/event/1030/
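    The MPI-3.0 shared memory interface mentioned above can be sketched roughly as follows (a hedged C++ example, not workshop material; error handling omitted): ranks on the same node allocate a shared window and read each other's contribution directly instead of exchanging messages.

      #include <mpi.h>
      #include <cstdio>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);

          // Group the ranks that share a node; only those can share memory.
          MPI_Comm nodecomm;
          MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                              MPI_INFO_NULL, &nodecomm);

          int noderank = 0, nodesize = 0;
          MPI_Comm_rank(nodecomm, &noderank);
          MPI_Comm_size(nodecomm, &nodesize);

          // Each rank contributes one integer to a window shared by the node.
          int* mydata = nullptr;
          MPI_Win win;
          MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                  nodecomm, &mydata, &win);

          MPI_Win_fence(0, win);
          *mydata = 100 + noderank;      // write the local element
          MPI_Win_fence(0, win);         // make the writes visible node-wide

          // Query a neighbour's segment and read it directly, much like OpenMP
          // shared memory, with no message passing involved.
          const int neighbour = (noderank + 1) % nodesize;
          MPI_Aint segsize = 0;
          int dispunit = 0;
          int* neighbourdata = nullptr;
          MPI_Win_shared_query(win, neighbour, &segsize, &dispunit, &neighbourdata);
          std::printf("node rank %d reads %d from node rank %d\n",
                      noderank, *neighbourdata, neighbour);

          MPI_Win_free(&win);
          MPI_Comm_free(&nodecomm);
          MPI_Finalize();
          return 0;
      }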
  • Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

    12 - 13 October 2020

    https://tess.elixir-europe.org/events/performance-portability-for-gpu-application-using-high-level-programming-approaches-with-kokkos-mdls-idris

    When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native-hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform and developing a new simulation code are two different tasks that may both benefit from a high-level programming model which abstracts the low-level hardware details. This training presents existing high-level programming solutions that preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...) as well as software development productivity.

    We will provide an introduction to the high-level C++ programming model Kokkos (https://github.com/kokkos) and show basic code examples to illustrate the following concepts through hands-on sessions:
    - hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) derive an efficient low-level implementation;
    - efficient architecture-aware memory containers: what is a Kokkos::View;
    - revisiting fundamental parallel patterns with Kokkos: parallel for, reduce, scan, ...;
    - exploring some mini-applications.

    Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Jean Zay (http://www.idris.fr/jean-zay/), equipped with NVIDIA Tesla V100 GPUs.

    Prerequisites: some basic knowledge of the CUDA programming model and of C++.

    Further information and registration: https://events.prace-ri.eu/event/892/
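    A hedged sketch of the concepts listed above (not training material; it assumes Kokkos is installed and configured with some back-end): a View plus the parallel_for and parallel_reduce patterns, written once and mapped by Kokkos to the chosen hardware.

      #include <Kokkos_Core.hpp>
      #include <cstdio>

      int main(int argc, char** argv) {
          Kokkos::initialize(argc, argv);
          {
              const int n = 1 << 20;

              // A View is an architecture-aware array: its memory lives where
              // the selected back-end executes (host memory, GPU memory, ...).
              Kokkos::View<double*> x("x", n);

              // parallel_for: write the loop body once and let the back-end map
              // it to OpenMP threads, CUDA threads, etc.
              Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
                  x(i) = 1.0 / (i + 1.0);
              });

              // parallel_reduce: the same pattern with a reduction variable.
              double sum = 0.0;
              Kokkos::parallel_reduce("sum", n,
                  KOKKOS_LAMBDA(const int i, double& local) { local += x(i); },
                  sum);

              std::printf("sum = %f\n", sum);
          }
          Kokkos::finalize();
          return 0;
      }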
  • Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    19 - 20 October 2020

    https://tess.elixir-europe.org/events/introduction-to-scalapack-and-magma-libraries-mdls-idris

    The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.

    ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra based on routines for distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all MPI communication is handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC:
    - general introduction to the PBLAS and ScaLAPACK libraries
    - main ideas on how to decompose linear algebra problems in parallel programming
    - examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations
    - examples of basic operations with ScaLAPACK: inversion and diagonalization
    - a main problem based on calculating the exponentiation of a matrix

    MAGMA: In the second part of the course we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.

    Trainers: Donfack Simplice (MAGMA), Hasnaoui Karim (ScaLAPACK)

    Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset.

    Further information and registration: https://events.prace-ri.eu/event/919/
  • Introduction to Deep Learning Models @ JSC

    21 - 23 October 2020

    Introduction to Deep Learning Models @ JSC https://tess.elixir-europe.org/events/introduction-to-deep-learning-models-jsc This course was originally scheduled to take place on 12-14 May 2020 but has since been rescheduled to its current dates due to the Covid-19 pandemic. This course focuses on a recent machine learning method known as deep learning, which has emerged as a promising disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional machine learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas. This course provides an introduction to deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka ‘feature learning’). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real datasets from applications using well-known deep learning frameworks such as Tensorflow, Keras, or Torch. As the learning process with CNNs is extremely compute-intensive, the course will cover how parallel computing can be leveraged to speed up the learning process using general-purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately turn the newly acquired skills into practice. After this course participants will have a general understanding of which problems CNN learning architectures are useful for and how parallel and scalable computing facilitates the learning process when facing big datasets. Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic understanding of the batch scripts required for HPC application submissions, and have a minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an ssh client). Application: Applicants will be notified one month before the course starts whether they are accepted for participation. Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre. Contact: For any questions concerning the course please send an e-mail to g.cavallaro@fz-juelich.de. https://events.prace-ri.eu/event/983/ 2020-10-21 11:00:00 UTC 2020-10-23 14:30:00 UTC [] [] [] workshops_and_courses [] []
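    To make the core CNN operation concrete, the sketch below (not part of the course material) implements a single "valid" 2D convolution pass in plain C++ rather than in a framework such as Tensorflow, Keras or Torch; the input size, kernel size and weights are illustrative assumptions.

        #include <cstdio>
        #include <vector>

        // Discrete 2D convolution (cross-correlation), the building block of a
        // CNN layer. A 3x3 averaging kernel slides over a 5x5 input image.
        int main() {
          const int H = 5, W = 5, K = 3;                // input and kernel sizes (illustrative)
          std::vector<float> in(H * W, 1.0f);           // dummy input image, all ones
          std::vector<float> k(K * K, 1.0f / (K * K));  // simple averaging kernel
          const int OH = H - K + 1, OW = W - K + 1;     // "valid" output size
          std::vector<float> out(OH * OW, 0.0f);

          for (int oy = 0; oy < OH; ++oy)
            for (int ox = 0; ox < OW; ++ox)
              for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                  out[oy * OW + ox] += in[(oy + ky) * W + (ox + kx)] * k[ky * K + kx];

          printf("out[0][0] = %f\n", out[0]);           // expect 1.0 for this input/kernel
          return 0;
        }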
  • [MOOC] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca

    26 - 28 October 2020

    [MOOC] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca https://tess.elixir-europe.org/events/tools-and-techniques-to-quickly-improve-performances-of-hpc-applications-in-solid-earth-cineca This course shows how to improve the overall performance of a code in Solid Earth currently in use at the ChEESE Center of Excellence (an H2020 project). First, parallel performance profiling tools will be applied to the initial version of the code to find the so-called performance bottlenecks. Starting from the profiling analysis, the course shows how and where to intervene with respect to the hardware characteristics of the HPC machine used for the investigation. We will also show how debugging tools are useful in the development/optimization phase to eliminate any bugs introduced when writing (or redesigning) new parts of the code. Finally, it will be shown how to improve the overall performance of the code with respect to other important factors such as I/O, vectorization, etc. Skills: At the end of the course the student will be able to: - use a concrete methodology to improve the performance of a code in Solid Earth already in use in the context of the ChEESE project - find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used - use appropriate debugging tools to eliminate any bugs that may arise during the development/optimization phase. Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that may allow them to improve the performance of their code on current HPC architectures in the shortest possible time. Pre-requisites: - Basic knowledge of LINUX/UNIX. - Knowledge of C, FORTRAN, MPI or OpenMP is recommended. - Notions of parallel computing techniques and algorithms for Solid Earth applications. Grant: Lunch for the three days will be offered to all participants and some grants are available. The only requirements to be eligible are to not be funded by your institution to attend the course and to work or live in an institute outside the Roma area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Roma). Some documentation will be required, and the grant will be paid only after a certified attendance of at least 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date. Coordinating Teacher: Dr. P. Lanucara https://events.prace-ri.eu/event/973/ 2020-10-26 08:00:00 UTC 2020-10-28 17:00:00 UTC [] [] [] workshops_and_courses [] []
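    As a very rough complement to the profiling tools covered in the course (and not a replacement for them), the sketch below shows a crude manual first check one might run before profiling: timing a simple streaming loop and estimating its achieved memory bandwidth. The array size and the loop itself are illustrative assumptions, not taken from the course.

        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main() {
          const std::size_t n = std::size_t(1) << 24;   // ~16M doubles per array (illustrative)
          std::vector<double> a(n, 1.0), b(n, 2.0);

          auto t0 = std::chrono::steady_clock::now();
          for (std::size_t i = 0; i < n; ++i)
            a[i] = a[i] + 3.0 * b[i];                   // simple, vectorizable streaming loop
          auto t1 = std::chrono::steady_clock::now();

          double seconds = std::chrono::duration<double>(t1 - t0).count();
          double bytes = 3.0 * n * sizeof(double);      // read a, read b, write a
          printf("a[n/2] = %f, time = %.4f s, approx. bandwidth = %.2f GB/s\n",
                 a[n / 2], seconds, bytes / seconds / 1e9);
          return 0;
        }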
