21 events found

Content provider: PRACE 

  • [ONLINE] Advanced Fortran Topics @ LRZ

    22 - 25 September 2020

    [ONLINE] Advanced Fortran Topics @ LRZ https://tess.elixir-europe.org/events/online-advanced-fortran-topics-lrz This course will be delivered as an ONLINE COURSE for remote participation because of the COVID-19 measures enforced by most European governments. REGISTRATION is strictly NECESSARY since the details to access the online course will be provided to the registered and accepted attendees only. The workshop will take place online 10:00-12:00 and 13:00-16:00 CEST each day. Overview This course is targeted at scientists who wish to extend their knowledge of Fortran to cover advanced features of the language. Topics covered include (still preliminary): best practices (global objects and interfaces; abstract interfaces and the IMPORT statement; object-based programming); object-oriented programming (type extension, polymorphism and inheritance; binding of procedures to types and objects; generic type-bound procedures; abstract types and deferred bindings); IEEE features and floating-point exceptions; interoperability with C (mixed-language programming patterns); Fortran 2003 I/O extensions; OO design patterns, i.e. the application of object-oriented programming (creation and destruction of objects; polymorphic objects and function arguments; interacting objects; dependency inversion: submodules and plugins); coarrays (PGAS concepts and coarray basics; dynamic entities; advanced synchronization; parallel programming patterns; recent enhancements: collectives, events, teams, atomic subroutines; performance aspects of coarray programming). The course combines lectures and hands-on sessions. https://events.prace-ri.eu/event/1039/ 2020-09-22 08:00:00 UTC 2020-09-25 14:00:00 UTC [] [] [] workshops_and_courses [] []
  • Parallel Visualization of Scientific Data using Blender @ IT4Innovations

    24 September 2020

    Parallel Visualization of Scientific Data using Blender @ IT4Innovations https://tess.elixir-europe.org/events/parallel-visualization-of-scientific-data-using-blender-it4innovations-aba7ac32-70fb-4844-b730-5c045b0c8bbe Annotation The course  will focus on visualization of scientific data that can come from simulations of different physical phenomena (e.g. fluid dynamics, structural analysis, etc.). To create visually pleasing outputs of such data a path tracing rendering method will be used. All of the course aspects will be covered within the popular 3D creation suite Blender. We will work with the 2.8 version and introduce two of our plug-ins we have developed. The first one, called Covise Nodes is used to extend Blender capabilities to process scientific data. The second add-on is called Bheappe and it integrates cluster rendering in Blender. Within the course we will demonstrate some of the basics of Blender, followed by a data visualization example, and we will finish the course with rendering of a created scene on a supercomputing cluster. This course, postponed from April 2020, due to the COVID-19 pandemic, is an enriched rerun of a successful training from 2019. NOTE: The organization of the course will be adapted to the current COVID-19 regulations and participants must comply with them. In case of the forced reduction of the number of participants, earlier registrations will be given priority. Level Beginner Language English Purpose of the course (benefits for the attendees) Attendees will learn how to visualize different simulation data in Blender and how to provide visually pleasing outputs with help of a cluster. About the tutor(s) Petr Strakoš obtained his Ph.D. from CTU (the Czech Technical University in Prague) in Mechanical Engineering. Now he is a member of the Infrastructure Research Lab and the VaVR (Visualization and Virtual Reality) group, where he focuses on research in the area of visualization, image processing, and efficient utilization of these topics on a cluster. He also cooperates with partners from industry and other institutions in applied research. Milan Jaroš is a researcher in the Infrastructure Research Lab at IT4Innovations. He has nine years of experience in professional programming (C++, C#, Java, etc.). He has developed several pieces of commercial software (including mobile applications). In recent years he has been focusing on research in the area of HPC computing (including support of GPU and Intel Xeon Phi coprocessor), processing of medical images, and visualizations of engineering data (virtual reality, rendering, post-processing of CFD calculation, etc.). He is a co-developer of plugins for multiple pieces of software (Blender, COVISE/OpenCOVER, Unity, etc.). Alena Ješko is a researcher in the Infrastructure Research Lab at IT4Innovations. She has worked on mesh transformation topics for cranial orthosis design and photogrammetry for treating orbital fractures. She has recently started to work on AI and Machine Learning in Image Processing. Acknowledgements This event was partially supported by The Ministry of Education, Youth and Sports from the Large Infrastructures for Research, Experimental Development and Innovations project "e-Infrastruktura CZ – LM2018140“ and partially by the PRACE-6IP project - the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 823767. This work was also partially supported by the SGC grant No. 
SP2020/21 "Infrastructure research and development of HPC libraries and tools II", VŠB - Technical University of Ostrava, Czech Republic. https://events.prace-ri.eu/event/1013/ 2020-09-24 07:30:00 UTC 2020-09-24 14:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] School on Scientific Data Analytics and Deep Learning @ Cineca

    5 - 9 October 2020

    [ONLINE] School on Scientific Data Analytics and Deep Learning @ Cineca https://tess.elixir-europe.org/events/online-school-on-scientific-data-analytics-and-deep-learning-cineca Description: The increasing amount of scientific data collected through sensors or computational simulations can take advantage of new processing techniques to extract new insights from the raw data. The purpose of this one-week school is to present researchers and scientists with methods, tools and techniques for exploring and mining large data sets using Cineca high performance resources. The school is an introductory set of lectures aimed at training beginner participants in the application of relevant statistical, machine and deep learning algorithms to create classification and predictive models using Cineca resources to execute efficient processing jobs. The school will consist of introductory lectures held by data scientists, and hands-on sessions. Furthermore, some practical insights on a few use cases addressed in research projects at Cineca will be introduced. Skills: At the end of the course, the student will possess and know how to use the following skills: - Use of Cineca HPC resources - Basic Python programming for ML - Machine Learning algorithms and libraries - Deep Learning frameworks Target audience: Young students, PhD students, and researchers in computational sciences and scientific areas with different backgrounds, looking for new technologies and methods to process and analyse large amounts of data. Pre-requisites: Participants must have basic knowledge of statistics, fundamentals of computer programming with Python and use of GNU/Linux-based systems. The number of participants is limited to 25 students. Applicants will be selected according to their experience, qualification and scientific interest, BASED ON WHAT IS WRITTEN IN THE "Reason for participation" FIELD OF THE REGISTRATION FORM. APPLICATION DEADLINE Sept. 11th, 2020. STUDENTS WILL BE NOTIFIED OF THEIR ADMISSION STATUS BY EMAIL ON MONDAY SEPT. 21st. Attendance is FREE. https://events.prace-ri.eu/event/1044/ 2020-10-05 07:00:00 UTC 2020-10-09 16:00:00 UTC [] [] [] workshops_and_courses [] []
  • [POSTPONED] Modern Scientific C++ @ MdlS/Idris

    5 - 8 October 2020

    [POSTPONED] Modern Scientific C++ @ MdlS/Idris https://tess.elixir-europe.org/events/modern-scientific-c-mdls-idris In recent years, the C++ language has evolved. To stick to the 1998/2003 standard is to miss many new features that make modern C++ more robust, more powerful, and often more readable. This training proposes to familiarize participants with the syntactic novelties that facilitate the writing of code, with modernized best practices that avoid the traps of the language, and with a programming style that is easier to parallelize. This training is for scientific programmers who want to discover "modern" C++ (2011 to 2020 standards) and adapt their programming practices accordingly. Detailed Program: Day 1 (Victor ALESSANDRINI) Review of some basic C++ concepts, overview of C++ as a software development environment, with two major software engineering strategies: object oriented programming and generic programming. Object oriented programming: the virtual function mechanism enabling late binding at execution time (software modules calling newly written routines without recompilation). Examples of the power and relevance of virtual functions. Function objects as extended pointers to functions, examples. Introduction to generic programming: function templates, examples. Day 2 (Victor ALESSANDRINI) Generic programming: class templates, examples. The core of generic programming: using function and class templates to parameterize behavior rather than just object types. Overview of the Standard Template Library (STL): strategies, containers, iterators, algorithms. Concurrency in the standard C++ library: overview of the thread class, discussion of the new threading interfaces (futures, promises) enabling easy synchronization of simple concurrency patterns. Day 3 (David CHAMONT): modern C++ syntax. Basic features: type inference, stronger typing, user-defined literals, uniform initialization, rvalue references, move semantics. Object features: member variables initialization, delegated and inherited constructors, explicit deletion and overriding of member functions. Generic features: static assertions, template variables and type aliasing, constant expressions, variadic templates, perfect forwarding. Functional features: lambda functions. Day 4 (David CHAMONT): modern C++ library. Basic tools: smart pointers (unique_ptr, shared_ptr), new collections (array, unordered maps), views (span, string_view), wrapper types (function, ref). Generic tools: type traits, SFINAE, concepts. Functional tools: algebraic types (tuple, variant), monadic types (optional, future), ranges. Optimization and parallelization: beyond double, random numbers, chrono, execution policies, structures of arrays, co-routines. Prerequisites: Knowledge of classical (pre-11) C++ syntax and basic experience in programming with C++ objects. Participants should be familiar with the following concepts: namespaces; references and their difference from pointers; basic memory allocation (new and delete); properties of C++ functions (default values, overloading); basic understanding of error handling (exceptions); C++ classes, programming with objects, public derivation of classes; basic knowledge of templates. https://events.prace-ri.eu/event/977/ 2020-10-05 07:30:00 UTC 2020-10-08 15:00:00 UTC [] [] [] workshops_and_courses [] []
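
    A minimal, illustrative sketch (not course material) of a few of the "modern" features listed above, assuming any C++14-capable compiler: type inference, uniform initialization, a lambda passed to a standard algorithm, move semantics, and a smart pointer.

        #include <algorithm>
        #include <iostream>
        #include <memory>
        #include <numeric>
        #include <vector>

        struct Sample {
            std::vector<double> values;   // heavy payload that is cheap to move
        };

        int main() {
            // Uniform initialization and type inference (auto).
            auto data = std::vector<double>{1.0, 2.0, 3.0, 4.0};

            // Lambda passed to a standard algorithm instead of a hand-written loop.
            std::for_each(data.begin(), data.end(), [](double& x) { x *= 2.0; });

            // Move semantics: the vector's buffer is transferred, not copied.
            Sample s{std::move(data)};

            // Smart pointer: ownership is explicit, no manual new/delete.
            auto owner = std::make_unique<Sample>(std::move(s));

            double sum = std::accumulate(owner->values.begin(), owner->values.end(), 0.0);
            std::cout << "sum = " << sum << '\n';   // prints 20
        }
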
  • Shared-memory programming with OpenMP @ EPCC Online

    6 - 15 October 2020

    Shared-memory programming with OpenMP @ EPCC Online https://tess.elixir-europe.org/events/shared-memory-programming-with-openmp-epcc-online Outline Almost all modern computers now have a shared-memory architecture with multiple CPUs connected to the same physical memory, for example multicore laptops or large multi-processor compute servers. This course covers OpenMP, the industry standard for shared-memory programming, which enables serial programs to be parallelised easily using compiler directives. Users of desktop machines can use OpenMP on its own to improve program performance by running on multiple cores; users of parallel supercomputers can use OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of the compute nodes. This course will cover an introduction to the fundamental concepts of the shared variables model, followed by the syntax and semantics of OpenMP and how it can be used to parallelise real programs. Hands-on practical programming exercises will be included, with access to HPC provided for the duration of the course. Pre-requisites This course is aimed at attendees with prior experience of programming in C, C++ or Fortran. Some familiarity with HPC systems is helpful but not essential. Learning outcomes At the end of the course, attendees should be equipped with the skills to write correct parallel programs using the core functionality of OpenMP, and understand the main issues affecting the performance of shared-memory parallel programs. Timetable This course will be held over four 2-hour afternoon sessions: Tue 6th Oct 14:00 - 16:00 BST Thu 8th Oct 14:00 - 16:00 BST Tue 13th Oct 14:00 - 16:00 BST Thu 15th Oct 14:00 - 16:00 BST https://events.prace-ri.eu/event/1057/ 2020-10-06 13:00:00 UTC 2020-10-15 15:00:00 UTC [] [] [] workshops_and_courses [] []
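
    As a hedged illustration of the compiler-directive style described above (not part of the official course material), the sketch below parallelises a serial loop and a reduction with OpenMP; it assumes a compiler with an OpenMP flag such as -fopenmp.

        #include <omp.h>
        #include <cstdio>
        #include <vector>

        int main() {
            const int n = 1000000;
            std::vector<double> a(n), b(n, 2.0), c(n, 3.0);

            // One directive turns the serial loop into a parallel one: each
            // thread works on a disjoint chunk of the iteration space.
            #pragma omp parallel for
            for (int i = 0; i < n; ++i)
                a[i] = b[i] + c[i];

            // Reductions are declared explicitly so partial sums are combined safely.
            double sum = 0.0;
            #pragma omp parallel for reduction(+ : sum)
            for (int i = 0; i < n; ++i)
                sum += a[i];

            std::printf("threads available: %d, sum = %.1f\n", omp_get_max_threads(), sum);
        }
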
  • [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS

    12 - 16 October 2020

    [ONLINE] Parallel Programming Workshop (Train the Trainer) @ HLRS https://tess.elixir-europe.org/events/online-parallel-programming-workshop-train-the-trainer-hlrs The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming. Too few people can provide parallel programming courses on the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses on a European or national level. We would like to assist further trainers and centres to also provide such courses for the whole of Europe, or at least within their own countries. Prerequisites You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both programming languages, C and Fortran. Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students. Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre with the necessary course infrastructure. What does this Train the Trainer Program provide? We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses. We provide all exercise material. You will attend the lectures so that you become familiar with the training material. During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they are stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help. The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening we will have an additional meeting and dinner for all participants of this TtT program. For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/1031/ 2020-10-12 06:15:00 UTC 2020-10-16 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS

    12 - 16 October 2020

    [ONLINE] Parallel Programming Workshop (MPI, OpenMP and advanced topics) @ HLRS https://tess.elixir-europe.org/events/online-parallel-programming-workshop-mpi-openmp-and-advanced-topics-hlrs Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part): On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI). Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part): The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented. Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course): Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves. For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/1030/ 2020-10-12 06:30:00 UTC 2020-10-16 14:30:00 UTC [] [] [] workshops_and_courses [] []
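
    To make the hybrid MPI+OpenMP model mentioned above concrete, here is a minimal sketch (not taken from the course): typically one MPI process per node or socket, with OpenMP threads inside each process. MPI_Init_thread requests a threading level; MPI_THREAD_FUNNELED means only the master thread makes MPI calls.

        #include <mpi.h>
        #include <omp.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            // Request a threading level suited to the hybrid model.
            int provided = 0;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

            int rank = 0, size = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            // OpenMP threads live inside each MPI process.
            #pragma omp parallel
            {
                #pragma omp critical
                std::printf("MPI rank %d/%d, OpenMP thread %d/%d\n",
                            rank, size, omp_get_thread_num(), omp_get_num_threads());
            }

            MPI_Finalize();
        }
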
  • [POSTPONED] Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

    12 - 13 October 2020

    [POSTPONED] Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris https://tess.elixir-europe.org/events/performance-portability-for-gpu-application-using-high-level-programming-approaches-with-kokkos-mdls-idris When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native-hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability. Porting a large existing legacy code to a modern HPC platform, and developing a new simulation code, are two different tasks that may benefit from a high-level programming model, which abstracts the low-level hardware details. This training presents existing high-level programming solutions that can preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...), as well as software development productivity. We will provide an introduction to the high-level C++ programming model Kokkos https://github.com/kokkos, and show basic code examples to illustrate the following concepts through hands-on sessions: hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, ...) actually derive an efficient low-level implementation; efficient architecture-aware memory containers: what is a Kokkos::View; revisit fundamental parallel patterns with Kokkos: parallel for, reduce, scan, ...; explore some mini-applications. Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Jean Zay (http://www.idris.fr/jean-zay/), equipped with Nvidia Tesla V100 GPUs. Prerequisites: Some basic knowledge of the CUDA programming model and of C++. https://events.prace-ri.eu/event/892/ 2020-10-12 07:30:00 UTC 2020-10-13 15:00:00 UTC [] [] [] workshops_and_courses [] []
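
    A minimal sketch of the Kokkos concepts named above (a Kokkos::View container plus the parallel_for and parallel_reduce patterns), assuming a Kokkos installation; the same source is mapped to the chosen back-end (OpenMP, CUDA, ...) at build time.

        #include <Kokkos_Core.hpp>
        #include <cstdio>

        int main(int argc, char* argv[]) {
            Kokkos::initialize(argc, argv);
            {
                const int n = 1 << 20;

                // Architecture-aware containers: memory is allocated where the
                // selected back-end can access it efficiently.
                Kokkos::View<double*> x("x", n), y("y", n);

                // parallel_for: write the loop body once, the back-end decides
                // how to map iterations to threads or GPU blocks.
                Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
                    x(i) = 1.0;
                    y(i) = 2.0;
                });

                // parallel_reduce: a dot product expressed as a reduction pattern.
                double dot = 0.0;
                Kokkos::parallel_reduce("dot", n, KOKKOS_LAMBDA(const int i, double& part) {
                    part += x(i) * y(i);
                }, dot);

                std::printf("dot = %.1f\n", dot);   // 2.0 * n
            }
            Kokkos::finalize();
        }
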
  • [ONLINE] Parallel Programming Workshop @ BSC

    13 - 22 October 2020

    [ONLINE] Parallel Programming Workshop @ BSC https://tess.elixir-europe.org/events/online-parallel-programming-workshop-bsc The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course Convener: Xavier Martorell Sessions will be on October 13th-16th and 19th-22nd from 2pm to 5.30pm and delivered via Zoom. Level: Intermediate: For trainees with some theoretical and practical knowledge, some programming experience. Advanced: For trainees able to work independently and requiring guidance for solving complex problems. Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C. Objectives: The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment. Attendees can bring their own applications and work with them during the course for parallelization and analysis. Learning Outcomes: The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures. Agenda: all times are 2pm to 5.30pm with two breaks of 15' Tuesday 13/10/2020 1. Introduction to parallel architectures, algorithms design and performance parameters 2. Introduction to the MPI programming model 3. Practical: How to compile and run MPI applications Wednesday 14/10/2020 1. MPI: Point-to-point communication, collective communication 2. Practical: Simple matrix computations 3. MPI: Blocking and non-blocking communications 4. Practical: matrix computations with non-blocking communication Thursday 15/10/2020 1. Introduction to Paraver: tool to analyze and understand performance 2. Practical: Trace generation and trace analysis Friday 16/10/2020 1. MPI: Collectives, Communicators, Topologies 2. Practical: Heat equation example Monday 19/10/2020 1. OpenMP Fundamentals: the fork-join model (lecture) 2. OpenMP Fundamentals: the fork-join model (hands-on) 3. OpenMP Fundamentals: the data environment (lecture) 4. OpenMP Fundamentals: the data environment (hands-on) Tuesday 20/10/2020 1. OpenMP Work-sharing: distributing work among threads (lecture) 2. OpenMP Work-sharing: distributing work among threads (hands-on) 3. 
OpenMP Work-sharing: loop distribution (lecture)  4. OpenMP Work-sharing: loop distribution (hands-on)  Wednesday 21/10/2020 1. OpenMP Tasking model: basics (lecture)    The task construct    The taskwait 2. OpenMP Tasking model: basics (hands-on) 3. OpenMP Tasking model: intermediate (lecture) 4. OpenMP Tasking model: intermediate (hands-on) Thursday 22/10/2020 1. Hybrid MPI+OpenMP    Standard (threading level, synchronous/asynchronous MPI)    TAMPI 2. Practical: Heat, nbody End of Course https://events.prace-ri.eu/event/1041/ 2020-10-13 12:00:00 UTC 2020-10-22 15:30:00 UTC [] [] [] workshops_and_courses [] []
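
    As a hedged illustration of the non-blocking point-to-point communication covered on day 2 of the agenda above (not an official course exercise), each rank below sends a value to its right neighbour and receives from its left neighbour in a ring; independent computation could be placed between posting the requests and MPI_Waitall to overlap with communication.

        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank = 0, size = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int right = (rank + 1) % size;
            const int left  = (rank - 1 + size) % size;

            double send_val = static_cast<double>(rank);
            double recv_val = -1.0;

            // Post the receive and the send, then wait for both to complete.
            MPI_Request reqs[2];
            MPI_Irecv(&recv_val, 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(&send_val, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

            std::printf("rank %d received %.0f from rank %d\n", rank, recv_val, left);
            MPI_Finalize();
        }
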
  • Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

    19 - 20 October 2020

    Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris https://tess.elixir-europe.org/events/introduction-to-scalapack-and-magma-libraries-mdls-idris The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries. ScaLAPACK: ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra based on routines for distributed-memory message-passing computers. It is mostly based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all the MPI communications are handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library. The lecture will mostly cover how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC: general introduction to the PBLAS and ScaLAPACK libraries; main ideas on how to decompose linear algebra problems in parallel programming; examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations; examples of basic operations with ScaLAPACK: inversion and diagonalization; a main problem based on calculating the exponentiation of a matrix. MAGMA: In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then, we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK. Trainers: Donfack Simplice (MAGMA) Hasnaoui Karim (ScaLAPACK) Prerequisites: C or C++ and Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset. https://events.prace-ri.eu/event/919/ 2020-10-19 07:30:00 UTC 2020-10-20 15:00:00 UTC [] [] [] workshops_and_courses [] []
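
    The decomposition idea behind PBLAS/ScaLAPACK mentioned above is the 2D block-cyclic distribution. The sketch below does not call the library itself; it only illustrates, for an assumed block size and process-grid shape, which process-grid coordinate owns a given global matrix entry and where that entry lands in local storage.

        #include <cstdio>

        // Block-cyclic mapping along one dimension (0-based indices, first block
        // on process 0): for global index g, block size nb and p processes,
        //   owning process = (g / nb) % p
        //   local index    = (g / (nb * p)) * nb + g % nb
        struct Owner { int proc; int local; };

        Owner block_cyclic(int g, int nb, int p) {
            return { (g / nb) % p, (g / (nb * p)) * nb + g % nb };
        }

        int main() {
            const int nb = 2;                // block size (assumed)
            const int prow = 2, pcol = 3;    // assumed 2 x 3 process grid
            const int i = 5, j = 7;          // a global matrix entry A(i, j)
            Owner r = block_cyclic(i, nb, prow);
            Owner c = block_cyclic(j, nb, pcol);
            std::printf("A(%d,%d) -> process grid (%d,%d), local entry (%d,%d)\n",
                        i, j, r.proc, c.proc, r.local, c.local);
        }
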
  • [ONLINE] VASP Best Practices Workshop @ SNIC

    19 - 20 October 2020

    [ONLINE] VASP Best Practices Workshop @ SNIC https://tess.elixir-europe.org/events/online-vasp-best-practices-workshop-snic Introduction In this workshop, including seminars and hands-on sessions, we will consider how to run the Vienna Ab-initio Simulation Package (VASP) efficiently on supercomputers. The workshop is mostly suitable for beginners at different stages, and the focus will be on best practices for running VASP calculations. The hands-on part will be done at the Tetralith cluster, NSC, Linköping University. Parts of the seminars (to be made available for download) can also be of interest for advanced users. For materials and more info, check the event page at NSC. The seminars will cover the following topics: basic theory, DFT and the PAW method; general introduction to VASP, essential files and parameters, workflow; running VASP at HPC centers, CPUs and GPUs (examples from SNIC); performance analysis; common issues; post-processing tools. The hands-on sessions include: running VASP calculations step-by-step on the Tetralith cluster; using the ThinLinc virtual desktop for login to Tetralith; Q&A. Registration (12 Sep - 12 Oct) The number of hands-on participants is limited to ca. 30, but it will still be possible to follow the seminars (arbitrary limit set to 90) and be put on a waiting list for hands-on access. Instructions for how to set up an account at Tetralith will be sent out in good time before the workshop. Tentative Schedule The workshop days 19-20th Oct are split into seminars before lunch and hands-on sessions after lunch. There will also be breaks. The guided hands-on sessions will be led by instructors. Monday 19th Oct 10:00 Seminar, introduction + theory 11:00 Seminar, basics 12:00 Lunch 13:00 Hands-on session (guided) 15:00 Hands-on session (open) 17:00 End of day Tuesday 20th Oct 10:00 Seminar, further topics 11:00 Seminar, cont. 12:00 Lunch 13:00 Hands-on session (guided) 15:00 Hands-on session (open) 17:00 End of workshop, closing Materials The materials for the workshop will be put up at the corresponding event page at NSC. For reference in the future, check under NSC past events. Venue Online via Zoom. Links and instructions for the workshop will be sent via email. Organizers Weine Olovsson, with helpers Diana Iusan (UPPMAX), Peter Kjellström and Hamish Struthers. https://events.prace-ri.eu/event/1066/ 2020-10-19 08:00:00 UTC 2020-10-20 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [POSTPONED] Introduction to parallel programming @ UL

    20 - 23 October 2020

    [POSTPONED] Introduction to parallel programming @ UL https://tess.elixir-europe.org/events/introduction-to-parallel-programming-2f81120c-c12b-44ac-a2c2-bff2a21d8dc5 Introduction to parallel programming (day 1-2) This training course is focused on understanding the basics of parallel programming with the message-passing interface (MPI) and OpenMP parallel programming. The course consists of discussions, delivered through lectures and examples in the form of hands-on exercises. The topics covered are directly applicable to almost every parallel computer architecture. Breaking down a larger problem into a series of smaller steps, delivering instructions, and deploying several processors to execute the calculations at the same time is the basis of a parallel programming process. It offers the same results as concurrent programming but in less time and with more efficiency. Many modern computers use this form of programming in their hardware to ensure that tasks are quickly completed in the background. The tasks that will be executed by the processors, and also how these tasks are to be synchronized and how data is exchanged, have to be defined. The message-passing (MP) communication protocol controls communication between two processes, subroutines, or functions within a program. It is the key to models of concurrency and object-oriented programming. This course covers the standard for message passing, the Message Passing Interface (MPI). The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. Open MPI is an open source Message Passing Interface (MPI) library project used by many TOP500 supercomputers. Open MPI aims to use the best ideas and technologies from the individual projects and create one world-class open-source MPI implementation that excels in all areas. The course is delivered in an intensive two-day format using UL-FME’s training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. After the course the participants should be able to write simple parallel programs and parallelize existing programs with the basic features of MPI and/or OpenMP. This is part of the Slovenian EuroHPC Competence Centre training event. Parallel programming with MPI / OpenMP (day 3-4) This training course covers advanced parallel programming with the message-passing interface (MPI) and OpenMP parallel programming. The course consists of discussions, delivered through lectures and examples in the form of hands-on exercises. The topics covered are directly applicable to almost every parallel computer architecture. Participants are advised to obtain basic knowledge of parallel programming prior to the course. To exploit large massively parallel cluster paradigms, OpenMP is used. Moreover, OpenMP standards are evolving, including new ideas and features to become increasingly effective on new machines. This gives developers of HPC applications a smooth path of evolution of their applications without having to deal with heavy re-factoring to take up new technologies. The 2-day course will cover topics including parallelism, OpenMP tasks, the OpenMP memory model, performance tuning, hybrid OpenMP + MPI and OpenMP implementations. The course is aimed at programmers seeking to deepen their understanding of OpenMP. The course is delivered in an intensive format using UL-FME’s training facilities. 
It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. After the course the participants should be able to write more efficient OpenMP programs. This is a PRACE training event. It is organized by the LECAD Laboratory, University of Ljubljana, Slovenia. Content level: Beginners = 6:48h (40%), Intermediate = 6:48h (40%), Advanced = 3:24h (20%). (Please note that the content levels refer to the originally planned face-to-face course.) Target audience: The target audience consists of postgraduate students and young researchers of natural and technical sciences, engineers from industry where supercomputing can be used as a competitive advantage (automotive, electronic, material industry), logistics, etc. Prerequisite knowledge: For the hands-on sessions you should know how to work on the Unix/Linux command line, be able to program in either C or C++, and have intermediate skills in programming with Python. Since the focus of the school is on parallelization, participants have to be familiar with the topic and must have basic knowledge of OpenMP and MPI. Skills to be gained: At the end of the course the student will be able to: understand the message-passing model; implement standard message-passing algorithms in MPI; debug simple MPI codes; measure and comment on the performance of MPI codes; understand best practice for MPI+OpenMP programming; design and implement efficient parallel programs to solve regular-grid problems. The first part of this course (Day 1&2) is part of the Slovenian EuroHPC Competence Centre training event. It is organized by the LECAD Laboratory, University of Ljubljana, Slovenia. The second part of this course is a PRACE training event. It is organized by the LECAD Laboratory, University of Ljubljana, Slovenia (PRACE training centre). Participation is free of charge! In case of too many registrations (over 70) we will make a selection of registrants. In light of the COVID-19 situation, this will be a hybrid event that will offer both in-person and remote participation. https://events.prace-ri.eu/event/1049/ 2020-10-20 06:00:00 UTC 2020-10-23 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] ChEESE Advanced Training on HPC for Computational Seismology @ HLRS

    21 - 23 October 2020

    [ONLINE] ChEESE Advanced Training on HPC for Computational Seismology @ HLRS https://tess.elixir-europe.org/events/online-cheese-advanced-training-on-hpc-for-computational-seismology-hlrs Overview August 3: Note: Due to the COVID-19 pandemic we currently plan to provide this course as an ONLINE course (using Zoom). Are you interested in numerical wave simulations but feel like there are too many codes and methods out there and don’t know which option is right for you? Or, perhaps you just want to refresh your knowledge and gain some practical experience with the different possibilities? Either way, this fully virtual workshop is for you! During three full days on 21- 23 October 2020, you will receive introductory hands-on tutorials for ExaHyPE, Salvus, SeisSol, and SPECFEM3D, which are four different high-performance computational seismology software packages focused on waveform modeling and inversion. The workshop will provide you with an overview of their similarities, differences, and areas of applicability. Unique and in-depth half-day practicals will be provided for each package, with computational resources provided by HLRS. Participants will also be encouraged to present their own work and/or motivation as a one-minute lightning-talk during the wrap-up session of the workshop. This training event is hosted by ChEESE, the Center of Excellence for Exascale in Solid Earth and supported by the ENERXICO project. The organizers look forward to seeing you in October! Target audience Researchers and students of all levels interested in augmenting their science with numerical waveform and dynamic rupture simulations as well as full-waveform inversion.   For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/1046/ 2020-10-21 07:00:00 UTC 2020-10-23 15:00:00 UTC [] [] [] workshops_and_courses [] []
  • [POSTPONED] Introduction to Deep Learning Models @ JSC

    21 - 23 October 2020

    [POSTPONED] Introduction to Deep Learning Models @ JSC https://tess.elixir-europe.org/events/introduction-to-deep-learning-models-jsc This course has to be postponed to 2021. This course focuses on a recent machine learning method known as deep learning that emerged as a promising disruptive approach, allowing knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas. This course provides an introduction to deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka ‘feature learning’). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real datasets of applications using known deep learning frameworks such as Tensorflow, Keras, or Torch. As the learning process with CNNs is extremely computationally intensive, the course will cover aspects of how parallel computing can be leveraged in order to speed up the learning process using general purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately turn the newly acquired skills into practice. After this course participants will have a general understanding of which problems CNN learning architectures are useful for, and of how parallel and scalable computing facilitates the learning process when facing big datasets. Prerequisites: Participants should be able to work on the Unix/Linux command line, have a basic level of understanding of batch scripts required for HPC application submissions, and have a minimal knowledge of probability, statistics, and linear algebra. Participants should bring their own notebooks (with an ssh-client). Application Applicants will be notified one month before the course starts whether they are accepted for participation. Instructors: Prof. Dr. Morris Riedel, Dr. Gabriele Cavallaro, Dr. Jenia Jitsev, Jülich Supercomputing Centre Contact For any questions concerning the course please send an e-mail to g.cavallaro@fz-juelich.de. https://events.prace-ri.eu/event/983/ 2020-10-21 11:00:00 UTC 2020-10-23 14:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca

    26 - 28 October 2020

    [ONLINE] Tools and techniques to quickly improve performances of HPC applications in Solid Earth@Cineca https://tess.elixir-europe.org/events/tools-and-techniques-to-quickly-improve-performances-of-hpc-applications-in-solid-earth-cineca This course targets improving the overall performance of a code in Solid Earth, currently in use in the ChEESE Center of Excellence H2020 project. First, parallel performance profiling tools will be applied to the initial version of the code to find the so-called performance bottlenecks. Starting from the profiling analysis, we will show how and where to intervene with respect to the hardware characteristics of the HPC machine used for the investigation. We will also show how debugging tools are useful in the development/optimization phase to eliminate any bugs introduced while writing (or redesigning) new parts of the code. Finally, it will be shown how to improve the overall performance of the code with respect to other popular parameters such as I/O, vectorization, etc. Skills: At the end of the course the student will be able to: use a concrete methodology to improve the performance of a code in Solid Earth already in use in the context of the ChEESE project; find and solve the main bottlenecks of an application with respect to appropriate computational metrics and the machine used; use appropriate debugging tools to eliminate any bugs that may arise during the development/optimization phase. Target audience: Researchers in Solid Earth interested in learning and using the techniques and related tools that allow them to improve the performance of their code on current HPC architectures in the shortest possible time. Pre-requisites: Basic knowledge of LINUX/UNIX. Knowledge of C, FORTRAN, MPI or OpenMP is recommended. Notions of parallel computing techniques and algorithms for Solid Earth applications. Grant: The lunch for the three days will be offered to all the participants and some grants are available. The only requirement to be eligible is to be not funded by your institution to attend the course and to work or live in an institute outside the Roma area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside Roma). Some documentation will be required and the grant will be paid only after a certified presence of minimum 80% of the lectures. Further information about how to request the grant will be provided at the confirmation of the course, about 3 weeks before the starting date. Coordinating Teacher: Dr. P. Lanucara https://events.prace-ri.eu/event/973/ 2020-10-26 08:00:00 UTC 2020-10-28 17:00:00 UTC [] [] [] workshops_and_courses [] []
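
    One of the popular performance parameters named above is vectorization. As a small, hedged illustration (not course material), the loop below is annotated with an OpenMP simd directive, which asserts that iterations are independent so the compiler may emit SIMD instructions instead of scalar ones.

        #include <cstddef>
        #include <vector>

        // AXPY kernel: y = a*x + y. The 'omp simd' directive tells the compiler
        // that iterations can be executed in SIMD lanes.
        void axpy(double a, const std::vector<double>& x, std::vector<double>& y) {
            #pragma omp simd
            for (std::size_t i = 0; i < x.size(); ++i)
                y[i] = a * x[i] + y[i];
        }

        int main() {
            std::vector<double> x(1024, 1.0), y(1024, 2.0);
            axpy(3.0, x, y);   // every y[i] is now 5.0
        }
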
  • [ONLINE] Data Management with iRODS and Compute @SURFsara

    29 October 2020

    [ONLINE] Data Management with iRODS and Compute @SURFsara https://tess.elixir-europe.org/events/data-management-with-irods-and-compute-surfsara-11cb4922-2fd9-4e88-b5b0-1930d3c1afba Do you want to practice reproducible research in HPC and preserve your data for the future? Research Data Management is essential in effective and reproducible science. Due to ever increasing data volumes and complexity, researchers need modern tools to manage all stages of the data life cycle. Here we will use iRODS, a data management framework, to incorporate data management principles in compute pipelines. What? In this course you will: - Learn about the iRODS data management framework and icommands - Understand how to incorporate provenance in a compute workflow - Know about FAIR in data processing workflows Who? - Everyone interested in learning advanced data management tools for compute and data processing workflows Requirements - Basic knowledge of Linux and shell commands You should have - Your own laptop with an up-to-date browser and a terminal emulator. The use of the operating systems Linux and macOS is preferred, but not mandatory. For Windows users we recommend downloading MobaXterm (portable version) as a terminal emulator. IMPORTANT INFORMATION: WAITING LIST If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you'll be informed when a place becomes available. https://events.prace-ri.eu/event/1058/ 2020-10-29 08:00:00 UTC 2020-10-29 16:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Heterogeneous computing with performance modelling @ SNIC

    4 - 5 November 2020

    [ONLINE] Heterogeneous computing with performance modelling @ SNIC https://tess.elixir-europe.org/events/online-heterogeneous-computing-with-performance-modelling-snic Modern high core count CPUs and accelerators, such as GPUs, have been shown to provide significant performance benefits in many different applications. However, for a novice, or even for a moderately experienced scientist or programmer, it is not always clear which applications could potentially benefit from these resources and which do not. For example, a Nvidia V100 GPU can perform artificial intelligence (AI) related computations in a fraction of the time it takes a regular CPU to perform the same computations but ill-informed OpenACC compiler pragmas can actually make a code run slower. Why is this? When should one invest time in parallelization and/or GPU-acceleration? How much speedup can be expected with a given application? The main goal of this two-day course is to start answering these questions. The course also covers the basics of GPU programming and aims to provide the necessary information for avoiding the most common pitfalls. More information, schedule, and registration can be found on the course webpage at HPC2N, Umeå University. https://events.prace-ri.eu/event/1064/ 2020-11-04 08:00:00 UTC 2020-11-05 16:00:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Earth Sciences Simulation Environments @ BSC

    10 - 12 November 2020

    [ONLINE] Earth Sciences Simulation Environments @ BSC https://tess.elixir-europe.org/events/earth-sciences-simulation-environments-bsc-599900ce-5ae4-4c46-81cd-d14a424670dd The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course conveners: Marc Guevara and Carles Tena This training will be online via zoom. Course lecturers: Earth Sciences Department researchers involved in the group of Computational Earth Sciences, Atmospheric Composition, Climate Prediction and Earth System Services. Final lectures TBA. Objectives: The objective of this PATC course is to cover the basics of a high performance computing (HPC) environment oriented towards earth science applications, specifically chemical weather modelling and climate modelling. More precisely, the course will cover: Introduction to earth science fundamentals and modelling; Basic usage of an HPC environment: shell, compilers, libraries, file systems, queuing system and parallel computing; Build and configure targeted earth science applications with the NMMB/BSC-CTM chemical transport model and with the EC-EARTH climate model; Execute and monitor numerical experiments using a workflow manager; Analyse and visualise model outputs with a wide set of tools. Learning outcomes: Participants will learn and gain experience in accessing an HPC facility, installing earth science numerical models and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, analysing and visualising model results. Level: (All courses are designed for specialists with at least 1st cycle degree or similar background experience) For trainees with some theoretical and practical knowledge Prerequisites: At least University degree in progress on Earth Sciences, Computer Sciences or related area Basic knowledge of UNIX Knowledge of C, FORTRAN, MPI or openMP is recommended Knowledge of Earth Sciences data formats is recommended (grib, netcdf, hdf,…) Knowledge of R and Python Agenda: Day 1 (Nov. 10) – 4.5h 10:00-10:15 Welcome and introduction of the BSC Earth Sciences Department. (Francisco Doblas-Reyes, Head of BSC-ES Department) 10:15-10:30 Technical and logistic information. (Marc Guevara and Carles Tena) 10:30-11:00 Introduction to earth science fundamentals and modelling. (Raffaele Bernardello) 11:00-11:30 Break 11:30-13:00 Introduction to Linux and HPC, BSC facilities, PRACE and RES + preparation/checks for hands-on tutorial after lunch break (Mario Acosta) 13:00-14:00 Lunch break 14:00-16:00 HPC hands-on tutorial. At the end of this tutorial, the students will be able to compile and execute jobs on Mare Nostrum 4. (Mario Acosta and Miguel Castrillo and Carles Tena) Day 2 (Nov 11) – 3.5h 9:30-10:00 Lecture on the EC-EARTH climate model. (Raffaele Bernardello) 10:00-10:30 Lecture on the NMMB-MONARCH atmosphere chemistry model. (Oriol Jorba) 10:30-11:00 Lecture on setting up and running the NMMB-MONARCH model with Autosubmit. (Francesca Macchia and Gilbert Montane) 11:00-11:30 Break 11:30-13:30 Model hands-on tutorial. Students will run a prepared case. (Francesca Macchia and Gilbert Montane and Carles Tena and Marc Guevara) Day 3 (Nov 12) – 3h 10:00-11:00 Introduction to analysis and visualisation tools for model outputs (s2dverification, MapGenerator, CDO, NCO, panoply, ncview) (Francesco Benincasa) 11:00-11:30 Break 11:30-13:30 Analysis hands-on tutorial. 
Students will apply tools for analysis and visualisation to the outputs created the day before. (Francesco Benincasa and Carles Tena and Marc Guevara) END of COURSE https://events.prace-ri.eu/event/1042/ 2020-11-10 08:30:00 UTC 2020-11-12 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [ONLINE] Node-Level Performance Engineering @ LRZ

    2 - 4 December 2020

    [ONLINE] Node-Level Performance Engineering @ LRZ https://tess.elixir-europe.org/events/online-node-level-performance-engineering-lrz This online course covers performance engineering approaches on the compute node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance could at best be achieved by their code. This is because parallelism takes us only half the way to good performance. Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the required knowledge to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code that does the actual computational work gets executed. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes. Pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc., are covered. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science. We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimizations by a scientific process. The course is a PRACE training event. Outline: introduction (our approach to performance engineering); basic architecture of multicore systems: threads, cores, caches, sockets, memory; the important role of system topology; tools: topology & affinity in multicore environments (overview of likwid-topology and likwid-pin); microbenchmarking for architectural exploration (properties of data paths in the memory hierarchy; bottlenecks; OpenMP barrier overhead); Roofline model basics (model assumptions and construction; simple examples; limitations of the Roofline model); pattern-based performance engineering; optimal use of parallel resources (Single Instruction Multiple Data (SIMD); Cache-coherent Non-Uniform Memory Architecture (ccNUMA); Simultaneous Multi-Threading (SMT)); tools: hardware performance counters (why hardware performance counters?; likwid-perfctr; validating performance models); Roofline case studies (dense matrix-vector multiplication; sparse matrix-vector multiplication; Jacobi (stencil) smoother); optional: the ECM performance model. https://events.prace-ri.eu/event/1052/ 2020-12-02 08:00:00 UTC 2020-12-04 15:00:00 UTC [] [] [] workshops_and_courses [] []
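
    The Roofline model introduced above caps attainable performance at min(P_peak, I * b_mem), where I is the arithmetic intensity (flops per byte of memory traffic) and b_mem the memory bandwidth. The sketch below evaluates that bound for two kernels; the peak and bandwidth numbers are placeholders, not measurements of any real machine.

        #include <algorithm>
        #include <cstdio>

        // Roofline bound: P = min(P_peak, I * b_mem).
        double roofline(double peak_gflops, double bw_gbytes, double intensity) {
            return std::min(peak_gflops, intensity * bw_gbytes);
        }

        int main() {
            const double peak = 3000.0;   // GFlop/s, hypothetical node
            const double bw   = 200.0;    // GB/s, hypothetical node

            // Streaming DAXPY-like kernel: 2 flops per element, ~24 bytes of
            // traffic (load x, load y, store y), so I ~ 0.083 flop/byte.
            std::printf("streaming kernel bound: %.1f GFlop/s\n",
                        roofline(peak, bw, 2.0 / 24.0));

            // A kernel with I = 20 flop/byte is capped by the compute roof instead.
            std::printf("compute-bound kernel:   %.1f GFlop/s\n",
                        roofline(peak, bw, 20.0));
        }
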
  • [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC

    26 - 27 January 2021

    [FORMAT TBD] Programming Distributed Computing Platforms with COMPSs @ BSC https://tess.elixir-europe.org/events/format-tbd-programming-distributed-computing-platforms-with-compss-bsc Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course convener: Rosa Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department Lecturers: Rosa M Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department, BSC Javier Conejero, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Jorge Ejarque, Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Daniele Lezzi, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC Objectives: The objective of this course is to give an overview of the COMPSs programming model, which is able to exploit the inherent concurrency of sequential applications and execute them on distributed computing platforms in a manner that is transparent to the application developer. This is achieved by annotating part of the code as tasks, and building at execution time a task-dependence graph based on the actual data consumed/produced by the tasks. The COMPSs runtime is able to schedule the tasks on the computing nodes, taking into account factors like data locality and the different nature of the computing nodes in the case of heterogeneous platforms. Additionally, COMPSs has recently been enhanced with the possibility of coordinating Web Services as part of the applications. COMPSs supports Java, C/C++ and Python as programming languages. Learning Outcomes: In the course, the COMPSs syntax, programming methodology and an overview of the runtime internals will be given. The attendees will get a first lesson about programming with COMPSs that will enable them to start programming with this framework. A hands-on with simple introductory exercises will also be performed. The students who finish this course will be able to develop simple COMPSs applications and to run them both on a local resource and on a distributed platform (initially in a private cloud). The exercises will be delivered in Python and Java. In the case of Python, Jupyter notebooks will be used in some of the exercises. Level: for trainees with some theoretical and practical knowledge. 
INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course. ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems. Prerequisites: Programming skills in Java and Python. Agenda: Day 1 (January 26th, 2021) 9:30 - 10:00 - Welcome and round table (30') 10:00 - 10:30 - Session 1: Introduction to COMPSs (30') 10:30 - 11:15 - Session 2: PyCOMPSs: Writing Python applications (45') 11:15 - 11:45 - Coffee break 11:45 - 13:00 - Session 3: Python Hands-on using Jupyter notebooks (1h15') 13:00 - 14:30 - Lunch break 14:30 - 15:00 - Session 4: Machine learning with dislib (30') 15:00 - 16:30 - Session 5: Hands-on with dislib (1h30') Day 2 (January 27th, 2021) 9:30 - 11:00 - Session 6: Java & C++ (1h30') 11:00 - 11:30 - Coffee break 11:30 - 13:00 - Session 7: COMPSs Advanced Features (1h30') 13:00 - 14:30 - Lunch break 14:30 - 16:30 - Session 8: Cluster Hands-on (MareNostrum) (2h) 16:30 - 16:45 - Session 9: COMPSs Installation & Final Notes (15') END of COURSE https://events.prace-ri.eu/event/1054/ 2021-01-26 08:30:00 UTC 2021-01-27 15:30:00 UTC [] [] [] workshops_and_courses [] []
  • [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC

    28 January 2021

    [FORMAT TBD] Managing distributed data with Hecuba and dataClay @ BSC https://tess.elixir-europe.org/events/format-tbd-managing-distributed-data-with-hecuba-and-dataclay-bsc Please, bring your own laptop. All the PATC courses at BSC are free of charge. Course conveners: Department and Research group: Computer Science - Workflows and Distributed Computing Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher Anna Queralt, Distributed Object Management research line, Senior researcher Course Lecturers: Department and Research group: Computer Sciences - Workflows and Distributed Computing Alex Barceló, Distributed Object Management research line, Researcher Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher Adrián Espejo, Data-driven Scientific Computing research line, Junior research engineer Daniel Gasull, Distributed Object Management research line, Research engineer Pol Santamaria, Data-driven Scientific Computing research line, Junior developer Anna Queralt, Distributed Object Management research line, Senior researcher Objectives: The objective of this course is to give an overview of BSC storage solutions, Hecuba and dataClay. These two platforms make it easy to store and manipulate distributed data from object-oriented applications, enabling programmers to handle object persistence using the same classes they use in their programs, thus avoiding time-consuming transformations between persistent and non-persistent data models. Also, Hecuba and dataClay enable programmers to transparently manage distributed data, without worrying about its location. This is achieved by adding a minimal set of annotations in the classes. Both Hecuba and dataClay can work independently or integrated with the COMPSs programming model and runtime to facilitate parallelization of applications that handle persistent data, thus providing a comprehensive mechanism that enables the efficient usage of persistent storage solutions from distributed programming environments. Both platforms offer a common interface to the application developer that facilitates using one solution or the other depending on the needs, without changing the application code. Also, both of them have additional features that allow the programmer to take advantage of their particularities. Learning Outcomes: In the course, the Hecuba and dataClay syntax, programming methodology and an overview of their internals will be given. Also, an overview of COMPSs at user level will be provided in order to take advantage of the distribution of data with both platforms. The attendees will get a first lesson about programming with the common storage interface that will enable them to start programming with both frameworks. A hands-on with simple introductory exercises will also be performed for each platform, with and without COMPSs to distribute the computation. The students who finish this course will be able to develop simple Hecuba and dataClay applications and to run them both on a local resource and on a distributed platform (initially in a private cloud). Prerequisites: Basic programming skills in Python and Java. Previous attendance of the PATC course on programming distributed systems with COMPSs is recommended. Tentative Agenda (TBC): Day 1 (Jan 28th, 2021) Session 1 / 9:30 – 13:00 9:30-10:00 Round table. 
Presentation and background of participants 10:00-11:00 Motivation, introduction and syntax of BSC storage platforms 11:00-11:30 Coffee break 11:30-12:15 Hands-on with storage API 12:15-13:00 COMPSs overview and how to parallelize a sequential application 13:00-14:30 Lunch break Session 2/ 14:30 – 18:00 14:30-16:00 Hecuba specifics and hands-on 16:00-16:30 Break 16:30-18:00 dataClay specifics and hands-on END of COURSE       https://events.prace-ri.eu/event/1055/ 2021-01-28 08:30:00 UTC 2021-01-28 17:00:00 UTC [] [] [] workshops_and_courses [] []
