PRACE

Partnership for Advanced Computing in Europe

PRACE https://tess.elixir-europe.org/content_providers/prace Partnership for Advanced Computing in Europe
Found 0 materials.
Showing 13 upcoming events. Found 470 past events.
  • 13th International Conference on Neurology and Brain Disorders

    15 - 16 November 2018

    Osaka, Japan

    13th International Conference on Neurology and Brain Disorders https://tess.elixir-europe.org/events/13th-international-conference-on-neurology-and-brain-disorders Establishing the framework for great wellbeing. Conference Series has been instrumental in conducting international meetings for 25 years and is excited to expand into the medical and healthcare fields. Previous meetings were held mainly in North American cities such as San Francisco, Chicago and Las Vegas, and are now expanding across Europe, America and the Asia-Pacific. Conference Series cordially welcomes all participants to the "13th International Conference on Neurology and Brain Disorders", held during November 15-16, 2018 in Osaka, Japan. Neuro 2018 will be run by world-class experts in the fields of neurology and neuroscience; international symposiums, B2B meetings and workshops will also be organised to discuss specific topics in neurology such as neurogenesis, neurophysiology, neurological disorders, neuromuscular disorders, neuro-infectious diseases, neuropathology and neuroplasticity. The conference aims to identify the genetic determinants of common disorders and to understand the mechanisms underlying human cognition. Neurogenetic disorders are common in both developed and developing countries. Everyone interested in the future of emerging -omics techniques in healthcare is encouraged to attend. Connect with our social network pages for regular industry updates. 
Conference Related Scientific Sessions: • Neurology • Brain and Neurological Disorders • Child Neurology • Neurophysiology • Neurological Disorders • Neuroimmunology • Neuro-Infectious Diseases • Peripheral Nervous System • Neuromuscular Disorders • Neurosurgery • Neuropathology • Neuroradiology and Neuro Imaging • Neural Engineering • Neuroplasticity • Neuropsychiatry • Neuropharmacology • Neurological Nursing • Neurology Treatment • Central Nervous System For more details, please visit: https://neurology.annualcongress.com/ For Abstract submission, please visit: https://neurology.annualcongress.com/abstract-submission.php Registration Includes 1. Access to all Sessions 2. Handbook & Conference Kit 3. Certificate Accreditation by the International Organizing Committee 4. Abstracts will be published in the conference souvenir and respective international journals 5. Each registrant will get a 70% abatement on manuscript publication fees 6. Ask the Expert Forums (one-to-one pre-scheduled meetings on interest and availability) 7. Access to the attendees' email list (post conference) 8. 10% abatement on the registration fees for the next annual conference 9. 10% abatement on International conferences membership 10. 2 Coffee breaks (refreshments and snacks) during the conference 11. 
Lunch during the conference. Thanks & Best Wishes, Program Manager – Neuro 2018
  • Intel Manycore Programming Workshop @ LRZ

    16 - 18 July 2018

    Intel Manycore Programming Workshop @ LRZ https://tess.elixir-europe.org/events/intel-mic-programming-workshop The course discusses programming models and optimisation techniques for Intel manycore processors in order to enable programmers to achieve good performance of their applications. The course will concentrate on techniques also relevant for the latest Intel® Xeon® Scalable processor, code-named Skylake, which will be used in the upcoming SuperMUC-NG machine at LRZ. Furthermore, programming and optimisation techniques for Intel Knights Landing (KNL) based systems, such as the KNL cluster CoolMUC3 at LRZ, will be discussed. The workshop covers a wide range of topics, from the hardware of the Intel processors through the basic programming models, vectorisation and MCDRAM usage, up to tools and strategies for analysing and improving the performance of applications. Topics covered will include: Overview of the Intel Skylake and MIC architecture; Overview of programming models; Vectorisation and basic performance optimisation; Code optimisation process for Intel processors; Intel profiling tools and roofline model; KNL memory modes and cluster modes, MCDRAM; Advanced optimisation techniques (RRZE); Skylake programming using intrinsics and assembler (RRZE); Skylake node-level and microarchitecture in detail (Intel); OpenMP SIMD (Intel); OpenMP affinity and outlook on OpenMP 5.0 memory features (Intel). The workshop will include both theoretical and practical hands-on sessions. Please bring your own laptop (with an ssh client installed) for the hands-on sessions! Figure: Participants of the MIC Workshop 2016 About the tutors Dr. Momme Allalen received his Ph.D. in theoretical Physics from the University of Osnabrück in 2006. He worked in the field of molecular magnetics through modelling techniques such as the exact numerical diagonalisation of the Heisenberg model. 
He joined the Leibniz Supercomputing Centre (LRZ) in 2007, working in the High Performance Computing group. His tasks include user support, optimisation and parallelisation of scientific application codes, and benchmarking for characterising and evaluating the performance of high-end supercomputers. His research interests cover various aspects of parallel computing, new programming languages and paradigms. Dr.-Ing. Jan Eitzinger (RRZE) (formerly Treibig) holds a PhD in Computer Science from the University of Erlangen. He is now a postdoctoral researcher in the HPC Services group at Erlangen Regional Computing Center (RRZE). His current research revolves around architecture-specific and low-level optimization for current processor architectures, performance modeling on processor and system levels, and programming tools. He is the developer of LIKWID, a collection of lightweight performance tools. In his daily work he is involved in all aspects of user support in High Performance Computing: training, code parallelization, profiling and optimization, and the evaluation of novel computer architectures. Dr. Luigi Iapichino holds a position of scientific computing expert at LRZ and is a member of the Intel Parallel Computing Center (IPCC). His main tasks are code modernization for many-core systems and HPC support. He obtained a PhD in physics in 2005 from TU München, working at the Max Planck Institute for Astrophysics. Before moving to LRZ in 2014, he worked at the Universities of Würzburg and Heidelberg, involved in research projects related to computational astrophysics. Dr.-Ing. Michael Klemm (Intel Corp.) obtained an M.Sc. in Computer Science in 2003 and received a Doctor of Engineering degree (Dr.-Ing.) from the Friedrich-Alexander-University Erlangen-Nuremberg, Germany, in 2008. 
Michael Klemm works in the Developer Relations Division at Intel in Germany; his areas of interest include compiler construction, design of programming languages, parallel programming, and performance analysis and tuning. He joined the OpenMP organization in 2009 and was appointed CEO of the OpenMP ARB in 2016. Dr. Volker Weinberg studied physics at the Ludwig Maximilian University of Munich and later worked at the research centre DESY. He received his PhD from the Free University of Berlin for his studies in the field of lattice QCD. Since 2008 he has been working in the HPC group at the Leibniz Supercomputing Centre, where he is responsible for HPC and PATC (PRACE Advanced Training Centre) courses at LRZ, new programming languages, and the Intel Xeon Phi based system SuperMIC. Within PRACE-4IP he took over the leadership of creating Best Practice Guides for new architectures and systems. https://events.prace-ri.eu/event/736/
  • Concepts of GASPI and interoperability with other communication APIs @ HLRS

    2 - 3 July 2018

    Concepts of GASPI and interoperability with other communication APIs @ HLRS https://tess.elixir-europe.org/events/efficient-parallel-programming-with-gaspi-hlrs-71a7e009-21b8-410a-a3ba-e2bff30d3f1f Overview In this tutorial we present an asynchronous data-flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API. It is designed as a C/C++/Fortran library and focused on three key objectives: scalability, flexibility and fault tolerance. In order to achieve its much-improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com). GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/690/
  • Advanced usage on Irene Joliot-Curie supercomputer : Architecture & Best practices @ MdlS / TGCC

    3 - 5 July 2018

    Advanced usage on Irene Joliot-Curie supercomputer: Architecture & Best practices @ MdlS / TGCC https://tess.elixir-europe.org/events/advanced-usage-on-irene-joliot-curie-supercomputer-architecture-best-practices-mdls-tgcc The aim of this course is to give users best practices to improve their use of the newly installed PRACE Irene Joliot-Curie system, and to give hints on preparing their codes for future architectures. Topics: Introduction: CEA/TGCC, Irene Joliot-Curie supercomputer [CEA]; Technology: architectures, KNL/Skylake, IB/BXI [ATOS/Bull]; MPI software: OpenMPI, Portals, InfiniBand, WI4MPI [EOLEN/AS+]; User environment: module, collections, flavor/features, toolchains, hands-on [EOLEN/AS+]; Vectorisation: OpenMP 4, SIMD directives, tools, optimisation [EOLEN/AS+]; Virtualisation: Pcocc, checkpoint, templates, hands-on [CEA / EOLEN]; I/O: POSIX, Std C, MPI-IO, HDF5, hands-on [EOLEN/AS+]. Prerequisites: Experience with code development; knowledge of C or F90, MPI, OpenMP. https://events.prace-ri.eu/event/740/
  • Introduction to PETSc @ MdlS/Idris

    25 - 26 June 2018

    Introduction to PETSc @ MdlS/Idris https://tess.elixir-europe.org/events/introduction-to-petsc-mdls-idris The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations (www.mcs.anl.gov/petsc/). It enables researchers to delegate the linear algebra part of their applications to a specialized team, and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran). Intended learning outcomes: On completion of this course, the participant should be able to build and solve simple PDE examples, use and compare different solvers on these examples, be familiar with the on-line documentation, and be able to easily explore other PETSc possibilities relevant to his/her application. Prerequisites: C or Fortran programming. Notions of linear algebra, as well as notions of MPI, would be an asset. https://events.prace-ri.eu/event/682/
  • Dermatologists Annual Meeting 2018

    12 November 2018

    Osaka, Japan

    Dermatologists Annual Meeting 2018 https://tess.elixir-europe.org/events/dermatologists-annual-meeting-2018 Dermatologists Annual Meeting 2018 will be held in Osaka, Japan between November 12-13, 2018, focusing on the theme “Current Emerging Practices in Clinical and Experimental Dermatology”. For more details, visit: https://dermatologists.conferenceseries.com/ Organised by Conference Series LLC Ltd. Contact: Kelly John, dermatologists@annualcongress.net. Intended audience: Dermatologists, Cosmetologists, Aesthetic Dermatologists, Skin Specialists, Dermatology experts, Young Dermatologists, Medical students.
  • Advanced MPI @ EPCC at Milton Keynes

    30 - 31 July 2018

    Advanced MPI @ EPCC at Milton Keynes https://tess.elixir-europe.org/events/advanced-mpi-epcc-at-milton-keynes Advanced MPI ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we will be running a 2-day Advanced MPI training session. Trainer: David Henty. David teaches on a wide range of EPCC's technical training courses, including MPI and OpenMP, and is overall course organiser for EPCC's MSc in High Performance Computing. Details This course is aimed at programmers seeking to deepen their understanding of MPI and explore some of its more recent and advanced features. We cover topics including communicator management, non-blocking and neighbourhood collectives, single-sided MPI and the new MPI memory model. We also look at performance aspects such as which MPI routines to use for scalability, overlapping communication and calculation, and MPI internal implementation issues. Intended learning outcomes: Understanding of how internal MPI implementation details affect performance; Familiarity with neighbourhood collective operations in MPI; Knowledge of MPI memory models for RMA operations; Familiarity with MPI RMA operations and single-sided communication; Understanding of best practice for MPI+OpenMP programming. Pre-requisites: Attendees should be familiar with MPI programming in C, C++ or Fortran, e.g. have attended the ARCHER MPI course. Pre-course setup: All attendees should bring their own wireless-enabled laptop set up with the required software. Practical exercises will be done using a guest account on ARCHER. Draft Timetable (may be subject to change): All sessions will include hands-on practical exercises in addition to lecture material. 
Day 1: 30th July 09:00 - 09:30 Welcome and Registration 09:30 - 11:00 MPI Internals 11:00 - 11:30 Coffee 11:30 - 13:00 MPI Tools 13:00 - 14:00 Lunch 14:00 - 15:30 MPI Optimisations 15:30 - 16:00 Coffee 16:00 - 17:30 Advanced Collectives 17:30 CLOSE Day 2: 31st July 09:30 - 11:00 MPI + OpenMP (i) 11:00 - 11:30 Coffee 11:30 - 13:00 MPI + OpenMP (ii) 13:00 - 14:00 Lunch 14:00 - 15:30 New MPI shared-memory model 15:30 - 16:00 Coffee 16:00 - 17:00 Individual consultancy session 17:00 CLOSE Course Materials Links to the slides and exercise material for this course will be available soon. https://events.prace-ri.eu/event/733/
  • Fortran Programming for Scientific Computing @ CSC

    13 - 14 September 2018

    Fortran Programming for Scientific Computing @ CSC https://tess.elixir-europe.org/events/fortran-programming-for-scientific-computing-csc-ec6559bd-18b6-4ff2-98a1-f5048f2c92b1 Description The Fortran programming language is still - despite its long history - one of the standard and most useful languages in scientific computing. Our course will give the participants basic knowledge of the Fortran 95 programming language as well as some glimpses of the Fortran 2003/2008 standards and their new features. Best practices for obtaining high performance with Fortran will also be discussed. Both days include lectures and practical hands-on exercises. Learning outcome: After this course you should be able to write simple programs using the Fortran language. Prerequisites: Basic knowledge of programming in general is recommended. Agenda Day 1, Thursday 13.9    09.00 – 09.45 Getting started with Fortran    09.45 – 10.15 Exercises    10.15 – 10.30 Coffee break    10.30 – 11.00 Procedures & modules    11.00 – 12.00 Exercises    12.00 – 13.00 Lunch    13.00 – 13.30 Fortran arrays I    13.30 – 14.00 Exercises    14.00 – 14.30 Fortran arrays II    14.30 – 14.45 Coffee break    14.45 – 15.45 Exercises    15.45 – 16.00 Summary of the first day Day 2, Friday 14.9    09.00 – 09.45 Input / Output    09.45 – 10.30 Exercises    10.30 – 10.45 Coffee break    10.45 – 11.15 Derived types    11.15 – 12.00 Exercises    12.00 – 13.00 Lunch    13.00 – 13.45 Other useful features    13.45 – 14.30 Exercises    14.30 – 14.45 Coffee break    14.45 – 15.45 Exercises    15.45 – 16.00 Summary of the second day Lecturers: Sami Ilvonen (CSC), N.N. (CSC). Language: English. Price: Free of charge. https://events.prace-ri.eu/event/741/
  • HPC code optimization workshop @ LRZ

    21 - 22 June 2018

    HPC code optimization workshop @ LRZ https://tess.elixir-europe.org/events/hpc-code-optimization-workshop-lrz Contents With the ever-growing complexity of computer architectures, code optimization has become the main route to keep pace with hardware advancements and to effectively use current and upcoming High Performance Computing systems. Have you ever asked yourself: Where does the performance of my application lie? What is the maximum speed-up achievable on the architecture I am using? Is my implementation matching the HPC objectives? In this workshop, we will answer these questions and provide a unique opportunity to learn techniques, methods and solutions for improving code, enabling new hardware features, and using the roofline model to visualize the potential benefits of an optimization process. We will begin with a description of the latest micro-processor architectures and how developers can efficiently use modern HPC hardware, in particular the vector units via SIMD programming and AVX-512 optimization, and the memory hierarchy. The attendees are then guided through the optimization process by means of hands-on exercises and learn how to enable vectorization using simple pragmas and more effective techniques, like changing data layout and alignment. The work is guided by hints from the Intel® compiler reports and by Intel® Advisor. We also provide an N-body code to support the described optimization solutions with practical hands-on work. The course is a PRACE training event. Learning Goals Through a sequence of simple, guided examples of code modernization, the attendees will develop awareness of features of multi- and many-core architectures which are crucial for writing modern, portable and efficient applications. 
A special focus will be dedicated to scalar and vector optimizations for the latest Intel® Xeon® Scalable processor, code-named Skylake, which will be used in the upcoming SuperMUC-NG machine at LRZ. The tutorial will have presentations and demo sessions. We will provide attendees with access to Skylake processors and Intel® tools using VM instances provided by Google Cloud Platform. The workshop interleaves lecture and practical sessions. Here is the outline: Day 1 09:00-09:30 Introduction 09:30-10:30 Skylake optimisation features 10:30-11:00 Coffee Break 11:00-12:00 Code modernisation approach 12:00-12:30 Login to Google cloud machines 12:30-13:30 Lunch 13:30-14:00 Scalar optimisation 14:00-15:00 Compiler autovectorisation 15:00-15:30 Coffee Break 15:30-16:00 Explicit vectorisation 16:00-16:30 Data layout from AoS to SoA 16:30-17:00 Memory access optimisation 17:00-18:00 Wrap-up Day 2 09:00-09:30 Introduction to roofline model 09:30-10:30 Intel Advisor analysis 10:30-11:00 Coffee Break 11:00-12:00 Intel Advisor hands-on 1/2 (roofline) 12:00-12:30 Intel Advisor hands-on 2/2 (map and dependency analysis) 12:30-13:30 Lunch 13:30-14:00 What is inside Intel Parallel Studio 2019 14:00-15:00 Introduction to MKL 15:00-15:30 Coffee Break 15:30-16:00 Hands-on MKL 16:00-16:30 Hands-on MKL 16:30-17:00 Open Discussion and feedback 17:00-18:00 Wrap-up Please bring your own laptop (with X11 support and an ssh client installed) for the hands-on sessions! For GUI applications we require the installation of vncviewer (https://www.realvnc.com/en/connect/download/viewer/). About the Lecturers Fabio Baruffa is a software technical consulting engineer in the Developer Products Division (DPD) of the Software and Services Group (SSG) at Intel. 
He works in the compiler team and provides customer support in the high performance computing (HPC) area. Prior to joining Intel, he worked as an HPC application specialist and developer in some of the largest supercomputing centres in Europe, mainly the Leibniz Supercomputing Centre and the Max Planck Computing and Data Facility in Munich, as well as Cineca in Italy. He has been involved in software development, analysis of scientific code and optimization for HPC systems. He holds a PhD in Physics from the University of Regensburg for his research in the area of spintronic devices and quantum computing. Luigi Iapichino holds a position of scientific computing expert at LRZ and is a member of the Intel Parallel Computing Center (IPCC). His main tasks are code modernization for many-core systems and HPC support. He obtained a PhD in physics in 2005 from TU München, working at the Max Planck Institute for Astrophysics. Before moving to LRZ in 2014, he worked at the Universities of Würzburg and Heidelberg, involved in research projects related to computational astrophysics. https://events.prace-ri.eu/event/727/
  • Advanced OpenMP @ EPCC at Cambridge 17-19 July 2018

    17 - 19 July 2018

    Advanced OpenMP @ EPCC at Cambridge 17-19 July 2018 https://tess.elixir-europe.org/events/advanced-openmp-epcc-at-cambridge-17-19-july-2018 OpenMP is the industry standard for shared-memory programming, which enables serial programs to be parallelised using compiler directives. This course is aimed at programmers seeking to deepen their understanding of OpenMP and explore some of its more recent and advanced features. This 3-day course will cover topics including nested parallelism, OpenMP tasks, the OpenMP memory model, performance tuning, hybrid OpenMP + MPI, OpenMP implementations, and new features in OpenMP 4.0/4.5. Hands-on practical programming exercises make up a significant and integral part of this course. Attendees should be familiar with the basics of OpenMP, including parallel regions, data scoping, work sharing directives and synchronisation constructs. Access will be given to appropriate hardware for all the exercises, although many of them can also be performed on a standard Linux laptop. Pre-course setup All attendees should bring their own wireless-enabled laptop. Practical exercises will be done using a guest account on ARCHER. You will need an ssh client such as Terminal on a Mac or Linux machine, or PuTTY or MobaXterm on Windows. The course tutor will be able to assist with settings to connect on the day. You should also have a web browser, a pdf reader and a simple text editor. 
Timetable Day 1 09:00 - 11:00  Lectures: OpenMP basics: Parallel regions, Worksharing, Synchronisation 11:00 - 11:30  Coffee 11:30 - 13:00  Practical: Parallel regions 13:00 - 14:00  Lunch 14:00 - 15:30  Lectures: Multicore and multithreaded CPUs, Caches, Cache coherency, NUMA 15:30 - 16:00  Tea 16:00 - 17:00  Practicals: Streams, Coherency Day 2 09:00 - 11:00  Lectures: Tasks, Nested parallelism, Memory model 11:00 - 11:30  Coffee 11:30 - 13:00  Practicals: Mandelbrot with nested loops, collapse, and tasks 13:00 - 14:00  Lunch 14:00 - 15:30  Lectures: OpenMP + MPI 15:30 - 16:00  Tea 16:00 - 17:00  Practicals: OpenMP + MPI Day 3 09:00 - 11:00  Lectures: OpenMP tips, tricks and pitfalls, Performance issues 11:00 - 11:30  Coffee 11:30 - 13:00  Practicals: MD tuning 13:00 - 14:00  Lunch 14:00 - 15:30  OpenMP 4.0/4.5 features, alternative APIs 15:30 - 16:00  Tea 16:00 - 17:00  Consultancy session: bring your own code! Course Materials http://www.archer.ac.uk/training/course-material/2018/07/AdvOpenMP-camb/index.php https://events.prace-ri.eu/event/722/
  • Message-passing Programming with MPI @ EPCC

    4 - 6 July 2018

    Message-passing Programming with MPI @ EPCC https://tess.elixir-europe.org/events/message-passing-programming-with-mpi-epcc-0e59d933-2fd2-41ba-ba4b-166c54a51df7 The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture. Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues. The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts. 
If you are not already familiar with basic Linux commands, logging on to a remote machine using ssh, and compiling and running a program on a remote machine, then we would strongly encourage you to also attend the Hands-on Introduction to HPC course running immediately prior to this course. This course is free to all academics. Intended Learning Outcomes On completion of this course students should be able to: Understand the message-passing model in detail. Implement standard message-passing algorithms in MPI. Debug simple MPI codes. Measure and comment on the performance of MPI codes. Design and implement efficient parallel programs to solve regular-grid problems. Pre-requisite Programming Languages: Fortran, C or C++. It is not possible to do the exercises in Java. Pre-requisite setup Attendees should bring their own laptop (Windows/Mac/Linux) and will need the following software installed: ssh client - on Mac/Linux, Terminal is fine; on Windows we recommend PuTTY from https://www.putty.org/ web browser - e.g. Firefox or Chrome pdf viewer - e.g. Firefox, Adobe Acrobat text file editor - e.g. vi, vim, emacs, notepad xming graphics viewer for Windows https://xming.en.softonic.com/ is useful if the course uses graphics   Timetable Day 1 09:30  Message-Passing Concepts 10:15  Practical: Parallel Traffic Modelling 11:00  Break 11:30  MPI Programs 12:00  MPI on ARCHER 12:15  Practical: Hello World 13:00  Lunch 14:00  Point-to-Point Communication 14:30  Practical: Pi 15:30  Break 16:00  Communicators, Tags and Modes 16:45  Practical: Ping-Pong 17:30  Finish Day 2 09:30  Non-Blocking Communication 10:00  Practical: Message Round a Ring 11:00  Break 11:30  Collective Communication 12:00  Practical: Collective Communication 13:00  Lunch 14:00  Virtual Topologies 14:30  Practical: Message Round a Ring (cont.) 15:30  Break 16:00  Derived Data Types 16:45  Practical: Message Round a Ring (cont.) 
17:30  Finish Day 3 09:30  Introduction to the Case Study 10:00  Practical: Case Study 11:00  Break 11:30  Practical: Case Study (cont.) 13:00  Lunch 14:00  Designing MPI Programs 15:00  Individual Consultancy Session 15:30  Finish https://events.prace-ri.eu/event/734/
  • Introduction to Unified Parallel C (UPC) and Co-array Fortran (CAF) @ HLRS

    5 - 6 July 2018

    Introduction to Unified Parallel C (UPC) and Co-array Fortran (CAF) @ HLRS https://tess.elixir-europe.org/events/introduction-to-unified-parallel-c-upc-and-co-array-fortran-caf-hlrs-6f854665-d2c3-43df-8a9d-76005b682dd1 Overview Partitioned Global Address Space (PGAS) is a new model for parallel programming. Unified Parallel C (UPC) and Co-array Fortran (CAF) are PGAS language extensions to C and Fortran: parallelism is part of the language. PGAS languages allow any processor to directly address memory/data on any other processor, so parallelism can be expressed more easily compared to library-based approaches such as MPI. This course gives an introduction to this novel approach to expressing parallelism. Hands-on sessions (in UPC and/or CAF) will allow users to immediately test and understand the basic constructs of PGAS languages. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. For further information and registration please visit the HLRS course page. https://events.prace-ri.eu/event/691/
  • 9th Programming and Tuning Massively Parallel Systems summer school (PUMPS)@BSC - UPC

    16 - 20 July 2018

    9th Programming and Tuning Massively Parallel Systems summer school (PUMPS)@BSC - UPC https://tess.elixir-europe.org/events/9th-programming-and-tuning-massively-parallel-systems-summer-school-pumps-bsc-upc

The Barcelona Supercomputing Center (BSC), in association with Universitat Politecnica de Catalunya (UPC), has been named a GPU Center of Excellence by NVIDIA. BSC and UPC currently offer a number of courses covering the CUDA architecture and programming languages for parallel computing. Please contact us for possible collaborations.

The ninth edition of the Programming and Tuning Massively Parallel Systems + Artificial Intelligence summer school (PUMPS+AI) is aimed at enriching the skills of researchers, graduate students and teachers with cutting-edge techniques and hands-on experience in developing applications for many-core processors with massively parallel computing resources such as GPU accelerators.

Summer School Co-Directors: Mateo Valero (BSC and UPC) and Wen-mei Hwu (University of Illinois at Urbana-Champaign)
Local Organizers: Antonio J. Peña (responsible, BSC and UPC) and Pau Farre (BSC)

Dates:
• Applications due: May 31, 2018. Due to space limitations, early application is strongly recommended. You may also be asked to attend an online prerequisite training on basic CUDA programming before joining PUMPS.
• Notification of acceptance: June 12, 2018
• Hackathon day: 15 July (only for selected applicants)
• Summer school: 16-20 July

Location: Barcelona Supercomputing Center, Computer Architecture Dept. at Universitat Politecnica de Catalunya, Barcelona, Spain

Organized by: Barcelona Supercomputing Center (BSC), University of Illinois at Urbana-Champaign, Universitat Politecnica de Catalunya (UPC) and the HiPEAC Network of Excellence. PUMPS is part of this year's PRACE Advanced Training Centre programme.

The following is a list of some of the topics that will be covered during the course:
• Deep Learning
• High-level programming models (OpenACC, Python, and Mathematica on GPUs)
• CUDA Algorithmic Optimization Strategies
• Dealing with Sparse and Dynamic Data
• Efficiency in Large Data Traversal
• Reducing Output Interference
• Controlling Load Imbalance and Divergence
• Acceleration of Collective Operations
• Dynamic Parallelism and Hyper-Q
• Debugging and Profiling CUDA Code
• Multi-GPU Execution
• Architecture Trends and Implications
• Introduction to OmpSs and to the Paraver analysis tool
• OmpSs: Leveraging GPU/CUDA Programming
• Hands-on Labs: CUDA Optimizations on Scientific Codes; OmpSs Programming and Tuning

Instructors:
• Distinguished Lecturers: Wen-mei Hwu (University of Illinois at Urbana-Champaign) and David Kirk (NVIDIA Corporation)
• Invited Lecturer: Juan Gómez-Luna (Universidad de Córdoba)
• BSC/UPC Lecturers: Antonio J. Peña, Xavier Martorell and Xavier Teruel
• Teaching Assistants: UIUC: Carl Pearson, Simon Garcia de Gonzalo, Mert Hidayetoglu; BSC/UPC: Marc Jorda, Pau Farre, Ivan Martinez, Aimar Rodriguez
• Hackathon: Juan Gómez-Luna (Universidad de Córdoba) and Pedro Valero (BSC)

Prerequisites for the course:
• Basic CUDA knowledge is required to attend the course. Applicants who cannot certify their experience in CUDA programming will be asked to take a short online course covering the necessary introductory topics.
• C, C++, Java, or equivalent programming knowledge. Skills in parallel programming will be helpful.

Registration for the course is free for attendees from academia and public institutions. Please note that travel, lodging, and meals are not covered. Applicants from non-academic institutions (companies), please contact us at pumps@bsc.es for sponsorship possibilities.

By the end of the summer school, participants will:
• Be able to design algorithms that are suitable for accelerators.
• Understand the most important architectural performance considerations for developing parallel applications.
• Be exposed to computational thinking skills for accelerating applications in science and engineering.
• Engage computing accelerators on science and engineering breakthroughs.

Programming Languages: CUDA, MPI, OmpSs

Hands-on Labs: Afternoon labs with teaching assistants for each audience/level. Participants are expected to bring their own laptops to access the servers with GPU accelerators. The afternoon lab sessions will provide hands-on experience with various languages and tools covered in the lectures and will comprise a brief introduction to the programming assignments, followed by independent work periods. Teaching assistants will be available in person and on the web to help with assignments.

https://events.prace-ri.eu/event/731/ 2018-07-16 07:00:00 UTC 2018-07-20 15:00:00 UTC