Start: Monday, 26 November 2012 @ 08:30

End: Friday, 30 November 2012 @ 17:00

Description:

PATC courses are free of charge.
Objectives:
The course begins by establishing the foundations of task decomposition and parallelization inhibitors, using a tool to analyze potential parallelism and dependencies. It then builds an understanding of the fundamental concepts behind the shared-memory and message-passing programming models. The course is taught through formal lectures and practical/programming sessions that reinforce the key concepts and set up the compilation/execution environment. It covers the two most widely used programming models: OpenMP for shared-memory architectures and MPI for their distributed-memory counterparts. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of the compute nodes in current clustered architectures is also considered. Paraver is used throughout the course as the tool for understanding the behavior and performance of the parallelized codes.
Level:
For trainees with some theoretical and practical knowledge and some programming experience.

Learning Outcomes:
On completion of this course students should be able to:
    Understand the use of task decomposition and the inhibitors of the potential parallelism in a sequential algorithm.

    Understand the fundamentals of shared-memory and message-passing models.

    Implement simple algorithms using OpenMP for shared-memory and MPI for message-passing.

    Compile, execute and debug simple OpenMP and MPI codes.

    Understand the performance of OpenMP and MPI codes.

Prerequisites:
Fortran, C or C++ programming. All examples in the course will be done in C.

https://events.prace-ri.eu/event/74/

Event type:
  • Workshops and courses
Parallel Programming Workshop @ BSC
https://tess.elixir-europe.org/events/parallel-programming-workshop-bsc