Recent Advances in Parallel Virtual Machine and Message Passing Interface [electronic resource] : 11th European PVM/MPI Users’ Group Meeting Budapest, Hungary, September 19 - 22, 2004. Proceedings / edited by Dieter Kranzlmüller, Péter Kacsuk, Jack Dongarra.

Contributor(s): Kranzlmüller, Dieter [editor] | Kacsuk, Péter [editor] | Dongarra, Jack [editor] | SpringerLink (Online service)
Material type: Text
Series: Lecture Notes in Computer Science ; 3241
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2004
Description: XIV, 458 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783540302186
Subject(s): Computer science | Electronic data processing | Computer Science | Programming Techniques | Programming Languages, Compilers, Interpreters | Computation by Abstract Devices | Numeric Computing | Arithmetic and Logic Structures | Processor Architectures
Additional physical formats: Printed edition: No title
DDC classification: 005.11
LOC classification: QA76.6-76.66
Online resources: Click here to access online
Contents:
Invited Talks -- PVM Grids to Self-assembling Virtual Machines -- The Austrian Grid Initiative – High Level Extensions to Grid Middleware -- Fault Tolerance in Message Passing and in Action -- MPI and High Productivity Programming -- High Performance Application Execution Scenarios in P-GRADE -- An Open Cluster System Software Stack -- Advanced Resource Connector (ARC) – The Grid Middleware of the NorduGrid -- Next Generation Grid: Learn from the Past, Look to the Future
Tutorials -- Production Grid Systems and Their Programming -- Tools and Services for Interactive Applications on the Grid – The CrossGrid Tutorial
Extensions and Improvements -- Verifying Collective MPI Calls -- Fast Tuning of Intra-cluster Collective Communications -- More Efficient Reduction Algorithms for Non-Power-of-Two Number of Processors in Message-Passing Parallel Systems -- Zero-Copy MPI Derived Datatype Communication over InfiniBand -- Minimizing Synchronization Overhead in the Implementation of MPI One-Sided Communication -- Efficient Implementation of MPI-2 Passive One-Sided Communication on InfiniBand Clusters -- Providing Efficient I/O Redundancy in MPI Environments -- The Impact of File Systems on MPI-IO Scalability -- Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation -- Open MPI’s TEG Point-to-Point Communications Methodology: Comparison to Existing Implementations -- The Architecture and Performance of WMPI II -- A New MPI Implementation for Cray SHMEM
Algorithms -- A Message Ordering Problem in Parallel Programs -- BSP/CGM Algorithms for Maximum Subsequence and Maximum Subarray -- A Parallel Approach for a Non-rigid Image Registration Algorithm -- Neighborhood Composition: A Parallelization of Local Search Algorithms -- Asynchronous Distributed Broadcasting in Cluster Environment -- A Simple Work-Optimal Broadcast Algorithm for Message-Passing Parallel Systems -- Nesting OpenMP and MPI in the Conjugate Gradient Method for Band Systems -- An Asynchronous Branch and Bound Skeleton for Heterogeneous Clusters
Applications -- Parallelization of GSL: Architecture, Interfaces, and Programming Models -- Using Web Services to Run Distributed Numerical Applications -- A Grid-Based Parallel Maple -- A Pipeline-Based Approach for Mapping Message-Passing Applications with an Input Data Stream -- Parallel Simulations of Electrophysiological Phenomena in Myocardium on Large 32 and 64-bit Linux Clusters
Tools and Environments -- MPI I/O Analysis and Error Detection with MARMOT -- Parallel I/O in an Object-Oriented Message-Passing Library -- Detection of Collective MPI Operation Patterns -- Detecting Unaffected Race Conditions in Message-Passing Programs -- MPI Cluster System Software -- A Lightweight Framework for Executing Task Parallelism on Top of MPI -- Easing Message-Passing Parallel Programming Through a Data Balancing Service -- TEG: A High-Performance, Scalable, Multi-network Point-to-Point Communications Methodology
Cluster and Grid -- Efficient Execution on Long-Distance Geographically Distributed Dedicated Clusters -- Identifying Logical Homogeneous Clusters for Efficient Wide-Area Communications -- Coscheduling and Multiprogramming Level in a Non-dedicated Cluster -- Heterogeneous Parallel Computing Across Multidomain Clusters -- Performance Evaluation and Monitoring of Interactive Grid Applications -- A Domain Decomposition Strategy for GRID Environments -- A PVM Extension to Exploit Cluster Grids
Performance -- An Initial Analysis of the Impact of Overlap and Independent Progress for MPI -- A Performance-Oriented Technique for Hybrid Application Development -- A Refinement Strategy for a User-Oriented Performance Analysis -- What Size Cluster Equals a Dedicated Chip -- Architecture and Performance of the BlueGene/L Message Layer
Special Session: ParSim 2004 -- Special Session of EuroPVM/MPI 2004: Current Trends in Numerical Simulation for Parallel Engineering Environments, ParSim 2004 -- Parallelization of a Monte Carlo Simulation for a Space Cosmic Particles Detector -- On the Parallelization of a Cache-Optimal Iterative Solver for PDEs Based on Hierarchical Data Structures and Space-Filling Curves -- Parallelization of an Adaptive Vlasov Solver -- A Framework for Optimising Parameter Studies on a Cluster Computer by the Example of Micro-system Design -- Numerical Simulations on PC Graphics Hardware.
In: Springer eBooks
Summary: The message passing paradigm is the most frequently used approach to develop high-performance computing applications on parallel and distributed computing architectures. Parallel Virtual Machine (PVM) and Message Passing Interface (MPI) are the two main representatives in this domain. This volume comprises 50 selected contributions presented at the 11th European PVM/MPI Users’ Group Meeting, which was held in Budapest, Hungary, September 19–22, 2004. The conference was organized by the Laboratory of Parallel and Distributed Systems (LPDS) at the Computer and Automation Research Institute of the Hungarian Academy of Sciences (MTA SZTAKI). The conference was previously held in Venice, Italy (2003), Linz, Austria (2002), Santorini, Greece (2001), Balatonfüred, Hungary (2000), Barcelona, Spain (1999), Liverpool, UK (1998), and Krakow, Poland (1997). The first three conferences were devoted to PVM and were held in Munich, Germany (1996), Lyon, France (1995), and Rome, Italy (1994). In its eleventh year, this conference is well established as the forum for users and developers of PVM, MPI, and other message-passing environments. Interactions between these groups have proved to be very useful for developing new ideas in parallel computing, and for applying some of those already existent to new practical fields. The main topics of the meeting were evaluation and performance of PVM and MPI; extensions, implementations, and improvements of PVM and MPI; parallel algorithms using the message passing paradigm; and parallel applications in science and engineering. In addition, the topics of the conference were extended to include cluster and grid computing, in order to reflect the importance of this area for the high-performance computing community.
Item type: E-BOOKS
Holdings:
Current library: IMSc Library
Home library: IMSc Library
URL: Link to resource
Status: Available
Barcode: EBK3413



The Institute of Mathematical Sciences, Chennai, India
