UPC: Distributed Shared Memory Programming
An Instructor Support FTP site is available from the Wiley editorial department.
1. Introductory Tutorial.
1.1 Getting Started.
1.2 Private and Shared Data.
1.3 Shared Arrays and Affinity of Shared Data.
1.4 Synchronization and Memory Consistency.
1.5 Work Sharing.
1.6 UPC Pointers.
2. Programming View and UPC Data Types.
2.1 Programming Models.
2.2 UPC Programming Model.
2.3 Shared and Private Variables.
2.4 Shared and Private Arrays.
2.5 Blocked Shared Arrays.
2.6 Compiling Environments and Shared Arrays.
3. Pointers and Arrays.
3.1 UPC Pointers.
3.2 Pointer Arithmetic.
3.3 Pointer Casting and Usage Practices.
3.4 Pointer Information and Manipulation Functions.
3.5 More Pointer Examples.
4. Work Sharing and Domain Decomposition.
4.1 Basic Work Distribution.
4.2 Parallel Iterations.
4.3 Multidimensional Data.
4.4 Distributing Trees.
5. Dynamic Shared Memory Allocation.
5.1 Allocating a Global Shared Memory Space Collectively.
5.2 Allocating Multiple Global Spaces.
5.3 Allocating Local Shared Spaces.
5.4 Freeing Allocated Spaces.
6. Synchronization and Memory Consistency.
6.1 Barriers.
6.2 Split-Phase Barriers.
6.3 Locks.
6.4 Memory Consistency.
7. Performance Tuning and Optimization.
7.1 Parallel System Architectures.
7.2 Performance Issues in Parallel Programming.
7.3 Role of Compilers and Run-Time Systems.
7.4 UPC Hand Optimization.
7.5 Case Studies.
8. UPC Libraries.
8.1 UPC Collective Library.
8.2 UPC-IO Library.
Appendix A: UPC Language Specifications, v1.1.1.
Appendix B: UPC Collective Operations Specifications, v1.0.
Appendix C: UPC-IO Specifications, v1.0.
Appendix D: How to Compile and Run UPC Programs.
Appendix E: Quick UPC Reference.
WILLIAM CARLSON, PHD, is affiliated with the IDA Center for Computing Sciences. His research interests include performance evaluation of advanced computer architectures, operating systems, languages, and compilers for parallel and distributed systems.
THOMAS STERLING, PHD, is a professor at Caltech and its Jet Propulsion Laboratory. His research interests include parallel computing architecture, cluster computing, petaflop computing, and systems software and evaluation.
KATHERINE YELICK, PHD, is a Professor of Computer Science at the University of California, Berkeley. Her research interests include parallel computing, memory hierarchy optimizations, programming languages, and compilers.