Tags: parallel programming

Description

A parallel programming model is a set of software technologies for expressing parallel algorithms and matching applications to the underlying parallel systems. It encompasses applications, programming languages, compilers, libraries, communication systems, and parallel I/O. Because automatic parallelization remains difficult today, developers must choose an appropriate parallel programming model, or a combination of models, to build parallel applications for a particular platform.

Learn more about parallel programming from the many resources on this site, listed below.

All Categories (1-20 of 32)

  1. Jasmin König

    https://nanohub.org/members/323796

  2. Recursive algorithm for NEGF in Python GPU version

    02 Feb 2021 | Contributor(s): Ning Yang, Tong Wu, Jing Guo

    This folder contains two Python functions for GPU-accelerated simulation that implement the recursive algorithm in the non-equilibrium Green’s function (NEGF) formalism. Compared to the MATLAB implementation [1], the GPU version allows massively parallel execution across the many cores of a GPU...
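
    The recursion itself is compact. Below is a minimal sketch of its two sweeps for a block-tridiagonal Hamiltonian, assuming CuPy as the GPU array library; the function name, arguments, and the use of a simple broadening eta (instead of full contact self-energies) are illustrative assumptions, not the published code.

    ```python
    # Minimal recursive Green's function (RGF) sketch on GPU via CuPy.
    # Assumptions: block-tridiagonal H given as diagonal blocks H_diag[i]
    # and couplings H_off[i] = H_{i,i+1}; eta broadening stands in for
    # contact self-energies.
    import cupy as cp

    def rgf_diagonal(E, H_diag, H_off, eta=1e-6):
        """Diagonal blocks of G^r = [(E + i*eta) I - H]^{-1}."""
        z = E + 1j * eta
        n = H_diag[0].shape[0]
        I = cp.eye(n, dtype=cp.complex128)

        # Forward sweep: left-connected Green's functions g_i.
        g = [cp.linalg.inv(z * I - H_diag[0])]
        for i in range(1, len(H_diag)):
            sigma = H_off[i - 1].conj().T @ g[i - 1] @ H_off[i - 1]
            g.append(cp.linalg.inv(z * I - H_diag[i] - sigma))

        # Backward sweep: fold the right part back in to get full G_ii.
        G = [None] * len(H_diag)
        G[-1] = g[-1]
        for i in range(len(H_diag) - 2, -1, -1):
            G[i] = g[i] + g[i] @ H_off[i] @ G[i + 1] @ H_off[i].conj().T @ g[i]
        return G
    ```

    In practice the parallel win typically comes from batching many energy points and block operations on the GPU rather than from any single inversion.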

  3. Sebastian Jan Juchnowski

    https://nanohub.org/members/197883

  4. A Performance Comparison of Algebraic Multigrid Preconditioners on GPUs and MIC

    04 Feb 2016 | Contributor(s): Karl Rupp

    Algebraic multigrid (AMG) preconditioners for accelerators such as graphics processing units (GPUs) and Intel's many-integrated core (MIC) architecture typically require a careful, problem-dependent trade-off between efficient hardware use, robustness, and convergence rate in order to...

  5. Advanced Parallel CPU Programming Part 1: OmpSs Quick Overview

    29 Aug 2013 | Contributor(s): NanoBio Node, Xavier Teruel

    High Performance Computing: some basic concepts, supercomputers nowadays, parallel programming models. OmpSs Introduction: OmpSs main features; a practical example: Cholesky factorization. BSC’s Implementation: Mercurium compiler, Nanos++ runtime library, visualization tools.

  6. Intel Advisor XE 2013

    12 Mar 2013 | Contributor(s): Intel

    A presentation given by Intel engineer James Tullos at Purdue University on March 8, 2013.

  7. Intel Inspector XE 2013: An Introduction

    12 Mar 2013 | Contributor(s): Holly Wilper

    A presentation given by Intel engineer Holly Wilper at Purdue University on March 8, 2013.

  8. Intel VTune Amplifier XE 2013: An Introduction

    12 Mar 2013 | Contributor(s): Intel

  9. Intel Xeon Phi Programming

    12 Mar 2013 | Contributor(s): James Tullos

    A presentation given by Intel engineer James Tullos at Purdue University on March 8, 2013.

  10. Edoardo Emilio Coronado

    https://nanohub.org/members/66233

  11. Mathematica for CUDA and OpenCL Programming

    07 Mar 2011 | Contributor(s): Ulises Cervantes-Pimentel, Abdul Dakkak

    In the latest release of Mathematica 8, a large number of programming tools for GPU computing are available. In this presentation, new tools for CUDA and OpenCL programming will be explored. Several applications, including image processing, medical imaging, multi-GPU computing, statistics, and finance, will...

  12. Illinois ECE 498AL: Programming Massively Parallel Processors, Lecture 10: Control Flow

    01 Sep 2009 | Contributor(s): Wen-Mei W Hwu

    Control Flow. Topics: Terminology Review, How Thread Blocks Are Partitioned, Control Flow Instructions, Parallel Reduction, A Vector Reduction Example, A Simple Implementation, Vector Reduction with Bank Conflicts, Vector Reduction with Branch Divergence, Predicated Execution Concept, Instruction Predication...
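
    As a companion to the reduction material above, here is a sketch of the divergence-free halving scheme, written with Numba's CUDA bindings rather than the CUDA C of the original slides; the kernel and variable names are illustrative.

    ```python
    # Block-level parallel sum with a divergence-friendly stride pattern.
    import numpy as np
    from numba import cuda, float32

    BLOCK = 256  # threads per block; a power of two for the halving scheme

    @cuda.jit
    def block_sum(x, partial):
        sdata = cuda.shared.array(shape=BLOCK, dtype=float32)
        tid = cuda.threadIdx.x
        i = cuda.blockIdx.x * BLOCK + tid
        sdata[tid] = x[i] if i < x.shape[0] else 0.0
        cuda.syncthreads()
        # Halve the stride each step: active threads stay packed at low tid,
        # so warps remain converged (unlike the naive tid % (2*s) == 0
        # scheme the lecture uses to demonstrate branch divergence).
        s = BLOCK // 2
        while s > 0:
            if tid < s:
                sdata[tid] += sdata[tid + s]
            cuda.syncthreads()
            s //= 2
        if tid == 0:
            partial[cuda.blockIdx.x] = sdata[0]

    x = np.arange(1 << 20, dtype=np.float32)
    blocks = (x.size + BLOCK - 1) // BLOCK
    partial = cuda.device_array(blocks, dtype=np.float32)
    block_sum[blocks, BLOCK](cuda.to_device(x), partial)
    print(partial.copy_to_host().sum())  # finish the last level on the host
    ```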

  13. Illinois ECE 498AL: Programming Massively Parallel Processors, Lecture 9: Memory Hardware in G80

    30 Aug 2009 | Contributor(s): Wen-Mei W Hwu

    Memory Hardware in G80. Topics: CUDA Device Memory Space, Parallel Memory Sharing, SM Memory Architecture, SM Register File, Programmer View of Register File, Matrix Multiplication Example, More on Dynamic Partitioning, ILP vs. TLP, Memory Layout of a Matrix in C, Constants, Shared Memory, Parallel Memory...

  14. Illinois ECE 498AL: Programming Massively Parallel Processors, Lecture 6: CUDA Memories - Part 2

    20 Aug 2009 | Contributor(s): Wen-Mei W Hwu

    CUDA Memories, Part 2. Topics: Tiled Multiply, Breaking Md and Nd into Tiles, Tiled Matrix Multiplication Kernel, CUDA Code: Kernel Execution Configuration, First-Order Size Considerations in G80, G80 Shared Memory and Threading, Tiling Size Effects, Typical Structure of a CUDA Program. These lectures were...
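
    The tiled multiply covered above is the canonical shared-memory example; the sketch below renders it in Numba CUDA instead of the lecture's CUDA C, with TILE standing in for the lecture's tile width (names are illustrative).

    ```python
    # Shared-memory tiled matrix multiply, C = A @ B.
    # Launch with block = (TILE, TILE) and grid = (ceil(cols/TILE), ceil(rows/TILE)).
    from numba import cuda, float32

    TILE = 16  # tile width; the lecture calls this TILE_WIDTH

    @cuda.jit
    def tiled_matmul(A, B, C):
        # One shared-memory tile of each input matrix per thread block.
        sA = cuda.shared.array(shape=(TILE, TILE), dtype=float32)
        sB = cuda.shared.array(shape=(TILE, TILE), dtype=float32)
        tx, ty = cuda.threadIdx.x, cuda.threadIdx.y
        row = cuda.blockIdx.y * TILE + ty
        col = cuda.blockIdx.x * TILE + tx
        acc = 0.0
        # March matching tiles across A's row of blocks and down B's column.
        for m in range((A.shape[1] + TILE - 1) // TILE):
            # Cooperative load; out-of-range threads load zeros.
            sA[ty, tx] = A[row, m * TILE + tx] if row < A.shape[0] and m * TILE + tx < A.shape[1] else 0.0
            sB[ty, tx] = B[m * TILE + ty, col] if m * TILE + ty < B.shape[0] and col < B.shape[1] else 0.0
            cuda.syncthreads()  # the tile must be complete before use
            for k in range(TILE):
                acc += sA[ty, k] * sB[k, tx]
            cuda.syncthreads()  # finish with the tile before it is overwritten
        if row < C.shape[0] and col < C.shape[1]:
            C[row, col] = acc
    ```

    Each element of A and B is read from global memory once per tile rather than once per multiply, which is the "tiling size effect" the lecture quantifies.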

  15. Illinois ECE 498AL: Programming Massively Parallel Processors

    11 Aug 2009 | Contributor(s): Wen-Mei W Hwu

    Spring 2009. Virtually all semiconductor market domains, including PCs, game consoles, mobile handsets, servers, supercomputers, and networks, are converging to concurrent platforms. There are two important reasons for this trend. First, these concurrent processors can potentially offer more...

  16. The Multicore Era: Crisis or (and?) Opportunity?

    27 Mar 2009 | Contributor(s): Mithuna Thottethodi

    This talk will provide a brief overview of how we got to the multicore era, the implications and challenges for hardware/software developers and users, and some informed speculation on where the trends may be headed.

  17. Charles Taylor Patrick Gillespie

    Mr. Charles Taylor Patrick Gillespie is currently pursuing an LL.M. in Intellectual Property at Santa Clara University School of Law, focusing on Nanotechnology and the Law. He graduated from the...

    https://nanohub.org/members/33082

  18. MPI for the Next Generation of Supercomputing

    05 Dec 2008 | Contributor(s): Andrew Lumsdaine

    Despite premature rumours of its demise, MPI continues to be the de facto standard for high-performance parallel computing. Nonetheless, supercomputing software and the high-end ecosystem continue to advance, creating challenges to several aspects of MPI. In this talk we will review the design...
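
    For readers new to MPI, here is a minimal mpi4py sketch of the message-passing model the talk takes as its baseline; the slicing scheme and script name are illustrative assumptions, not material from the talk.

    ```python
    # Minimal SPMD example: each rank sums its own slice, MPI reduces them.
    # Run with e.g. `mpiexec -n 4 python partial_sum.py`.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank handles the strided slice rank, rank+size, rank+2*size, ...
    local = sum(range(rank, 1000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)  # 499500
    ```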

  19. OpenMP Tutorial

    25 Nov 2008 | Contributor(s): Seung-Jai Min

    This tutorial consists of three parts. First, we will discuss how OpenMP is typically used and explain the OpenMP programming model. Second, we will describe important OpenMP constructs and data environments. Finally, we will show a simple example to illustrate how OpenMP APIs are used to program...
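
    OpenMP itself targets C, C++, and Fortran; to keep the examples on this page in one language, here is a rough Python analogue of OpenMP's parallel-for-with-reduction using Numba's prange. This is an illustrative stand-in, not an example from the tutorial.

    ```python
    # A Python analogue of an OpenMP worksharing loop with a reduction.
    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def dot(a, b):
        total = 0.0
        # prange splits iterations across threads, much like
        # "#pragma omp parallel for reduction(+:total)" in OpenMP C.
        for i in prange(a.size):
            total += a[i] * b[i]  # Numba recognizes this as a reduction
        return total

    x = np.random.rand(1_000_000)
    print(dot(x, x))
    ```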

  20. Purdue School on High Performance and Parallel Computing

    24 Nov 2008 | Contributor(s): Alejandro Strachan, Faisal Saied

    The goal of this workshop is to provide training in the area of high performance scientific computing for graduate students and researchers interested in scientific computing. The School will address current hardware and software technologies and trends for parallel computing and their...