Illinois ECE 498AL: Programming Massively Parallel Processors, Lecture 5: CUDA Memories

By Wen-Mei W Hwu

University of Illinois at Urbana-Champaign

Abstract

CUDA Memories

Topics:

  • G80 Implementation of CUDA Memories
  • CUDA Variable Type Qualifiers (illustrated after this list)
  • Where to Declare Variables
  • Variable Type Restrictions
  • A Common Programming Strategy
  • GPU Atomic Integer Operations
  • Matrix Multiplication Using Shared Memory
  • How About Performance on G80?
  • IDEA: Use Shared Memory to Reuse Global Memory Data
  • Tiled Multiply (sketched after this list)
  • CUDA Code - Kernel Execution Configuration
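
For readers skimming the topics above, the following is a minimal, illustrative sketch of the CUDA variable type qualifiers the lecture covers and where each kind of variable lives in the G80 memory hierarchy. The kernel and variable names (qualifierDemo, coeffs, globalScale, tile) are invented for this example and are not taken from the lecture slides; the sketch also assumes the kernel is launched with at most 256 threads per block.

    // Illustrative only: shows which memory space each qualifier selects.
    __constant__ float coeffs[16];        // constant memory: read-only in kernels, cached, grid scope
    __device__   float globalScale = 2.0f; // global (device) memory: application lifetime, grid scope

    __global__ void qualifierDemo(float *out)
    {
        __shared__ float tile[256];       // shared memory: on-chip, one copy per thread block
        int t = threadIdx.x;              // automatic scalars such as t live in registers

        tile[t] = coeffs[t % 16] * globalScale;
        __syncthreads();                  // make the block's shared-memory writes visible

        out[blockIdx.x * blockDim.x + t] = tile[t];
    }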

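The tiled multiply and kernel execution configuration topics fit the same pattern: each thread block stages one tile of M and one tile of N in shared memory so that each global-memory element is reused TILE_WIDTH times. The sketch below is a reconstruction of that standard technique, not the lecture's exact code; it assumes square matrices whose width is a multiple of TILE_WIDTH, and the names MatrixMulTiled and launchMatrixMul are illustrative.

    #include <cuda_runtime.h>

    #define TILE_WIDTH 16    // assumption: Width is a multiple of TILE_WIDTH

    __global__ void MatrixMulTiled(const float *M, const float *N, float *P, int Width)
    {
        __shared__ float Ms[TILE_WIDTH][TILE_WIDTH];
        __shared__ float Ns[TILE_WIDTH][TILE_WIDTH];

        int row = blockIdx.y * TILE_WIDTH + threadIdx.y;
        int col = blockIdx.x * TILE_WIDTH + threadIdx.x;
        float Pvalue = 0.0f;

        for (int m = 0; m < Width / TILE_WIDTH; ++m) {
            // Each thread loads one element of the current M tile and N tile.
            Ms[threadIdx.y][threadIdx.x] = M[row * Width + m * TILE_WIDTH + threadIdx.x];
            Ns[threadIdx.y][threadIdx.x] = N[(m * TILE_WIDTH + threadIdx.y) * Width + col];
            __syncthreads();              // tiles fully loaded before use

            for (int k = 0; k < TILE_WIDTH; ++k)
                Pvalue += Ms[threadIdx.y][k] * Ns[k][threadIdx.x];
            __syncthreads();              // finish using tiles before overwriting them
        }
        P[row * Width + col] = Pvalue;
    }

    // Execution configuration: a 2D grid of TILE_WIDTH x TILE_WIDTH blocks covering P.
    void launchMatrixMul(const float *dM, const float *dN, float *dP, int Width)
    {
        dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
        dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);
        MatrixMulTiled<<<dimGrid, dimBlock>>>(dM, dN, dP, Width);
    }

The shared-memory staging is what lifts performance on G80: each global-memory load now feeds TILE_WIDTH multiply-adds instead of one, which is the "reuse global memory data" idea listed above.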
Credits

These lectures were breezed by Carl Pearson and Daniel Borup, then reviewed, edited, and uploaded by Omar Sobh.

Sponsored by

NCN@illinois

Cite this work

Researchers should cite this work as follows:

  • Wen-Mei W Hwu (2009), "Illinois ECE 498AL: Programming Massively Parallel Processors, Lecture 5: CUDA Memories," http://nanohub.org/resources/7243.

nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.