Assignment 2: Parallelize What Seems Inherently Sequential

ECE1747H F LEC0101 20239: Parallel Programming

Introduction

In parallel computing, there are operations that, at first glance, seem inherently sequential but can

be transformed and executed efficiently in parallel. One such operation is the "scan". At its

essence, the scan operation processes an array to produce a new array where each element is

the result of a binary associative operation applied to all preceding elements in the original array.

Consider an array of numbers, and envision producing a new array where each element is the

sum of all previous numbers in the original array. This type of scan that uses "+" as the binary

operator is commonly known as a "prefix-sum".  Scan has two primary variants: exclusive and

inclusive. In an exclusive scan, the result at each position excludes the current element, while in

an inclusive scan, it includes the current element. For instance, given an array [3, 1, 7, 0] and

an addition operation, an exclusive scan would produce [0, 3, 4, 11], and an inclusive scan would produce [3, 4, 11, 11].

Scan operations are foundational in parallel algorithms, with applications spanning from sorting to

stream compaction, building histograms and even more advanced tasks like constructing data

structures in parallel. In this assignment, we'll delve deep into the intricacies of scan, exploring its

efficient implementation using CUDA.

Assignment Description

In this assignment, you will implement a parallel scan using CUDA. Let's further assume that the

scan is inclusive and the operator involved in the scan is addition. In other words, you will be

implementing an inclusive prefix sum.

The following is a sequential version of inclusive prefix sum:

void sequential_scan(int *x, int *y, unsigned int N) {
  y[0] = x[0];
  for (unsigned int i = 1; i < N; ++i) {
    y[i] = y[i - 1] + x[i];
  }
}

While this might seem like a task demanding sequential processing, with the right algorithm, it can

be efficiently parallelized. Your parallel implementation will be compared against the sequential

version, which runs on the CPU. Your mark will be based on the speedup achieved by your implementation. Note that data transfer time is not included in this assignment. However, in real-world applications, data transfer is often a bottleneck, and it is important to include it in the speedup calculation.

Potential Algorithms

In this section, I describe a few algorithms to implement a parallel scan on GPU, which you may

use for this assignment. Of course, you may also choose to use other algorithms. These

algorithms are chosen for their simplicity and may not be the fastest.

We will first present algorithms for performing parallel segmented scan, in which every thread

block will perform a scan on a segment of elements in the input array in parallel. We will then

present methods that combine the segmented scan results into the scan output for the entire input

array.

Segmented Scan Algorithms

The exploration of parallel solutions for scan problems has a long history, spanning several

decades. Interestingly, this research began even before the formal establishment of Computer

Science as a discipline. Scan circuits, crucial to the operation of high-speed adder hardware like

carry-skip adders, carry-select adders, and carry-lookahead adders, stand as evidence of this

pioneering research.

As we know, the fastest parallel method to compute the sum of a set of values is through a

reduction tree. Given enough execution units, this tree can compute the sum of N values in

log2(N) time units. Additionally, the tree can produce intermediate sums, which can be used to

produce the scan (prefix sum) output values. This principle is the foundation of the design of both

the Kogge-Stone and Brent-Kung adders.

Brent-Kung Algorithm

The figure above shows the steps of a parallel inclusive prefix sum algorithm based on the Brent-Kung adder design. The top half of the figure produces the sum of all 16 values in 4 steps; this is exactly how a reduction tree works. The second part of the algorithm (the bottom half of the figure) uses a reverse tree to distribute the partial sums and uses them to complete the results at the remaining positions.

Kogge-Stone Algorithm

The Kogge-Stone algorithm is a well-known, minimum-depth network that uses a recursive-doubling approach for aggregating partial reductions. The figure above shows an in-place scan algorithm that operates on an array X that initially contains the input values. It iteratively evolves the contents of the array into the output elements.

In the first iteration, each position other than X[0] receives the sum of its current content and that of its left neighbor. This is illustrated by the first row of addition operators in the figure. As a result, X[i] contains x[i-1] + x[i]. In the second iteration, each position other than X[0] and X[1] receives the sum of its current content and that of the position two elements away (see the second row of adders). After k iterations, X[i] will contain the sum of up to 2^k input elements at and before that location.

Although it has a work complexity of O(n log n), its shallow depth and simple shared-memory address calculations make it a favorable approach for SIMD (SIMT) execution, such as GPU warps.

Scan for Arbitrary-length Inputs

For many applications, the number of elements to be processed by a scan operation can be in the

millions or even billions. The algorithms that we have presented so far perform local scans on

input segments. Therefore, we still need a way to consolidate the results from different sections.

Hierarchical Scan

One such consolidation approach is the hierarchical scan. For a large dataset, we first partition the input into sections so that each of them fits into the shared memory of a streaming multiprocessor and can be processed by a single block. The aforementioned algorithms can be used to perform a scan on each partition. At the end of the grid execution, the Y array will contain the scan results for the individual sections, called scan blocks (see the figure above). The second step gathers the last result element from each scan block into an array S and performs a scan on these elements. In the last step of the hierarchical scan algorithm, the intermediate results in S are added to the corresponding elements in Y to form the final result of the scan.

For those who are familiar with computer arithmetic circuits, you may already recognize that the principle behind the hierarchical scan algorithm is quite similar to that of carry look-ahead adders in modern processor hardware.

Single Pass Scan

One issue with hierarchical scan is that the partially scanned results are stored into global

memory after step 1 and reloaded from global memory before step 3. The memory access is not

overlapped with computation and can significantly affect the performance of the scan

implementation (as shown in the above figure).

Many techniques have been proposed to mitigate this issue. Single-pass chained scan (also called stream-based scan or domino-style scan) passes the partial-sum data in one direction across adjacent blocks. Chained scan is based on the key observation that the global scan step (step 2 in the hierarchical scan) can be performed in a domino fashion (i.e., from left to right, with each output used immediately). As a result, the global scan step does not require a global synchronization after it, since each segment only needs the partial sums of the segments before itself.

Further Reading

  • Parallel Prefix Sum (Scan) with CUDA
  • Single-pass Parallel Prefix Scan with Decoupled Look-back

Report

Along with your code, you will also need to submit a report. Your report should describe the following aspects in detail:

  • Describe which algorithm you chose and why.
  • Describe any design decisions you made and why. Explain how they might affect performance.
  • Describe anything you tried (even if it did not make it into the final implementation) and whether it worked. Why or why not?
  • Analyze the bottlenecks of your current implementation and the potential optimizations.

Use font Times New Roman, size 10, single spaced. The length of the report should not exceed 3

pages.

Setup

Initial Setup

Start by unzipping the provided starter code a2.zip into a protected directory within your UG home directory. There are multiple files in the provided zip file; the only file you will need to modify and hand in is implementation.cu. You are not allowed to modify other files, as only your implementation.cu file will be tested for marking.

Within implementation.cu, you need to insert your identification information in the print_team_info() function. This information is used for marking, so do this right away, before you start the assignment.

Compilation

The assignment uses GNU Make to compile the source code. Run make in the assignment

directory to compile the project, and the executable named ece1747a2 should appear in the same

directory.

Coding Rules

The coding rules are simple:

  • You must not use any existing GPU parallel programming library such as thrust or cub.
  • You may implement any algorithm you want.
  • Your implementation must use CUDA C++ and be compilable using the provided Makefile.
  • You must not interfere with or attempt to alter the time measurement mechanism.
  • Your implementation must be properly synchronized so that all operations are finished before your implementation returns.

Evaluation

The assignment will be evaluated on a UG machine equipped with an Nvidia GPU. Therefore, make sure to test your implementation on the UG machines before submission. When you evaluate your implementation using the command below, you should see similar output.

ece1747a2 -g
************************************************************************************
Submission Information:
nick_name: default-name
student_first_name: john
student_last_name: doe
student_student_number: 0000000000
************************************************************************************
Performance Results:
Time consumed by the sequential implementation: 124374us
Time consumed by your implementation: 125073us
Optimization Speedup Ratio (nearest integer): 1
************************************************************************************

Marking Scheme

The total available marks for the assignment are divided as follows: 20% for the lab report, 65%

for the non-competitive portion, and 15% for the competitive portion. The non-competitive section

is designed to allow individuals who put in minimal effort to pass the course, while the competitive

section aims to reward those who demonstrate higher merit.

Non-competitive Portion (65%)

Achieving full marks in the non-competitive portion should be straightforward for anyone who puts

in the minimal acceptable amount of effort. You will be awarded full marks in this section if your

implementation achieves a threshold speedup of 30x. Based on submissions during the

assignment, the TA reserves the right to adjust this threshold as deemed appropriate, providing at

least one week's notice.

Competitive Portion (15%)

Marks in this section will be determined based on the speedup of your implementation relative to

the best and worst speedups in the class. The formula for this is:

mark = (your speedup - worst speedup over threshold) / (top speedup - worst speedup over threshold)

Throughout the assignment, updates on competitive marks will be posted on Piazza at intervals

not exceeding 24 hours.

The speedup will be measured on a standard UG machine equipped with a GPU. (Therefore, make sure to test your implementation on the UG machines.) The final marking will be performed after the submission deadline on all valid submissions.

Submission

Submit your report on Quercus. Make sure your report is in PDF format and can be viewed with a standard PDF viewer (e.g., xpdf or acroread).

When you have completed the lab, you will hand in just the implementation.cu file that contains your solution. The standard procedure to submit your assignment is to type submitece1747f 2 implementation.cu on one of the UG machines.

Make sure you have included your identifying information in the print_team_info() function. Remove any extraneous print statements.

