This tutorial covers how to write a parallel program to calculate π using the Monte Carlo method. The first code sample is a simple serial implementation. The next samples are parallelized using MPI and OpenMP, and the final code combines both of these parallel techniques.

To set up the estimate, randomly located points are generated within a 2×2 square that has a circle inscribed within it – think of a game of darts. The algorithm generates a large number of points and checks whether the coordinates, x and y, of each point lie inside the circle, i.e. x² + y² ≤ 1. The ratio, P, of points inside the circle to the total number of points tried is then calculated. When the number of points, niter, is large, P approaches the ratio of the area of the circle to the area of the square, so we can say P = πr²/4r² = π/4 and solve for π: P is multiplied by 4 and the result is an approximation of π. (Note: the more points generated, i.e. the larger niter, the more accurate the approximation will be.)

serialpi.c declares the working variables as:

double x, y;          // x,y value for the random coordinate
int i, count = 0;     // count holds the number of good coordinates

To parallelize this, the #pragma omp parallel compiler directive needs to be added. This tells the compiler that the following block will be OpenMP parallel code; here the code creates 16 threads, one for each core of the CPU.

Since OpenMP is a shared-memory API, one needs to keep in mind which variables must be accessible to which threads. Here, each thread needs its own copy of x, y, z, and i, while the variable count needs to be accessible by all threads so that it can be incremented when needed. To accomplish this, OpenMP is told that x, y, z, and i are private: the firstprivate() clause is used in the omp pragma to declare them thread-private, and it also automatically initializes each thread's copy of the variables inside its parentheses. The variable count needs to be accessible to all threads (i.e. shared), so the shared() clause is used. Finally, OpenMP is told to spawn 16 threads (one for each core of the processor) by using the num_threads() clause.