PHY1610 - Winter 2024

Assignment 9: Distributed random numbers

Opened: Monday, 1 April 2024, 12:00 AM
Due: Monday, 8 April 2024, 11:59 PM

Consider a set of data points that are random floating point numbers between 0.0 (inclusive) and 1.0 (exclusive). We are interested in determining their distribution.

To parallelize the processing of this potentially very large data set, we distribute the data points over the MPI processes such that the process with a given rank in a communicator of a given size holds the data points whose values lie in the interval [rank/size, (rank+1)/size).

With this distributed array, we will be able to compute histograms in parallel.

Your task is to write an MPI program that performs the following:

1) First, a root process generates a total of N random numbers with the supplied PRNG class. However, it should not produce all numbers at once. Instead, these should be generated in batches of size Z. After each batch is generated, its data points are to be distributed to the MPI processes based on their values, as described above: the process with a given rank receives the points lying in [rank/size, (rank+1)/size).

2) Once all N data points have been distributed over the MPI processes, each process should compute a histogram of its points with a bin spacing of dx. You may assume dx to be commensurate with the intervals.

3) The distributed histograms should be collected into a single array on the root process. The normalized histogram should then be printed to the console.

4) Go back to the random number generation of step (1) and parallelize it over the MPI processes using the discard function of the PRNG class.

Your program should use N=1'000'000'000, batch size Z=100'000, and dx=0.015625. Write a job script to run this code for P=1, 4, 16, and 32 processes, timing the results. Include the job script and its output.

Note that without step 4 you will not see much speedup, as the random number generation would still be serial.

As before, we expect you to use make and git, with several meaningful commits.

Submit your work by April 8th, 11:59 PM. The usual late penalty applies.


  • prng_example.cpp
    1 April 2024, 5:13 PM
  • prng.h
    1 April 2024, 5:13 PM