PHY1610 - Winter 2024

Assignment 9: Distributed random numbers

Opened: Monday, 1 April 2024, 00:00
Due: Monday, 8 April 2024, 23:59

Consider a set of data points which are random floating-point numbers between 0.0 (inclusive) and 1.0 (exclusive). We are interested in determining their distribution.

To parallelize operations on this potentially very large data set, we distribute the data points over the MPI processes such that the process with a given rank in a communicator of a certain size holds the data points whose values lie in the interval [rank/size, (rank+1)/size).

With this distributed array, we will be able to compute histograms in parallel.

Your task is to write an MPI program that performs the following:

1) First, a root process generates a total of N random numbers with the supplied PRNG class. However, it should not produce all the numbers at once. Instead, they should be generated in batches of size Z. After each batch is generated, its data points are to be distributed to the MPI processes based on their values as described above, i.e., the process with a given rank holds the data points whose values lie in the interval [rank/size, (rank+1)/size).

2) Once all N data points have been distributed over the MPI processes, each process should compute a histogram of its points with a spacing of dx. You may assume dx to be commensurate with the intervals.

3) The results of the distributed histograms should be collected into a single array on the root process. The normalized histogram should then be printed to the console.

4) Go back to the random number generation of step (1) and parallelize it over the MPI processes using the discard function of the PRNG class.

Your program should use N=1'000'000'000, batch size Z=100'000, and dx=0.015625. Write a job script to run this code for P=1, 4, 16, and 32 processes, timing the result. Include the job script and its output.
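A job script for the timing runs might look like the following sketch (SciNet's clusters use the SLURM scheduler; the module names, resource limits, and executable name below are placeholders to adapt):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=32
#SBATCH --time=00:30:00
#SBATCH --job-name=assignment9

# Placeholder modules; use whatever compiler/MPI modules you built with.
module load gcc openmpi

for P in 1 4 16 32; do
    echo "Running with P=$P processes"
    time mpirun -np $P ./assignment9
done
```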

Note that without step 4, you will not see much speedup as the random number generation would still be serial.

As before, we expect you to use make and git, with several meaningful commits.

Submit your work by April 8, 23:59. The usual late penalty applies.


  • prng_example.cpp (1 April 2024, 17:13)
  • prng.h (1 April 2024, 17:13)
All content on this website is made available under the Creative Commons Attribution 4.0 International licence, with the exception of all videos which are released under the Creative Commons Attribution-NoDerivatives 4.0 International licence.