2.2 Point to Point Communication

The fundamental basis of coordination between independent processes is point-to-point communication through the communication links in the MPI.COMM_WORLD communicator. This form of communication is called message passing: one process sends data to another, which in turn must receive it from the sender. This is illustrated as follows:

../_images/send_recv.png

Point-to-point communication is a way for a pair of processes to transmit data between one another: one process sending, and the other receiving. MPI provides Send() and Recv() functions that allow point-to-point communication to take place in a communicator. For general Python objects, use the lowercase send() and recv() functions instead.

Send(message, destination, tag)

  - message:  numpy array containing the message
  - destination:  rank of the receiving process
  - tag:    message tag is a way to identify the type of a message (optional)

Recv(message, source, tag)
  - message: numpy array containing the message
  - source: rank of the sending process
  - tag: message tag is a way to identify the type of a message (optional)

Note

The send() and recv() functions have identical form, except that the message parameter can be a standard Python type (e.g. int, string, float, etc.). Be sure to use the uppercase forms with numpy arrays whenever dealing with list-like numeric data!
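To illustrate the uppercase pair, here is a minimal sketch of Send()/Recv() with numpy buffers. The make_buffer() helper and the constants are our own inventions for this sketch, not part of the book's listings; the key point is that the receiver must pre-allocate a buffer of matching size and dtype before calling Recv().

```python
import numpy as np

def make_buffer(n):
    # The receiver must pre-allocate a buffer with matching size and dtype.
    return np.empty(n, dtype=np.float64)

def main():
    # mpi4py is imported here, with a guard, so that make_buffer() can be
    # exercised even on a machine where MPI is not installed.
    try:
        from mpi4py import MPI
    except ImportError:
        print("mpi4py not available; skipping the communication demo")
        return
    comm = MPI.COMM_WORLD
    id = comm.Get_rank()
    if comm.Get_size() < 2:
        print("Please run this program with at least 2 processes")
        return
    if id == 0:
        data = np.linspace(0.0, 1.0, 5)   # numpy array: use uppercase Send
        comm.Send(data, dest=1, tag=0)
    elif id == 1:
        data = make_buffer(5)             # uppercase Recv fills the buffer in place
        comm.Recv(data, source=0, tag=0)
        print("Process 1 received", data)

if __name__ == "__main__":
    main()
```

Note that Recv() fills the caller-supplied array in place rather than returning a new object, which is what makes the pre-allocated buffer necessary.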

Send and Receive Hello World

Here is a simple example where strings are sent from the conductor to the workers.

This MPI program illustrates the use of the send() and recv() functions. Notice that we use the lowercase versions of the Send/Recv functions because the message being sent is a Python string.
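A sketch of what such a program might look like follows. The greeting text and the greeting() helper are our own assumptions for illustration, and the book's actual listing may differ; the pattern is that the conductor sends a Python string to each worker with lowercase send(), and each worker retrieves it with recv().

```python
def greeting(rank):
    # The message text sent to each worker (our own choice for this sketch).
    return "Hello, process {}".format(rank)

def main():
    # Guarded import: lets greeting() be exercised without MPI installed.
    try:
        from mpi4py import MPI
    except ImportError:
        print("mpi4py not available; skipping the communication demo")
        return
    comm = MPI.COMM_WORLD
    id = comm.Get_rank()
    numProcesses = comm.Get_size()
    if id == 0:
        # conductor: send a string to every worker with lowercase send()
        for worker in range(1, numProcesses):
            comm.send(greeting(worker), dest=worker)
    else:
        # worker: lowercase recv() returns the received Python object
        message = comm.recv(source=0)
        print("Process {} of {} received: {}".format(id, numProcesses, message))

if __name__ == "__main__":
    main()
```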

Integration - First Attempt

The code listing below illustrates how the trapezoidal integration example from Chapter 1, section 1.3, could be implemented with point-to-point communication:

The trapSum() function computes and sums a set of n trapezoids over a particular range. The first part of the main() function follows the SPMD pattern. Each process computes its local range of trapezoids and calls the trapSum() function to compute its local sum.

The latter part of the main() function follows the conductor-worker pattern. Each worker process sends its local sum to the conductor process. The conductor process generates a global array (called results), receives the local sum from each worker process, and stores the local sums in the results array. A final call to the sum() function adds all the local sums together to produce the final result.
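The structure described above can be sketched as follows. The integrand f(x) = x*x and the constants a, b, and n are our assumptions for illustration; the book's actual listing may choose different values.

```python
def f(x):
    # example integrand (an assumption; any integrable function works)
    return x * x

def trapSum(a, b, n, h):
    # compute and sum n trapezoids of width h over the range [a, b]
    total = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def main():
    # Guarded import: lets trapSum() be exercised without MPI installed.
    try:
        from mpi4py import MPI
    except ImportError:
        print("mpi4py not available; skipping the communication demo")
        return
    comm = MPI.COMM_WORLD
    id = comm.Get_rank()
    numProcesses = comm.Get_size()

    a, b, n = 0.0, 1.0, 1048576          # illustrative constants
    h = (b - a) / n

    # SPMD part: every process computes its own slice of the trapezoids
    localN = n // numProcesses
    localA = a + id * localN * h
    localB = localA + localN * h
    localSum = trapSum(localA, localB, localN, h)

    # conductor-worker part: workers send local sums to the conductor
    if id != 0:
        comm.send(localSum, dest=0)
    else:
        results = [localSum]
        for worker in range(1, numProcesses):
            results.append(comm.recv(source=worker))
        print("Integral estimate:", sum(results))

if __name__ == "__main__":
    main()
```

For f(x) = x*x on [0, 1] the exact integral is 1/3, so the printed estimate should be close to 0.3333 regardless of the number of processes.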

In later sections, we will see how to improve this example with other communication constructs. For now, ensure that you are comfortable with the workings of this program.

Exercise - Populate an Array

As an exercise, let’s use point-to-point communication to populate an array in parallel. The algorithm is as follows:

  1. Each process computes its local range of values, and then calls a function that generates an array of just those values.

     - The final array has N values, simply holding all values from 1 to N.

     - Each process populates an array of N/p values, where p is the number of processes. In other words, the number of elements (nElems) in each local array is N/p.

     - Each process determines its start and end values and the length of its array:

       - Process 0 generates values 1 through nElems,

       - Process 1 generates values nElems + 1 through 2*nElems,

       - and so on for each process.

  2. Each worker process sends its array to the conductor process.

  3. The conductor process generates a global array of the desired length, receives the local array from workers, and then populates the global array with the elements of the local arrays received.
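One way to express the index arithmetic in step 1 is sketched below. The genLocalArray() helper is hypothetical (it is not the book's hidden solution), but it shows how each process can derive its own slice of values from its rank.

```python
import numpy as np

def genLocalArray(id, nElems):
    # Process id holds the values id*nElems + 1 through (id+1)*nElems.
    start = id * nElems + 1
    return np.arange(start, start + nElems, dtype=np.int64)

# For example, with N = 8 and p = 2 processes, nElems = N // p = 4:
#   process 0 generates values 1 through 4
#   process 1 generates values 5 through 8
```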

The following program is a partially filled in solution, with the algorithm shown in comments.

Fill in the rest of the program, save it, and test your program using 1, 2, and 4 processes.

Remember, in order for a parallel program to be correct, it should return the same value on every run, regardless of the number of processes chosen. Click the button below to see the solution.

Ring of passed messages

Another pattern that appears in message-passing programs is a ring of processes, where messages are sent in this fashion:

../_images/ring.png

When we have 4 processes, the idea is that process 0 sends data to process 1; process 1 receives it and sends it on to process 2; process 2 receives it and sends it to process 3; and process 3 receives it and sends it back around to process 0.

Program file: 07messagePassing5.py

from mpi4py import MPI

def main():
    comm = MPI.COMM_WORLD
    id = comm.Get_rank()            #number of the process running the code
    numProcesses = comm.Get_size()  #total number of processes running
    myHostName = MPI.Get_processor_name()  #machine name running the code

    if numProcesses > 1 :

        if id == 0:        # conductor
            #generate a list with conductor id in it
            sendList = [id]
            # send to the first worker
            comm.send(sendList, dest=id+1)
            print("Conductor Process {} of {} on {} sent {}"\
            .format(id, numProcesses, myHostName, sendList))
            # receive from the last worker
            receivedList = comm.recv(source=numProcesses-1)
            print("Conductor Process {} of {} on {} received {}"\
            .format(id, numProcesses, myHostName, receivedList))
        else :
            # worker: receive from the previous process
            receivedList = comm.recv(source=id-1)
            # add this worker's id to the list and send along to next worker,
            # or send to the conductor if the last worker
            sendList = receivedList + [id]
            comm.send(sendList, dest=(id+1) % numProcesses)

            print("Worker Process {} of {} on {} received {} and sent {}"\
            .format(id, numProcesses, myHostName, receivedList, sendList))

    else :
        print("Please run this program with the number of processes \
greater than 1")

########## Run the main function
main()

Exercise:

Run the above program, varying the number of processes N from 1 through 8:

python run.py ./07messagePassing5.py N

Compare the results from running the example to the code above. Make sure that you can trace how the code generates the output that you see.

Exercise:


.. mchoice:: mc-mp-ring
   :answer_A: The last process with the highest id will have 0 as its destination because of the modulo (%) by the number of processes.
   :answer_B: The last process sends to process 0 by default.
   :answer_C: A destination cannot be higher than the highest process.
   :correct: a
   :feedback_A: Correct! Note that you must code this yourself.
   :feedback_B: Processes can send to any other process, including the highest numbered one.
   :feedback_C: This is technically true, but it is important to see how the code ensures this.

   How is the finishing of the 'ring' completed, where the last process determines that it should send back to process 0?