Q.1
They would be in a serial program.
  • nowait
  • ordered
  • collapse
  • for loops
Q.2
Let S and Q be two semaphores initialized to 1, where processes P0 and P1 execute the following statements:
P0: wait(S); wait(Q); ---; signal(S); signal(Q);
P1: wait(Q); wait(S); ---; signal(Q); signal(S);
The above situation depicts a                .
  • livelock
  • critical section
  • deadlock
  • mutual exclusion
Q.3
The issues related to synchronization include the following, EXCEPT:
  • deadlock
  • livelock
  • fairness
  • correctness
Q.4
                 takes the data in data_to_be_packed and packs it into contig_buf.
  • mpi_unpack
  • mpi_pack
  • mpi_datatype
  • mpi_comm
Q.5
Has in its stack.
  • terminated
  • send rejects
  • receive rejects
  • empty
Q.6
In addition to the cost of the communication, the packing and unpacking are very                            .
  • global least cost
  • time-consuming
  • expensive tours
  • shared stack
Q.7
User program and buffersize is its size in bytes.
  • tour data
  • node tasks
  • actual computation
  • buffer argument
Q.8
Could possibly lead to a least-cost solution.
  • depth-first search
  • foster's methodology
  • reduced algorithm
  • breadth-first search
Q.9
Parallelizing them using OpenMP.
  • thread's rank
  • function loopschedule
  • pthreads
  • loop variable
Q.10
Workload in the computation of the forces.
  • cyclic distribution
  • velocity of each particle
  • universal gravitation
  • gravitational constant
Q.11
Which of the following is true about the Message Passing Interface (MPI)?
  • a specification of a shared memory library
  • mpi uses objects called communicators and groups to define which collection of processes may communicate with each other
  • only communicators and not groups are accessible to the programmer only by a "handle"
  • a communicator is an ordered set of processes
Q.12
Is applicable in ___________________.
  • amdahl's law
  • gustafson-barsis's law
  • newton's law
  • pascal's law
Q.13
The                           operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.
  • reduce-scatter
  • reduce (to-one)
  • allreduce
  • none of the above
Q.14
                 is an object that holds information about the received message, including, for example, its actual count.
  • buff
  • count
  • tag
  • status
Q.15
The easiest way to create communicators with new groups is with                      .
  • mpi_comm_rank
  • mpi_comm_create
  • mpi_comm_split
  • mpi_comm_group
Q.16
Them in this case and returning the result to a single process.
  • mpi_reduce
  • mpi_bcast
  • mpi_finalize
  • mpi_comm_size
Q.17
Selectively screen messages.
  • dest
  • type
  • address
  • length
Q.18
CS Multi Core Architecture and Programming CSE - Regulations 2017
  • scatter
  • gather
  • broadcast
  • allgather
Q.19
                 generates log files of MPI calls.
  • mpicxx
  • mpilog
  • mpitrace
  • mpianim
Q.20
Global_count += 5;
  • 4 instructions
  • 3 instructions
  • 5 instructions
  • 2 instructions
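On a typical load/store machine, the update in Q.20 is not a single atomic operation: it is usually compiled to a load, an add, and a store. A hedged pseudo-assembly sketch (register and instruction names are illustrative, not any specific ISA):

```
load  r1, global_count   ; read the shared variable into a register
add   r1, r1, 5          ; add the constant in the register
store r1, global_count   ; write the result back to memory
```

Because another thread can run between any two of these steps, concurrent updates to global_count can be lost unless the increment is protected by synchronization or an atomic operation.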