MCQGeeks
Computer Science
Parallel Computing
Quiz 2
Q.1
These applications typically have multiple executable object files (programs). While the application is running in parallel, each task can execute the same program as the other tasks or a different one. All tasks may use different data.
Single Program Multiple Data (SPMD)
Multiple Program Multiple Data (MPMD)
Von Neumann Architecture
None of these
Q.2
In the threads model of parallel programming
A single process can have multiple, concurrent execution paths
A single process can have only a single execution path.
Multiple processes can have a single, concurrent execution path.
None of these
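The threads model described above can be sketched with Python's standard `threading` module: a single process spawns several concurrent execution paths that all share the process's memory.

```python
import threading

# A single process can contain multiple concurrent execution paths.
# Each thread below runs its own path but shares the process's memory.
results = {}

def worker(name, value):
    # Each thread executes this function independently.
    results[name] = value * 2

threads = [threading.Thread(target=worker, args=(f"t{i}", i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # all four threads wrote into the same shared dict
```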
Q.3
It distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.
Single Program Multiple Data (SPMD)
Flynn’s taxonomy
Von Neumann Architecture
None of these
Q.4
Non-Uniform Memory Access (NUMA) is
Here all processors have equal access and access times to memory
Here if one processor updates a location in shared memory, all the other processors know about the update
Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
None of these
Q.5
Cache Coherent UMA (CC-UMA) is
Here all processors have equal access and access times to memory
Here if one processor updates a location in shared memory, all the other processors know about the update
Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
None of these
Q.6
Coarse-grain Parallelism
In parallel computing, it is a qualitative measure of the ratio of computation to communication
Here relatively small amounts of computational work are done between communication events
Relatively large amounts of computational work are done between communication / synchronization events
None of these
Q.7
Granularity is
In parallel computing, it is a qualitative measure of the ratio of computation to communication
Here relatively small amounts of computational work are done between communication events
Relatively large amounts of computational work are done between communication / synchronization events
None of these
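Granularity as a computation-to-communication ratio can be illustrated with a toy calculation; the timing values below are made-up assumptions, not measurements.

```python
# Granularity: a qualitative measure of the ratio of computation
# to communication. Fine grain means little work per communication
# event; coarse grain means much work per communication event.

def granularity(compute_time_per_event, comm_time_per_event):
    # ratio of computation to communication
    return compute_time_per_event / comm_time_per_event

# Hypothetical timings (seconds) chosen only for illustration.
fine = granularity(compute_time_per_event=0.001, comm_time_per_event=0.010)
coarse = granularity(compute_time_per_event=1.000, comm_time_per_event=0.010)

print(fine, coarse)  # the coarse-grain ratio is far larger
```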
Q.8
Asynchronous communications
It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
It allows tasks to transfer data independently from one another.
None of these
Q.9
Uniform Memory Access (UMA) refers to
Here all processors have equal access and access times to memory
Here if one processor updates a location in shared memory, all the other processors know about the update
Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
None of these
Q.10
Point-to-point communication refers to
It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
It allows tasks to transfer data independently from one another.
None of these
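The point-to-point pattern (one task as sender/producer, the other as receiver/consumer) can be sketched with two Python threads and a `queue.Queue` acting as the channel; the `None` sentinel is an illustrative convention, not part of any standard.

```python
import threading
import queue

# Point-to-point communication: exactly two tasks, one sender and
# one receiver, sharing data over a single channel.
channel = queue.Queue()

def producer():
    for i in range(3):
        channel.put(i)      # send data
    channel.put(None)       # sentinel: no more data (illustrative convention)

received = []

def consumer():
    while True:
        item = channel.get()  # receive data
        if item is None:
            break
        received.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

print(received)  # [0, 1, 2]
```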
Q.11
Collective communication
It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
It allows tasks to transfer data independently from one another.
None of these
Q.12
Synchronous communications
It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
It allows tasks to transfer data independently from one another.
Q.13
Functional Decomposition:
Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
It is the time it takes to send a minimal (0 byte) message from point A to point B.
None of these
Q.14
Domain Decomposition
Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
It is the time it takes to send a minimal (0 byte) message from point A to point B.
None of these
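Domain decomposition, where the data is split and each parallel task works on its own portion, might look like the sketch below using Python's thread-based pool (`multiprocessing.dummy`); the chunk size of 25 is an arbitrary choice.

```python
from multiprocessing.dummy import Pool  # thread-based pool, same API as a process pool

# Domain decomposition: the data associated with the problem is
# decomposed, and each parallel task works on a portion of the data.
data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]  # 4 equal portions

def work_on_portion(chunk):
    # Every task performs the same operation, each on its own slice.
    return sum(x * x for x in chunk)

with Pool(4) as pool:
    partial_sums = pool.map(work_on_portion, chunks)

total = sum(partial_sums)
print(total)  # the sum of squares of 0..99
```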
Q.15
Latency is
Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
It is the time it takes to send a minimal (0 byte) message from one point to another.
None of these
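Latency in the sense above can be estimated by timing a minimal message; the sketch below uses an in-process round trip over thread queues as a stand-in for a real network or interconnect, so the absolute number is illustrative only.

```python
import time
import queue
import threading

# Latency: the time to send a minimal message from one point to
# another. We time an empty-payload round trip between two threads
# and halve it to estimate the one-way latency.
to_worker, from_worker = queue.Queue(), queue.Queue()

def echo():
    msg = to_worker.get()
    from_worker.put(msg)  # bounce the message straight back

t = threading.Thread(target=echo)
t.start()

start = time.perf_counter()
to_worker.put(b"")             # minimal (0-byte payload) message out
from_worker.get()              # wait for it to come back
round_trip = time.perf_counter() - start
t.join()

latency = round_trip / 2       # one-way latency estimate
print(f"estimated one-way latency: {latency:.6f} s")
```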
Q.16
In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as
Decomposition
Partitioning
Compounding
Both A and B
Q.17
In shared Memory
Changes in a memory location effected by one processor do not affect all other processors.
Changes in a memory location effected by one processor are visible to all other processors
Changes in a memory location effected by one processor are randomly visible to all other processors.
None of these
Q.18
Fine-grain Parallelism is
In parallel computing, it is a qualitative measure of the ratio of computation to communication
Here relatively small amounts of computational work are done between communication events
Relatively large amounts of computational work are done between communication / synchronization events
None of these
Q.19
Massively Parallel
Observed speedup of a code which has been parallelized, defined as the ratio of the wall-clock time of serial execution to the wall-clock time of parallel execution
The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
Refers to the hardware that comprises a given parallel system - having many processors
None of these
Q.20
Parallel Overhead is
Observed speedup of a code which has been parallelized, defined as the ratio of the wall-clock time of serial execution to the wall-clock time of parallel execution
The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
Refers to the hardware that comprises a given parallel system - having many processors
None of these
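Parallel overhead can be made visible by timing tasks that do no useful work, so that everything measured is start-up and synchronization cost; a sketch in Python threads, where the thread count of 50 is arbitrary.

```python
import time
import threading

# Parallel overhead: time spent coordinating tasks (start-up,
# synchronization, communication) rather than doing useful work.
# Threads whose body does nothing expose pure coordination cost.

def do_nothing():
    pass

start = time.perf_counter()
threads = [threading.Thread(target=do_nothing) for _ in range(50)]
for t in threads:
    t.start()      # task start-up cost
for t in threads:
    t.join()       # synchronization cost
overhead = time.perf_counter() - start

print(f"coordination overhead for 50 empty tasks: {overhead:.6f} s")
```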