Parallel C
1. <language, parallel> Never implemented, but influenced the design of C*.
2. C for the transputer by 3L.
3. (PC) Extensions to C developed at the University of Houston providing a
shared memory SIMD model on message passing computers.
E-mail: Ridgway Scott <email@example.com>.
Parallel FORTH
Forth for the MPP.
Parallel Fortran
<language> (Pfortran) Extensions to Fortran by Ridgway Scott
<firstname.lastname@example.org> of the University of Houston. Pfortran provides a shared memory
SIMD model on message passing computers.
It was under development in 1994.
["Pfortran: A Parallel Dialect of Fortran", L.R. Scott, Fortran Forum
11(3):20-31, Sep 1992].
Parallel Haskell
<language, parallel> (pH) A parallel variant of Haskell incorporating
ideas from Id and Sisal. pH is under development.
Mailing list: pH@abp.lcs.mit.edu.
parallelism
1. parallel processing.
2. <parallel> The maximum number of independent subtasks in a given task
at a given point in its execution. E.g. in computing the expression
(a + b) * (c + d), the expressions a, b, c and d can all be calculated in
parallel, giving a degree of parallelism of (at least) four. Once they have
been evaluated, the expressions a + b and c + d can be calculated as two
independent parallel tasks.
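The shrinking degree of parallelism in this example can be sketched (as an illustration, not part of the original entry) with Python's standard thread pool:

```python
# A sketch of evaluating (a + b) * (c + d) with threads. The two sums
# are independent, so the degree of parallelism is two at that stage;
# the final multiplication needs both results, so it drops to one.
from concurrent.futures import ThreadPoolExecutor

def evaluate(a, b, c, d):
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(lambda: a + b)    # may run in parallel...
        right = pool.submit(lambda: c + d)   # ...with this one
        return left.result() * right.result()

print(evaluate(1, 2, 3, 4))  # 21
```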
The Bernstein condition states that processes P and Q can be executed in
parallel (or in either sequential order) only if:
(i) there is no overlap between the inputs of P and the outputs of Q, and vice
versa; and
(ii) there is no overlap between the outputs of P, the outputs of Q and the
inputs of any other task.
If process P outputs value v which process Q reads then P must be executed
before Q. If both processes write to some variable then its final value will
depend on their execution order so they cannot be executed in parallel if any
other process depends on that variable's value.
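The condition can be made concrete with a hypothetical helper (the names are illustrative): describe each process by the set of variables it reads and the set it writes, then test the disjointness requirements:

```python
# Bernstein's condition as set tests. in_p/out_p are the variables
# process P reads/writes; likewise in_q/out_q for process Q.
def bernstein_parallel(in_p, out_p, in_q, out_q):
    return (not (in_p & out_q)        # P reads nothing that Q writes
            and not (in_q & out_p)    # Q reads nothing that P writes
            and not (out_p & out_q))  # P and Q write disjoint variables

# P computes x = a + b and Q computes y = c + d: independent.
print(bernstein_parallel({'a', 'b'}, {'x'}, {'c', 'd'}, {'y'}))  # True
# If Q instead computes a = c + d, P's input overlaps Q's output.
print(bernstein_parallel({'a', 'b'}, {'x'}, {'c', 'd'}, {'a'}))  # False
```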
Parallel Pascal
<language> A data-parallel language, similar to Actus and Glypnir.
["Parallel Pascal: An Extended Pascal for Parallel Computers", A. Reeves, J
Parallel Dist Computing 1:64-80 (1984)].
parallel port
<hardware> An interface from a computer system where data is transferred
in or out in parallel, that is, on more than one wire. A parallel port carries
one bit on each wire thus multiplying the transfer rate obtainable over a single
wire. There will usually be some control signals on the port as well to say when
data is ready to be sent or received.
The commonest kind of parallel port is a printer port, e.g. a Centronics port
which transfers eight bits at a time. Disks are also connected via special
parallel ports, e.g. SCSI or IDE.
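To make "one bit per wire" concrete, here is a toy sketch (not any real port protocol) of how a byte maps onto the eight data lines of such a port:

```python
# One byte -> eight data wires, wire 0 carrying the least significant bit.
def to_wires(byte):
    return [(byte >> i) & 1 for i in range(8)]

# The receiving end reassembles the byte from the wire levels.
def from_wires(bits):
    return sum(bit << i for i, bit in enumerate(bits))

# 'A' (0x41) appears on all eight wires in the same transfer cycle:
print(to_wires(0x41))  # [1, 0, 0, 0, 0, 0, 1, 0]
```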
Parallel Presence Detect
parallel processing
<parallel> (Or "multiprocessing") The simultaneous use of more than one
computer to solve a problem. There are many different kinds of parallel computer
(or "parallel processor"). They are distinguished by the kind of interconnection
between processors (known as "processing elements" or PEs) and between
processors and memory. Flynn's taxonomy also classifies parallel (and serial)
computers according to whether all processors execute the same instructions at
the same time ("single instruction/multiple data" - SIMD) or each processor
executes different instructions ("multiple instruction/multiple data" - MIMD).
The processors may either communicate in order to be able to cooperate in
solving a problem or they may run completely independently, possibly under the
control of another processor which distributes work to the others and collects
results from them (a "processor farm"). The difficulty of cooperative problem
solving is aptly demonstrated by the following dubious reasoning:
If it takes one man one minute to dig a post-hole
then sixty men can dig it in one second.
Amdahl's Law states this more formally.
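The formula itself is worth stating: if a fraction p of the work parallelises perfectly over n processors, the overall speedup is 1 / ((1 - p) + p/n). A quick sketch shows why the sixty diggers disappoint:

```python
# Amdahl's Law: speedup from running fraction p of a task on n processors.
# The serial fraction (1 - p) bounds the speedup no matter how large n is.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even if 95% of the digging parallelises, sixty men manage only about
# a 15x speedup, and no number of diggers can exceed 1/0.05 = 20x.
print(round(amdahl_speedup(0.95, 60), 1))  # 15.2
```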
Processors communicate via some kind of network or bus or a combination of both.
Memory may be either shared memory (all processors have equal access to all
memory) or private (each processor has its own memory - "distributed memory") or
a combination of both.
Many different software systems have been designed for programming parallel
computers, both at the operating system and programming language level. These
systems must provide mechanisms for partitioning the overall problem into
separate tasks and allocating tasks to processors. Such mechanisms may provide
either implicit parallelism - the system (the compiler or some other program)
partitions the problem and allocates tasks to processors automatically or
explicit parallelism, where the programmer must annotate the program to show
how it is to be partitioned. It is also usual to provide synchronisation
primitives such as semaphores and monitors to allow processes to share
resources without interfering with one another.
Load balancing attempts to keep all processors busy by allocating new tasks, or
by moving existing tasks between processors, according to some algorithm.
Communication between tasks may be either via shared memory or message passing.
Either may be implemented in terms of the other and in fact, at the lowest
level, shared memory uses message passing since the address and data signals
which flow between processor and memory may be considered as messages.
The terms "parallel processing" and "multiprocessing" imply multiple processors
working on one task whereas "concurrent processing" and "multitasking" imply a
single processor sharing its time between several tasks.
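As a minimal sketch of such a synchronisation primitive (the identifiers are illustrative, not from any particular system), a semaphore lets several tasks update a shared counter without losing updates:

```python
# Four tasks share one counter; a binary semaphore serialises updates
# so that no increment is lost to interleaving.
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore guarding the counter

def worker(increments):
    global counter
    for _ in range(increments):
        with sem:              # at most one task in this section at a time
            counter += 1

tasks = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(counter)  # 40000: no updates lost
```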
See also cellular automaton, symmetric multi-processing.
Usenet newsgroup: comp.parallel.
parallel processor
<parallel> A computer with more than one central processing unit, used
for parallel processing.
parallel random access machine
<parallel> (PRAM) An idealised parallel processor consisting of P
processors, unbounded shared memory, and a common clock. Each processor is a
random access machine (RAM) consisting of R registers, a program counter, and a
read-only signature register. Each RAM has an identical program, but the RAMs
can branch to different parts of the program. The RAMs execute the program
synchronously, one instruction per clock cycle.
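A toy simulation (my construction, not part of the PRAM definition) of one synchronous cycle: every processor reads shared memory before any write takes effect, so no processor observes a neighbour's write from the same cycle:

```python
# One synchronous PRAM cycle for P processors over shared memory.
# Each processor runs the same instruction; its index plays the role
# of the read-only signature register.
def pram_cycle(instruction, num_processors, memory):
    # Read/compute phase: all processors see the same memory snapshot.
    writes = [instruction(pid, memory) for pid in range(num_processors)]
    # Write phase: every processor's (address, value) write lands.
    for addr, value in writes:
        memory[addr] = value

# Example program: processor i doubles cell i; all cells change in one cycle.
mem = {0: 1, 1: 2, 2: 3, 3: 4}
pram_cycle(lambda pid, m: (pid, 2 * m[pid]), 4, mem)
print(mem)  # {0: 2, 1: 4, 2: 6, 3: 8}
```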
See also pm2.
parallel reduction
A form of applicative order reduction in which all redexes in an expression are
reduced simultaneously. Variants include parallel outermost reduction and
lenient reduction. See normal order reduction.
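On a toy arithmetic language (an illustration, not the graph-reduction machinery the entry refers to), one parallel reduction step contracts every redex, i.e. every operator node whose operands are already values, at once:

```python
# One step of parallel reduction over expression trees (op, left, right).
# A redex is a node whose operands are both numbers.
def step(expr):
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    if isinstance(left, int) and isinstance(right, int):
        return left + right if op == '+' else left * right  # contract redex
    # Otherwise reduce both subtrees "simultaneously".
    return (op, step(left), step(right))

e = ('*', ('+', 1, 2), ('+', 3, 4))
e = step(e)   # both additions reduce in the same step -> ('*', 3, 7)
e = step(e)   # the multiplication is now a redex -> 21
print(e)      # 21
```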
Parallel Server Option
Oracle Parallel Server
Parallel SML
["Parallel SML: A Functional Language and its Implementation in Dactl", Kevin
Hammond, Pitman Press 1990].
Parallel Sysplex
<operating system> A Sysplex that uses one or more coupling facilities.
Parallel Virtual Machine
<parallel, networking, tool> (PVM) 1. A software system designed to allow
a network of heterogeneous machines to be used as a single distributed
parallel processor.
PVM was developed by the University of Tennessee, Oak Ridge National
Laboratory and Emory University.
Usenet newsgroup: comp.parallel.pvm.
2. The intermediate language used by the Gambit compiler for Scheme.