On 12/07/2018 18:27, Anton Ertl wrote:
> You have to be yet clearer. This claim is still nonsense. If you
> neglect individual CPU cycles such that each individual CPU takes,
> say, twice as many cycles the whole application will take twice as
> long, on the same parallel machine.
No it won't.
> The parallel machine guys buy
> these expensive parallel boxes so that their applications run faster,
> so they certainly would not appreciate this kind of waste.
Perhaps they know more about them than you do.
Massively parallel systems have a huge theoretical performance from
thousands of interconnected systems, each with its own CPUs and memory.
However, any real computational problem can only use a fraction of that
theoretical performance, due to the cost of transferring data over the
comparatively slow interconnects.
To scale a problem so that it works efficiently on an MPP system, the
most important task is to work out how to partition the data and
structure the algorithm to minimise the need to transfer data across
the interconnects. This can lead to code that is very different from,
and much slower per node than, what would be used on a conventional
single unified-memory machine.
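To make that concrete, here is a minimal sketch of the idea, not taken
from any real code (the array size and the Jacobi-style update are just
placeholder assumptions): each node owns its own slab of the data, and
only the two boundary cells per neighbour cross the interconnect each
step, while all the remaining work stays local.

    /* Sketch: 1D stencil split across MPI ranks. Each rank owns a slab
       of the array; only two halo cells per neighbour cross the network
       per step, everything else is local computation. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define LOCAL_N 1000   /* cells owned by each rank (assumed size) */
    #define STEPS   100

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        /* u[0] and u[LOCAL_N+1] are halo cells holding neighbours' edges */
        double *u    = calloc(LOCAL_N + 2, sizeof *u);
        double *next = calloc(LOCAL_N + 2, sizeof *next);
        for (int i = 1; i <= LOCAL_N; i++)
            u[i] = (double)(rank * LOCAL_N + i);

        for (int step = 0; step < STEPS; step++) {
            /* Exchange just one double per neighbour, not the whole slab */
            MPI_Sendrecv(&u[1],         1, MPI_DOUBLE, left,  0,
                         &u[LOCAL_N+1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[LOCAL_N],   1, MPI_DOUBLE, right, 1,
                         &u[0],         1, MPI_DOUBLE, left,  1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* All remaining work is purely local to this node */
            for (int i = 1; i <= LOCAL_N; i++)
                next[i] = 0.5 * (u[i-1] + u[i+1]);
            memcpy(u, next, (LOCAL_N + 2) * sizeof *u);
        }

        if (rank == 0)
            printf("done after %d steps on %d ranks\n", STEPS, size);
        free(u); free(next);
        MPI_Finalize();
        return 0;
    }

On a single unified-memory machine the same program would simply index
its neighbour's element directly; on an MPP you restructure it so the
expensive remote accesses are batched into a couple of tiny messages
per step.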
To take your example, the code on each machine might take twice as many
individual cycles as it otherwise could, but if that avoids even a small
amount of inter-machine data transfer, the application may run many
hundreds of times faster on a massively parallel system.
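As a rough back-of-the-envelope illustration (the figures below are
assumptions, not measurements): if a core cycle is around 1 ns and a
remote access over the interconnect costs on the order of a microsecond,
then burning a couple of hundred extra local cycles to avoid a single
remote fetch is already a win per access, and avoiding the transfers
altogether compounds across every node and every step.

    /* Toy comparison with assumed figures: redoing ~200 cycles of work
       locally versus one fetch across the interconnect. */
    #include <stdio.h>

    int main(void)
    {
        double cycle_ns        = 1.0;    /* ~1 GHz core, assumed            */
        double interconnect_ns = 1000.0; /* ~1 us per remote fetch, assumed */

        double recompute = 200.0 * cycle_ns;  /* 200 "wasted" local cycles */
        double fetch     = interconnect_ns;   /* one remote access         */

        printf("recompute locally: %.0f ns, fetch remotely: %.0f ns (%.0fx)\n",
               recompute, fetch, fetch / recompute);
        return 0;
    }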
---druck