On 12-Jul-18 13:55, Ahem A Rivet's Shot wrote:
> On Thu, 12 Jul 2018 11:47:00 +0100
> The Natural Philosopher wrote:
>
>> On 12/07/18 08:50, Ahem A Rivet's Shot wrote:
>>> On Thu, 12 Jul 2018 08:27:24 +0100
>>> It looked that way just before NVMe, I saw some discussions
>>> around the possibility that DRAM may become obsolete with NVMe SSDs
>>> filling cache lines directly.
>>
>> Yes. That makes sense BUT it wont actually make e.g. my desktop any
>> faster because it spends almost no time now on IO wait
It does spend a lot of time. It's still quicker to do a processor context
switch (at around 500 ns) than to sit waiting for the I/O, even from an
NVMe device; the I/O software stack is becoming a major component of the
delay, but it's still less than the device response time.
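A rough back-of-envelope of that trade-off in C (the 500 ns switch cost
is from above; the NVMe and DRAM latencies are illustrative assumptions
of mine, not measurements):

/* Is it worth context-switching while waiting on I/O? */
#include <stdio.h>

int main(void)
{
    double ctx_switch_ns = 500.0;    /* one context switch, per above   */
    double nvme_read_ns  = 10000.0;  /* ~10 us: assumed NVMe read       */
    double dram_fill_ns  = 100.0;    /* ~100 ns: assumed DRAM line fill */

    /* Switching away and back costs two switches; it pays off whenever
     * the device latency exceeds that round trip. */
    double round_trip = 2.0 * ctx_switch_ns;
    printf("round trip %.0f ns vs NVMe %.0f ns -> %s\n", round_trip,
           nvme_read_ns, nvme_read_ns > round_trip ? "switch" : "spin");
    printf("round trip %.0f ns vs DRAM %.0f ns -> %s\n", round_trip,
           dram_fill_ns, dram_fill_ns > round_trip ? "switch" : "spin");
    return 0;
}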
>
> It will if the NVMe SSD can fill the cache lines faster than DRAM
> can (which is not yet the case).
And never will be. The next technology wave is persistent memory (PM),
not NVMe. NVMe sits on a PCIe bus, which is bandwidth- and latency-
limited; PCIe was always about building to a price, not for performance.
There are other, much faster bus technologies in the offing: CCIX, Gen-Z,
and OpenCAPI, for example. A good overview of these:
https://www.csm.ornl.gov/workshops/openshmem2017/presentations/Benton%20-%20OpenCAPI,%20Gen-Z,%20CCIX-%20Technology%20Overview,%20Trends,%20and%20Alignments.pdf
PM sits directly on the memory bus. In other words, you don't do block
I/O to these new memory devices; you do loads and stores. Currently PM
latency is in the high hundreds of ns to single-digit µs, so it sits
between DRAM (not persistent, single-digit ns to low tens of ns) and
block-based devices like SSDs (persistent, hundreds of µs to low ms
depending on the software stack and bus).
SNIA (a storage standards organisation) covers a lot of the background
at https://www.snia.org/PM.
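A minimal sketch of what that load/store model looks like from user
space, assuming a hypothetical file on a DAX-mounted filesystem (real
PM code would use CPU cache-flush instructions such as CLWB, or a
library like PMDK, rather than msync()):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical path; would be a file on e.g. ext4 mounted -o dax. */
    int fd = open("/mnt/pmem/example", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *pm = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
                    fd, 0);
    if (pm == MAP_FAILED) { perror("mmap"); return 1; }

    /* No read()/write() block I/O: this is an ordinary CPU store ... */
    strcpy(pm, "hello, persistent world");

    /* ... made durable with a flush; msync() is the portable stand-in
     * for the cache-flush instructions a PM library would issue. */
    msync(pm, 4096, MS_SYNC);

    munmap(pm, 4096);
    close(fd);
    return 0;
}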
>
>>> Nope there's still power consumption to attack or rather being
>>> attacked. Run your mind back to the time a bleeding edge PC was about as
>>> powerful as a RPi3 and consumed the thick end of a hundred watts.
>>>
>>
>> Yes, but that has not much further to go either. It is somewhat an
>> expression of Moore's law. As gate size gets smaller, so too does the
>> power, especially if the clock rate is lowered
Smaller gate sizes paradoxically increase the power consumption, and
hence the heat generated, per volume of silicon.
>
> True but it is the fastest moving development today.
>
>> ARM is economical because there wasn't a lot in it, because Acorn could
>> not afford a bigger gate array to build it on
>
> There's quite a lot in an A72 core but I can still run two of them
> and eight A53 cores in my phone.
>
>>> Big data centres don't particularly need faster processors or
>>> faster memory they already hold tens of thousands of processors and
>>> hundreds of thousands of discs - they do need denser storage and lower
>>> power consumption.
I disagree; big data centers definitely need faster everything. The more
data you have to crunch, the harder it becomes to move around, and the
big push right now is providing very high-speed RDMA links (memory to
memory through smart network cards that don't involve the CPU) between
storage and processors to make this easier.
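To see why moving it around hurts, a quick calculation of the time to
shift a petabyte over some common line rates (these ignore protocol
overhead, so treat them as lower bounds):

#include <stdio.h>

int main(void)
{
    double petabyte = 1e15;                   /* bytes             */
    double gbps[]   = { 10.0, 100.0, 400.0 }; /* link rate, Gbit/s */

    for (int i = 0; i < 3; i++) {
        double bytes_per_s = gbps[i] * 1e9 / 8.0;
        double hours = petabyte / bytes_per_s / 3600.0;
        printf("%5.0f Gbit/s: %7.1f hours per PB\n", gbps[i], hours);
    }
    return 0;
}

Even at 400 Gbit/s that's over five hours per petabyte, which is why
keeping the CPU out of the transfer path matters.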
>>>
>>
>> And they will get that a bit.
>
> They're getting it quite a lot at the moment - drives and SSDs are
> still getting bigger, faster and lower power at quite a rate. This is more
> visible at the 'enterprise' end of the business rather than the 'consumer'
> end.
>
--
Alex