Knute Johnson wrote:
> Following a discussion on another thread about the life limits of an
> SDCard I thought I would write a program to write one to death and see
> just how long it lasts.
>
> I'll post the code below, but the general idea is that I have a random
> number generator creating file names. I calculate the number of
> possible files by taking 90% of the Java FileStore usable space and
> dividing it by the file size, 409,600 bytes (that is, 100 blocks of
> 4,096 bytes, the block size reported by the Java FileStore). A randomly
> selected file name is checked for existence: if the file doesn't exist
> I write 409,600 bytes of 1s to it; if it exists and contains 1s I write
> 409,600 bytes of 0s; if it contains 0s, I delete it. I do this
> 1,000,000 times, then I delete all the files in the directory and start
> over. I'm having the program send me statistics every hour, so it
> should be fairly obvious when the card dies or the usable space gets
> really small because of marked-off blocks.
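If I've understood the description correctly, the core of the test is
something like this (my reconstruction, not your actual code; the mount
point, file names and class name are placeholders):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Random;

public class SdWearSketch {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("/mnt/sdcard/test");   // placeholder mount point
        FileStore store = Files.getFileStore(dir);

        long fileSize = 409_600;                    // 100 blocks of 4,096 bytes
        int maxFiles = (int) ((long) (store.getUsableSpace() * 0.9) / fileSize);

        byte[] ones  = new byte[(int) fileSize];
        byte[] zeros = new byte[(int) fileSize];
        Arrays.fill(ones, (byte) 1);

        Random rnd = new Random();
        for (int i = 0; i < 1_000_000; i++) {
            Path f = dir.resolve("f" + rnd.nextInt(maxFiles));  // random name
            if (!Files.exists(f)) {
                Files.write(f, ones);               // new file: fill with 1s
            } else if (Files.readAllBytes(f)[0] == 1) {
                Files.write(f, zeros);              // had 1s: overwrite with 0s
            } else {
                Files.delete(f);                    // had 0s: delete it
            }
        }

        // then delete everything and start the whole cycle again
        // (hourly statistics omitted)
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                Files.delete(p);
            }
        }
    }
}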
I'm not convinced by this methodology:
- you're writing files, not blocks. The filesystem is almost certainly doing
things behind the scenes (e.g. caching data, coalescing writes, updating
metadata when it feels like it) that mean you can't see what's
really going on.
- I don't see you syncing to force cached writes to complete
(particularly an issue if the total amount written is smaller than the
machine's RAM); see the sketch after this list.
- write amplification will expand writes to the native block size (some
power of two). 409,600 bytes is 400KiB, which isn't a power of two, so you
might end up actually writing (for instance) 512KiB per file; in that case
your write count would be low by roughly 20% (400/512 is about 0.78). If
your writes aren't aligned with the native blocks, you could actually be
writing 1MiB.
- the data is eminently compressible. You didn't tell us the FS, but some
will compress behind the scenes (not if it's FAT though).
- some bad SD cards increase wear levelling for the area where the FAT is
stored. You won't observe the effects of that, or conversely of the FAT
wearing out faster.
- the usable space doesn't shrink due to the number of dead blocks. You
formatted the thing as a 32GB partition, and it'll stay as a 32GB partition,
even if some of those writes eventually fail. It might get marked as
read-only eventually, but it'll never shrink to a 31GB partition. I'm not
sure if there's a way to read the number of dead blocks like there is on a
SATA device.
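On the syncing point: in Java that would mean something like opening the
file with DSYNC and calling force(), roughly as below (a sketch only, not
lifted from your program):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SyncedWrite {
    // Write the data and make sure it has actually left the page cache
    // before returning.
    static void writeSynced(Path file, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.DSYNC)) {    // data synced on every write
            ByteBuffer buf = ByteBuffer.wrap(data);
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
            ch.force(true);                     // flush metadata as well
        }
    }
}

Even then the card's own controller can buffer internally; this just takes
the OS page cache out of the picture.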
Practically, to be useful something like this needs to work at the block
level, not the file level.
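By way of illustration, something along these lines writes aligned blocks
straight to the device node instead of going through a filesystem. It's a
sketch only: Linux-specific, needs root, destroys whatever is on the card,
and /dev/sdX and the 512KiB block size are guesses.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class RawBlockWear {
    public static void main(String[] args) throws IOException {
        final int blockSize = 512 * 1024;       // guess at the native block size
        ByteBuffer buf = ByteBuffer.allocateDirect(blockSize);

        try (FileChannel dev = FileChannel.open(Paths.get("/dev/sdX"),
                StandardOpenOption.WRITE,
                StandardOpenOption.DSYNC)) {    // each write hits the device
            long offset = 16L * blockSize;      // fixed, aligned target region
            for (long pass = 0; ; pass++) {     // loop until a write fails
                buf.clear();
                while (buf.hasRemaining()) {
                    buf.put((byte) pass);       // vary the pattern each pass
                }
                buf.flip();
                while (buf.hasRemaining()) {
                    dev.write(buf, offset + buf.position());  // stay aligned
                }
                if (pass % 100_000 == 0) {
                    System.out.println("pass " + pass);
                }
            }
        }
    }
}

Standard NIO has no portable O_DIRECT option, so this still goes through
the kernel's block layer, but DSYNC at least forces each write out before
the next one starts.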
Theo