From: Russ
Subject: Re: random file record size
David Kiesling wrote:
> Why would one have records in a random access file that are 1024
> bytes each? A program I use has records like that. The data in each
> record only takes up 468 bytes; the extra 556 bytes go into an unused
> string variable, padding the records to 1024 bytes each.
>
> If there's a reason for this, is the same true for records of 2048,
> 4096, etc. bytes each? How about 512, 256, 128, etc. bytes?
The usual reason for this is that fixed-size records make it easy
to compute where the pointers from an index land. With a record
length of 1024, record N starts at byte N * 1024, so you can SEEK
straight to it instead of reading through all the records before
it. I personally find 1000 easier for the mental arithmetic, but
to each his own.
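For illustration, here's a minimal sketch in C of that direct
seek. The record length matches your program's, but the function
and its names are my own assumptions, not the actual program:

#include <stdio.h>

#define RECLEN 1024L            /* fixed record length */

/* Read record n into buf. Record n starts at byte n * RECLEN,
   so we can seek straight there without scanning the file. */
int read_record(FILE *fp, long n, char buf[])
{
    if (fseek(fp, n * RECLEN, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, RECLEN, fp) == RECLEN ? 0 : -1;
}

Called as read_record(fp, 41, buf), it grabs the 42nd record
without ever touching the first 41.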
The leftover string space is there so it can be used later
without converting the entire record system. Simply shrink that
filler string by the amount you need and add a variable of the
corresponding size; since the total record length never changes,
existing files stay readable as-is.
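The padding trick, sketched as C structs with made-up field
names; the point is only that the total stays 1024 bytes, so old
files still line up (assuming no unusual struct padding):

#include <stdint.h>

/* Version 1: 468 bytes of real data, 556 bytes of filler. */
struct record_v1 {
    char data[468];        /* fields actually in use */
    char spare[556];       /* reserved for future growth */
};                         /* total: 1024 bytes */

/* Version 2: a hypothetical 4-byte field carved from the
   filler. The spare shrinks by the same 4 bytes, so each
   record is still 1024 bytes and old files read fine. */
struct record_v2 {
    char data[468];
    int32_t new_field;     /* made-up new field */
    char spare[552];
};                         /* still 1024 bytes */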
Another reason it may be this way is that if a program uses many
small files rather than one large file, the files will take up
less allocation space on the HDD. This isn't much of a problem
anymore with multi-gig drives, but many of us who have been
programming since 40 meg drives were considered big retain the
old habits, because that's the way we learned.
It's mostly leftover habit from the days when that little bit of
speed or HDD space really meant something.
Russ (russ@spinward.com)
*** QwkNews (tm) v2.1
* [TN71] Toast House Import
--- GEcho 1.20/Pro
---------------
* Origin: Toast House Remote (1:100/561)