JC> I've yet to see anything that was even roughly similar to C or C++ that
JC> didn't have the OS read a block at a time, and simply retrieve a
JC> character from a buffer most of the time. OTOH, this is an area where C
How many layers of caching does one need?
It seems most likely that the drive does some caching/readahead,
then the OS does the same, and then the compiler's run-time
library does the same. It would seem there is quite a bit of
overkill if a modern library assumes that its own caching will
improve upon the caching up the ladder.
JC> will be a macro that's expanded inline, so the majority of the time,
JC> reading a character takes around a half dozen instructions or so. If
JC> you do processing in a reasonably tight loop, this will typically all be
JC> in cache, and will reference memory only once, to read the character
JC> itself.
While I do agree with the use of inline for some functions, I've
never been a believer in the use of macros. I find macros in a high
level language destructive to the ideas behind strong typing.
JC> This adds considerable overhead and uses enough more data that it's far
JC> less likely that everything will be in cache. Empirical testing
JC> indicates that it's often two to three times slower than using C style
JC> I/O to do the same job.
A nasty little give and take. You get reusable code via polymorphic
calls, but OO practices can bog down execution time. It's the same
idea as binding memory addresses at
    compile time
    load time
    run time
Each is exceedingly slower than the previous, but each has some major
advantages. You get multi-tasking with run-time address calculation
and some safeguards against corrupting the OS. You also get paged
access with run-time binding, allowing only the OS and a few drivers
to ever really be loaded into memory. So far, paging is about the
most modern memory management model there is (ugh).
--- GEcho 1.00
---------------
* Origin: Digital OnLine Magazine! - (409)838-8237 (1:3811/350)