subject: Re: Microsoft to pull out of China?
From: "Antti Kurenniemi"
Most modern database engines (that I know anything of) don't really need
the databases to be compacted. The empty space inside the database is
recycled all the time, and when reading or writing to the database you
never "seek" through the database, or at least shouldn't (that's
one thing indexing is for). So, when you read a record, you first find it
from the index which then points to the page inside the database where the
actual data is. The compacting thingamajick makes the file(s) smaller, but
doesn't mean much more as you still have to make the physical hard-disk
jumps from the index to the data page.
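The index-to-data-page lookup described above can be sketched as a toy model (not a real engine; the page numbers, keys, and names are made up for illustration):

```python
# Toy sketch of the "index points to the page holding the data" idea.
# A real engine uses B-tree pages on disk; dicts stand in for both here.
pages = {}   # page number -> list of (key, record) stored on that page
index = {}   # key -> page number holding that key's record

def insert(key, record, page_no):
    pages.setdefault(page_no, []).append((key, record))
    index[key] = page_no

def lookup(key):
    # One index probe, then one "jump" to the right page -- no scanning
    # of the other pages, fragmented or not.
    page_no = index[key]
    for k, rec in pages[page_no]:
        if k == key:
            return rec

insert(1, "Antti", page_no=7)
insert(2, "Frank", page_no=3)
print(lookup(2))  # "Frank"
```

Whether the file around those pages is compact or full of holes doesn't change this path: it's always one index probe plus one page fetch.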
There is some performance hit if the indexes become "fragmented"
over multiple pages in the database, but most systems take care of that
automatically, so no worries. Where it really does affect performance is if
you do un-indexed searching over a badly fragmented database, because
then the engine has to jump here and there to look for data - but
compacting (re-ordering) the database does not improve the speed nearly as
much as proper indexing would.
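You can see this with SQLite, for example (the table and column names here are made up): VACUUM, its compaction command, rewrites the whole database file, but an un-indexed query still scans every page afterwards. Only adding an index changes the access path.

```python
import os
import sqlite3
import tempfile

# Throwaway on-disk database for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "plans.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i % 100, float(i)) for i in range(10000)])
con.commit()

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output is the readable detail.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

q = "SELECT total FROM orders WHERE customer_id = 42"
p_scan = plan(q)            # full table scan
con.execute("VACUUM")       # compact the whole database file...
p_after_vacuum = plan(q)    # ...and it is still a full table scan
con.execute("CREATE INDEX idx_orders_cust ON orders (customer_id)")
p_indexed = plan(q)         # now an index search
print(p_scan, "|", p_after_vacuum, "|", p_indexed)
```

The first two plans report a scan; only the third reports an index search, which is the point: compaction re-orders the pages, indexing removes the need to read most of them.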
There is actually even a case where a "messed up" database can be
faster than a nicely compacted one: when adding multiple records in a
sequence rather than at one go as a batch, a well-compacted database might
need to ask the OS to increase the file size many times, and how long each
of those calls takes is anyone's guess, because the OS might be busy doing
something completely different. An uncompacted database, meanwhile, might
already have enough empty space inside it to just put the data in and be
done with it. I think this is more a theoretical case than a real one,
though.
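SQLite's pragmas make this free-space recycling visible, for instance (a throwaway table; default settings with auto_vacuum off are assumed): deleting rows puts emptied pages on an internal freelist instead of shrinking the file, later inserts reuse those pages without asking the OS for more space, and only VACUUM actually returns the space.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "freelist.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "x" * 200) for i in range(5000)])
con.commit()

pages_full = con.execute("PRAGMA page_count").fetchone()[0]

# Delete a contiguous range: the emptied pages go on the freelist,
# but the file itself does not shrink.
con.execute("DELETE FROM t WHERE id < 2500")
con.commit()
pages_after_delete = con.execute("PRAGMA page_count").fetchone()[0]
free_after_delete = con.execute("PRAGMA freelist_count").fetchone()[0]

# New rows land on recycled freelist pages -- no "grow the file"
# requests to the OS are needed.
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(10000 + i, "y" * 200) for i in range(1000)])
con.commit()
free_after_insert = con.execute("PRAGMA freelist_count").fetchone()[0]

# VACUUM (compaction) rewrites the file and gives the space back to the OS.
con.execute("VACUUM")
pages_after_vacuum = con.execute("PRAGMA page_count").fetchone()[0]

print(pages_full, pages_after_delete, free_after_delete,
      free_after_insert, pages_after_vacuum)
```

The page count is unchanged after the delete, the freelist shrinks as the new rows reuse freed pages, and only the VACUUM at the end makes the file smaller.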
Me, I run some databases that are several years old (I've got one Interbase
database at a customer site that dates back to 1998, I think), and some have been
rebuilt every now and then and some haven't. None seem to have any sort of
performance difficulties that would be because of fragmentation - many
other problems though, such as crappy client apps doing idiotic things like
committing after every insert or update, or doing "select *"
queries over the network on customer tables that have attachment fields when
only the address field is needed, and so on. None of the databases are very
big, though. The biggest ones are in the range of 60-100 or so users, and
running well below 30 GB in size. Most are way smaller.
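The "committing after every insert" complaint is easy to demonstrate with SQLite, for example (table and rows are made up): every COMMIT on a file-backed database forces the changes to disk, so one transaction around the whole batch is far cheaper than one per row.

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "batch.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")

rows = [(i, f"event {i}") for i in range(500)]

t0 = time.perf_counter()
for r in rows[:250]:
    con.execute("INSERT INTO log VALUES (?, ?)", r)
    con.commit()                  # the anti-pattern: one sync per row
per_row = time.perf_counter() - t0

t0 = time.perf_counter()
con.executemany("INSERT INTO log VALUES (?, ?)", rows[250:])
con.commit()                      # one sync for the whole batch
batch = time.perf_counter() - t0

print(f"per-row commits: {per_row:.3f}s, one batch commit: {batch:.3f}s")
count = con.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

On an ordinary disk the per-row loop is typically slower by a large factor, for the same 500 rows stored either way.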
So, my guesstimate would be somewhere between "very little" and
"dick all".
Antti Kurenniemi
(for some weird, perverted reason, I like to read up on database engines -
yes, I was the "odd" kid at school...)
"Frank Haber" wrote in message
news:4551f8eb$1@w3.nls.net...
> Hey, here's my chance. I know you guys are SQL/XML developers, and only
> rarely use a production database under load. On the rare occasions you
> do, have you developed any impression on how much difference a
> compaction/reindex makes on EXT3? On NTFS? My ignorant impression is
> "very little," but I haven't played with a transaction/data-entry system
> under load (they won't let me, and they took away my crayons, too).
>
> (I know on big stuff you'd have to discount front end diffs, net load,
> muddleware - in other words, half the system, but I'm looking for general
> impressions. Rumors from your sysadmins will be gratefully accepted,
> too.)
--- BBBS/NT v4.01 Flag-5
* Origin: Barktopia BBS Site http://HarborWebs.com:8081 (1:379/45)
SOURCE: echomail via fidonet.ozzmosis.com