echo: os2prog
to: Gord Mc.Pherson
from: Peter Fitzsimmons
date: 1996-01-27 09:35:08
subject: pdpclib

PF> malloc/realloc/free (these can become an 
PF> especially nasty bottleneck in a
PF> multithread program;

 GM>     Your above statement caught my eye. Could you elaborate a little? 

Since malloc() (and hence new) returns data that has to be usable from any
thread,  a global heap (pool of memory) is used.

To my knowledge,  all OS/2 heap managers except for the new IBM VAC++ 3.0
require that the entire heap be locked (protected by a mutex semaphore)
whenever any heap function (malloc/realloc/free) is used (this extends to
fopen,  some of the *rintf() family,  and any other CRT function that may
use heap services).
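
Roughly,  what the runtime does amounts to this (a sketch only: I'm using
pthread calls as a stand-in for the OS/2 mutex semaphore APIs,  and plain
malloc() as a stand-in for the real heap walk the CRT does internally):

/* Sketch of a single-lock heap manager.  Every allocating thread has to
 * take the same lock,  so concurrent malloc/free calls are serialized. */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

void *locked_malloc(size_t n)
{
    void *p;
    pthread_mutex_lock(&heap_lock);     /* every thread queues up here */
    p = malloc(n);                      /* stand-in for the heap search */
    pthread_mutex_unlock(&heap_lock);
    return p;
}

void locked_free(void *p)
{
    pthread_mutex_lock(&heap_lock);
    free(p);
    pthread_mutex_unlock(&heap_lock);
}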

If you have 20 threads,  and they all regularly call malloc() (even if
indirectly),  there is a bottleneck,  since every call is serialized.

In my own programs,  if I have a thread that is running "real
time",  I never use dynamic memory allocation because of this.  I
allocate everything up front,  and use buffer pools if necessary.
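
Something along these lines (just a sketch,  names and sizes are made up):

/* Fixed-size buffer pool,  filled once at startup so the time-critical
 * thread never calls malloc() while it is running.  The free list is
 * only touched by the thread that owns the pool,  so no lock is needed;
 * add one if several threads share it. */
#include <stdlib.h>

#define POOL_BUFS  64
#define BUF_SIZE   2048

typedef struct Buf {
    struct Buf *next;
    char        data[BUF_SIZE];
} Buf;

static Buf *free_list;

int pool_init(void)                 /* call once,  before the thread starts */
{
    int i;
    for (i = 0; i < POOL_BUFS; i++) {
        Buf *b = malloc(sizeof(Buf));
        if (b == NULL)
            return -1;
        b->next = free_list;
        free_list = b;
    }
    return 0;
}

Buf *pool_get(void)                 /* O(1),  no heap lock involved */
{
    Buf *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;
}

void pool_put(Buf *b)
{
    b->next = free_list;
    free_list = b;
}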

One of the main new features (of the compiler portion) of IBM VAC++ v3.0 is
a new heap manager that does not require the whole heap to be locked.  It
also avoids touching many heap pages during a malloc/free (which cuts down
on swapping).  You can also create your own heaps,  either so that a heap
can be freed in one fell swoop (though I don't recommend anyone use it just
for that) or so that you can put a heap in named shared memory.
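
To show what I mean by the "one fell swoop" part,  here is the idea boiled
down to a trivial arena allocator.  This is not the VAC++ interface (which,
if I remember right,  lives in <umalloc.h>),  just the concept: carve small
allocations out of one big block,  then release the whole block at once.

/* Toy arena: all allocations come out of one block,  and freeing the
 * arena frees everything in it at once.  The block here comes from
 * malloc(),  but it could just as well be named shared memory. */
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    char   *base;
    size_t  size;
    size_t  used;
} Arena;

int arena_init(Arena *a, size_t size)
{
    a->base = malloc(size);
    a->size = size;
    a->used = 0;
    return (a->base != NULL) ? 0 : -1;
}

void *arena_alloc(Arena *a, size_t n)
{
    void *p;
    n = (n + 7) & ~(size_t)7;       /* keep everything 8-byte aligned */
    if (a->used + n > a->size)
        return NULL;
    p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_destroy(Arena *a)        /* the "one fell swoop" */
{
    free(a->base);
    a->base = NULL;
}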


--- Maximus/2 3.00
* Origin: Sol 3 * Toronto * V.32 * (905)858-8488 (1:259/414)

